Yue “Sophie” Wang of Clemson University wins top research award
Yue “Sophie” Wang of Clemson University has received her second high-profile research award in less than a year and is launching a project that will help the Air Force target some of the challenges it faces in using teams of unmanned vehicles and other robots to carry out missions under human supervision.
Wang, an assistant professor of mechanical engineering, has received $360,000 as part of the Air Force Office of Scientific Research Young Investigator Research Program.
It’s one of the nation’s most prestigious awards for an early-career researcher.
“I feel very happy and honored,” she said. “Our hard work is paying off.”
The primary focus of Wang’s research will be on helping the Air Force with high-priority missions that include intelligence-gathering, surveillance and reconnaissance. For example, the research could allow pilots to control swarms of drones from their cockpits or help airmen and airwomen operate explosive-detection robots.
Her work could also be extended to other applications, such as logistics support, flight management and rescue.
The research will build on previous work Wang has done with robots. She won a $500,000 award in 2015 through the National Science Foundation’s Faculty Early Career Development Program, often called the NSF CAREER award.
Melur “Ram” Ramasubramanian, the chair of the Department of Mechanical Engineering, congratulated Wang on her success.
“Dr. Wang has won two prominent awards in less than a year,” he said. “Both are well-deserved honors. It is indisputable that she is one of the nation’s top young researchers, and we’re fortunate to have her here at Clemson.”
For her latest research, Wang and her team will experiment with a team of Khepera robots that are about the size of a large coffee mug and scoot around on small wheels. The research will help the autonomous robots plan their motions and decide when they should seek help from their human operators.
“We will synthesize trust-based, human-robot collaboration protocols that are scalable, adaptable, and guaranteed to be correct,” Wang said.
With its research, the Wang team is helping robots and humans take advantage of each other’s strengths so that they can work together more effectively.
Robots are good for dull, dirty, dangerous missions that require them to loiter for long periods of time. They can also process large amounts of data.
Humans are better at making high-level decisions and at performing tasks in complex, rapidly changing situations. A simple task for a human, such as telling the difference between a tiger and a cat, can be difficult for a robot.
Wang’s plan for the Air Force project is broken up into three areas.
In the first, she will develop a model that estimates how much trust humans have in robots in real time while the robots are in use.
Trust is critical if robots are to operate autonomously. When people don’t trust robots, they are likely to do the work themselves.
If researchers can develop an accurate way of measuring trust, it would help autonomous robots know when they should ask for human help. That’s critical in managing workload when one person is controlling multiple robots.
The “probabilistic” and dynamic trust model the Wang team will develop for multi-robot teams will take into account the uncertainties in human perceptions and cognition and the environments where the robots are operating.
“We can now provide a more accurate estimation of humans’ latent trust, which is hard to measure,” Wang said. “We do not see much work in this area currently existing in the literature.”
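The idea of a dynamic trust estimate can be sketched in a few lines of code. This is only an illustrative toy, not the Wang team's model: it assumes trust is a single number in [0, 1] that is nudged toward 1 after each robot success and toward 0 after each failure, with an assumed learning rate. The real model is probabilistic and accounts for uncertainty in human perception and the operating environment.

```python
# Toy sketch of a dynamic trust estimator for one human-robot pair.
# The update rule and learning rate are illustrative assumptions,
# not parameters from the actual research.

class TrustEstimator:
    """Tracks a latent trust level in [0, 1] that rises with robot
    successes and falls with failures."""

    def __init__(self, initial_trust=0.5, learning_rate=0.2):
        self.trust = initial_trust
        self.learning_rate = learning_rate

    def update(self, task_succeeded: bool) -> float:
        # Move the estimate toward 1 on success, toward 0 on failure.
        target = 1.0 if task_succeeded else 0.0
        self.trust += self.learning_rate * (target - self.trust)
        return self.trust


estimator = TrustEstimator()
for outcome in [True, True, False, True]:
    estimator.update(outcome)
print(round(estimator.trust, 3))  # prints 0.635
```

In a multi-robot setting, an operator-workload manager could consult such an estimate to decide which robot should request human help first.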
In the second area of the research, the Wang team wants to use computer logic to communicate task requirements to several robots at a time. The method could be used on a large scale and adapted to a multi-robot team, she said.
“For example, let’s say I have three robots: A, B and C,” Wang said. “In my previous work, I only considered whether I trust robot A. But now I also want to consider, even if I trust robot A, what if I also trusted B and C? What is the negotiation between all these robots? If my trust in robot A decreases, will that affect my trust in another robot? How does this trust change affect the motion planning and task allocation in a multi-robot team?
“You only need to consider one local robot and its interaction with its neighbors. Based on this, you can boil the large-size problem down into a smaller-size problem, which can save you computational resources.
“One of the most important advantages of the method we’re using here is that once you’ve designed a set of motion planning for these robots, they are guaranteed to be correct and satisfy the overall task requirement.”
The work is what researchers call “multi-robot symbolic motion planning” and involves model checking methods using a computer language made up of symbols called temporal logic.
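A greatly simplified sketch can show the flavor of checking a robot's plan against a temporal-logic-style requirement such as "eventually reach the goal, and always avoid the hazard." The region names and the checking routine below are hypothetical simplifications; real symbolic motion planning uses formal temporal-logic model checkers over system models, not hand-written trace checks.

```python
# Illustrative check of a planned trace against a temporal-logic-style
# task requirement: "eventually goal" AND "always not hazard".
# Region names and the checker are assumptions for illustration only.

def satisfies_task(trace):
    eventually_goal = any(region == "goal" for region in trace)
    always_safe = all(region != "hazard" for region in trace)
    return eventually_goal and always_safe


plan_a = ["start", "corridor", "goal"]
plan_b = ["start", "hazard", "goal"]
print(satisfies_task(plan_a))  # True: reaches goal, avoids hazard
print(satisfies_task(plan_b))  # False: passes through the hazard
```

The correctness guarantee Wang describes comes from synthesizing plans that provably satisfy such specifications, rather than checking plans after the fact as this toy does.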
In the third part of the research, the Wang team will develop a “switch” that will help determine when the robot motion should be planned manually and when it can be operated autonomously. Whether the switch is triggered will be based on the level of trust the human operator has in the robot.
The idea is to help robots complete tasks more efficiently while guaranteeing task completion.
A robot that plays it safe could be doing things inefficiently, but a more efficient robot could take more risks with results that are difficult to verify and predict.
“We will investigate how to monitor and check the real-time behavior of the robots under manual motion planning and develop trust-triggered autonomous switching aids for trade-offs between efficiency and safety,” Wang wrote in the grant proposal.
Anand Gramopadhye, dean of the College of Engineering and Science, said landing the two awards in less than a year is an accomplishment rarely attained by a single researcher.
“Congratulations to Dr. Wang,” he said. “The two awards are a reflection of her hard work and ability in conducting exceptional research. Both awards are richly deserved.”