Hummingbird robot uses AI to soon go where drones can’t
press release
https://www.purdue.edu/newsroom/releases/2019/Q2/hummingbird-robot-uses-ai-to-soon-go-where-drones-cant.html
WEST LAFAYETTE, Ind. — What can fly like a bird and hover like an insect?
Your friendly neighborhood hummingbirds. If drones had this combo, they would be able to maneuver better through collapsed buildings and other cluttered spaces to find trapped victims.
Purdue University researchers have engineered flying robots that behave like hummingbirds, trained by machine learning algorithms based on various techniques the bird uses naturally every day.
This means that after learning from a simulation, the robot “knows” how to move around on its own like a hummingbird would, such as discerning when to perform an escape maneuver.
Artificial intelligence, combined with flexible flapping wings, also allows the robot to teach itself new tricks. The robot can’t see yet, for example, but it senses by touching surfaces. Each touch alters an electrical current, which the researchers realized they could track.
“The robot can essentially create a map without seeing its surroundings. This could be helpful in a situation when the robot might be searching for victims in a dark place – and it means one less sensor to add when we do give the robot the ability to see,” said Xinyan Deng, an associate professor of mechanical engineering at Purdue.
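To make the idea concrete, here is a minimal sketch (not the team’s code) of how contact events inferred from motor-current spikes could be accumulated into a simple map. The spike threshold, grid resolution, and class names are all hypothetical:

```python
# Hypothetical sketch of touch-based mapping. The spike threshold and
# grid resolution are illustrative, not from the Purdue robot.
import numpy as np

CURRENT_SPIKE_THRESHOLD = 0.15  # amps above baseline treated as "contact"

class TouchMapper:
    def __init__(self, resolution=0.05):
        self.resolution = resolution  # meters per grid cell
        self.occupied = set()         # sparse map of touched cells

    def update(self, position, motor_current, baseline_current):
        """Log the robot's current cell as occupied when the wing-motor
        current jumps, indicating the wing brushed a surface."""
        if motor_current - baseline_current > CURRENT_SPIKE_THRESHOLD:
            cell = tuple(np.round(np.asarray(position) / self.resolution).astype(int))
            self.occupied.add(cell)
            return True  # contact detected at this position
        return False
```

Accumulated over a flight, the set of touched cells is exactly the “map without seeing” Deng describes.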
The researchers will present their work on May 20 at the 2019 IEEE International Conference on Robotics and Automation in Montreal.
A YouTube video is available at https://www.youtube.com/watch?v=jhl892dHqfA&feature=youtu.be.
Drones can’t be made infinitely smaller, due to the way conventional aerodynamics works: below a certain size, they can’t generate enough lift to support their own weight.
But hummingbirds don’t use conventional aerodynamics – and their wings are resilient. “The physics is simply different; the aerodynamics is inherently unsteady, with high angles of attack and high lift. This makes it possible for smaller, flying animals to exist, and also possible for us to scale down flapping wing robots,” Deng said.
Researchers have been trying for years to decode hummingbird flight so that robots can fly where larger aircraft can’t. In 2011, the company AeroVironment, commissioned by DARPA, an agency within the U.S. Department of Defense, built a robotic hummingbird that was heavier than a real one but not as fast, with helicopter-like flight controls and limited maneuverability. It required a human to be behind a remote control at all times.
Deng’s group and her collaborators studied hummingbirds themselves for multiple summers in Montana. They documented key hummingbird maneuvers, such as making a rapid 180-degree turn, and translated them to computer algorithms that the robot could learn from when hooked up to a simulation.
Further study of the physics of insects and hummingbirds allowed Purdue researchers to build robots smaller than hummingbirds – and even as small as insects – without compromising the way they fly. The smaller the size, the greater the wing-flapping frequency, and the more efficiently they fly, Deng said.
The robots have 3D-printed bodies, wings made of carbon fiber and laser-cut membranes. The researchers have built one hummingbird robot weighing 12 grams – the weight of the average adult magnificent hummingbird – and another insect-sized robot weighing 1 gram. The hummingbird robot can lift more than its own weight, up to 27 grams.
Designing their robots with higher lift gives the researchers more wiggle room to eventually add a battery and sensing technology, such as a camera or GPS. Currently, the robot needs to be tethered to an energy source while it flies – but that won’t be for much longer, the researchers say.
The robots could fly silently, just as a real hummingbird does, making them well suited to covert operations. And they stay steady through turbulence, which the researchers demonstrated by testing dynamically scaled wings in an oil tank.
The robot requires only two motors and can control each wing independently of the other, which is how flying animals perform highly agile maneuvers in nature.
“An actual hummingbird has multiple groups of muscles to do power and steering strokes, but a robot should be as light as possible, so that you have maximum performance on minimal weight,” Deng said.
Robotic hummingbirds wouldn’t just help with search-and-rescue missions; they would also allow biologists to study hummingbirds more reliably in their natural environment, through the senses of a realistic robot.
“We learned from biology to build the robot, and now biological discoveries can happen with extra help from robots,” Deng said.
Simulations of the technology are available open-source at https://github.com/purdue-biorobotics/flappy.
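For readers who want to try the simulator, it follows the standard OpenAI Gym workflow. A minimal sketch, assuming the package import name and environment ID match the repository (check its README for the registered names):

```python
# Minimal sketch of driving the open-source FWMAV simulator through its
# Gym interface. The import name and environment ID are assumptions
# based on the repository; consult its README for the registered names.
import gym
import flappy  # assumed to register the FWMAV environments with Gym

env = gym.make("fwmav_hover-v0")  # assumed ID for the hovering task
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random wing commands
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```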
Early stages of the work, including the Montana hummingbird experiments in collaboration with Bret Tobalske’s group at the University of Montana, were financially supported by the National Science Foundation.
This work aligns with Purdue's Giant Leaps celebration, acknowledging the university’s global advancements made in AI, algorithms and automation as part of Purdue’s 150th anniversary. This is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.
Writer: Kayla Wiles, 765-494-2432, wiles5@purdue.edu
Source: Xinyan Deng, 765-494-1513, xdeng@purdue.edu
Note to Journalists: Links to the paper preprints are available in the abstracts. A YouTube video is available at https://www.youtube.com/watch?v=jhl892dHqfA&feature=youtu.be and other multimedia can be found in a Google Drive folder at https://drive.google.com/open?id=1XrFz3MOj_2jotVjVQOmC5upfD8kWIOnF. Video and photos were prepared by Jared Pike, communications specialist for Purdue University’s School of Mechanical Engineering.
ABSTRACTS
Learning Extreme Hummingbird Maneuvers on Flapping Wing Robots
Fan Fei, Zhan Tu, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.09626
Biological studies show that hummingbirds can perform extreme aerobatic maneuvers during fast escape. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, followed by instant posture stabilization in just under 10 wingbeats. At a wingbeat frequency of 40 Hz, this aggressive maneuver is carried out in just 0.2 seconds. Inspired by the hummingbirds’ near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. We use model-based nonlinear control for nominal flight, where the dynamic model is relatively accurate. During extreme maneuvers, however, the modeling error becomes unmanageable, so a model-free reinforcement learning policy trained in simulation was optimized to “destabilize” the system and maximize performance during maneuvering. The hybrid policy produces a maneuver close to that observed in hummingbirds. Direct simulation-to-real transfer is achieved, demonstrating hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.
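A minimal sketch of the hybrid structure this abstract describes: a model-based controller runs during nominal flight, and the learned model-free policy takes over when the escape maneuver is triggered. The names and switching logic are illustrative only, not the paper’s implementation:

```python
# Illustrative hybrid control policy: model-based control for nominal
# flight, a learned model-free policy during the extreme maneuver.
def hybrid_policy(state, maneuver_active, nominal_controller, learned_policy):
    if maneuver_active:
        # The RL policy, trained in simulation, deliberately "destabilizes"
        # the nominal dynamics to drive the fast evasive turn.
        return learned_policy(state)
    # The model-based nonlinear controller handles hover and tracking,
    # where the dynamic model is accurate.
    return nominal_controller(state)
```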
Acting is Seeing: Navigating Tight Space Using Flapping Wings
Zhan Tu, Fan Fei, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.08688
Wings of flying animals can not only generate lift and control torques but also sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small bio-inspired robotic flyers, whose weight, size, and power are under stringent constraints. In this work, we present the first flapping-wing robot that uses its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with a 17-cm wingspan and 12-gram weight, with a pair of 30-40 Hz flapping wings driven by only two actuators. By interpreting wing loading feedback and its variations, the vehicle can detect environmental changes such as grounds, walls, stairs, obstacles and wind gusts. The instantaneous wing loading can be obtained by measuring and interpreting the current feedback of the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and traversing a narrow corridor. To ensure flight stability, a robust controller was designed to handle unforeseen disturbances during flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception.
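As a rough illustration of the sensing idea, motor current can stand in for instantaneous wing loading, and sustained deviations from the free-flight baseline can be flagged as environmental changes. The current-to-loading gain, window length, and thresholds below are placeholders, not the paper’s calibration:

```python
# Sketch of inferring environmental changes from wing loading estimated
# via motor-current feedback. All constants are placeholders.
from collections import deque

class WingLoadingSensor:
    def __init__(self, window=40, gain=1.0):
        self.history = deque(maxlen=window)  # ~1 s of data at 40 Hz wingbeats
        self.gain = gain                     # assumed current-to-loading gain

    def update(self, motor_current):
        loading = self.gain * motor_current  # instantaneous loading estimate
        self.history.append(loading)

    def detect_change(self, free_flight_loading, tol=0.1):
        """Flag surfaces or gusts as sustained deviations of mean wing
        loading from its free-flight value."""
        if len(self.history) < self.history.maxlen:
            return None  # not enough data yet
        mean = sum(self.history) / len(self.history)
        if mean > free_flight_loading * (1 + tol):
            return "surface_nearby"  # e.g., ground effect raises loading
        if mean < free_flight_loading * (1 - tol):
            return "gust_or_dropoff"
        return None
```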
Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals
Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.09628
Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small-scale man-made vehicles. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap. However, the design and control of such systems remain challenging due to various constraints. Here, we present an open-source, high-fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs. To validate the simulation, we recreated the hummingbird-scale robot developed in our lab in the simulation and performed system identification to obtain the model parameters. The force generation and the open-loop and closed-loop dynamic responses of simulated and experimental flights were compared and validated. The unsteady aerodynamics and the highly nonlinear flight dynamics present challenging control problems for conventional and learning control algorithms such as reinforcement learning. The interface of the simulation is fully compatible with the OpenAI Gym environment. As a benchmark study, we present a linear controller for hovering stabilization and a deep reinforcement learning control policy for goal-directed maneuvering. Finally, we demonstrate direct simulation-to-real transfer of both control policies onto the physical robot, further demonstrating the fidelity of the simulation.
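The benchmark linear hovering controller mentioned above could be wired into the Gym interface roughly as follows; the environment ID, observation layout, and gain matrix are assumptions rather than the repository’s actual example:

```python
# Sketch of benchmarking a linear state-feedback hovering controller in
# the simulator. Environment ID, Box-space layout, and gains are assumed.
import numpy as np
import gym
import flappy  # noqa: F401  (assumed to register the FWMAV environments)

env = gym.make("fwmav_hover-v0")  # assumed ID, as in the sketch above
K = np.full((env.action_space.shape[0], env.observation_space.shape[0]), 0.1)

obs = env.reset()
total_reward = 0.0
for _ in range(500):
    # u = -K x, clipped to the actuator limits
    action = np.clip(-K @ obs, env.action_space.low, env.action_space.high)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
print("hover episode return:", total_reward)
```

A real gain matrix would come from LQR or pole placement on the identified model; the uniform 0.1 here is only to make the loop runnable.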
DARPA to launch competition for AI-powered aircraft dogfighting
https://www.flightglobal.com/news/articles/darpa-to-launch-competition-for-ai-powered-aircraft-458138/
DARPA's press release
https://www.darpa.mil/news-events/2019-05-08
Artificial intelligence has defeated chess grandmasters, Go champions, professional poker players, and, now, world-class human experts in the online strategy games Dota 2 and StarCraft II. No AI currently exists, however, that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight. As modern warfare evolves to incorporate more human-machine teaming, DARPA seeks to automate air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle.
Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight. As soon as new fighter pilots learn to take off, navigate, and land, they are taught aerial combat maneuvers. Contrary to popular belief, new fighter pilots learn to dogfight because it represents a crucible where pilot performance and trust can be refined. To accelerate the transformation of pilots from aircraft operators to mission battle commanders — who can entrust dynamic air combat tasks to unmanned, semi-autonomous airborne assets from the cockpit — the AI must first prove it can handle the basics.
To pursue this vision, DARPA created the Air Combat Evolution (ACE) program. ACE aims to increase warfighter trust in autonomous combat technology by using human-machine collaborative dogfighting as its initial challenge scenario. DARPA will hold a Proposers Day for interested researchers on May 17, 2019, in Arlington, Virginia.
“Being able to trust autonomy is critical as we move toward a future of warfare involving manned platforms fighting alongside unmanned systems,” said Air Force Lt. Col. Dan Javorsek (Ph.D.), ACE program manager in DARPA’s Strategic Technology Office (STO). “We envision a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects.”
ACE is one of several STO programs designed to enable DARPA’s “mosaic warfare” vision. Mosaic warfare shifts warfighting concepts away from a primary emphasis on highly capable manned systems — with their high costs and lengthy development timelines — to a mix of manned and less-expensive unmanned systems that can be rapidly developed, fielded, and upgraded with the latest technology to address changing threats. Linking together manned aircraft with significantly cheaper unmanned systems creates a “mosaic” where the individual “pieces” can easily be recomposed to create different effects or quickly replaced if destroyed, resulting in a more resilient warfighting capability.
The ACE program will train AI in the rules of aerial dogfighting similar to how new fighter pilots are taught, starting with basic fighter maneuvers in simple, one-on-one scenarios. While highly nonlinear in behavior, dogfights have a clearly defined objective, a measurable outcome, and the inherent physical limitations of aircraft dynamics, making them a good test case for advanced tactical automation. As in human pilot combat training, the AI’s performance expansion will be closely monitored by fighter-instructor pilots in the autonomous aircraft, which will help tactics co-evolve with the technology. These subject matter experts will play a key role throughout the program.
“Only after human pilots are confident that the AI algorithms are trustworthy in handling bounded, transparent and predictable behaviors will the aerial engagement scenarios increase in difficulty and realism,” Javorsek said. “Following virtual testing, we plan to demonstrate the dogfighting algorithms on sub-scale aircraft leading ultimately to live, full-scale manned-unmanned team dogfighting with operationally representative aircraft.”
DARPA seeks a broad spectrum of potential proposers for each area of study, including small companies and academics with little previous experience with the Defense Department. To that end, before Phase 1 of the program begins, DARPA will sponsor a stand-alone, limited-scope effort focused on the first technical area: automating individual tactical behavior for one-on-one dogfights. Called the “AlphaDogfight Trials,” this initial solicitation will be issued by AFWERX, an Air Force innovation catalyst with the mission of finding novel solutions to Air Force challenges at startup speed. The AFWERX trials will pit AI dogfighting algorithms against each other in a tournament-style competition.
“Through the AFWERX trials, we intend to tap the top algorithm developers in the air combat simulation and gaming communities,” Javorsek said. “We want them to help lay the foundational AI elements for dogfights, on which we can build as the program progresses.”
AFWERX will announce the trials in the near future on its website: https://www.afwerx.af.mil/.
For ACE Proposers Day registration details, please visit: https://go.usa.gov/xmnMn