Elowan is a "plant-robot hybrid" that uses its own bio-electromechanical signaling to drive itself around toward light sources. From an explanation by researcher Harpreet Sareen and his colleagues at the MIT Media Lab:
In this experimental setup, electrodes are inserted into the regions of interest (stems and ground, leaf and ground). The weak signals are then amplified and sent to the robot to trigger movements in the respective directions.
Such symbiotic interplay with the artificial could be extended further with exogenous extensions that provide nutrition, growth frameworks, and new defense mechanisms.
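The electrode-to-motor loop described above can be pictured as a simple differential controller. Here's a minimal Python sketch under stated assumptions: the two amplified channels, the millivolt scale, the deadband value, and the function names are all illustrative, not Sareen's actual implementation.

```python
def read_electrode(channel):
    """Stand-in for an ADC read of the amplified plant signal (mV).

    A real build would replace this with a driver for whatever
    amplifier/ADC hardware is in use.
    """
    raise NotImplementedError

def steer(left_mv, right_mv, deadband_mv=5.0):
    """Turn toward the side with the stronger bioelectric response.

    When the two signals are within the deadband, the robot holds
    position rather than jittering between directions.
    """
    diff = left_mv - right_mv
    if abs(diff) < deadband_mv:
        return "hold"
    return "left" if diff > 0 else "right"
```

In a loop, the robot would poll both channels and drive its wheels according to `steer`'s output, inching the plant toward the light source producing the stronger response.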
Elowan: A plant-robot hybrid Read the rest
Astronauts on board the International Space Station have switched on CIMON (Crew Interactive Mobile CompanioN), a new AI companion robot built by German space agency DLR, Airbus, and IBM. CIMON is an interface for IBM's Watson AI system. From Space.com:
Marco Trovatello, a spokesman of the European Space Agency's Astronaut Centre in Cologne, Germany, told Space.com that CIMON could respond within a couple of seconds after a question was asked, no slower than in ground-based tests.
A data link connects CIMON with the Columbus control center in Germany; from there, the signal travels first to the Biotechnology Space Support Center at the Lucerne University in Switzerland, where CIMON's control team is based. Then, the connection is made over the internet to the IBM Cloud in Frankfurt, Germany, Bernd Rattenbacher, the team leader at the ground control centre at Lucerne University, said in the statement...
"CIMON is a technology demonstration of what a future AI-based assistant on the International Space Station or on a future, longer-term exploration mission would look like," Trovatello said. "In the future, an astronaut could ask CIMON to show a procedure for a certain experiment, and CIMON would do that."
Read the rest
After sticking a perfect landing on the Martian surface this afternoon, NASA's InSight robot lander has successfully deployed its solar panels. Tomorrow, InSight will fire up its scientific instruments and get to work collecting data about the planet's interior. From NASA/JPL-Caltech:
Read the rest
NASA's InSight has sent signals to Earth indicating that its solar panels are open and collecting sunlight on the Martian surface. NASA's Mars Odyssey orbiter relayed the signals, which were received on Earth at about 5:30 p.m. PST (8:30 p.m. EST). Solar array deployment ensures the spacecraft can recharge its batteries each day. Odyssey also relayed a pair of images showing InSight's landing site.
"The InSight team can rest a little easier tonight now that we know the spacecraft solar arrays are deployed and recharging the batteries," said Tom Hoffman, InSight's project manager at NASA's Jet Propulsion Laboratory in Pasadena, California, which leads the mission. "It's been a long day for the team. But tomorrow begins an exciting new chapter for InSight: surface operations and the beginning of the instrument deployment phase..."
In the coming days, the mission team will unstow InSight's robotic arm and use the attached camera to snap photos of the ground so that engineers can decide where to place the spacecraft's scientific instruments. It will take two to three months before those instruments are fully deployed and sending back data.
In the meantime, InSight will use its weather sensors and magnetometer to take readings from its landing site at Elysium Planitia — its new home on Mars.
Researchers at RMIT University in Melbourne, Australia devised these bizarre chest-mounted "third arm" robots to experiment with what they call "playful eating." For science. Video below. From RMIT University's Exertion Games Lab:
In this experience, all three arms (the person’s own two arms and the “third” arm, the robotic arm) are used for feeding oneself and the other person. The robotic arm (third arm) is attached to the body via a vest. We playfully subverted the functioning of the robotic arm so that its final movements (once it has picked up the food), i.e. whether to feed the wearer or the partner, are guided by the facial expressions of the dining partner...
Mapping of the partner’s “more positive” facial expression to the feeding of food to the partner (via the wearer’s third arm) we hoped would elicit joy, laughter, and a sense of sharing based on the knowledge of feeding one another that is associated with positive emotions, however, this could also result in the perception of a loss of agency over what one eats. Through to-and-fro ambiguous movements of the third arm in the air (when sensing a “neutral” facial expression of the dining partner), it gave an opportunity to the diners to express their reactions more vividly, as we know that facial expressions become a key element to engage with a partner while eating.
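The expression-to-action mapping the researchers describe boils down to a small decision rule. This is a hypothetical Python sketch: the valence scale, the thresholds, and the feed-the-wearer branch for negative expressions are assumptions, not the Arm-A-Dine code (the quote only describes the positive and neutral cases).

```python
def arm_action(valence):
    """Map the dining partner's facial-expression valence to a third-arm move.

    `valence` is assumed to be a score in [-1, 1] from an expression
    classifier: positive means a happy expression, negative unhappy.
    """
    if valence > 0.3:
        return "feed_partner"   # positive expression: share the food
    if valence < -0.3:
        return "feed_wearer"    # assumed behavior for a negative expression
    return "waver"              # neutral: ambiguous to-and-fro in the air
```

The "waver" case is what gives diners time to react more vividly, per the excerpt above.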
"Arm-A-Dine: Towards Understanding the Design of Playful Embodied Eating Experiences" (PDF)
More at IEEE Spectrum: "Feed Your Friends With Autonomous Chest-Mounted Robot Arms"
Read the rest
This is the new version of Affetto, the robot child head that's a testbed for synthetic facial expressions. According to the Osaka University researchers who birthed Affetto, their goal is to "offer a path for androids to express greater ranges of emotion, and ultimately have deeper interaction with humans." From Osaka University:
The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. Facial points were underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising of part of a lip or eyelid. Measurements from these were then subjected to a mathematical model to quantify their surface motion patterns.
While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to employ their system to adjust the deformation units for precise control of Affetto’s facial surface motions.
“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” study first author Hisashi Ishihara says. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”
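One toy way to picture "deformation units" driving facial surface points is a linear basis model: each unit contributes a fixed displacement pattern, scaled by how hard it is actuated. This is a speculative Python sketch, not the Osaka team's actual mathematical model; the unit count and patterns here are made up (Affetto itself has 116 measured facial points).

```python
import numpy as np

def face_displacement(basis, activations):
    """Sum each deformation unit's displacement pattern, scaled by activation.

    basis:       array of shape (units, points, 3) -- each unit's 3D
                 displacement pattern over the facial points
    activations: array of shape (units,) -- how hard each unit is driven
    returns:     array of shape (points, 3) -- net surface displacement
    """
    return np.tensordot(activations, basis, axes=1)
```

Fitting such a basis to measured surface motion is, roughly, the kind of quantification the researchers describe using to tune Affetto's deformation units.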
Read the rest
Robert Sabuda (previously) writes, "It has long been a dream of the Leonardo da Vinci Robot Society to bring one of the Renaissance Master’s creations back to life. 2019 is the 500th anniversary of da Vinci’s death and the Society has chosen to honor his memory and celebrate his life through one of his best known inventions - the Robot Knight. This robot is an early proto-computer android whose read-only programmable memory allowed it to perform many actions. And it was also rumored that the robot’s arm could perform an extremely complex task…draw a picture!"
Read the rest
Set aside your feeble 3D printers, meatbags. The Grind Master demonstrates the supremacy of fully-automated robot milling of stone, wood and other sculptural materials. Here the Buddha's head is sculpted from foam:
Read the rest
IEEE Spectrum editor Erico Guizzo and colleagues have blown out their original Robots app into a fantastic catalog of 200 of today's species of robots. They're cleverly organized into fun categories like "Robots You Can Hug," "Robots That Can Dance," "Space Robots," and "Factory Workers." If they keep it updated, it'll be very helpful for the robot uprising. After all, you can't tell the players without a program!
Robots: Your Guide to the World of Robotics (IEEE Spectrum)
Read the rest
They're figuring out what we really want from a 21st Century deathbot: moves.
When was the last time a human was seen in one of these videos? Perhaps in the next one we'll see a human crawling on all fours over ice, making loud engine noises between terrified whimpers, only for a perfectly stable bipedal robot to lunge in from off-screen and kick it. Then we'll know what has become of the fleshbags at Boston Dynamics. Read the rest
Boston Dynamics has just released this astounding video of their Atlas humanoid robot doing parkour:
The control software uses the whole body including legs, arms and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace. (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately.
Unfortunately the engineers failed to outfit Atlas with a speech synthesizer to yell "Parkour! Parkour! Parkour!" like so.
Read the rest
There's a robot apocalypse coming, but it's likely not going to result in the loss of billions of human lives. Rather, it's our livelihoods that are at stake. For some vocations, signs of a paradigm shift are already here. The HRP-5P humanoid robot is designed to be a drywall-hanging machine. It's slow now, but it's capable. Sooner or later, it'll be fast enough and cheap enough to make skilled construction labor a thing of the past. My grandfather, who spent the better part of his life building churches, homes and movie theaters, would have shit a brick were he alive to see this.
Have no doubt, no matter what you do for a living, that similar appliances are on their way to make our daily toil a redundancy. Society's going to need to learn to adapt--fast. Read the rest
Yale engineers developed "robotic skins" from elastic sheets integrating sensors and electromechanical actuators. The idea is that most any flexible object could be transformed into a robot. Professor Rebecca Kramer-Bottiglio and her colleagues reported on their project, called OmniSkins, in the journal Science Robotics. From YaleNews:
Placed on a deformable object — a stuffed animal or a foam tube, for instance — the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.
“We can take the skins and wrap them around one object to perform a task — locomotion, for example — and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she said. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
Read the rest
HAL is described as the "world's most advanced" Pediatric Patient Simulator. HAL simulates lifelike emotions through "dynamic facial expressions, movement and speech." Gaumard Scientific's video promises "amazed, transient pain, crying, and more." [via @3liza]
HAL not only looks like a boy, he behaves like one. He can track a finger with his eyes, answer questions, cry for his mother and experience anaphylactic shock. He can even breathe faster and/or urinate when scared. And he has also been built in a way that allows doctors and nurses in-training to perform a myriad of tests such as taking blood pressure, checking his pulse and monitoring breathing. Trainees can also use real medical equipment such as an EKG machine or a heart or blood pressure monitor—or tools such as a scalpel or breathing tubes—to perform realistic medical procedures.
Here's HAL's ad.
Read the rest
Michael Froomkin writes, "We Robot, now heading into its 8th year, is lots of fun -- and it's also the leading North American conference on robotics law and policy. The 2019 edition will be held at the University of Miami on April 12-13, 2019, preceded by a day of special workshops on April 11. We just today opened the submissions portal for paper and demo proposals. Full details are in the Call for Papers."
Read the rest
A couple of weeks ago Carla and I went to the newly opened Japan House in Hollywood to see a presentation about prototyping of robots. Japan House is a combination gallery, shop, event venue, and restaurant at Hollywood and Highland that "seeks to foster awareness and appreciation for Japan around the world by showcasing the very best of Japanese art, design, gastronomy, innovation, technology, and more."
The current exhibition at Japan House focuses on the work of professor Shunji Yamanaka, who leads the Yamanaka Laboratory at the University of Tokyo. Visitors can pick up and inspect 3D printed prototypes of lifelike robotic creatures that look like insects, lizards, and otherworldly animals. Many of the robots are outfitted with motors and they move in lifelike ways.
Professor Yamanaka gave a presentation of his work that evening. I learned that he was the inventor of the card-activated gate that's used by millions of Japanese every day when they ride the rail system. I used these gates dozens of times when I was in Japan this summer to ride the subway, activating the gates with a stored value card called SUICA. (You can also open the gates with a smart watch, like the Apple Watch.)
Professor Yamanaka also co-created, along with Takayuki Furuta of the Future Robotics Technology Center at Chiba Institute of Technology, a robot called the CanguRo (Spanish for kangaroo). The robot can be ridden like a motorized scooter or it can roll alongside you and carry heavy items. Here's a video of it in action:
“Prototyping in Tokyo” runs until 10 October 2018, Monday – Saturday from 10 am – 8 pm and Sunday from 10 am – 7 pm. Read the rest
The Smarter Bridge is a project led by MX3D, which makes robotic 3D printers that can sinter stainless steel structures and inch their way along the surfaces as they are completed.
Read the rest
When fire ants dig out a new nest underground, a small number are actually doing most of the work while the rest dilly-dally. Apparently this is an effective division of labor because it prevents the insects from getting in each other's way. Now, Georgia Tech researchers suggest this approach could help future robot swarms be more efficient in cramped areas like collapsed buildings or construction sites. From Science News:
(Physicist Daniel) Goldman’s team created computer simulations of two ant colonies digging tunnels. In one, the virtual ants mimicked the real insects’ unequal work split; in the other, all the ants pitched in equally. The colony with fewer heavy lifters was better at keeping tunnel traffic moving; in three hours, that colony dug a tunnel that was about three times longer than the group of ants that all did their fair share.
Goldman’s team then tested the fire ants’ teamwork strategy on autonomous robots. These robots trundled back and forth along a narrow track, scooping up plastic balls at one end and dumping them at the other. Programming the robots to do equal work is “not so bad when you have two or three,” Goldman says, “but when you get four in that little narrow tunnel, forget about it.” The four-bot fleet tended to get stuck in pileups. Programming the robots to share the workload unequally helped avoid these smashups and move material 35 percent faster, the researchers found.
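The "more workers can mean less work done" effect is easy to reproduce with a toy congestion model. This is a made-up illustration, not Goldman's simulation; the tunnel capacity and jam penalty are arbitrary numbers chosen to show the shape of the tradeoff.

```python
def throughput(active, capacity=2, jam_penalty=1.5):
    """Toy model of digging throughput in a narrow tunnel.

    Up to `capacity` agents dig at full rate (1 unit each per step).
    Agents beyond capacity cause pileups that cost more than they
    contribute, so throughput falls -- and can hit zero.
    """
    if active <= capacity:
        return float(active)
    excess = active - capacity
    return max(0.0, capacity - jam_penalty * excess)

# Equal split: all 4 robots crowd the tunnel and jam.
# Unequal split: only 2 "heavy lifters" work, the rest idle.
equal_split = throughput(4)
unequal_split = throughput(2)
```

Under these (invented) parameters the two-robot "lazy" fleet out-digs the four-robot fleet, echoing the pileups Goldman's team saw with four robots in the narrow track.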
"Collective clog control: Optimizing traffic flow in confined biological and robophysical excavation" (Science)
(image: Stephen Ausmus/Wikipedia) Read the rest