This robot is luckier than I am. I'm usually asked to click every image with a traffic light in it to prove I'm not a robot.
One of the many amazing things about Japan is its abundance of robots, from a robot-staffed hotel to robot waiters to robots that teach English to children. This cool robot, made by the sushi-robot company AUTEC, can make 2400 nigiri rice balls and 200 sushi rolls per hour.
Several years ago, I wrote a feature for Bloomberg Businessweek about soft robotics, "in which steel skeletons and power-hungry motors make way for textiles." The idea is that soft robots, often powered by compressed air in pneumatic "muscles," are more flexible, lighter weight, and much safer for their human workmates. Above is video of automation robotics firm Festo's BionicSoftArm. From their description:
Whether free and flexible movements or defined sequences, thanks to its modular design, the pneumatic lightweight robot can be used for numerous applications. In combination with various adaptive grippers, it can pick up and handle a wide variety of objects and shapes. At the same time, it is completely compliant and poses no danger to the user even in the event of a collision.
Jibo was a "social robot" startup that burned through $76m in venture capital and crowdfunding before having its assets sold to SQN Venture Partners late last year.
This is the Android Kannon, a robotic manifestation of the Buddhist bodhisattva associated with mercy.
MIT researchers developed a robot that can play Jenga based on a novel approach to machine learning that synthesizes sight and touch. From MIT News:
Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.
“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” Rodriguez says. “This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”
He says the tactile learning system the researchers have developed can be used in applications beyond Jenga, especially in tasks that need careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.
“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” Rodriguez says.
The 3D-printed robot above weighs just one milligram and is only 2.5mm at its longest point. Designed by University of Maryland mechanical engineer Ryan St. Pierre and his colleagues, it is likely the smallest walking robot in the world. Video of the microbot scurrying along is below. From IEEE Spectrum:
Like its predecessors, this robot is far too small for traditional motors or electronics. Its legs are controlled by external magnetic fields acting on tiny cubic magnets embedded in the robot’s hips. Rotating magnetic fields cause the magnets to rotate, driving the legs at speeds of up to 150 Hz. With all of the magnets installed into the hips in the same orientation, you get a pronking gait, but other gaits are possible by shifting the magnets around a bit. Top speed is an impressive 37.3 mm/s, or 14.9 body lengths per second, and somewhat surprisingly, the robot seems to be quite durable—it was tested for 1,000,000 actuation cycles “with no signs of visible wear or decreased performance.”
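As a quick back-of-the-envelope check (not from the paper itself), the reported speed and body length are consistent with the quoted 14.9 body lengths per second:

```python
# Sanity check of the reported microbot figures.
body_length_mm = 2.5   # robot's longest dimension
top_speed_mm_s = 37.3  # reported top speed
step_rate_hz = 150     # maximum leg-drive frequency

# Speed in body lengths per second matches the quoted 14.9.
bl_per_s = top_speed_mm_s / body_length_mm
print(f"{bl_per_s:.1f} body lengths/s")    # → 14.9

# Distance covered per actuation cycle at top speed.
mm_per_cycle = top_speed_mm_s / step_rate_hz
print(f"{mm_per_cycle:.2f} mm per cycle")  # → 0.25
```

That works out to roughly a quarter-millimeter of travel per leg cycle, a tenth of the robot's body length.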
Who first imagined the concept of robots?
Most historians of science trace the first automatons to the Middle Ages. But I wondered, was it possible that ideas about creating artificial life were thinkable long before technology made such enterprises possible? Remarkably, as early as the time of Homer, ancient Greek myths were envisioning how to imitate, augment, and surpass nature, by means of biotechne, “life through craft”—what we now call biotechnology. Beings described as fabricated, “made, not born,” appeared in myths about Jason and the Argonauts, the sorceress Medea, the bronze robot Talos, the ingenious craftsman Daedalus, the fire-bringer Prometheus, and Pandora, the female android created by Hephaestus, god of invention. These vivid stories were ancient thought experiments set in an alternate world where technology was marvelously advanced.
Modern sci-fi movies pop up in several chapters. How do they relate to ancient myths?
Some 2,500 years before movies were invented, ancient Greek vase painters illustrated popular stories of the bronze robot warrior Talos, the techno-wizard Medea, and the fembot Pandora dispatched to earth on an evil mission, in ways that seem very “cinematic...” Movies and myths about imagined technology are cultural dreams. Like contemporary science fiction tales, the myths show how the power of imagination allows humans to ponder how artificial life might be created—if only one possessed sublime technology and genius.
Continuing the quest to design robots that could travel through our bodies to deliver drugs and cure disease, researchers at EPFL and ETH Zurich demonstrated tiny shape-shifting microrobots that swim through blood vessels. Made from hydrogel nanocomposites, the microbots can fold into various shapes to squeeze through tight spaces and move with dense, viscous, or fast-moving liquids. The microbots are peppered with magnetic nanoparticles so that they can be "steered" with an external magnetic field. From EPFL:
“Our robots have a special composition and structure that allow them to adapt to the characteristics of the fluid they are moving through. For instance, if they encounter a change in viscosity or osmotic concentration, they modify their shape to maintain their speed and maneuverability without losing control of the direction of motion,” says (EPFL researcher Selman) Sakar.
These deformations can be “programmed” in advance so as to maximize performance without the use of cumbersome sensors or actuators. The robots can be either controlled using an electromagnetic field or left to navigate on their own through cavities by utilizing fluid flow. Either way, they will automatically morph into the most efficient shape.
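To picture what "programmed in advance" might mean, here is a purely hypothetical sketch of shape selection as a lookup from fluid properties. The real EPFL robots morph passively through their hydrogel structure; the shape names, thresholds, and units below are all invented for illustration:

```python
# Hypothetical sketch of pre-programmed shape adaptation.
# Thresholds, shape names, and units (mPa·s, mm/s) are assumptions,
# not details from the EPFL work.

def select_fold_state(viscosity_mpa_s: float, flow_speed_mm_s: float) -> str:
    """Pick a pre-programmed fold configuration for the ambient fluid."""
    if flow_speed_mm_s > 10:
        return "streamlined-helix"  # minimize drag in fast flow
    if viscosity_mpa_s > 5:
        return "tight-fold"         # stay maneuverable in thick fluid
    return "open-ribbon"            # default shape in thin, slow fluid

print(select_fold_state(viscosity_mpa_s=8.0, flow_speed_mm_s=2.0))  # tight-fold
```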
Hyundai developed a "walking car" and "unveiled" it at the CES trade show yesterday. As far as I can tell it's just a mockup video, but the idea is timely. Traditionally, only flimsy and obviously impractical spider-legged robots could handle rough terrain, while two- and four-legged ones were too unstable to get far. Will Boston Dynamics soon have serious competition? Add your thoughts in the comments, increasingly-useless waterlogged meatbags!
The host of Life Where I'm From, a YouTube channel about life as a foreigner in Japan, visited a pop-up cafe with robot waiters that are remotely controlled by people with disabilities. The workers control the robots from their homes, and can speak with customers using a microphone.
In December of 2018 I visited a temporary robot cafe, but it's not the type of Japanese robot cafe that comes to mind for most. Rather than a robotic show, this was a cafe where the robot waiters were in fact avatars for people with disabilities, who remotely controlled them from their homes.
This cafe was part of an initiative by The Nippon Foundation, Ory Lab Inc., and ANA Holdings Inc. It allowed people with ALS and other severe disabilities to work and interact with the world.
Dawn ver.β is a cafe in Tokyo's Akasaka district where all the table service is performed by 120 cm tall OriHime-D robots that are piloted by people who are paralyzed and work from home; it was inspired by a fictional cafe in the 2008 anime Time of Eve.
State-owned TV network Russia-24 ran a story about an impressive humanoid robot named Boris that wowed attendees at a youth technology conference. Turns out, Boris the Robot was actually a man inside a commercially-available, high-end robot costume. From The Guardian:
A photograph published by MBKh Media, the news agency founded by the Vladimir Putin opponent Mikhail Khodorkovsky, appeared to show the actor in the robot suit ahead of the forum on Tuesday in Yaroslavl, a city about 150 miles north-east of Moscow.
The organisers of the Proyektoria technology forum, held each year for the “future intellectual leaders of Russia”, did not try to pass off the robot as real, the website reported.
But whether by mistake or design, the state television footage did just that. “It’s entirely possible one of these [students] could dedicate himself to robotics,” an anchor reported. “Especially as at the forum they have the opportunity to look at the most modern robots.”
Elowan is a "plant-robot hybrid" that uses its own bio-electromechanical signaling to drive itself around toward light sources. From an explanation by researcher Harpreet Sareen and his colleagues at the MIT Media Lab:
In this experimental setup, electrodes are inserted into the regions of interest (stems and ground, leaf and ground). The weak signals are then amplified and sent to the robot to trigger movements to respective directions.
Such symbiotic interplay with the artificial could be extended further with exogenous extensions that provide nutrition, growth frameworks, and new defense mechanisms.
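The loop the researchers describe — weak plant signals amplified and mapped to robot motion — can be pictured with a minimal sketch. The gain, threshold, and steering logic below are invented for illustration, not taken from the MIT Media Lab's actual implementation:

```python
# Hypothetical sense-amplify-steer loop for a plant-robot hybrid.
# GAIN and THRESHOLD_V are assumed values; the real Elowan hardware
# details are not published in this post.

GAIN = 1000.0      # amplifier gain (assumption)
THRESHOLD_V = 0.5  # amplified-signal level that triggers a turn (assumption)

def steer(stem_mv: float, leaf_mv: float) -> str:
    """Map the differential plant signal to a drive command."""
    diff_v = GAIN * (stem_mv - leaf_mv) / 1000.0  # mV -> V after gain
    if diff_v > THRESHOLD_V:
        return "turn-left"
    if diff_v < -THRESHOLD_V:
        return "turn-right"
    return "forward"

print(steer(stem_mv=1.2, leaf_mv=0.4))  # turn-left
```

The key idea is only that a differential measurement between two electrode sites, once amplified, is enough to pick a direction.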
Astronauts on board the International Space Station have switched on CIMON (Crew Interactive Mobile CompanioN), a new AI companion robot built by German space agency DLR, Airbus, and IBM. CIMON is an interface for IBM's Watson AI system. From Space.com:
Marco Trovatello, a spokesman of the European Space Agency's Astronaut Centre in Cologne, Germany, told Space.com that CIMON could respond within a couple of seconds after a question was asked, no slower than in ground-based tests.
A data link connects CIMON with the Columbus control center in Germany; from there, the signal travels first to the Biotechnology Space Support Center at the Lucerne University in Switzerland, where CIMON's control team is based. Then, the connection is made over the internet to the IBM Cloud in Frankfurt, Germany, Bernd Rattenbacher, the team leader at the ground control centre at Lucerne University, said in the statement...
"CIMON is a technology demonstration of what a future AI-based assistant on the International Space Station or on a future, longer-term exploration mission would look like," Trovatello said. "In the future, an astronaut could ask CIMON to show a procedure for a certain experiment, and CIMON would do that."
After sticking a perfect landing on the Martian surface this afternoon, NASA's InSight robot lander has successfully deployed its solar panels. Tomorrow, InSight will fire up its scientific instruments and get to work collecting data about the planet's interior. From NASA/JPL-Caltech:
NASA's InSight has sent signals to Earth indicating that its solar panels are open and collecting sunlight on the Martian surface. NASA's Mars Odyssey orbiter relayed the signals, which were received on Earth at about 5:30 p.m. PST (8:30 p.m. EST). Solar array deployment ensures the spacecraft can recharge its batteries each day. Odyssey also relayed a pair of images showing InSight's landing site.
"The InSight team can rest a little easier tonight now that we know the spacecraft solar arrays are deployed and recharging the batteries," said Tom Hoffman, InSight's project manager at NASA's Jet Propulsion Laboratory in Pasadena, California, which leads the mission. "It's been a long day for the team. But tomorrow begins an exciting new chapter for InSight: surface operations and the beginning of the instrument deployment phase..."
In the coming days, the mission team will unstow InSight's robotic arm and use the attached camera to snap photos of the ground so that engineers can decide where to place the spacecraft's scientific instruments. It will take two to three months before those instruments are fully deployed and sending back data.
In the meantime, InSight will use its weather sensors and magnetometer to take readings from its landing site at Elysium Planitia — its new home on Mars.
Researchers at Melbourne, Australia's RMIT University devised these bizarre chest-mounted "third arm" robots to experiment with what they call "playful eating." For science. Video below. From RMIT University's Exertion Games Lab:
In this experience, all three arms (the person’s own two arms and the “third” arm, the robotic arm) are used for feeding oneself and the other person. The robotic arm (third arm) is attached to the body via a vest. We playfully subverted the functioning of the robotic arm so that its final movements (once it has picked up the food), i.e. whether to feed the wearer or the partner, are guided by the facial expressions of the dining partner...
Mapping of the partner’s “more positive” facial expression to the feeding of food to the partner (via the wearer’s third arm) we hoped would elicit joy, laughter, and a sense of sharing based on the knowledge of feeding one another that is associated with positive emotions, however, this could also result in the perception of a loss of agency over what one eats. Through to-and-fro ambiguous movements of the third arm in the air (when sensing a “neutral” facial expression of the dining partner), it gave an opportunity to the diners to express their reactions more vividly, as we know that facial expressions become a key element to engage with a partner while eating.
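The mapping the researchers describe boils down to a simple rule: a positive expression from the dining partner makes the arm feed the partner, while a neutral expression keeps it hovering ambiguously. A minimal sketch of that logic (the labels and the fallback case are hypothetical, not the study's actual classifier):

```python
# Sketch of the expression-to-action mapping described in the study.
# Expression labels are simplified; the fallback for other expressions
# is an assumption on my part.

def third_arm_action(partner_expression: str) -> str:
    """Choose the third arm's next move from the partner's expression."""
    if partner_expression == "positive":
        return "feed-partner"       # positive expression -> feed the partner
    if partner_expression == "neutral":
        return "hover-ambiguously"  # neutral -> to-and-fro movement in the air
    return "feed-wearer"            # assumed fallback

print(third_arm_action("positive"))  # feed-partner
```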
More at IEEE Spectrum: "Feed Your Friends With Autonomous Chest-Mounted Robot Arms"