Pushing forward on the vision of "programmable matter," MIT researchers demonstrated a new kind of assembly system based on robots that can collaboratively build complicated structures from small identical pieces. Professor Neil Gershenfeld, graduate student Benjamin Jenett, and their colleagues present their research in a scientific paper titled "Material–Robot System for Assembly of Discrete Cellular Structures." From MIT News:
“What’s at the heart of this is a new kind of robotics, that we call relative robots,” Gershenfeld says. Historically, he explains, there have been two broad categories of robotics — ones made out of expensive custom components that are carefully optimized for particular applications such as factory assembly, and ones made from inexpensive mass-produced modules with much lower performance. The new robots, however, are an alternative to both. They’re much simpler than the former, while much more capable than the latter, and they have the potential to revolutionize the production of large-scale systems, from airplanes to bridges to entire buildings.
According to Gershenfeld, the key difference lies in the relationship between the robotic device and the materials that it is handling and manipulating. With these new kinds of robots, “you can’t separate the robot from the structure — they work together as a system,” he says. For example, while most mobile robots require highly precise navigation systems to keep track of their position, the new assembler robots only need to keep track of where they are in relation to the small subunits, called voxels, that they are currently working on.
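The relative-positioning idea can be sketched in a few lines of code. This is a purely illustrative toy, not the MIT team's software: the robot tracks its position as integer indices on a voxel lattice and moves one voxel at a time, so alignment is always re-established against the neighboring subunit rather than against a global coordinate frame.

```python
# Toy sketch of relative (lattice-indexed) positioning. The class name,
# directions, and step logic are illustrative assumptions, not the MIT
# assembler's actual API.

class RelativeAssembler:
    """Tracks position as voxel indices, not global coordinates."""

    def __init__(self):
        self.voxel = (0, 0)  # index of the voxel currently gripped

    def step(self, direction):
        # Move exactly one voxel at a time; alignment is re-established
        # against the neighboring voxel, so position errors don't accumulate
        # the way they would with dead-reckoned global navigation.
        dr, dc = {"north": (-1, 0), "south": (1, 0),
                  "east": (0, 1), "west": (0, -1)}[direction]
        r, c = self.voxel
        self.voxel = (r + dr, c + dc)
        return self.voxel

bot = RelativeAssembler()
bot.step("east")
bot.step("east")
bot.step("south")
print(bot.voxel)  # (1, 2)
```

The point of the sketch is that the robot's "state" is just a pair of small integers relative to the structure it is building, which is why the assemblers can get away without precise absolute navigation.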
OpenAI demonstrated a one-handed robot solving a Rubik's Cube. Apparently, the real breakthrough in this milestone was teaching the system to do the task in simulation. “While the video makes it easy to focus on the physical robot, the magic is mostly happening in simulation, and transferring things learned in simulation to the real world,” writes Evan Ackerman in IEEE Spectrum:
The researchers point out that the method they’ve developed here is general purpose, and you can train a real-world robot to do pretty much any task that you can adequately simulate. You don’t need any real-world training at all, as long as your simulations are diverse enough, which is where the automatic domain randomization comes in. The long-term goal is to reduce the task specialization that’s inherent to most robots, which will help them be more useful and adaptable in real-world applications.
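The core loop of automatic domain randomization can be sketched very compactly. This is a minimal illustration of the concept, with made-up parameter names and thresholds, not OpenAI's implementation: each simulated episode samples physics parameters from ranges that widen whenever the policy succeeds, so the training environments stay as diverse as the policy can handle.

```python
import random

# Sketch of automatic domain randomization (ADR). Parameter names,
# ranges, and the success criterion are illustrative assumptions.
ranges = {"friction": [0.9, 1.1], "cube_mass": [0.95, 1.05]}

def sample_environment():
    """Draw one simulated environment from the current ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def run_policy(env):
    # Placeholder: a real system would roll out the learned policy in
    # simulation and report whether the episode succeeded.
    return True

def widen(name, step=0.05):
    """Expand a parameter's range so future episodes are more varied."""
    ranges[name][0] -= step
    ranges[name][1] += step

for _ in range(20):
    env = sample_environment()
    if run_policy(env):
        widen(random.choice(list(ranges)))

print(ranges)  # each success widened some range, diversifying training
```

The design choice worth noting is that diversity is not fixed in advance: the curriculum of randomization grows automatically with the policy's competence, which is what lets a policy trained entirely in simulation survive contact with the messier real world.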
This soft, inchworm robot changes shape in response to tiny electrical or temperature changes. The power-efficient robot is made from a specialized "programmable" polymer technology that, according to the University of Toronto researchers, could someday not only lead to lighter, safer robots but also enable other kinds of smart materials. From EurekAlert!:
"In situations where humans could be in danger -- a gas leak or a fire -- we could outfit a crawling robot with a sensor to measure the harmful environment," explains Naguib. "In aerospace, we could see smart materials being the key to next-generation aircrafts with wings that morph."
Though he points out that it will be some time before the world sees morphing-wing aircraft, the most immediate impact will be in wearable technology.
"We're working to apply this material to garments. These garments would compress or release based on body temperature, which could be therapeutic to athletes," says Naguib. The team is also studying whether smart garments could be beneficial for spinal cord injuries.
"In this case, we've trained it to move like a worm," he says. "But our innovative approach means we could train robots to mimic many movements -- like the wings of a butterfly."
From Caltech's Center for Autonomous Systems and Technology, LEONARDO (LEg ON Aerial Robotic DrOne) is a bipedal robot that uses dronelike propellers to balance and walk around. Eventually, the propellers will boost LEONARDO's ability to jump. The demo video above was just released. The following is from a February article by Evan Ackerman in IEEE Spectrum:
IEEE Spectrum: Where did the idea for a robot like this come from?
Mory Gharib: For many applications that we’re thinking about for the future, like a flying ambulance project that we have or missions to Mars, there is a huge need for, I would say, a third party—a robotic partner that can, in very extreme situations, conduct scouting or help people in ways that either drones or bipedal robots can’t do. That was the whole idea—we need to have a system that basically can defy gravity to go places where other robots cannot. And because this machine is not going to fly in the way that drones do, because most of the time its legs are on the ground, it can carry a much heavier battery and payload...
If everything works perfectly, what kinds of capabilities will the robot have?
Soon-Jo Chung: Walking on flat terrain; walking, running, and jumping to overcome small obstacles by using the lift generated by the propellers. And it should be able to land in a very soft and stable fashion after it jumps or flies. The ultimate form of demonstration for us will be to build two of these Leonardo robots and then have them play tennis or badminton.
Spot, the robot dog from Boston Dynamics, is now for sale. Sort of. From IEEE Spectrum:
But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sales push, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.
“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”
Do androids dream of electric sashimi?
ETH Zurich engineers demonstrated a system enabling a robot to control a marionette. Although a robotic puppeteer is pretty damn cool, that's not the point of the research.
"Our long term goal is to enable robots to manipulate various types of complex physical systems – clothing, soft parcels in warehouses or stores, flexible sheets and cables in hospitals or on construction sites, plush toys or bedding in our homes, etc – as skillfully as humans do," they write in their technical paper. "We believe the technical framework we have set up for robotic puppeteering will also prove useful in beginning to address this very important grand-challenge."
(via IEEE Spectrum)
The fifth annual World Robot Conference opened to the public in Beijing last Thursday, August 22, and this bionic flying bird based on a herring gull was one of the more spectacular sights.
Other robots on show at the annual event in China included robo-superheroes and Taiji-playing robots.
A rough cut of video from Reuters is here (no reporter narration).
The term exoskeleton usually brings to mind the hulking Power Loader worn by Sigourney Weaver in Aliens. But Harvard University researchers have developed a much lighter, more minimal exoskeleton that reduces the energy needed to run or walk. One breakthrough in this exosuit design is that it can tell if the wearer is walking or running and adjusts the robotic assistance accordingly.
“After wearing the system for 15 minutes or so, you start to question if it’s really helping at all, because you just feel like you’re walking,” David Perry, a robotics engineer at the Wyss Institute for Biologically Inspired Engineering at Harvard University, told Scientific American. “Once you shut it off, however, your legs suddenly feel heavy, and you realize how much it was helping. It’s a lot like stepping off the end of one of those moving sidewalks at the airport.”
Not surprisingly, the research was sponsored by the Defense Advanced Research Projects Agency (DARPA) under its former Warrior Web program. From Harvard:
(The exosuit) assists the wearer via a cable actuation system. The actuation cables apply a tensile force between the waist belt and thigh wraps to generate an external extension torque at the hip joint that works in concert with the gluteal muscles. The device weighs 5 kg in total, with more than 90% of its weight located close to the body’s center of mass.
More: "Reducing the metabolic rate of walking and running with a versatile, portable exosuit" (Science)
Aquanaut is an autonomous submarine developed by Houston Mechatronics Inc. that transforms into a humanoid robot -- well, the upper half anyway -- to service underwater oil and gas rigs. IEEE Spectrum's Evan Ackerman took a dive with Aquanaut in a massive swimming pool that NASA uses to help train astronauts for microgravity. From IEEE Spectrum:
The HMI engineers, who often joke that building a Transformer has been one of their long-term career objectives, are convinced that it can be done. Aquanaut has been designed primarily for servicing subsea oil and gas installations. The companies that own and operate this infrastructure spend vast sums of money to inspect and maintain it. They rely on robotic technologies that haven’t fundamentally changed in decades, largely because of the challenge of working in such an extreme environment. For HMI, however, that’s not a problem: Of its 75 employees, over two dozen used to work for NASA. Extreme environments are what they’re best at.
HMI cofounder and chief technology officer Nic Radford spent 14 years working on advanced robotics projects at NASA’s Johnson Space Center, in Houston. “I’ll grant you that getting into space is harder than getting underwater,” he says. “But space is a pristine environment. Underwater, things are extraordinarily dynamic. I haven’t decided yet whether it’s 10 times harder or 50 times harder for robots working underwater than it is in space..."
Aquanaut will not require a tether or a support ship. It will travel in submarine mode to its deepwater destination, where it’ll transform into its humanoid form, unfolding its powerful arms.
Trying to see the world through someone else's eyes is a great way to build empathy and understanding between people. Turns out, this approach -- when taken literally -- also works with robots. Researchers from the University of Bourgogne, University of Trento, and their colleagues used a head-mounted display to put people "inside" a robot and then studied their "likeability and closeness towards the robot."
"We have demonstrated that by 'beaming' a participant into a robot we can change his or her attitude towards the robot," says University of Trento psychologist Francesco Pavani.
"By 'beaming', we mean that we gave the participants the illusion that they were looking through the robot's eyes, moving its head as if it were their head, looking in the mirror and seeing themselves as a robot."
"Unlike exercises in which the participants couldn't move the robot's head, or couldn't do so in a coordinated manner with other body movements, in our study the experience of walking in the shoes of a robot led the participants to adopt a friendlier attitude and to perceive it as socially closer."
From the abstract of their scientific paper published in Scientific Reports:
When participants’ and robot’s head movements were correlated, participants felt that they were incorporated into the robot, with a sense of agency. Critically, the robot they embodied was judged more likeable and socially closer. Remarkably, we found that the beaming experience with correlated head movements, and the corresponding sensation of embodiment and social proximity, was independent of the robot’s humanoid appearance.
Researchers from the University of Chicago and Sony are developing a wearable electrical muscle stimulation system that boosts your physical reaction time without making it feel like you've lost control of your body. The latter is particularly important when considering the development of exoskeletons and other systems that bring us physically closer to machines for augmenting human capabilities. The system essentially zaps your muscles into contracting at precisely the right time while making it seem as if you're still controlling the movement. From IEEE Spectrum:
The typical reaction time for a human is about 250 milliseconds—meaning it takes you about a quarter of a second after you see something to physically react to it. But the researchers explain that "our conscious awareness of intention takes a moment to arise, around 200 ms." In other words, it takes you about 200 milliseconds for your brain to turn sensory input into a decision to do something like move a muscle, and then another 50 or so milliseconds for that muscle to actually start moving. The researchers suggest that this 50-ish millisecond gap between intention and action is a window that they can exploit to make humans react more quickly while still feeling like the action they take is under their control.
The video below shows a series of experiments that demonstrate how reflexes can be usefully accelerated without decreasing the sense of control, or agency, that the user experiences. It turns out that an EMS-driven improvement in reflexes of up to 80 milliseconds is possible while still maintaining the user's sense of agency, which is the difference between success and failure in these particular experiments.
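The timing argument above is simple enough to check arithmetically. The figures are the ones quoted in the excerpt: conscious intention arises around 200 ms after a stimulus, the muscle starts moving around 250 ms after it, and EMS can reportedly shave up to 80 ms off the total.

```python
# Sketch of the timing budget quoted above (all figures from the article).
REACTION_MS = 250   # stimulus -> muscle actually starts moving
INTENTION_MS = 200  # stimulus -> conscious intention to move

# The gap between "I've decided to move" and "my muscle moves" is the
# window the researchers exploit with EMS.
window_ms = REACTION_MS - INTENTION_MS
print(window_ms)  # 50

# With the reported speed-up of up to 80 ms, the assisted reaction time:
assisted_ms = REACTION_MS - 80
print(assisted_ms)  # 170
```

The interesting consequence is that the EMS pulse can land inside (or even slightly ahead of) that 50 ms gap and still feel self-initiated, which is why users keep their sense of agency.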
This is Digit, a new bipedal bot from Agility Robotics, out for a stroll in its hometown of Albany, Oregon. Next year, you'll be able to order your own Digit, but the price hasn't been announced yet. From Agility Robotics:
Although still in testing, Digit is strong enough to pick up and stack boxes weighing up to 40 lb (18 kg), as well as durable enough to catch itself during a fall using its arms to decelerate. In addition to the physical changes, the control system for Digit has been overhauled to enable advanced behaviors such as stair climbing and footstep planning, all controlled through a robust API that can be accessed both onboard the robot and via a wireless link... Out-of-the-box, Digit will be up and walking within five minutes, even for users who are not legged locomotion control researchers.
One in 500 people is born with polydactyly: extra fingers or toes. Researchers at the University of Freiburg in Germany, Imperial College London, and Université de Lausanne / EPFL in Switzerland studied two people with well-formed, usable sixth fingers between the thumb and index finger on both hands to understand how their brains deal with the "extra workload" of controlling those digits. According to Imperial College bioengineer Etienne Burdet, high-resolution functional magnetic resonance imaging (fMRI) revealed that "the polydactyl individuals' brains were well adapted to controlling extra workload, and even had dedicated areas for the extra fingers. It's amazing that the brain has the capacity to do this seemingly without borrowing resources from elsewhere." From Imperial College London:
Polydactyl participants also performed better at many tasks than their non-polydactyl counterparts. For instance, they were able to perform some tasks, like tying shoelaces, with only one hand, where two are usually needed... (See video above.)
The international team of authors say the findings might serve as a blueprint for developing artificial limbs and digits that expand our natural movement abilities. For example, giving a surgeon control over an extra robotic arm could enable them to operate without an assistant...
However, (lead author Carsten Mehring of Freiburg University) warned that people with robotic extra limbs may not achieve as good control as observed in the two polydactyl subjects. Any robotic digits or limbs wouldn’t have dedicated bone structure, muscles, tendons or nerves.
In addition, subjects would need to learn to use extra fingers or limbs, much like how an amputee learns how to use a prosthetic arm.
Salto is a single-legged, hopping robot that its UC Berkeley inventors compare to a "hyper-aggressive pogo-stick." Previously, Salto was constrained to a highly structured indoor environment with a motion capture system. Now, though, roboticists Justin Yim and Eric Wang have imbued Salto with the onboard smarts to bounce freely through the world, albeit still under human control. From UC Berkeley:
Salto’s single, powerful leg is modeled after those of the galago, or Senegalese bush baby. The small, tree-dwelling primate’s muscles and tendons store energy in a way that gives the spry creature the ability to string together multiple jumps in a matter of seconds. By linking a series of quick jumps, Salto also can navigate complex terrain — like a pile of debris — that might be impossible to cross without jumping or flying.
“Unlike a grasshopper or cricket that winds up and gives one jump, we’re looking at a mechanism where it can jump, jump, jump, jump,” (UC Berkeley robotics professor Ronald) Fearing said. “This allows our robot to jump from location to location, which then gives it the ability to temporarily land on surfaces that we might not be able to perch on.”
From IEEE Spectrum:
...The researchers expect that “higher precision estimation and control can enable jumping on more finely varied surfaces like stairs, furniture, or other outcroppings” as well as “soft substrates like upholstery or natural foliage.”
The researchers tell us that Salto’s hardware is capable enough at this point that aside from potentially upgrading the motor or battery for more jumping power or run time, the focus now will be on new behaviors, although they’re toying with the idea of adding some kind of gripping foot so that Salto can launch from, and land on, tree branches (!).
Researchers from the University of Zurich's Robotics and Perception Group designed an event camera system for drones. In the video above, the fun starts at 1:25. As explained by IEEE Spectrum, "These are sensors that are not good at interpreting a scene visually like a regular camera, but they’re extremely sensitive to motion, responding to changes in a scene on a per-pixel basis in microseconds. A regular camera that detects motion by comparing one frame with another takes milliseconds to do the same thing, which might not seem like much, but for a fast-moving drone it could easily be the difference between crashing into something and avoiding it successfully."
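The difference between event cameras and frame cameras can be illustrated with a toy model. This is a deliberately simplified sketch of the idea described in the excerpt, not the Zurich group's pipeline: each pixel independently reports an event (with a polarity) when its brightness changes by more than a threshold, instead of the whole sensor shipping out full frames for comparison.

```python
import numpy as np

# Toy model of event generation: each pixel fires an event only when its
# brightness changes by more than `threshold`. Array sizes, brightness
# values, and the threshold are illustrative assumptions.

def events(prev, curr, threshold=0.2):
    """Return (row, col, polarity) for every pixel whose brightness
    changed by more than `threshold` between two snapshots."""
    diff = curr.astype(float) - prev.astype(float)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

prev = np.zeros((3, 3))
curr = np.zeros((3, 3))
curr[1, 2] = 1.0  # one pixel brightens, e.g. a fast-moving object's edge
print(events(prev, curr))  # [(1, 2, 1)]
```

Because only changed pixels produce output, a real event sensor can report motion with microsecond latency and very little data, which is exactly the property that matters for a drone dodging obstacles at speed.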
We previously posted about a robot that solved a Rubik's Cube in 0.637 seconds.