
A Momentary Flow

Evolving Worldviews

Tag Results

194 posts tagged robots

Japan has launched the world’s first talking robot into space to serve as a companion to astronaut Koichi Wakata, who will begin his mission in November.

The android took off from the island of Tanegashima in an unmanned rocket also carrying supplies for crew onboard the International Space Station (ISS). Measuring 34cm (13 inches), Kirobo is due to arrive at the ISS on 9 August. It is part of a study to see how machines can lend emotional support to people isolated over long periods. The launch of the H-2B rocket was broadcast online by the Japan Aerospace Exploration Agency (Jaxa). The unmanned rocket is also carrying drinking water, food, clothing and work supplies to the six permanent crew members based at the ISS. (via BBC News - Kirobo is world’s first talking robot sent into space)

Source BBC

Robotic Skin Lights Up When Touched

Imagine how awesome — or distracting — it would be if human skin lit up every time something pushed on it. Pulsing arteries, mosquitoes, a rude shoulder-check on the sidewalk, or scratching an itch would transform a person into a blinking light show.

Now, scientists at the University of California, Berkeley, have designed an electronic skin that actually does this: super-thin and flexible, the skin lights up when touched. More pressure produces a brighter light, the team reports July 21 in Nature Materials.

Thinner than a sheet of paper, the skin is made from layers of plastic and a pressure-sensitive rubber. A conductive silver ink, organic LEDs, and thin-film transistors made from semiconductor-enriched carbon nanotubes are sandwiched between the layers. Applying pressure sends a signal through the rubber that ultimately turns on the LEDs, which light up in red, green, yellow or blue.
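
The construction described above amounts to a pressure sensor driving an LED at each pixel: below some contact threshold the pixel stays dark, and brightness then scales with applied pressure. As a purely illustrative toy model (not the Berkeley team's code; the threshold and full-scale values below are invented), the mapping might look like this:

```python
# Toy model of the pressure-to-brightness behaviour described above.
# The threshold, full-scale pressure, and brightness range are invented
# for illustration; they are not taken from the Nature Materials paper.

def led_brightness(pressure_kpa, threshold_kpa=1.0, full_scale_kpa=20.0, levels=255):
    """Map applied pressure to an LED drive level in 0..levels.

    Below the threshold the pixel stays dark; above it, brightness rises
    linearly until the (hypothetical) sensor saturates at full scale.
    """
    if pressure_kpa <= threshold_kpa:
        return 0
    fraction = (pressure_kpa - threshold_kpa) / (full_scale_kpa - threshold_kpa)
    return round(min(1.0, fraction) * levels)

if __name__ == "__main__":
    for p in (0.5, 2.0, 10.0, 25.0):
        print(f"{p:5.1f} kPa -> LED level {led_brightness(p)}")
```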

Instead of using the material to create bodysuits for Burning Man or other illuminated party tricks, scientists suggest that it might be used for smart wallpapers, health-monitoring devices, or in robotics. The type of interactive pressure sensor developed by the Berkeley scientists could also be useful in artificial skin for prosthetic limbs. For years, scientists have been working on developing systems and materials that could be integrated into a functioning, stimulus-responsive skin — something that can sense temperature, pressure, and stretch, and can heal itself. In addition, such a sheath might one day transform an ordinary robot into an interactive machine that’s capable of responding to tiny changes in its environment.

If and when that day comes, we will welcome our touchy-feely glow-bot overlords.

(via Robotic Skin Lights Up When Touched - Wired Science)

Aaron Saenz: I, For One, Welcome Our New Robot Overlords

Aaron Saenz is a former physicist, an improvisational actor, and a tech journalist. He currently writes for SingularityHub.com, covering everything from crowd-sourced holographic Japanese pop stars to open source research robotics. He’s also the host of Singularity Hub’s Accelerated Tech News, a new video recap of the week’s top stories in science and technology. Follow his work on Twitter: @adsaenz
www.singularityhub.com/about

@ BAASICS.2: The Future (by BAASICSsf)

US unveils ‘Atlas’ humanoid robot test bed

A humanoid robot called Atlas could pave the way for intelligent machines to help in the wake of natural disasters. The two-metre-tall robot was created as a test bed for a US Defense Advanced Research Projects Agency (Darpa) challenge, which demands that Atlas complete eight tasks it might have to perform in an emergency. Six teams have until December 2013 to develop software that will help Atlas complete the tasks.

Atlas has been developed by the robotics firm Boston Dynamics, which has been working on robots that can aid the military. Like a human, Atlas has two arms and legs and gets around by walking. It sees using a stereo laser-scanning system and has gripping hands developed by two separate robotics companies. Unlike humans, it has a high-speed networking system built in so it can communicate with its creators and pipe data back from disaster areas.

Before now, the teams taking part in the robotics challenge have only worked with virtual versions of Atlas. In the next stage of the competition, algorithms and control programs for the virtual Atlas will be transferred to the real thing. The teams will then have five months to refine Atlas’s abilities before taking part in a series of trials. During those, a tethered version of Atlas will be expected to complete tasks that include driving a car, removing debris blocking doors, climbing a ladder, finding and closing a valve, and connecting a fire hose. The best-performing teams in the December 2013 trials will win funding to continue refining Atlas so it can perform all eight tasks autonomously during the challenge finals in late 2014. (via BBC News - US unveils ‘Atlas’ humanoid robot test bed)

Calculating art: Meet e-David, the painting machine (w/ Video)

Sometime in the future, you will be at an art gallery, drawn to a nice-looking tree, a haunting line drawing of a woman’s face, or a historical portrait, and you will wonder who the artist is. Eye the lower corner of the canvas and it will tell you: “David.” What you might not realize is that David is a robot—e-David, to be exact. A team at the University of Konstanz in Germany has developed e-David as a robot “artist” that uses software to decide where to add the next brush stroke. After each stroke, e-David takes a picture, and its software calculates such moves as where the image needs to be lightened or darkened. The group says its objective is to build a robot that can paint, pure and simple. By paint, they do not mean adding a fresh coat to a kitchen ceiling but delivering art.

The school’s Department of Computer and Information Science has a structure where workgroups share a common research topic, and e-David is a project within the topic, “Exploration and visualization of large information spaces.”

The robot is not exactly personable; it looks more like the metallic skeleton of a mythical and very studious canine. The team took an industrial robot normally used to weld car bodies and enhanced it with sensors, a camera, a control computer, and art supplies. They chose the name “David” not because they especially liked it but because it stands for what they set out to accomplish: Drawing Apparatus for Vivid Image Display.

The computer program provides drawing commands that are executed by the machine. This can be considered a step above humans painting by numbers: just as sidewalk artists repeatedly tweak their lines, dabs, and brush strokes as they fill an empty canvas, the robot does something similar. The device takes a picture of what it wants to copy, watches itself paint, and decides where to add the next stroke, constantly tweaking its moves based on what it sees through a camera pointed at its canvas. (via Calculating art: Meet e-David, the painting machine (w/ Video))
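
The paint-photograph-compare loop described here is essentially a greedy feedback algorithm: photograph the canvas, find where it deviates most from the target image, place a stroke there, and repeat. The sketch below is a heavily simplified, hypothetical version of that idea, with grayscale NumPy arrays and square “dabs” standing in for real brush strokes; it is not the Konstanz team’s software:

```python
# Simplified sketch of a paint/observe/compare feedback loop in the
# spirit of e-David. Grayscale images are NumPy arrays in [0, 1];
# "brush strokes" are square dabs. Illustrative only.
import numpy as np

def paint(target, n_strokes=500, dab=5):
    canvas = np.ones_like(target)          # start from a white canvas
    for _ in range(n_strokes):
        error = canvas - target            # positive where the canvas is too light
        # pick the pixel where the canvas deviates most from the target
        y, x = np.unravel_index(np.argmax(np.abs(error)), error.shape)
        y0, y1 = max(0, y - dab // 2), y + dab // 2 + 1
        x0, x1 = max(0, x - dab // 2), x + dab // 2 + 1
        # darken or lighten that region, nudging it toward the target
        patch_mean = target[y0:y1, x0:x1].mean()
        canvas[y0:y1, x0:x1] = 0.5 * canvas[y0:y1, x0:x1] + 0.5 * patch_mean
    return canvas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64))
    result = paint(target)
    print("mean abs error:", float(np.abs(result - target).mean()))
```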

Robots Hallucinate Humans to Aid in Object Recognition

Almost exactly a year ago, we posted about how Ashutosh Saxena’s lab at Cornell was teaching robots to use their “imaginations” to try to picture how a human would want a room organized. The research was successful, with algorithms that used hallucinated humans (which are the best sort of humans) to influence the placement of objects performing significantly better than other methods. Cool stuff indeed, and now comes the next step: labeling 3D point-clouds obtained from RGB-D sensors by leveraging contextual hallucinated people.

A significant amount of research has been done investigating the relationships between objects and other objects. It’s called semantic mapping, and it’s very valuable in giving robots what we’d call things like “intuition” or “common sense.” However, being humans, we tend to live human-centered lives, and that means that the majority of our stuff tends to be human-centered too, and keeping this in mind can help to put objects in context. (via Robots Hallucinate Humans to Aid in Object Recognition - IEEE Spectrum)
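
One rough way to picture the idea (a toy illustration, not Saxena’s actual algorithm) is that each candidate object label carries a prior over how that kind of object sits relative to a person, and labels are scored against sampled, imaginary human positions. The labels, distances, and Gaussian scoring below are all invented for the example:

```python
# Toy illustration of "hallucinated humans" as labeling context: each
# candidate label carries a (made-up) prior over typical distance from
# a human, and the label whose prior best matches the sampled human
# positions wins. Not the Cornell system.
import math

# hypothetical priors: (typical distance from a human in metres, tolerance)
LABEL_PRIORS = {
    "mug":      (0.4, 0.3),   # usually within arm's reach
    "keyboard": (0.5, 0.3),
    "shoe":     (1.5, 1.0),   # usually on the floor, farther away
}

def score(label, object_xy, human_xy):
    """Gaussian score of the object-human distance under the label's prior."""
    mean, sigma = LABEL_PRIORS[label]
    d = math.dist(object_xy, human_xy)
    return math.exp(-((d - mean) ** 2) / (2 * sigma ** 2))

def best_label(object_xy, sampled_humans):
    # average the score over several hallucinated human positions
    return max(LABEL_PRIORS, key=lambda lbl:
               sum(score(lbl, object_xy, h) for h in sampled_humans) / len(sampled_humans))

if __name__ == "__main__":
    humans = [(0.0, 0.0), (0.2, 0.1)]        # sampled "hallucinated" positions
    print(best_label((0.3, 0.2), humans))    # close to the humans -> "mug"
    print(best_label((1.4, 0.8), humans))    # far from the humans -> "shoe"
```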

The octopus is a natural escape artist. It can squeeze its soft body into impossibly tight spaces and often baffles aquarium workers with its ability to break out of tanks. These abilities could be very useful in an underwater robot, which is why the OCTOPUS Project, a consortium of European robotics labs, is attempting to reverse engineer it in all its tentacled glory. Now researchers from the Foundation for Research and Technology - Hellas (FORTH) in Greece are learning how the robot might use its tentacles to swim. (via Unleash the Kraken! Robot octopus learning to swim)

Japanese robots make a stink about bad breath, body odor

Have you got a case of dog breath? How about smelly feet? Friends and family may not tell you, but a couple of new robots will. Built by the Kitakyushu National College of Technology and a group of inventive pranksters calling itself CrazyLabo, the pair of odor-detecting robots are giving people a lesson in hygiene and a few chuckles.

Kaori-chan, a decapitated mannequin head that sits atop a pink box, is the one that smells your breath. Simply blow into her face and don’t expect her to spare your feelings if you could use a mint or two. Responses range from the blunt, “Yuck, you have bad breath!” to an embarrassing, “Emergency! There’s an emergency taking place! That’s beyond the limit of patience!”

The foot-sniffing dog, Shuntaro-kun, is a bit less eloquent but just as clear with his responses. He’ll cuddle up to you if you smell OK, but if you stink he’ll bark, fall down and growl, or play dead. Both robots get their sense of smell from a commercially available odor sensor and grade your aroma on a four-point scale. (via Japanese robots make a stink about bad breath, body odor)
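
The pipeline the article describes is simple: an off-the-shelf odor sensor produces a reading, the reading is binned onto a four-point scale, and each grade triggers a canned response. A hypothetical sketch, with invented thresholds and phrasing (the article gives no calibration details), might look like this:

```python
# Hypothetical sketch of the smell-grading pipeline described above:
# an odor-sensor reading is binned onto a four-point scale and mapped
# to a canned response. Thresholds and responses are invented, not
# taken from the CrazyLabo robots.
RESPONSES = [
    "You smell fine.",
    "Hmm, a little whiffy.",
    "Yuck, you have bad breath!",
    "Emergency! That's beyond the limit of patience!",
]
THRESHOLDS = [100, 300, 600]   # hypothetical sensor counts separating the four grades

def grade(sensor_reading):
    level = sum(sensor_reading > t for t in THRESHOLDS)   # 0..3
    return RESPONSES[level]

if __name__ == "__main__":
    for reading in (50, 250, 450, 900):
        print(reading, "->", grade(reading))
```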

Perfect skin: More touchy-feely robots

RoboSKIN will develop and demonstrate a range of new robot capabilities based on tactile feedback provided by a robotic skin covering large areas of the robot’s body. Until now, a principled investigation of these topics has been limited by the lack of tactile-sensing technologies suited to large-scale experiments: skin technologies and embedded tactile sensors have so far mostly been demonstrated only at the prototype stage. The new capabilities will improve the ability of robots to operate effectively and safely in unconstrained environments, and also their ability to communicate and cooperate with each other and with humans.

To support this aim, one side of the RoboSKIN project focuses on the investigation of methods and technologies enabling the implementation of skin sensors that can be used with existing robots. The other side of the project develops new structures for representing and integrating tactile data with existing cognitive architectures in order to support skin-based cognition, behavior and communication. (via Perfect skin: More touchy-feely robots | ZeitNews)

JPL BioSleeve Enables Precise Robot Control Through Hand and Arm Gestures

No matter how capable you make a robot, its effectiveness is limited by how well you can control it. And until we’ve got this whole general autonomy thing nailed down (better not hold your breath), that means a lot of teleoperation. JPL has been working on a new gesture-based human interface called BioSleeve, which uses a [insert collective noun for sensors here] of EMG sensors, IMUs, and magnetometers to decode hand and arm gestures and map them to an intuitive robot control system.

BioSleeve is a sort of elastic bandage that covers most of your forearm and includes 16 dry-contact electromyography (EMG) sensors plus a pair of inertial measurement units. The sensors can detect movements of the muscles in your arm, which is where the muscles that drive your hand live, meaning that the BioSleeve can tell when (and how much) you move your arm, wrist, hand, and individual fingers. This enables you to make gestures and have a robot respond to them, much like existing gesture-recognition systems, except that because BioSleeve doesn’t depend on vision or on having your hand close to a sensor, it is much easier to use for extended periods and in the field (or in cramped spaces like the ISS). Here’s a demo: (via JPL BioSleeve Enables Precise Robot Control Through Hand and Arm Gestures - IEEE Spectrum)
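
The flow sketched in the article is: read the 16 EMG channels and the IMUs, turn a short window of data into a feature vector, classify the gesture, and map it to a robot command. The toy below illustrates that flow with a nearest-centroid classifier over invented calibration data; the features, gestures, and commands are all hypothetical, not JPL’s implementation:

```python
# Rough sketch of a gesture-to-command pipeline in the spirit of the
# BioSleeve description: 16 EMG channels plus IMU readings become a
# feature vector, the gesture is classified by nearest centroid, and
# the gesture is mapped to a robot command. Everything here is invented.
import numpy as np

GESTURE_TO_COMMAND = {"fist": "stop", "point": "go_forward", "wave": "return_home"}

def features(emg_window, imu_sample):
    """emg_window: (samples, 16) raw EMG; imu_sample: (6,) accel + gyro."""
    rms = np.sqrt((np.asarray(emg_window) ** 2).mean(axis=0))   # per-channel muscle activity
    return np.concatenate([rms, imu_sample])

def decode(feature_vec, centroids):
    """Nearest-centroid gesture classification, then map to a command."""
    gesture = min(centroids, key=lambda g: np.linalg.norm(feature_vec - centroids[g]))
    return gesture, GESTURE_TO_COMMAND[gesture]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # fabricate per-gesture calibration recordings and average them into centroids
    centroids = {}
    for i, g in enumerate(GESTURE_TO_COMMAND):
        samples = [features(rng.normal(i + 1, 0.3, (200, 16)),
                            rng.normal(i, 0.2, 6)) for _ in range(30)]
        centroids[g] = np.mean(samples, axis=0)

    # decode a new (fabricated) sleeve reading
    test = features(rng.normal(2.0, 0.3, (200, 16)), rng.normal(1.0, 0.2, 6))
    print(decode(test, centroids))
```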