The “Tell me Dave” robot learns when people talk to it
The idea of talking to a machine as if it were human is becoming reality. Some robots can already follow verbal instructions, but they must first be programmed with software that lets them respond in predetermined ways. Wouldn't it be easier to skip all of that and simply explain what you want a robot to do?
A new research project is focused on exactly that. At Cornell University, a computer science team designed and built a learning robot as part of its “Tell me Dave” project. The Tell me Dave robot is based on Willow Garage's PR2 platform and builds on the team's previous research, which includes teaching robots to identify people's activities by observing their movements, to recognize objects and situations and respond based on past experience, and to combine visual and non-visual data to refine their understanding of objects.
Cornell’s Tell me Dave robot follows spoken instructions to learn new tasks. Image via Gizmag.
Equipped with a 3D camera and computer vision software, the Tell me Dave robot has been taught to associate objects with what they’re used for. For example, it learned that a saucepan can have things poured into it and from it, and that it can be heated on a stove. The robot even knows that the stove has controls to operate it, and can identify other objects in the environment such as the sink’s water faucet and the microwave.
Once it has scanned its environment, if you tell the robot to “make me a bowl of noodles,” it can assemble the routine it needs from the objects in the kitchen: it will put water in the saucepan, put the saucepan on the stovetop, and proceed to cook the noodles. More impressively, if you rearrange or add utensils, the robot adapts to the available equipment. Told to “boil water,” it will, depending on the objects at hand, use either the stovetop and saucepan or a bowl and the microwave to complete its task.
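To make that adaptation concrete, here is a minimal sketch of how a planner might pick a “boil water” routine based on which objects it finds in the kitchen. Every name here (the plan table, `choose_plan`, the step strings) is invented for illustration; the actual Tell me Dave planner is a learned system, not a lookup table.

```python
# Hypothetical plan library: each candidate plan lists the objects it
# requires and the steps it performs. Invented for illustration only.
PLANS = {
    "boil water": [
        {"needs": {"saucepan", "stove", "faucet"},
         "steps": ["fill saucepan at faucet",
                   "place saucepan on stove",
                   "turn on stove"]},
        {"needs": {"bowl", "microwave", "faucet"},
         "steps": ["fill bowl at faucet",
                   "place bowl in microwave",
                   "start microwave"]},
    ]
}

def choose_plan(command, available_objects):
    """Return the steps of the first plan whose required objects are all present."""
    for plan in PLANS.get(command, []):
        if plan["needs"] <= available_objects:  # subset test on object sets
            return plan["steps"]
    return None  # no workable plan with the equipment at hand

# A kitchen with no stove: the robot falls back to the microwave plan.
print(choose_plan("boil water", {"bowl", "microwave", "faucet"}))
```

The key design point the sketch illustrates is that the command stays the same while the chosen routine depends entirely on the environment.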
So who's the brains behind all of this? Ashutosh Saxena, an assistant professor of computer science at Cornell University, is training robots to understand spoken directions from a variety of speakers. Because human language can be vague, Saxena and his colleagues help the robot fill in missing information and adapt to its environment with an algorithm that translates spoken instructions, extracts the key words, and compares them with instructions the robot has learned previously.
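The keyword-comparison step described above can be sketched very roughly as matching a new instruction against stored examples by word overlap. This is a toy stand-in, assuming a hypothetical `LEARNED` table; the real system uses a learned model over language, the environment, and prior demonstrations rather than simple set intersection.

```python
# Hypothetical table of previously learned tasks and their keywords.
# Invented for illustration; not Tell me Dave's actual representation.
LEARNED = {
    "make noodles": {"make", "noodles", "cook"},
    "boil water": {"boil", "water", "heat"},
    "heat milk": {"heat", "milk", "warm"},
}

def best_match(instruction):
    """Pick the learned task whose keywords overlap the instruction the most."""
    words = set(instruction.lower().split())
    scored = {task: len(words & keys) for task, keys in LEARNED.items()}
    task = max(scored, key=scored.get)
    return task if scored[task] > 0 else None  # None when nothing matches

# A loosely phrased request still maps to a known task.
print(best_match("please boil some water"))
```

Even this toy version shows why the approach tolerates vagueness: extra words like “please” and “some” simply score zero and drop out of the comparison.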
Check out the video below to see how Cornell University’s Tell me Dave robot whips up an affogato for its human instructor.
Originally written for Electronic Products.