When it comes to getting a quality education, a robot could do far worse than a program at Yale. Machine learning researchers at the Ivy League university recently started teaching robots about the nuances of social interaction. And there's no better place to start than with possessions.
One of the earliest social constructs that humans learn is the idea of ownership. That's my bottle. Gimme that teddy bear. I want that candy bar and I will make your life a living hell if you don't buy it for me right now.
Robots, on the other hand, don't have a grain of Veruca Salt in them, because ownership is a human idea. Still, if you want a robot to avoid touching your stuff or interacting with something, you typically have to hard-code some sort of limitation. If we want them to assist us, clean up our trash, or assemble our Ikea furniture, they're going to have to understand that some objects are everyone's and others are off limits.
But nobody has time to teach a robot every single object in the world and program ownership associations for each one. According to the team's white paper:
For example, an effective collaborative robot should be able to distinguish and track the permissions of an unowned tool versus a tool that has been temporarily shared by a collaborator. Likewise, a trash-collecting robot should know to discard an empty soda can, but not a cherished photograph, or even an unopened soda can, without having these permissions exhaustively enumerated for every possible object.
The Yale team developed a learning system that trains a robot to understand ownership in context. This allows it to develop its own rules, on the fly, by observing humans and responding to their instructions.
The researchers created four distinct algorithms to power the robot's concept of ownership. The first enables the robot to understand a positive example: if a researcher says "that's mine," the robot knows it shouldn't touch that object. The second algorithm does the opposite, letting the machine know an object isn't owned by a person who says "that's not mine."
Finally, the third and fourth algorithms give the machine the ability to add rules to, or subtract rules from, its concept of ownership when it's told something has changed. Theoretically, this would allow the robot to process changes in ownership without needing the machine learning equivalent of a software update and reboot.
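The four operations above can be pictured as a small rule-tracking loop. The sketch below is purely illustrative: the class, method names, and rule format are hypothetical stand-ins, not the Yale team's actual implementation, which learns these associations rather than storing them in a dictionary.

```python
class OwnershipTracker:
    """Toy model of the four ownership operations described in the article.

    This is a hand-written illustration, NOT the Yale team's learned system:
    all names and data structures here are invented for clarity.
    """

    def __init__(self):
        self.owned = {}    # object name -> owner who claimed it
        self.rules = set() # permission rules, e.g. ("discard", "empty_can")

    def claim(self, obj, owner):
        """Algorithm 1 (positive example): 'that's mine'."""
        self.owned[obj] = owner

    def disclaim(self, obj, owner):
        """Algorithm 2 (negative example): 'that's not mine'."""
        if self.owned.get(obj) == owner:
            del self.owned[obj]

    def add_rule(self, rule):
        """Algorithm 3: extend the ownership concept with a new rule."""
        self.rules.add(rule)

    def remove_rule(self, rule):
        """Algorithm 4: retract a rule when circumstances change."""
        self.rules.discard(rule)

    def may_touch(self, obj):
        """The robot avoids any object that someone has claimed."""
        return obj not in self.owned


tracker = OwnershipTracker()
tracker.claim("coffee_cup", "alice")
print(tracker.may_touch("coffee_cup"))   # False: Alice claimed it
tracker.disclaim("coffee_cup", "alice")
print(tracker.may_touch("coffee_cup"))   # True: the claim was withdrawn
```

The point of the third and fourth operations is that rules can be revised in place, so a change of ownership is an update to state rather than a retraining of the whole model.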
Robots will only be useful to humans if they can integrate themselves into our lives unobtrusively. If a machine doesn't know how to "act" around humans, or follow social norms, it'll eventually become disruptive.
Nobody wants the cleaning bot to snatch a coffee cup out of their hand because it detected a dirty dish, or to throw away everything on their messy desk because it can't distinguish between clutter and garbage.
The Yale team acknowledges that this work is in its infancy. Although the algorithms (which you can examine more closely in the white paper) create a robust platform to build on, they address only a very basic framework for the concept of ownership.
Next, the researchers hope to teach robots to understand ownership beyond just their own actions. This would presumably include prediction algorithms to determine how other people and agents are likely to observe social norms related to ownership.
The future will be built by robots but, thanks to researchers like the ones at Yale, they'll know it belongs to humans.