Tristan Greene
When it comes to getting a quality education, a robot could do far worse than a program at Yale. Machine learning researchers at the Ivy League university recently started teaching robots about the nuances of social interaction. And there's no better place to start than with possessions.
One of the earliest social constructs that humans learn is the idea of ownership. That's my bottle. Gimme that teddy bear. I want that candy bar and I will make your life a living hell if you don't buy it for me right now.
Robots, on the other hand, don't have a grain of Veruca Salt in them, because ownership is a human idea. Still, if you want a robot to avoid touching your stuff or interacting with something, you typically have to hard-code some sort of limitation. If we want them to assist us, clean up our trash, or assemble our Ikea furniture, they're going to have to understand that some objects are everyone's and others are off-limits.
But nobody has time to teach a robot about every single object in the world and program ownership associations for each one. According to the team's white paper:
For example, an effective collaborative robot should be able to distinguish and track the permissions of an unowned tool versus a tool that has been temporarily shared by a collaborator. Likewise, a trash-collecting robot should know to discard an empty soda can, but not a cherished photograph, or even an unopened soda can, without having these permissions exhaustively enumerated for every possible object.
The Yale team developed a learning system that trains a robot to understand ownership in context. This allows it to develop its own rules, on the fly, based on observing humans and responding to their instructions.
The researchers created four distinct algorithms to power the robot's concept of ownership. The first enables the robot to understand a positive example: if a researcher says "that's mine," the robot knows it shouldn't touch that object. The second algorithm does the opposite: it lets the machine know an object isn't associated with an owner when a person says "that's not mine."
Finally, the third and fourth algorithms give the machine the ability to add or subtract rules in its concept of ownership when it's told something has changed. Theoretically, this would allow the robot to process changes in ownership without needing the machine learning equivalent of a software update and reboot.
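To make the four behaviors concrete, here is a minimal Python sketch of an ownership tracker. All names and structure here are my own illustration, not the Yale team's actual system, which learns such rules from examples rather than storing them in a lookup table:

```python
class OwnershipTracker:
    """Toy illustration: track which objects a robot may handle,
    based on spoken claims from the people around it."""

    def __init__(self):
        self.owned = {}  # object -> owner

    def claim(self, obj, owner):
        """Positive example ('that's mine'): mark obj as off-limits."""
        self.owned[obj] = owner

    def disclaim(self, obj, owner):
        """Negative example ('that's not mine'): drop that association."""
        if self.owned.get(obj) == owner:
            del self.owned[obj]

    def transfer(self, obj, new_owner):
        """Rule update: ownership has changed hands on the fly."""
        self.owned[obj] = new_owner

    def may_handle(self, obj):
        """The robot may touch anything not currently claimed."""
        return obj not in self.owned


tracker = OwnershipTracker()
tracker.claim("coffee cup", "alice")
print(tracker.may_handle("coffee cup"))  # False: claimed by alice
print(tracker.may_handle("soda can"))    # True: unclaimed
tracker.disclaim("coffee cup", "alice")
print(tracker.may_handle("coffee cup"))  # True: claim withdrawn
```

The point of the researchers' learning approach is precisely that these associations don't have to be enumerated by hand as they are here; the robot infers them from interaction.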
Robots will only be useful to humans if they can integrate themselves into our lives unobtrusively. If a machine doesn't know how to act around humans, or follow social norms, it'll eventually become disruptive.
Nobody wants the cleaning bot to snatch a coffee cup out of their hand because it detected a dirty dish, or to throw away everything on their messy desk because it can't distinguish between clutter and garbage.
The Yale team acknowledges that this work is in its infancy. While the algorithms presented (which you can examine in more depth in the white paper) create a robust platform to build on, they address only a very basic framework for the concept of ownership.
Next, the researchers hope to teach robots to understand ownership beyond just their own actions. This would include, presumably, prediction algorithms to determine how other people and agents are likely to observe social norms related to ownership.
The future will be built by robots but, thanks to researchers like the ones at Yale, they'll know it belongs to humans.
This article was previously published on thenextweb.