Google’s New AI Can Answer Dumb IT Questions or Tell You the Meaning of Life
The search giant has a fresh development in artificial intelligence that could one day lead to a wise personal assistant
A robot could answer your next call to tech support, thanks to new artificial-intelligence research at Google. The company taught computers to hold context-sensitive conversations with people on topics ranging from philosophy to humdrum IT help-desk tasks, work outlined in a research paper the company published last week.
Unlike traditional “chatbots,” Google's system is built without hand-coded responses or assumptions about the world, and instead learns to model language and conversation based on examples seen in corporate or public documentation. “Even though the model has obvious limitations, it is surprising to us that a purely data-driven approach without any rules can produce rather proper answers to many types of questions,” according to the research paper.
The system can answer questions and hold long, complex conversations with people. In tests, it helped users diagnose and fix computer problems, including Internet-browser and password issues. The AI also taught itself how to respond to inquiries about morality and philosophy. The answers were coherent enough that you might mistake them for something your stoner roommate from college once said. (Sample conversation: “What is the purpose of life?” “To serve the greater good.” “What is the purpose of living?” “To live forever.”)
The machine is able to do this because it was designed to come up with an appropriate response based on context. “These predictions reflect the situational context because it's seen the whole occurrence of words and dialogue that's happened before it,” Jeff Dean, a Google senior fellow, said at a conference in March. The system relies on what's called a neural network, which mimics some of the sensing characteristics of the human brain's neocortex, along with a longer-term memory component to help it build an understanding of context.
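The architecture Dean describes, a network that folds everything said so far into a running state and predicts a response from it, can be sketched in miniature. The code below is not Google's system; it is a toy, untrained illustration (random weights, an invented ten-word vocabulary) of the encoder-decoder data flow: one recurrent pass reads the user's words into a context vector, and a second pass emits a reply word by word from that context.

```python
import numpy as np

# Toy sketch of the encoder-decoder idea: an encoder RNN compresses the
# user's words into a context vector, and a decoder RNN emits a reply one
# word at a time conditioned on that context. Weights are random and
# untrained -- this shows the data flow, not a working chatbot.

rng = np.random.default_rng(0)
vocab = ["<eos>", "hi", "what", "is", "the", "purpose", "of", "life", "to", "serve"]
V, H = len(vocab), 16
word_to_id = {w: i for i, w in enumerate(vocab)}

E = rng.normal(0, 0.1, (V, H))      # word embeddings
W_enc = rng.normal(0, 0.1, (H, H))  # encoder recurrence
W_dec = rng.normal(0, 0.1, (H, H))  # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))  # hidden state -> vocabulary scores

def encode(words):
    """Fold the whole input sentence into a single context vector."""
    h = np.zeros(H)
    for w in words:
        h = np.tanh(E[word_to_id[w]] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Greedily emit words from the context vector, stopping at <eos>."""
    out, w = [], "<eos>"
    for _ in range(max_len):
        h = np.tanh(E[word_to_id[w]] + W_dec @ h)
        w = vocab[int(np.argmax(h @ W_out))]
        if w == "<eos>":
            break
        out.append(w)
    return out

context = encode(["what", "is", "the", "purpose", "of", "life"])
reply = decode(context)  # untrained, so the chosen words are arbitrary
```

Because the encoder's state is carried through every word, the context vector reflects the entire utterance, which is the property Dean points to; the real system adds a learned long-term memory component (and training on real dialogue) on top of this skeleton.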
The research is part of a larger effort within Google to develop conversational AI tools. DeepMind, a Google research group in London, has created an AI capable of learning how to play video games without instructions. Geoff Hinton, a distinguished researcher at Google, is working on what's known as thought vectors, which distill the meaning of a sentence so they can be compared to other sentences or images. The concept helps power Google's new Q&A project. “If you can represent what someone is asking in a big vector, then you can start discovering the structure that exists between the big vectors in questions and answers,” says Hinton. “Now that we’re beginning to get sentences represented by big vectors, I think we’re going to make a lot of progress on more appropriate behavior in conversations.”
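Hinton's "big vectors" can be made concrete with a small sketch. The snippet below is not his method: real thought vectors are learned by a network, whereas here each sentence vector is just an average of random per-word vectors, invented purely to show the comparison step he describes, measuring how close two sentences sit in the vector space.

```python
import numpy as np

# Illustrative stand-in for "thought vectors": map each sentence to one
# fixed-length vector, then compare sentences by cosine similarity.
# Real systems learn these vectors; random averaged word vectors are
# used here only to demonstrate the geometry of the comparison.

rng = np.random.default_rng(1)
word_vecs = {}

def word_vec(w):
    """Assign each distinct word a fixed random 8-dimensional vector."""
    if w not in word_vecs:
        word_vecs[w] = rng.normal(size=8)
    return word_vecs[w]

def thought_vector(sentence):
    """Crude sentence vector: the average of its word vectors."""
    return np.mean([word_vec(w) for w in sentence.lower().split()], axis=0)

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q = thought_vector("what is the purpose of life")
a = thought_vector("what is the purpose of living")
b = thought_vector("pass the cheetos")
# Sentences sharing most of their words typically land much closer to q
# than unrelated ones do; compare cosine(q, a) against cosine(q, b).
```

Once questions and answers live in the same vector space, finding structure between them, as Hinton suggests, becomes a matter of this kind of geometric comparison rather than hand-written rules.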
The vector endeavor may also tie into a nascent project named Descartes that Ray Kurzweil, a director of engineering at Google, is working on. “We're creating dialogue agents in Descartes,” Kurzweil says in a video presentation obtained by Bloomberg. “One of the issues we're grappling with is that these bots you interact with need to have their own motivations and goals, and we need to figure out what those are.” Other technology companies and universities are embarking on their own projects in this field, including Microsoft, the University of Montreal and the Georgia Institute of Technology, which presented research outlining a system based on a similar approach.
But Google seems to have one crucial audience locked down. In addition to the new IT robot with very deep thoughts on existence, another well-publicized project from last week demonstrates how a Google AI can create “gorgeous, trippy artwork,” as GQ describes it. Some Ph.D. students at Ghent University in Belgium have already adapted Google's stoner AI into a Web-based system that infinitely zooms into an image composed of a machine's dreams. Pass the Cheetos, man.