{"id":55,"date":"2024-04-25T08:58:31","date_gmt":"2024-04-25T08:58:31","guid":{"rendered":"http:\/\/localhost:8888\/sawberries\/2024\/04\/25\/to-build-a-better-ai-helper-start-by-modeling-the-irrational-behavior-of-humans\/"},"modified":"2024-04-25T08:58:31","modified_gmt":"2024-04-25T08:58:31","slug":"to-build-a-better-ai-helper-start-by-modeling-the-irrational-behavior-of-humans","status":"publish","type":"post","link":"http:\/\/localhost:8888\/sawberries\/2024\/04\/25\/to-build-a-better-ai-helper-start-by-modeling-the-irrational-behavior-of-humans\/","title":{"rendered":"To build a better AI helper, start by modeling the irrational behavior of humans"},"content":{"rendered":"
To build AI systems that can collaborate effectively with humans, it helps to have a good model of human behavior to start with. But humans tend to behave suboptimally when making decisions.
This irrationality, which is especially difficult to model, often boils down to computational constraints. A human can't spend decades thinking about the ideal solution to a single problem.
Researchers at MIT and the University of Washington developed a way to model the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hamper the agent's problem-solving abilities.
Their model can automatically infer an agent's computational constraints from just a few traces of its previous actions. The result, the agent's so-called "inference budget," can then be used to predict that agent's future behavior.
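The article doesn't include code, but the core idea lends itself to a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes a toy five-state chain world and an agent that plans with value iteration truncated after a latent number of sweeps, then fits the truncation depth (the stand-in for an inference budget) that makes a few observed state-action pairs most likely. The environment and all function names are invented for illustration.

```python
# Minimal sketch (hypothetical, not the authors' code): infer a latent
# "inference budget" by asking which truncation depth of an anytime planner
# best explains an agent's observed actions.
import numpy as np

# Toy chain world: 5 states in a row; action 0 moves left, action 1 moves right.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

def step(s, a):
    # Deterministic transition, clipped at both ends of the chain.
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def reward(s):
    # Reward only for reaching the rightmost state.
    return 1.0 if s == N_STATES - 1 else 0.0

def truncated_q(budget):
    """Q-values after `budget` sweeps of value iteration (a truncated planner)."""
    v = np.zeros(N_STATES)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(budget):
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                s2 = step(s, a)
                q[s, a] = reward(s2) + GAMMA * v[s2]
        v = q.max(axis=1)
    return q

def policy(budget, temp=0.5):
    """Softmax policy over the truncated Q-values: a bounded-rational agent."""
    q = truncated_q(budget) / temp
    p = np.exp(q - q.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def infer_budget(traces, max_budget=10):
    """Pick the budget under which the observed (state, action) pairs are most likely."""
    def loglik(k):
        p = policy(k)
        return sum(np.log(p[s, a]) for s, a in traces)
    return max(range(1, max_budget + 1), key=loglik)
```

With a small budget, value information hasn't propagated far from the goal, so the fitted agent looks near-random in distant states and competent near the reward, which is the kind of systematic suboptimality the method is meant to capture.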
In a new paper, the researchers demonstrate how their method can be used to infer someone's navigation goals from prior routes and to predict players' subsequent moves in chess matches. Their technique matches or outperforms another popular method for modeling this type of decision-making.
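Continuing the hypothetical sketch above, the fitted budget doubles as a predictive model of the agent's next moves, loosely mirroring how the paper predicts chess players' subsequent moves from past games. The traces here are made up for illustration, not data from the paper.

```python
# Usage of the sketch above: an agent that wanders far from the goal but acts
# well near it is best explained by a small budget, and the fitted policy
# then predicts its future action distribution.
traces = [(0, 0), (1, 0), (2, 1), (3, 1)]   # observed (state, action) pairs
k = infer_budget(traces)
print("inferred budget:", k)
print("predicted P(actions | state 1):", policy(k)[1])
```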
Ultimately, this work could help scientists teach AI systems how humans behave, which could enable these systems to respond better to their human collaborators. Being able to understand a human's behavior, and then to infer their goals from that behavior, could make an AI assistant much more useful, says Athul Paul Jacob, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

"If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human," he says.

Jacob wrote the paper with Abhishek Gupta, assistant professor at the University of Washington, and senior author Jacob Andreas, associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.