Standardizing the Evaluation of Digital Managers for Better Interactive Experiences
As Artificial Intelligence (AI) technologies become increasingly integrated into society (e.g., in digital assistants, games, or self-driving cars), it grows ever more important to understand how we can ensure that people’s experiences with them meet desirable criteria. In Experience Management, researchers and practitioners seek to create automated systems called “experience managers,” which work to shape people’s experiences in designer-specified ways. Unfortunately, the evaluation and comparison of experience managers is currently impaired by the lack of both a common platform on which to evaluate them and a common language for describing experience management tasks. Experience in other fields, such as General Game Playing and AI Planning, has shown that a common platform can stimulate and accelerate research progress. The goal of this project is thus to develop a platform for evaluating and comparing arbitrary experience managers on a wide variety of tasks, thereby accelerating progress in Experience Management research. We plan to do this by studying existing experience managers, developing a common platform capable of supporting them, and promoting that platform to the research community as a tool for their work. Throughout, we will draw lessons from General Game Playing, aiming to reuse some of its existing infrastructure and to stimulate new exchanges of ideas between the two fields.