Keynote Speakers

Dr. Roy Want
Research Scientist, Google, USA
The Web of Things
In a world of billions of Internet-connected smart devices, preferentially discovering nearby things and enabling easy user interaction creates a powerful filter that helps users overcome the scale and complexity of this global system. Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things (IoT) will allow anyone with a mobile device, such as a smartphone, to walk up and, with the appropriate authorization, monitor or control anything.
Roy Want received his doctorate from Cambridge University, England, in 1988, and is currently a Research Scientist at Google. His previous positions include Senior Principal Engineer at Intel Corporation and Principal Scientist at Xerox PARC. He is a Fellow of both the ACM and the IEEE. His research interests include mobile and ubiquitous computing, distributed systems, context-aware applications, and electronic identification. He has more than 25 years' experience in the field of mobile computing. He served as Editor-in-Chief of IEEE Pervasive Computing from 2006 to 2009, and he is currently a member of the ACM SIGMOBILE executive committee in the role of Past Chair. To date, he has authored or co-authored more than 75 publications and holds 80 issued patents in this area.

Prof. Dr. Elisabeth André
Professor of Computer Science, Augsburg University, Germany
Towards the Creation of Empathic Experiences in Intelligent Environments
Societal challenges, such as assisted living for elderly people, create a high demand for intelligent environments that dynamically adapt to users' needs and preferences. Traditionally, users' situational context and activities have been analyzed to create appropriate system responses. In my talk, I will outline the vision of empathic environments that also consider more subtle cues, such as head movements or body postures, to infer information about users' emotional and attentive states and respond to them accordingly. I will show how recent advances in the recognition, modelling and generation of human behavioral cues may be exploited to create empathic experiences in intelligent environments. The talk will be illustrated by a number of applications developed in various national and international projects. In the CARE project, we are exploring the idea of a sentient context-aware recommender system designed to stimulate and activate users during their daily lives. In the GLASSISTANT project, we use Augmented Reality techniques to create an empathic environment for elderly people and help them cope with negative emotional states, such as anxiety and stress.
Professor Elisabeth André is a Full Professor of Computer Science at Augsburg University and Chair of the Research Unit Human-Centered Multimedia. She received her Diploma and Doctoral Degrees in Computer Science from Saarland University. Elisabeth André has a long track record in multimodal human-machine interaction, embodied conversational agents, affective computing and social signal processing. She is on the editorial boards of various renowned international journals, such as the Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS), IEEE Transactions on Affective Computing (TAC), ACM Transactions on Interactive Intelligent Systems (TIIS), and AI Communications. In 2007, Elisabeth André was nominated a Fellow of the Alcatel-Lucent Foundation for Communications Research. In 2010, she was elected a member of the prestigious German Academy of Sciences Leopoldina, the Academy of Europe and AcademiaNet. She is also an ECCAI Fellow (European Coordinating Committee for Artificial Intelligence).