Research

The i3 research unit focuses on designing interaction modalities for cutting-edge IT, using a highly interdisciplinary approach that draws on the social sciences. The research objectives are both technological, that is, developing innovative technologies, and social, that is, investigating how people use and benefit from these technologies.

Co-design IT with seniors
The team is involved in a number of research activities related to active aging, Ambient Assisted Living (AAL) and e-Inclusion, and has gained extensive experience in user requirement elicitation and in the design and evaluation of IT targeting the older population.

This line of research focuses on analysing patterns of social interaction on wikis, mainly Wikipedia. We have studied social networks and interactions between contributors in Wikipedia; see the papers "Social networks of Wikipedia" and "Digital libraries and social Web: Insights from Wikipedia users’ activities".
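
As an illustration of the kind of analysis involved, the sketch below builds a small contributor co-editing network from hypothetical revision data. The article titles, editor names and the NetworkX-based approach are assumptions made for the example, not the methodology of the cited papers.

```python
# Illustrative sketch: a contributor co-editing network from revision data.
# The revision list below is made up, not real Wikipedia data.
from itertools import combinations
import networkx as nx

# Hypothetical (article, contributor) pairs, e.g. parsed from page histories.
revisions = [
    ("Alan Turing", "editorA"),
    ("Alan Turing", "editorB"),
    ("Enigma machine", "editorB"),
    ("Enigma machine", "editorC"),
    ("Alan Turing", "editorC"),
]

# Group contributors by the article they edited.
editors_by_article = {}
for article, editor in revisions:
    editors_by_article.setdefault(article, set()).add(editor)

# Two contributors are linked if they edited the same article;
# the edge weight counts how many articles they share.
G = nx.Graph()
for editors in editors_by_article.values():
    for u, v in combinations(sorted(editors), 2):
        weight = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=weight)

# Simple centrality measure over the resulting interaction graph.
print(nx.degree_centrality(G))
```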

Sensing and reacting to users' interest

Public display systems are a promising technology for public and semi-public spaces. Since 2010, i3 has devoted part of its research to investigating how people engage with digital displays in public spaces and to designing novel interfaces and interaction techniques.

In the field of special education, paper is often praised as a versatile tool for mediating activities because of its readiness and flexibility.
The “Magic Lamp” looks like a standard desk lamp but can “see” the sheet of paper on the table.
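
As an illustration only, and not the Magic Lamp's actual vision pipeline, the sketch below shows one common way a camera-equipped lamp could locate a sheet of paper on a desk, using OpenCV contour detection on a hypothetical webcam frame.

```python
# Illustrative sketch of camera-based paper detection (assumed approach,
# not the Magic Lamp's implementation).
import cv2

def find_paper_quad(frame):
    """Return the 4-corner contour of the largest bright rectangular region, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Bright, roughly uniform paper stands out against a darker desk surface.
    _, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:  # a quadrilateral: likely the sheet of paper
            return approx
    return None

# Usage (assuming a webcam at index 0):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# quad = find_paper_quad(frame) if ok else None
```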

familink: a mobile service to connect families

Within the Mobile Territorial Lab project, the i3 Research Unit's goal is to carry out research on personal-data-driven mobile services and to investigate how personal "big" data can be exploited to create services that enhance individual awareness and empower communities.

This research activity aims to design and develop effective learning tools based on the analysis of users' behaviour and their interactions with the systems, together with personalized functions that better meet individual preferences and needs.

Personalization technology can affect various aspects of the interaction with users, for example: the information selected for presentation; the organization of the overall presentation; the media used to interact with users; the interaction modalities; the flow of the human-system dialogue; and individual versus group adaptation.
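
A minimal sketch of how some of these adaptation dimensions could be represented is shown below. The UserModel fields, the select_items helper and the sample catalog are illustrative assumptions, not part of an existing i3 system.

```python
# Illustrative user model covering a few adaptation dimensions named above:
# content selection, preferred media, interaction modality, individual vs. group.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class UserModel:
    interests: Set[str] = field(default_factory=set)  # drives content selection
    preferred_media: str = "text"                      # e.g. "text", "audio", "video"
    modality: str = "touch"                            # e.g. "touch", "voice"
    group_profile: Optional[str] = None                # individual vs. group adaptation

def select_items(catalog, user):
    """Keep only items matching the user's interests, rendered in their preferred media."""
    return [
        {"title": item["title"], "media": user.preferred_media}
        for item in catalog
        if item["topic"] in user.interests
    ]

catalog = [
    {"title": "Local news digest", "topic": "news"},
    {"title": "Gardening tips", "topic": "hobby"},
]
user = UserModel(interests={"news"}, preferred_media="audio")
print(select_items(catalog, user))  # -> [{'title': 'Local news digest', 'media': 'audio'}]
```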

The goal of this activity is to model human behavior in different settings and to build systems able to understand human behavior from simple features extracted from acoustic and visual scene analysis. This may open new possibilities for building a next generation of context-aware systems, ranging from well-being monitoring in domotic systems to automatic coaching in teamwork. Yet it may also raise new issues, such as users' acceptance of being monitored. From a human-centred perspective, it may be asked how the system's capabilities, i.e., its perceptual bandwidth, affect the user experience. Alongside the technical issues of building components able to detect behavior in complex environments, we also focus on the usability and acceptability of services based on multimodal monitoring of users.
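
As a toy illustration of inferring behavior from such simple scene features, the sketch below trains a standard classifier on synthetic audio-energy, motion and people-count values. The feature set, the labels and the scikit-learn choice are assumptions for the example, not the unit's actual models or data.

```python
# Illustrative sketch: coarse activity labels from simple acoustic/visual features.
# All feature values and labels below are synthetic placeholders.
from sklearn.ensemble import RandomForestClassifier

# Each row: [audio_energy, motion_level, people_count]
X = [
    [0.05, 0.10, 1],   # quiet, little motion, one person
    [0.80, 0.60, 4],   # loud, lots of motion, several people
    [0.10, 0.05, 0],   # near-empty room
    [0.70, 0.50, 3],
]
y = ["focused_work", "group_discussion", "idle", "group_discussion"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.75, 0.55, 5]]))  # likely "group_discussion"
```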