Trustworthy HCI research lab


The research lab focuses on applying a human-centric approach to establish a new paradigm of trust as a quality of user experience.

Our goal

Our goal is to connect academia and civil society to study and embrace trustworthiness as a key quality for fostering the uptake and adoption of modern technologies (e.g. AI-enabled systems).

Tools and equipment

The laboratory includes several tools for designing and evaluating trustworthy technologies.

→ Assessing Trustworthy AI

The laboratory is an affiliated lab of the Z-Inspection® Initiative, whose goal is to offer a co-design, self-assessment, or auditing method to assess the risks of using AI in a given context.

The method is a validated solution published in IEEE Transactions on Technology and Society.
Z-Inspection® is distributed under the terms and conditions of the Creative Commons (Attribution-NonCommercial-ShareAlike CC BY-NC-SA) license.

→ Evaluating Users' Trust Experiences

The lab provides a robust, statistically validated psychometric scale, the Human-Computer Trust Scale (HCTS), that assesses human-computer trust perceptions. The scale is the culmination of an ongoing process initiated in 2006 whose aim is to capture a rich set of multi-disciplinary notions from the social and cognitive sciences. The scale validation results are available in the following journals: Behaviour & Information Technology; Technology, Innovation and Education; Human Behavior and Emerging Technologies; and JMIR Human Factors. The HCTS is also available online at https://www.trustux.org/
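
As a rough illustration of how a multi-dimensional Likert instrument of this kind is typically scored, here is a minimal sketch; the dimension names, item assignments, and reverse-coded items are hypothetical placeholders, not the published HCTS items:

```python
# Minimal sketch of scoring a multi-dimensional Likert scale.
# The dimensions and item assignments below are hypothetical,
# not the published HCTS instrument.
from statistics import mean

LIKERT_MAX = 5  # responses on a 1-5 agreement scale

# Hypothetical mapping: dimension -> (item ids, reverse-coded item ids)
DIMENSIONS = {
    "benevolence": (["q1", "q2", "q3"], {"q3"}),
    "competence":  (["q4", "q5", "q6"], set()),
    "risk":        (["q7", "q8", "q9"], {"q7", "q8"}),
}

def score(responses: dict[str, int]) -> dict[str, float]:
    """Average the item responses per dimension, flipping reverse-coded items."""
    scores = {}
    for dim, (items, reversed_items) in DIMENSIONS.items():
        values = [
            LIKERT_MAX + 1 - responses[item] if item in reversed_items
            else responses[item]
            for item in items
        ]
        scores[dim] = mean(values)
    return scores

if __name__ == "__main__":
    answers = {"q1": 4, "q2": 5, "q3": 2, "q4": 4, "q5": 3,
               "q6": 4, "q7": 2, "q8": 1, "q9": 3}
    print(score(answers))
```

Averaging per dimension rather than over all items keeps the sub-constructs separate, which is what makes such a scale useful as a diagnostic instrument rather than a single trust number.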

→ Psychophysiological Models to Assess Users' Trust in Real Time

The lab provides empirical guidelines for developing psychophysiological models that assess users' trust in real time. The guidelines are published in the journal Multimodal Technologies and Interaction, the Proceedings of the ACM on Human-Computer Interaction, and the International Conference on Human System Interaction (HSI).
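
As a hedged sketch of the general technique behind such models (windowed physiological features feeding a classifier), the following trains on synthetic data; the feature set, labels, and model choice are illustrative assumptions, not the lab's published guidelines:

```python
# Illustrative sketch: classifying trust states from windowed
# physiological features. Synthetic data stands in for real sensor
# streams; features, labels, and model are assumptions, not the
# lab's published models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: features from one time window, e.g.
# [mean heart rate, heart-rate variability, electrodermal activity]
n_windows = 400
X = rng.normal(size=(n_windows, 3))

# Synthetic ground truth: 1 = "trusting" window, 0 = "distrusting"
y = (0.8 * X[:, 1] - 0.6 * X[:, 2]
     + rng.normal(scale=0.5, size=n_windows) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# "Real time" then means scoring each incoming window as it arrives:
new_window = rng.normal(size=(1, 3))
print("trust probability:", model.predict_proba(new_window)[0, 1])
```

In practice, the published guidelines address exactly what this sketch glosses over: which signals to record, how to window and label them, and how to validate models across participants.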

→ Human-Centered Trust Framework

The Human-Centered Trust Framework aims to provide HCI approaches that use trust as a facilitator for the uptake (or appropriation) of current technologies. The proposed design framework guides non-experts in unlocking the full potential of user trust in AI design. It can also guide AI system designers in developing prototypes and operationalising solutions that meet user trust requirements. The framework is available as an arXiv preprint.

Associated Projects

  • AI-Mind: https://www.ai-mind.eu/
  • USA Project: TrustedID - AFOSR Trust & Influence Program
  • MARTINI Project - CHIST-ERA
  • Trust UX - NGI-Trust

Communication events

First World Z-Inspection® Conference

Ateneo Veneto, March 10-11, 2023, Venice, Italy

The interdisciplinary meeting welcomed over 60 international scientists and experts from AI, ethics, and human rights, and from domains such as healthcare, ecology, business, and law.

At the conference, the practical use of the Z-Inspection® process to assess trustworthy AI in real use cases was presented. Among them:
– The pilot project “Assessment for Responsible Artificial Intelligence”, together with Rijks ICT Gilde – part of the Ministry of the Interior and Kingdom Relations (BZK) – and the province of Fryslân (The Netherlands);
– The assessment of the use of AI in times of COVID-19 at the Brescia Public Hospital (“ASST Spedali Civili di Brescia”).

Two panel discussions, on “Human Rights and Trustworthy AI” and “How do we trust AI?”, provided an interdisciplinary view on the relevance of data and AI ethics in the human rights and business contexts.
The main message of the conference was the need for a Mindful Use of AI (#MUAI).
This first World Z-Inspection® Conference was held in cooperation with the Global Campus of Human Rights and Venice Urban Lab, and was supported by Arcada University of Applied Sciences, Merck, Roche, and Zurich Insurance Company.


DOWNLOAD CONFERENCE READER
Link to the video: Conference Impressions