Artificial Intelligence (AI) refers to a machine's ability to display and mimic human-like intelligence (i.e., cognitive, affective, and physical), and it usually takes the form of an algorithm, program, piece of software, or system. Its rapid growth over the past two decades means that AI increasingly shapes our daily lives, whether we are aware of it or not.

Businesses and individuals alike rely on these technologies for both small and large-scale decisions. Yet, even though these technologies are often touted as more accurate and efficient, several studies show that people remain reluctant to use AI in many domains and often do not trust its recommendations.

One reason is that these technologies are often "black boxes": the underlying mechanism by which they reach a decision cannot be explained or understood (Goodman and Flaxman, 2017). This lack of transparency is one of the biggest drivers of aversion toward adopting AI. It is particularly difficult, in the public sector and other high-stakes domains, to rely fully on models that cannot explain their internal workings (Bathaee, 2018).

This challenge has sparked a surge of interest among both academics and practitioners in understanding how these black-box systems work, leading to the creation of "explainable AI" (XAI) (Rudin, 2019; Rudin and Radin, 2019). The goal of XAI is to explain or demonstrate how a system arrived at a given output or decision, in a manner that is comprehensible both to its developers and to the consumers who use it.
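To make the idea concrete, here is a minimal, purely illustrative sketch (not drawn from our study) of what such an explanation can look like in practice: a toy loan-approval model whose output is accompanied by the signed contribution of each input feature. The model, feature names, and data are invented for illustration, and the example assumes NumPy and scikit-learn are available.

```python
# Illustrative sketch only: a toy loan-approval model whose decision is made
# more transparent by reporting each feature's signed contribution.
# Feature names and data are invented; assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

applicant = np.array([[0.2, 1.5, -0.3]])
decision = model.predict(applicant)[0]

# "Explanation": each feature's signed contribution to the decision score.
contributions = model.coef_[0] * applicant[0]
print("approved" if decision else "rejected")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```

Instead of only seeing "approved" or "rejected", the consumer also sees which inputs pushed the decision in which direction, which is one simple way of opening the black box.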

Transparency – the ultimate solution?

This movement demonstrates that transparency in AI decision-making is an important step towards accountability for all stakeholders. It also raises questions about how AI transparency and explanation effectiveness should be defined and measured. Much of the current research on algorithmic transparency focuses on the specifics of the explanation (e.g., its type, how much information it contains, whether it is textual or visual). However, one notable gap remains: we do not know when transparency matters for consumers.

We address this question, together with Bruno Kocher and Andrea Bonezzi, in an ongoing research project titled "Opening the Black Box: When Does Transparency in AI Matter for Consumers?". In this project, we examine the contexts in which transparency in AI is useful and beneficial for consumers as end users.

More specifically, we look at the impact of transparency depending on whether the outcome is a prediction (where the AI makes the decision, leaving little to no autonomy to the consumer) or a recommendation (where the consumer retains more autonomy as the final decision maker). Our results seem to indicate that when the AI decides for the consumer (e.g., a loan application), transparency increases usage intention when the outcome is negative (e.g., being rejected for the loan) but not when it is positive (e.g., being accepted for the loan).

Interestingly, we found the opposite when the AI provides recommendations to consumers. Transparency increased usage intention when the recommendation led to a positive outcome (e.g., the recommended selling price resulted in a monetary gain) but decreased it when the recommendation led to a negative outcome (e.g., the recommended selling price resulted in a monetary loss). We are conducting further studies to understand the mechanism behind these findings.

In addition, we wanted to know whether there is such a thing as too much transparency. We found that providing more information (i.e., increasing transparency) increases consumers' usage intention, but the additional benefit is marginal. Finally, we do not observe algorithm aversion in our results (previous studies found that consumers prefer humans over AI): transparency increased usage intention regardless of whether the decision was delivered by a human or an AI agent.

By understanding the psychological processes by which consumers evaluate algorithm-provided explanations, we hope to help firms design and deploy XAI more effectively and help policymakers create AI regulations that take consumer preferences and needs into account.

Author(s) of this contribution:

Niña Sayson

Doctoral assistant at the Institut de Management de l'Université de Neuchâtel, interested in the impact of artificial intelligence (AI) and technology on well-being. Her research focuses on the impact of algorithmic transparency and on how non-human intelligent agents (e.g., AI agents) influence consumer and societal well-being.