Cindy Candrian, Anne Scherer
Volume 134, September 2022, 107308
Artificial Intelligence (AI) is reshaping the world and presents great opportunities for individuals and businesses. To date, most AI systems that are in wide use directly or indirectly operate under the responsibility of humans, who are in control of the analytical process or outcome. However, a growing number of AI systems go beyond acting as human proxies and operate in a truly autonomous manner. These systems are designed and empowered to make their own decisions fueled by the vast amounts of data they receive, analyze, and interpret (ServiceNow, 2020). Such autonomous AI systems have the capability to surpass human intelligence across various industries and business functions, making them a powerful force for competitive advantage (Schrage, 2017). This technological progress creates entirely new opportunities for humans to delegate decisions to algorithms and artificial agents that no longer require human supervision or direction (Goldbach et al., 2019).
In business practice, Management by Algorithm (MBA) is becoming more commonplace, and many predict that the most successful corporations will be those that delegate critical business decisions to smart algorithms (Schrage, 2017). Autonomous AI can determine entire marketing and capex strategies, identify competitors and target segments, personalize products and prices to customers, and customize communications to individual preferences (Huang & Rust, 2021). For example, Renaissance Technologies, along with other investment funds, relies on autonomous algorithms to analyze a situation, author a strategy, and execute it (Schrage, 2017). On an individual level, AI can automate bidding in online auctions (Adomavicius et al., 2009), trading in financial markets (Hendershott et al., 2011), and purchase decisions for customers, as well as automate and augment sales processes and frontline employee tasks (Grewal et al., 2020). Reliance on such new technologies can affect users' judgments and decisions, influence the magnitude of behavioral biases (Dowling et al., 2020; Herrmann et al., 2015), and thus substantially change and even improve decision making, business strategies (Davenport et al., 2020), and market outcomes (Herrmann et al., 2015). Given this vast applicability and huge potential, some claim that organizations need to clarify when talented humans must defer to algorithmic judgment and delegate decisions to (autonomous) AI (Schrage, 2017).
Although decision delegation pervades many areas of people's private and professional lives, empirical evidence suggests that the choice to delegate and transfer power to others is not fully rational and often suboptimal. Investors often under-delegate to more skilled financial agents (Owens et al., 2014), and executives tend to micromanage subordinates even when those subordinates have more specialized skills (Bartling et al., 2014). The result is inefficient delegation choices, over-monitoring, and under-delegation, which can lead to poor(er) decisions and thus harmful economic consequences.
Given the evolution of computational autonomy and the fact that AI systems increasingly outperform humans in a variety of tasks, it is important to understand how prior findings on decision delegation apply when the counterparty is an artificial rather than a human agent, or, put differently, whether non-social agents change or even improve people's delegation choices. Yet contemporary AI research has mainly focused on participative decision making and the determinants of individuals' reliance on AI advice, such as its design (e.g., Dietvorst et al., 2018) or the context in which it is applied (e.g., Castelo et al., 2019). As AI increasingly takes on the role of an autonomous decision maker, research on delegation behavior, in which individuals transfer their decision-making power and fully entrust AI to make decisions on their behalf (Goldbach et al., 2019), is largely missing. Additionally, while past and current AI research underlines the contextuality of people's trust in AI, it has largely limited its view to the task (e.g., subjective vs. objective, Castelo et al., 2019) or domain (e.g., medical services, Longoni et al., 2019). From the field of delegation research, however, we know that the choice to delegate hinges largely on the potential decision outcome itself. It is therefore important to understand not only whether users are more willing to cede control to AI than to human agents, but also why and how delegation varies for decisions that spur a high or a low desire to maintain control, and thus to retain decision-making power.
The present research aims to close these gaps. In particular, we investigate whether people are (more) willing to delegate decisions to AI agents than to human agents. Moreover, we examine whether people's choice to delegate a decision to a human or an AI agent differs across two decision contexts: (1) decisions that spur a high desire for control because they could lead to potential losses and, conversely, (2) decisions that could lead to gains and thus entail a lower desire for control. In doing so, we are the first to link the literature on decision delegation, social risk, and the control premium to the increasingly important research on trust in AI.
Our findings offer valuable insights for businesses as to the hows and whys of delegation in new situations where individuals can delegate their decisions to autonomous AI systems. In order to leverage this powerful force for competitive advantage, it will be important for business leaders to understand managers' willingness to entrust AI to make decisions on their behalf. We find that people are willing to forgo potential economic benefits in order to maintain control over decisions in interactions with human agents, but not with AI agents. Since the economic environment is not static, it is important to understand not only whether people are more willing to cede control to AI agents, but also how willingness to delegate varies in response to the decision context (i.e., losses vs. gains). Our results show that the difference in delegation rates for human versus AI agents increases when facing potential losses, or, put differently, that AI agents can reduce the under-delegation that is often most pronounced when encountering a loss and when circumstances spur a high desire for control. These findings suggest that managers can introduce AI agents strategically to avoid under-delegation and ease decision making, especially in more aversive situations such as economic downturns, risk management, or projects facing declining economic prospects.
Moreover, our findings suggest that to persuade users to cede control to a human agent rather than an AI agent, the human agent must offer higher performance accuracy, a lower fee, or both. Given the same performance accuracy, users' willingness to pay is higher for AI agents, especially in the domain of losses. Considering the higher cost of human labor, these findings are critical for managers. On the other hand, our studies also show that delegation to human agents can be increased by offering operational transparency, thereby reducing the social risk and felt uncertainty of delegating to a social agent. Instead of focusing on the explicability of AI, practitioners might do better to explain the decision logic of human agents more clearly so as to reduce perceptions of potentially malign intentions and thereby increase willingness to delegate to such agents.
Another related challenge is the amount of procedural transparency users require when interacting with AI. With the increasing advancement of these systems and large amounts of training data, systems not only become more accurate but also less explainable (Chui et al., 2018; Huang & Rust, 2021). The present research demonstrates that while designing for more accurate AI systems (higher expected value of delegation) increases delegation rates, designing for operational transparency does not affect delegation rates. Put differently, users' willingness to pay goes up for more accurate AI systems but not for AI systems that simply offer higher process transparency. Rather than communicating the operational process of AI systems, managers should thus focus on clearly conveying their performance and accuracy.
Ultimately, then, being capable of leveraging the potential provided by AI is likely to become a key value driver for organizational efficiency and effectiveness. Understanding users' willingness to transfer decision-making authority to AI has potentially significant implications across many domains in both private and public life. Designers of AI systems, internal users within an organization, and firms and managers can all greatly benefit from understanding when and why people delegate to AI.