Conference Agenda

Session 3.17: Human preferences and frequency of interaction with algorithmic advisers
Time: Wednesday, 27/Aug/2025, 11:30am - 12:00pm

Location: Meeting hall “Mikado” (accommodates up to 50 people)

Presentations

Human preferences and frequency of interaction with algorithmic advisers

Prof. Elena Asparouhova (1), Prof. Milo Bianchi (2), Prof. Debrah Meloso (3)

(1) University of Utah, USA; (2) Toulouse School of Economics, France; (3) TBS Business School, France

Most human decisions are taken intuitively, with a mix of reflection and emotions that is often impossible to disentangle. Economists model decisions explicitly as a mixture of objective and subjective elements: economic agents objectively (mathematically) optimize a subjective value function. Using this model, one can create situations where choice is independent of subjective value and demonstrate that humans often fail at the objective part of decision making. Algorithmic advisers can thus help humans, as machines never fail at objective optimization.

However, since decision optimality depends both on correct optimization and on knowledge of the right subjective value function, machines that disregard the tastes or “preferences” of the human on whose behalf they act will make poor decisions. The performance of algorithmic advisers is thus crucially affected by the machine’s ability to learn a particular human’s preferences. But will a human do better at communicating their preferences to a machine than at making decisions themselves? We know humans fail at common tasks such as deciding what to consume or invest in, but will they be less error-prone at the even less natural task of communicating their preferences?

In the controlled environment of the economic laboratory – taken online via a platform that recruits a diverse set of participants (Prolific) – we induce a specific type of risk preference and ask participants to build investment portfolios of a risky and a risk-free asset that maximize this preference, either directly or through a robotic adviser. To induce preferences, we pay participants a fixed transformation of the probability distribution of risky-asset payoffs, the payoff of the risk-free asset, and their chosen holdings of the two assets. Participants therefore do not face true risk: their payoff depends on the entire distribution of payoffs, not only on the realized payoff. By controlling participants’ “risk preferences”, we can assess whether the human-algorithm interaction leads to a correct treatment of the subjective part of decision making. In all experimental treatments, we vary the induced risk preferences over time, so as to see whether participants react to and attempt to communicate these changes.
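As an illustration, the following minimal sketch (in Python, the language of oTree) shows how an induced payoff of this kind could be computed. The exact transformation used in the experiment is not specified in this abstract; a mean-variance objective and all parameter values below are assumed purely for illustration.

# Hypothetical sketch of an induced-preference payoff of the kind described
# above. The actual transformation is not given in this abstract; a
# mean-variance objective is assumed purely for illustration.

def induced_payoff(risky_outcomes, risky_probs, risk_free_payoff,
                   risky_units, risk_free_units, risk_aversion):
    """Pay the participant as a function of the full payoff distribution of
    their chosen portfolio, not of a single realized draw."""
    exp_risky = sum(p * x for p, x in zip(risky_probs, risky_outcomes))
    var_risky = sum(p * (x - exp_risky) ** 2
                    for p, x in zip(risky_probs, risky_outcomes))
    # Portfolio mean and variance; all variance comes from the risky holding.
    mean = risk_free_units * risk_free_payoff + risky_units * exp_risky
    variance = (risky_units ** 2) * var_risky
    # The induced "risk preference": a larger risk_aversion penalizes variance more.
    return mean - risk_aversion * variance


# Example: 6 units of a risky asset paying 0 or 10 with equal probability,
# 4 units of a risk-free asset paying 4, under an induced risk aversion of 0.02.
print(induced_payoff([0, 10], [0.5, 0.5], 4, 6, 4, 0.02))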

We have one treatment where participants choose portfolios on their own and three treatments where participants are advised by algorithms who elicit their human boss’s risk preference via a test (lottery choice). We ask if portfolio choices with or without the algorithmic adviser are better for the preferences we induce. To refine our question, we vary the frequency at which the algorithm elicits risk preferences from humans. This gives us three treatments with an algorithmic adviser, depending on whether the frequency of elicitation is equal, higher, or lower than the frequency at which we change participants’ risk preferences. We ask whether frequent communication allows for better fine-tuning of communicated preferences or, instead, adds noise due to, for example, a biased perception of past algorithm outcomes by the human.
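To make the three adviser treatments concrete, here is a small hypothetical schedule, again in Python. The round counts and frequencies below are placeholders, not the experiment's actual parameters.

# Hypothetical schedule for the three adviser treatments. Induced preferences
# change every CHANGE_EVERY rounds; the adviser re-elicits preferences every
# elicit_every rounds, which is equal to, smaller than, or larger than
# CHANGE_EVERY. All numbers are placeholders, not the experiment's parameters.

CHANGE_EVERY = 4  # rounds between changes in the induced risk preference (assumed)

TREATMENTS = {
    "elicit_as_often":   4,  # elicitation frequency equals the change frequency
    "elicit_more_often": 2,  # elicitation more frequent than preference changes
    "elicit_less_often": 8,  # elicitation less frequent than preference changes
}

def elicitation_rounds(total_rounds, elicit_every):
    """Rounds in which the adviser runs the lottery-choice test."""
    return [r for r in range(1, total_rounds + 1) if r % elicit_every == 0]

for name, elicit_every in TREATMENTS.items():
    print(name, elicitation_rounds(16, elicit_every))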

The experiment, coded in oTree, will be preregistered on the platform AsPredicted and approved by the internal review board (IRB) of the University of Utah.