AI tools could soon shape online decision-making, with companies bidding on predictions of human behavior and motivations in what researchers call the “intention economy.”
Unlike the attention economy, which seeks to win user attention for ads, the intention economy will focus on selling personal intentions, such as plans to buy something or political views, to the highest bidder.
Large language models, such as those behind ChatGPT, could use behavioral and psychological data to predict and nudge users’ decisions, steering actions like purchasing products or booking services.
Key Background:
A new University of Cambridge study reveals how artificial intelligence tools could be used to nudge online decision-making, including purchases, votes, and even social views. The study describes the emergence of an “intention economy,” in which AI systems predict and manipulate human behavior for commercial gain, selling those insights to businesses and advertisers.
The study, conducted by the Leverhulme Centre for the Future of Intelligence (LCFI), envisions a marketplace where companies will bid for accurate predictions of individuals’ intentions. Unlike the current attention economy, which revolves around capturing user attention for advertising, the intention economy focuses on forecasting and shaping future decisions. Researchers warn that, without regulation, this could result in a gold rush for those who aim to influence human intentions for profit.
Historian of technology Dr. Jonnie Penn said that the intention economy could soon replace the present-day attention economy, in which media giants such as Facebook and Instagram have profited handsomely from users’ attention. In this new economy, people’s motivations, from plans to go on holiday to political views, would be up for sale to the highest bidder. The study highlights the dangers this may pose to major democratic processes, such as elections, a free press, and fair market competition, and calls for caution before the unintended consequences of such a system become widespread.
The research underlines the central role of large language models, such as those behind AI tools like ChatGPT, in this emerging market. LLMs could analyze users’ behavioral and psychological data to predict their intentions, and those predictions could then be used to serve tailored advertisements, make personalized suggestions, and even steer users toward specific decisions. For example, if a chatbot detects that a user is stressed or has a certain preference, it might recommend booking a movie ticket, effectively predicting and steering a future action.
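To make that idea concrete, here is a minimal, purely hypothetical sketch of the kind of pipeline described above: conversational signals are scored against a set of candidate intentions, and the top-scoring intention is mapped to a nudge. The names (`CandidateIntent`, `score_intents`, `pick_nudge`) and the keyword lists are illustrative assumptions, not anything taken from the study or from any real product; a real system would rely on a language model rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a toy keyword-based "intent scorer" standing in
# for the far more sophisticated inference a large language model could perform.

@dataclass
class CandidateIntent:
    name: str            # e.g. "book_cinema_ticket"
    keywords: list[str]  # conversational signals assumed to hint at this intent
    nudge: str           # the suggestion an assistant might surface

CANDIDATES = [
    CandidateIntent("book_cinema_ticket",
                    ["stressed", "tired", "unwind", "evening free"],
                    "You mentioned feeling stretched; want me to book a cinema ticket tonight?"),
    CandidateIntent("book_flight",
                    ["holiday", "getaway", "time off", "beach"],
                    "I can look up flights for that getaway you mentioned."),
]

def score_intents(utterance: str) -> list[tuple[str, float]]:
    """Score each candidate intent by the fraction of its keywords present."""
    text = utterance.lower()
    scores = []
    for intent in CANDIDATES:
        hits = sum(1 for kw in intent.keywords if kw in text)
        scores.append((intent.name, hits / len(intent.keywords)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

def pick_nudge(utterance: str, threshold: float = 0.25) -> str | None:
    """Return the nudge for the top-scoring intent, if it clears the threshold."""
    name, score = score_intents(utterance)[0]
    if score < threshold:
        return None
    return next(i.nudge for i in CANDIDATES if i.name == name)

if __name__ == "__main__":
    print(pick_nudge("Work has me so stressed, I just want to unwind this evening."))
```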
The study further predicts that AI-driven models will generate personalized ads and refine their recommendations using real-time data and individual user profiles to maximize the impact of such interventions. An AI system might even predict a user’s intent to book a hotel room, flight, or restaurant reservation, with companies bidding on that prediction. Ultimately, the authors suggest that AI’s capacity to nudge users in such highly sophisticated and personalized ways has serious implications for privacy, autonomy, and ethical decision-making in the digital age.
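The bidding dynamic the study warns about can be pictured, again only as an assumption-laden sketch, as a sealed-bid auction over a single predicted intention: advertisers submit bids for the right to act on the prediction, and the highest bidder wins. The `Bid` class and `run_intent_auction` function below are invented for illustration and do not describe any real ad-exchange API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "intention economy" auction the study warns about:
# advertisers bid for the right to act on one predicted user intention.

@dataclass
class Bid:
    advertiser: str
    amount: float  # price offered for exclusive access to the prediction

def run_intent_auction(predicted_intent: str, bids: list[Bid]) -> Bid | None:
    """Award the predicted intention to the highest bidder (first-price, sealed-bid)."""
    if not bids:
        return None
    winner = max(bids, key=lambda b: b.amount)
    print(f"Intent '{predicted_intent}' sold to {winner.advertiser} "
          f"for {winner.amount:.2f}")
    return winner

if __name__ == "__main__":
    run_intent_auction(
        "book_hotel_room",
        [Bid("HotelChainA", 1.40), Bid("TravelAggregatorB", 1.75), Bid("AirlineC", 0.90)],
    )
```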