Economics of Prompting AI

Research output: Working paper › Discussion paper › Other research output

Abstract

How should a principal communicate with an AI system whose internal response function is unknown? We develop a model of robust delegation in which a principal assigns tasks to a large language model (LLM) by choosing how to frame each task, in particular how important to make the task appear. The AI follows a trained policy that maps the principal's framing signal to computational effort, but the principal does not know this mapping precisely. We show that the optimal framing rule is proportional: the principal should inflate the stated importance by a constant factor relative to the task's true importance, and the degree of inflation is a strategic complement to AI capability. An experiment with GPT-4o-mini on 324 math problems confirms the model's predictions: proportional framing achieves the highest accuracy among the alternative strategies tested, and the accuracy gains from importance inflation are larger for more capable models.
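The proportional framing rule from the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names, the prompt template, and the inflation factor of 2.0 are all placeholders (the paper derives the optimal factor from the principal's beliefs about the AI's response function, which is not reproduced here).

```python
def framed_importance(true_importance: float, inflation_factor: float = 2.0) -> float:
    """Importance level to state in the prompt under proportional framing.

    `inflation_factor` is a placeholder constant; in the model it would be
    chosen optimally given the principal's uncertainty about the AI's
    importance-to-effort mapping.
    """
    return inflation_factor * true_importance


def build_prompt(task: str, true_importance: float) -> str:
    """Wrap a task in a framing signal that inflates its stated importance."""
    stated = framed_importance(true_importance)
    return f"Importance: {stated:.1f}. Task: {task}"


# Example: a task of true importance 3.0 is framed as importance 6.0.
print(build_prompt("Solve: 17 * 23", true_importance=3.0))
```

The key property the paper tests is that the stated importance scales linearly with the true importance, rather than, say, being fixed at a maximum or chosen ad hoc per task.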
Original language: English
Place of publication: Tilburg
Publisher: CentER, Center for Economic Research
Pages: 1-30
Volume: 2026-005
Publication status: Published - 30 Mar 2026

Publication series

Name: CentER Discussion Paper
Volume: 2026-005

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 8 - Decent Work and Economic Growth

Keywords

  • AI delegation
  • robust contracting
  • LLMs
  • prompt design
  • mechanism design
