Abstract
How should a principal communicate with an AI system whose internal response function is unknown? We develop a model of robust delegation in which a principal assigns tasks to a large language model (LLM) by choosing how to frame each task, in particular how important to declare it to be. The AI follows a trained policy that maps the principal's framing signal to computational effort, but the principal does not know this mapping precisely. We show that the optimal framing rule is proportional: the principal should inflate the stated importance by a constant factor relative to the task's true importance, and importance inflation is a strategic complement to AI capability. An experiment with GPT-4o-mini on 324 math problems confirms the model's predictions: proportional framing achieves the highest accuracy among the alternative strategies tested, and the accuracy gains from importance inflation are larger for more capable models.
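The proportional framing rule described above can be sketched as a prompt-construction function. This is a minimal illustrative sketch, not the paper's implementation: the function name, the importance scale, the prompt wording, and the inflation factor `k` are all assumptions for exposition.

```python
# Illustrative sketch of a proportional framing rule: the principal
# reports stated importance = k * true importance, where k > 1 is a
# constant inflation factor. All names, scales, and wording here are
# hypothetical, not taken from the paper.

def frame_task(problem: str, true_importance: float, k: float = 2.0) -> str:
    """Build a prompt whose stated importance inflates the true
    importance by the constant factor k."""
    stated = k * true_importance
    return (
        f"This task has importance {stated:.1f} out of 10.\n"
        f"Please solve it carefully: {problem}"
    )

# Example: a task of true importance 3.0 is framed as importance 6.0.
prompt = frame_task("Compute 17 * 23.", true_importance=3.0, k=2.0)
```

The key property is that the mapping from true to stated importance is linear with a fixed slope, so the ranking of tasks by importance is preserved even though every task's importance is overstated.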
| Original language | English |
|---|---|
| Place of Publication | Tilburg |
| Publisher | CentER, Center for Economic Research |
| Pages | 1-30 |
| Volume | 2026-005 |
| Publication status | Published - 30 Mar 2026 |
Publication series
| Name | CentER Discussion Paper |
|---|---|
| Volume | 2026-005 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 8 Decent Work and Economic Growth
Keywords
- AI delegation
- robust contracting
- LLMs
- prompt design
- mechanism design
Fingerprint
Research topics of 'Economics of Prompting AI'.