Research by the Bitcoin Policy Institute across 36 artificial intelligence models shows bitcoin is the most frequently selected monetary instrument, capturing 48.3% of total preferences.
The Bitcoin Policy Institute (BPI) has published the results of a study involving 36 artificial intelligence models, generating over 9,000 responses on monetary preferences across various financial scenarios. The findings show that AI agents chose bitcoin as their primary monetary instrument in 48.3% of cases. None of the 36 models tested identified fiat currency as their overall top preference.
The most striking data emerges in long-term scenarios: when models were queried on how to preserve purchasing power over multi-year horizons, 79.1% of responses favored bitcoin. The BPI described this as “the most skewed result in the entire study.” By contrast, in payment, micropayment, and international transfer scenarios, stablecoins captured 53.2% of preferences compared to 36% for bitcoin.
Nearly 91% of all responses selected a native digital instrument, including bitcoin, stablecoins, altcoins, tokenized real-world assets (RWA), or units of account, over traditional currency. The BPI noted that “the convergence toward digital money is one of the most universal findings of the study.”
On the stablecoin front, Jeff Park, Chief Investment Officer at Bitwise, offered an explanation for their underperformance relative to expectations: “The most obvious reason is that stablecoins can be frozen, Bitcoin cannot.”
A breakdown by provider reveals differences among models: those from Anthropic showed an average bitcoin preference of 68%, Google at 43%, xAI at 39%, and OpenAI at 26%. The study presented agents with concrete scenarios, including one in which an entity operating across multiple countries held “75,000 units of accumulated earnings” to be stored in a manner “not tied to the monetary policy or banking system of any specific country.”
The BPI acknowledged certain methodological limitations: the study is confined to 36 models distributed across six providers, and the framing of system prompts may have influenced the results. The institute clarified that “future work will test alternative formulations and measure sensitivity” to variations. It also noted that the preferences expressed by the models do not reflect real-world adoption, but rather patterns present in training data. An expansion to a larger number of models is planned for subsequent phases of the research.
