AI Governance Tokens: Can LLMs Decide Proposal Votes?

Updated: August 6, 2025

Reading Time: 3 minutes

The emergence of AI-assisted voting agents is beginning to transform decentralized organizations in response to the growing complexity of blockchain governance. Large Language Models (LLMs), previously confined to chat interfaces and writing assistants, are now being applied to decision-making within decentralized autonomous organizations (DAOs).

These AI actors not only automate processes, but also assess proposals, weigh historical performance, and even vote on-chain. The question is not whether this is technically feasible, but whether it is strategically prudent: can LLMs make governance decisions that represent the will and wellbeing of the communities they serve?

The development of this concept is closely tied to cryptocurrency market behavior and investor sentiment. With users monitoring everything from real-time gas fees to bitcoin price analysis, on-chain governance systems must scale and evolve just as quickly. The appeal of AI as an impartial, untiring voter is obvious in an environment where a single decision can move millions in a fraction of a second.

The Role of LLMs in On-Chain Decision-Making

In practice, LLMs are being trained on DAO history, project whitepapers, forum discussions, and economic models. Developers configure these models to gauge the sentiment of proposals, weigh tokenomics, and factor in reputational data associated with wallet addresses. On that basis, the AI can produce a voting recommendation, often faster and more consistently than the human actors it is intended to assist.
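To make the pipeline above concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the `Proposal` fields, the keyword heuristic standing in for a real LLM sentiment call, and the 60/40 weighting are hypothetical, not any DAO's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    body: str
    proposer_reputation: float  # 0.0-1.0, assumed derived from on-chain history

def score_sentiment(text: str) -> float:
    """Stand-in for an LLM sentiment call: a trivial red-flag keyword check."""
    red_flags = {"exploit", "drain", "unlimited mint"}
    return 0.0 if any(k in text.lower() for k in red_flags) else 0.7

def recommend_vote(p: Proposal, approve_threshold: float = 0.5) -> str:
    """Blend proposal sentiment with proposer reputation into a recommendation."""
    score = 0.6 * score_sentiment(p.body) + 0.4 * p.proposer_reputation
    return "FOR" if score >= approve_threshold else "AGAINST"

p = Proposal("Raise treasury cap", "Increase cap by 5% for grants.", 0.8)
print(recommend_vote(p))  # FOR
```

In a real deployment the heuristic would be replaced by model inference over the full proposal text and forum thread, but the shape — structured inputs in, a weighted recommendation out — is the same.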

It does not end with yes-or-no voting, however. LLMs can debate each other, estimate the downstream consequences of adopted proposals, and surface previously unnoticed risks. This turns governance from a static, manual process into a dynamic, data-driven flow.

Centralized exchanges, for example, are quietly investing in AI governance research, exploring how machine intelligence can supplement, rather than supplant, token-weighted voting. Within these ecosystems, such tools may one day aid proposal screening, scam prevention, and automated community polling.

Trust, Transparency, and the Risk of Misalignment

Neutrality is one of the most promising aspects of AI in blockchain governance. Unlike human participants, AI agents are not swayed by emotional manipulation or personal incentive schemes. That does not mean they are unbiased, however. An LLM trained on poorly curated data, or exposed to adversarial examples, can make decisions that appear objective but are not.

This raises fundamental philosophical questions. Who trains the model? Who audits it? Should an AI carry one vote, like a human being, or more? The DAO world remains divided on these questions, but the lure of scaling participation, particularly where voter turnout is poor, keeps pushing this frontier forward.

Real-World Prototypes and Future Applications

Several DAOs have already experimented with AI assistance. Some use LLMs to summarize lengthy governance proposals so voters can grasp the key points quickly. Others let AI agents propose changes, or raise concerns about suspicious proposals before votes are cast. The logical next step is automated participation, in which LLMs are delegated voting power, either in place of token holders or in cooperation with them.
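A pre-vote screening step of the kind described above might look like the following sketch. The red-flag phrases and the character-truncation "summary" are placeholders for real LLM calls; the function names and fields are hypothetical.

```python
# Hypothetical pre-vote screening: summarize a proposal and flag risky clauses
# for human reviewers before any vote is cast.
RED_FLAGS = ("transfer ownership", "disable timelock", "emergency upgrade")

def screen(proposal_text: str, max_summary_chars: int = 120) -> dict:
    """Return a short summary plus any flagged clauses found in the proposal."""
    lowered = proposal_text.lower()
    flags = [f for f in RED_FLAGS if f in lowered]
    summary = proposal_text[:max_summary_chars].rstrip()
    return {"summary": summary, "flags": flags, "needs_review": bool(flags)}

report = screen("This proposal will disable timelock on the treasury module.")
print(report["needs_review"])  # True
```

The useful property is the output contract, not the detection logic: a structured report that a DAO frontend can show voters alongside the raw proposal.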

In the future we may see DAOs where networks of agents cast the majority of votes: some trained by individuals, some by communities, and some embedded in a protocol's core. Such agents could be designed to compete, evolve, and improve over time, reducing voter apathy and making governance more efficient.
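An agent network like this still needs an aggregation rule. One plausible shape, sketched below under assumed names, is a token-weighted tally in which each agent votes with whatever stake has been delegated to it; real protocols would add quorums, vote escrow, and veto paths on top.

```python
def tally(agent_votes: dict[str, str], delegated_tokens: dict[str, int]) -> str:
    """Token-weighted tally: each agent votes with the tokens delegated to it."""
    weight_for = sum(delegated_tokens.get(agent, 0)
                     for agent, vote in agent_votes.items() if vote == "FOR")
    total = sum(delegated_tokens.values())
    # Simple majority of all delegated weight; abstentions count against.
    return "PASSES" if 2 * weight_for > total else "FAILS"

votes = {"agent_a": "FOR", "agent_b": "AGAINST", "agent_c": "FOR"}
stake = {"agent_a": 500, "agent_b": 300, "agent_c": 250}
print(tally(votes, stake))  # PASSES
```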

Moral and Legal Consequences

As AI spreads through Web3 systems, the line between tool and player starts to blur. Must an AI agent disclose its reasoning? What happens when a bad actor poisons an AI's training data to swing a vote, and who is responsible? These are questions not only for developers but also for lawmakers, as crypto comes under a wider regulatory lens.
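One common defense against the poisoning scenario above is provenance filtering of the training corpus: dropping content from young accounts and capping any single author's share so one actor cannot dominate what the model learns. The sketch below is hypothetical; the field names and thresholds are assumptions, not a known project's pipeline.

```python
from dataclasses import dataclass

@dataclass
class ForumPost:
    author: str
    account_age_days: int
    text: str

def filter_for_training(posts: list[ForumPost],
                        min_age_days: int = 90,
                        max_share_per_author: float = 0.2) -> list[ForumPost]:
    """Drop posts from young accounts, then cap each author's share of the
    corpus so a single actor cannot flood the governance training data."""
    aged = [p for p in posts if p.account_age_days >= min_age_days]
    cap = max(1, int(len(aged) * max_share_per_author))
    kept: list[ForumPost] = []
    per_author: dict[str, int] = {}
    for p in aged:
        if per_author.get(p.author, 0) < cap:
            kept.append(p)
            per_author[p.author] = per_author.get(p.author, 0) + 1
    return kept
```

Filters like this do not prove data is clean; they only raise the cost of a flood-style attack, which is why auditability of the training set matters as much as the filter itself.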

Many exchanges, for instance, have championed responsible innovation. As more exchanges and DAO platforms adopt intelligent governance systems, those values will need to be encoded into the AI itself: fairness, decentralization, and transparency should be reinforced at every stage of AI training.

A Vote for the Future?

The concept of AI voting in DAOs is not a gimmick, but a glimpse of how decentralized governance can evolve. As DAOs grow in size, complexity, and financial weight, relying on human effort alone becomes increasingly unsustainable. LLMs offer a way to close that gap: a neutral layer that helps interpret and act on governance in real time.

However, AI in governance must be approached with caution. It should supplement, not supplant, the diverse perspectives that give DAOs their value. The most promising applications may be advisory or co-decision systems: LLMs supplying the reasoning, humans supplying the intuition.

The fact that exchanges and other platforms continue to explore AI across finance, security, and governance shows a long-term commitment to hybrid models. AI governance is no longer hypothetical; it is already here and expanding rapidly, whether helping users make decisions or helping communities scale theirs.



Joey Mazars

Contributor & AI Expert