
Coral Protocol’s Roman J. Georgio on Building the Infrastructure for the Internet of AI Agents

Updated: September 4, 2025

Reading Time: 5 minutes

Roman J. Georgio is the CEO and Co-Founder of Coral Protocol, which is redefining how AI agents interact and collaborate. His background in AI innovation includes scaling one of G2’s top 20 fastest-growing startups.

Now, at Coral, he’s spearheading the creation of an infrastructure layer for intelligent agent ecosystems, emphasizing interoperability, graph-based orchestration, privacy, and Web3 integrations such as onchain reputation and agent-to-agent payments. In this interview for AutoGPT, Georgio discusses Coral’s vision and the future of AI agents.

You’ve accrued a wealth of AI experience within a comparatively short space of time, including stints at CAMEL-AI and Eigent AI, where you built multi-agent systems and synthetic data engines for AGI research. How did this career trajectory lead you to Coral Protocol, and what differentiates Coral’s vision for agent workflows from that of incumbent solutions?

My background is more on the founding team side, focused on growth and product. It was a great experience, especially getting to meet and work with a lot of top research and engineering talent, and it was where I met my co-founder, Caelum.

During that time, I learned a lot about what people are actually building with agents and AI, and it became evident that agents were not being reused. Every time someone built a multi-agent system, they did it from scratch, which doesn’t really make sense considering that’s not how most technologies evolve.

An analogy I often give is: if you were building a SaaS product, you wouldn’t build it in binary, right? You’d reuse and leverage other technology already out there. I’d say Coral is less focused on static workflows and more on making something that works in a graph-like way. That’s been reflected in our recent GAIA results.

Scaling the Internet of Agents

Can you describe the key innovations underpinning Coral Protocol and how they address limitations in existing AI agent frameworks, particularly in terms of interoperability?

Sure. We’re looking at this from a few different levels, and there’s a lot to talk about, but I’ll try to keep it to some key highlights.

Firstly, Coral allows any agent, regardless of framework, to communicate with any other agent. There are a few protocols that attempt this, but we’ve built an MCP server with a set of tools that lets agents interact in more of a Slack-like communication style, which we call graph-based orchestration.
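
To make the Slack-like, thread-based idea concrete, here’s a minimal sketch in Python. The Thread and Message types and the mention semantics below are illustrative assumptions for exposition, not Coral’s actual MCP tool API.

```python
# Illustrative sketch only: these types and semantics are assumptions for
# exposition, not Coral Protocol's actual MCP tool API. It models agents
# talking in shared threads (like Slack channels) rather than through
# fixed point-to-point pipes.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str
    mentions: list[str] = field(default_factory=list)

@dataclass
class Thread:
    """A shared conversation that registered agents can post into."""
    name: str
    participants: set[str] = field(default_factory=set)
    messages: list[Message] = field(default_factory=list)

    def send(self, sender: str, text: str, mentions: list[str] | None = None) -> None:
        if sender not in self.participants:
            raise ValueError(f"{sender} has not joined thread {self.name}")
        self.messages.append(Message(sender, text, mentions or []))

    def mentions_for(self, agent: str) -> list[Message]:
        # An agent only reacts to messages that @-mention it, like in Slack.
        return [m for m in self.messages if agent in m.mentions]

# Usage: a planner agent opens a thread and delegates to a specialist.
thread = Thread(name="gaia-task", participants={"planner", "web-searcher"})
thread.send("planner", "Find the paper's publication year.", mentions=["web-searcher"])
print(thread.mentions_for("web-searcher")[0].text)
```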

That leads me to the second point. Coral makes it possible to scale agents without bottlenecks. Instead of one agent being overloaded with responsibility, we distribute tasks. It’s like when you push ChatGPT too far; it doesn’t work well because it’s trying to do everything at once.

Coral’s principle is to split responsibility, and that’s what allowed us to perform so strongly on the GAIA benchmarks. Even small models can compete with large ones when the work is properly divided.
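
As a toy illustration of that split-responsibility principle, the sketch below fans subtasks out to narrow specialists in parallel instead of pushing everything through one generalist. The roles and routing are hypothetical, not Coral Protocol’s design.

```python
# Toy illustration of "split responsibility": a coordinator fans subtasks out
# to narrow specialists in parallel instead of pushing everything through one
# overloaded generalist. Roles and routing are hypothetical, not Coral's design.
from concurrent.futures import ThreadPoolExecutor

def search_agent(query: str) -> str:
    return f"[search results for '{query}']"

def math_agent(query: str) -> str:
    return f"[computed answer for '{query}']"

SPECIALISTS = {"search": search_agent, "math": math_agent}

def coordinator(subtasks: dict[str, tuple[str, str]]) -> dict[str, str]:
    # Each subtask goes to the narrowest capable agent; none of them ever
    # sees the whole problem or carries the whole context.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(SPECIALISTS[kind], query)
                   for name, (kind, query) in subtasks.items()}
        return {name: future.result() for name, future in futures.items()}

print(coordinator({
    "facts": ("search", "population of Reykjavik"),
    "ratio": ("math", "population divided by land area"),
}))
```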

Beyond that, I think Coral handles privacy really well. Because the server deploys each agent, it ensures that only the instance it created can be part of the graph, so there’s no accidental mixing of user data. Sessions are scoped to a single user request, so agents don’t persist outside that.

Privacy keys let agents reference anything that’s meant to persist between sessions, but that’s optional. And deployment into Trusted Execution Environments means we can guarantee privacy and security: agents run with infrastructure-enforced firewalls, and images are locked to the exact requested version.
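
A minimal sketch of what per-request session scoping could look like follows. The class names and fields are assumptions for illustration, not Coral’s implementation.

```python
# Hypothetical sketch of per-request session scoping. Names and fields are
# assumptions for illustration, not Coral Protocol's implementation.
import uuid
from dataclasses import dataclass, field

@dataclass
class Session:
    """All agent instances and state live inside exactly one user request."""
    user_request: str
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    agent_instances: dict[str, str] = field(default_factory=dict)  # name -> instance id

    def deploy(self, agent_name: str) -> str:
        # The server creates the instance, so only instances it minted
        # can ever join this session's graph.
        instance_id = f"{agent_name}:{uuid.uuid4()}"
        self.agent_instances[agent_name] = instance_id
        return instance_id

    def admit(self, instance_id: str) -> bool:
        # Reject anything this session did not itself deploy: an instance
        # from another user's session can never read or write here.
        return instance_id in self.agent_instances.values()

session = Session(user_request="summarize my meeting notes")
mine = session.deploy("summarizer")
assert session.admit(mine)
assert not session.admit("summarizer:id-from-another-session")
```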

Engineering Ethical AI

In recent discussions on AI safety and fairness, you’ve emphasized predictability over scale and well-defined responsibilities for agents. How does Coral Protocol incorporate these principles to mitigate risks like data misuse or black-box models?

I think black-box models are a great example here. Why rely on one huge model that you can’t see inside of, when you could use something like our GAIA setup, where you can observe exactly what the agents are doing and thinking?

On data misuse, it really comes back to what I mentioned above. With Coral, there’s no way for data to accidentally get mixed across frameworks. That’s just not possible given how we’ve built it.

Coral leverages Web3 components such as onchain reputation management and agent-to-agent payments to create a more economically fair and verifiable ecosystem for agents. In your view, what are some other, often overlooked benefits that the convergence of Web3 and AI yields?

I think payments and trust are huge here, exactly as you say. But for us, there are a few other benefits I could go into as well.

For one, it allows more value to be delivered without as much reliance on, and trust in, small unknown companies. Handling payments through Solana escrow means you don’t have to just take our code at face value. With things like Phala or Acurast, you can push trust down to secure hardware instead of the agent operator. And down the line, attestation mechanisms could give us decentralized moderation and verifiable reviews without the usual risks of monopolies or fake feedback.
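
As a rough sketch of why escrow reduces the trust you need to place in any one operator, here is generic escrow logic written in plain Python; it stands in for, and is not, an actual Solana program.

```python
# Generic escrow flow, sketched in Python rather than as a real Solana
# program, to show why neither party has to trust the other up front.
from enum import Enum, auto

class EscrowState(Enum):
    FUNDED = auto()      # buyer's payment is locked, not yet spendable
    RELEASED = auto()    # work verified, funds go to the agent operator
    REFUNDED = auto()    # work failed or timed out, funds return to buyer

class Escrow:
    def __init__(self, buyer: str, operator: str, amount: int):
        self.buyer, self.operator, self.amount = buyer, operator, amount
        self.state = EscrowState.FUNDED

    def release(self, work_verified: bool) -> str:
        # Onchain, this check would be enforced by the program itself, so
        # "taking the code at face value" is replaced by verification.
        if self.state is not EscrowState.FUNDED:
            raise RuntimeError("escrow already settled")
        if work_verified:
            self.state = EscrowState.RELEASED
            return f"pay {self.amount} to {self.operator}"
        self.state = EscrowState.REFUNDED
        return f"refund {self.amount} to {self.buyer}"

print(Escrow("alice", "agent-op", 100).release(work_verified=True))
```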

There are a few other directions we are exploring, as there is a lot of interesting crossover.

Leveling Up Mini-Models

Your mini-model for running AI Agents recently topped the GAIA benchmark leaderboard for its class. What’s the significance of this, both from a company perspective and from that of third-party developers considering using Coral Protocol?

We’re not actually building mini-models. What we’re building is infrastructure to orchestrate what we call the internet of agents. That means making it possible to leverage all the amazing agents already out there. We’re exploring what happens when you scale horizontally: many smaller, specialized agents coordinating like a society.

This approach avoids the limits of size (cost, safety, diminishing returns) and opens up new possibilities. We’ve seen glimpses of this with CAMEL role-playing, DeepSeek’s MoE, and Grok Heavy’s “study groups.” With Coral Protocol, we took it further: a graph of agents that converse, delegate, and form dynamic teams.

On the GAIA Benchmark, our small-agent system scored 60%, beating Microsoft’s Magnetic-UI (42%) and even outperforming larger-model systems like Claude 3.5/3.7. The results suggest that scaling through systems – not just bigger models – might be the next leap for AI. 

Everyone’s talking about AI agents right now, particularly within Web3, where they’ve become one of the industry’s fastest-growing sectors in terms of the number of startups and investment received. But so far, this hasn’t materialized into major consumer adoption of onchain agents, which are still largely at the experimental stage. How long do you think it will be before agents overtake humans in terms of onchain transaction count, and what challenges must still be solved for this to happen?

I think it is a really exciting space, but I don’t think there’s an agent economy yet. I’m not sure I even see a world where there are loads of use cases for agents to make transactions completely independently. At least for us, we want to put humans very much in control of the agents and build systems that automate the flows around them.

So I think it would be hard to compare the two if we’re using onchain transactions as the metric, even though I do agree work will be taken over by agents. Maybe the transactions we should be looking at are more like Coral’s, where the question is: what outcomes have agents produced, and when are those outcomes going to be more than what humans produce?



Joey Mazars

Contributor & AI Expert