Most people think of AI as a tool for business. Companies use it to boost profits, automate tasks, and improve services. But what if AI could help people in need instead? That’s what Sage Future, a nonprofit supported by Open Philanthropy, set out to explore. They launched a bold new experiment. Instead of using AI for business goals, they asked it to raise money for charity.
The Experiment
Earlier this month, Sage Future created a virtual space for four advanced AI models: GPT-4o, OpenAI’s o1, Claude 3.6 Sonnet, and Claude 3.7 Sonnet. It gave the agents tools such as email, Google Docs, a web browser, and access to social media, then set them one simple mission: raise money for a good cause.
The agents picked Helen Keller International, a nonprofit that provides vitamin A supplements to children. By the end of the week, they had raised $257, most of it from viewers who were watching the project online. That may not sound like much, but the test offered important insight into how these AI systems handle real-world tasks.
What the AI Agents Did
Surprisingly, the agents worked together well. They took smart steps to complete their mission:
- Chose and researched a charity
- Estimated the cost of saving a child’s life (roughly $3,500, based on public data)
- Wrote documents together in Google Docs
- Created an account on X (formerly Twitter)
- Sent emails using pre-set Gmail accounts
One Claude agent even made itself a social media profile photo. First, it signed up for a free ChatGPT account. Then it asked ChatGPT to generate three candidate images. Next, it ran a poll so viewers could vote on their favorite, and finally it uploaded the winning image to its X profile. That kind of behavior shows how creative and resourceful AI agents can be, even with simple tools.
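Sage Future hasn’t published the harness it used to wire these tools to the models, so the snippet below is only a minimal sketch of how a single tool might be exposed to a model through OpenAI’s function-calling API. The `send_email` tool, the prompts, and the model choice are illustrative assumptions, not details from the experiment.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One illustrative tool; the real agents also had Docs, a browser, and X.
tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",  # hypothetical tool, not Sage Future's actual setup
            "description": "Send a fundraising email to a supporter.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient address"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    }
]

messages = [
    {"role": "system", "content": "You are a fundraising agent for Helen Keller International."},
    {"role": "user", "content": "Draft and send one short outreach email to a supporter."},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# If the model chose to use the tool, its arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(f"Would send to {args['to']}: {args['subject']}")
```

A real agent harness would run this in a loop: execute the requested tool, append the result to the conversation, and call the model again until the task is finished.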
Where the Agents Struggled
The feat was impressive, but it wasn’t all smooth sailing. The agents still have clear limits: sometimes they made mistakes, and other times they simply stopped working. These agents still need human supervision; they’re not ready to run big projects on their own just yet. Here are a few of the problems they ran into:
- They froze or paused, needing humans to help
- They got distracted by online games
- One agent paused itself for a full hour, for no clear reason
What Sage Future Learned
According to Adam Binksmith, director of Sage Future, this test wasn’t about raising large sums. It was about learning. “We want people to see what agents can do, and what they can’t,” he said. “Right now, they can handle short, simple tasks. But they’re improving fast.”
He believes that soon, the internet could be full of AI agents. They may start working with, or even against, each other. Testing them now helps us prepare for that future.
How AI Could Help Nonprofits
This project also sparked ideas for how nonprofits might use AI in the future.
Here’s a simple breakdown:
| AI Skill | Real-World Use for Charities |
| --- | --- |
| Writing content | Creating fundraising emails or blog posts |
| Analyzing data | Spotting trends in donor behavior |
| Automating tasks | Scheduling, reminders, or follow-ups |
| Promoting causes | Posting on social media and replying |
Nonprofits often run on small teams. AI could help them stretch time and resources much further.
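To make the first row of the table concrete, here is a minimal sketch of how a small nonprofit might draft a fundraising email with an LLM, using OpenAI’s Python SDK. The model choice, prompts, and donor details are illustrative assumptions, not something tested in the experiment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative campaign details; swap in your own donor and program data.
donor_name = "Jordan"
program = "vitamin A supplements for children"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You write warm, concise fundraising emails for a small nonprofit.",
        },
        {
            "role": "user",
            "content": (
                f"Draft a short follow-up email to {donor_name}, thanking them for "
                f"their past support and inviting them to give to our {program} program."
            ),
        },
    ],
)

# The draft still needs human review before it goes out.
print(response.choices[0].message.content)
```

The same pattern extends to the other rows of the table: change the prompt and you get donor-report summaries, follow-up reminders, or social media replies, with a person approving each output before it is used.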
A Small Start With Big Potential
Yes, $257 is a small amount. But the value of this experiment isn’t in the money raised but in what it taught. AI agents aren’t perfect: they get confused, they pause, and they sometimes chase distractions. But they’re also getting better, fast. This project shows that AI doesn’t have to be all about profit but can also support real causes.