From the $2,000 Computer to AI in Philanthropy
When new technology arrives, people are skeptical—until it catches on. Then suddenly, everyone freaks out.
I learned this lesson young. Back in the late 90s, I was a high schooler begging my mom to buy me a computer. It wasn’t cheap—$2,000 we really didn’t have. My mom was a single parent raising two kids while working road construction. Yes, road construction! My excuse was that the computer would help me get better grades. What I really knew was that it would give me access to something bigger.
At first, computers were just for email, word processing, and spreadsheets—nothing a 17-year-old dreams of learning, but I did it anyway, because I could see where it was headed. Through the dot-com bust, the rise of piracy, and all the growing pains of the 2000s, the internet became the backbone of everything we do. Today, it’s not a luxury—it’s a necessity.
Fast-forward to 2025, and here we are again. This time, it’s Artificial Intelligence.
From Early Experiments to Daily Life
AI isn’t new—the field traces back to a 1956 workshop at Dartmouth College, where the goal was to create machines that could reason, learn, and solve problems like humans. What you’ve probably heard about most recently are “LLMs,” or Large Language Models. These systems specialize in understanding, generating, and responding to human language.
The turning point came with the release of OpenAI’s ChatGPT in late 2022. In just three years, usage has soared, with estimates of 122 to 190 million people using it daily. And PwC projects that AI could add up to $15.7 trillion to the global economy by 2030. Like the internet before it, AI is no longer optional—it’s foundational.
Why This Matters for Philanthropy
For philanthropy, AI is not just an IT issue. It’s a question of organizational strategy.
- Donor Trust at Stake. We live by the Donor Bill of Rights. A policy created without us could inadvertently violate privacy or erode confidence.
- Data Security Risks. Staff are already experimenting with AI. Without clear guidelines, confidential donor information—giving histories, contact details, personal notes—could be exposed.
- Bias in Fundraising. AI models can amplify inequities buried in data. Only philanthropy professionals can catch and correct these risks before they harm relationships.
 
The reality is that our teams are already using AI, and pretending otherwise is counterproductive. What we need are guardrails, not gatekeeping. A practical “Acceptable Use Policy” can keep us safe while letting us innovate responsibly.
Practical Guidance: Do’s and Don’ts
| Task | Safe Use of AI (DO) | Unsafe Use of AI (DO NOT) | 
|---|---|---|
| Drafting a Donor Email | Use AI to draft stewardship language using only public project info | Paste donor’s contact info, giving history, or personal details into a public AI tool | 
| Creating a Gift Agreement | Use AI to generate a template based on standard language | Enter a donor’s private financial details into the AI | 
| Analyzing Donor Data | Use a secure, contracted AI tool within the museum’s database | Upload donor lists to an unvetted external platform | 
| Summarizing Information | Paste a public news article into AI for a summary | Upload internal contact reports or confidential documents | 
The Crossroads Ahead
AI won’t replace the empathy, nuance, and strategic judgment that philanthropy requires. But it can amplify our impact—if it’s used thoughtfully.
Just as my teenage gamble on a $2,000 computer changed my world, AI is about to reshape ours. The question isn’t whether nonprofits will use AI, but whether we’ll use it responsibly. And that responsibility starts with ensuring philanthropy has a seat at the table.
Innovation without trust is a risk we can’t afford.
The views in this post are mine alone and not those of the Cleveland Museum of Art.
