The recent incident of Donald Trump sharing AI-generated images of Taylor Swift supporters isn’t just a quirky footnote in U.S. politics—it’s a stark warning for the humanitarian sector. As AI-generated deepfakes become increasingly sophisticated and accessible, they pose a significant threat to the core principles of humanitarian work. But are we having the right conversations to address this looming crisis?
The Erosion of Trust: A Deepfake Scenario
Picture this: A deepfake video surfaces on WhatsApp, showing aid workers from a well-known NGO taking sides in a local political conflict or engaging in corrupt behavior. The video is fake, but in the chaos of a crisis zone, it’s shared by thousands, causing immediate damage to the NGO’s reputation. Worse yet, the video could incite violence or mistrust, undermining years of carefully built relationships in the community.
This isn’t science fiction anymore. The Trump–Taylor Swift incident shows just how quickly AI-generated content can be created and go viral. Now imagine this happening in a conflict zone, where misinformation can spread faster than truth and the stakes are much higher. A recent report by the Brookings Institution, “Deepfakes and International Conflict,” highlights the real risks deepfakes pose, including:
- Undermining public trust in humanitarian efforts
- Exacerbating social tensions and dividing communities
- Damaging international cooperation by eroding trust between partners
Perhaps most concerning is the “liar’s dividend”—the idea that deepfakes make it easier to dismiss authentic content as fake. In humanitarian work, where credibility is everything, the potential damage could be enormous.
Are Humanitarian Agencies Ready?
So, the big question: Are humanitarian organizations equipped to handle this threat?
Unfortunately, many are not. Most agencies are already stretched thin, lacking the technological resources and expertise needed to detect deepfakes in real time. By the time a fake video is identified and debunked, the damage may already be done.
There’s also the issue of speed—deepfakes can spread across messaging apps in hours, far faster than most organizations can respond. In a crisis zone, where every minute counts, this could lead to irreversible harm.
Engaging AI Developers: Where Do We Start?
The challenge goes beyond internal readiness. Humanitarian organizations need to engage with the developers of AI technologies to establish safeguards that protect the sector’s integrity. But this is no easy task:
- Limited Dialogue: Some conversations are happening, but they are neither moving fast enough nor going deep enough to match the pace of the threat.
- Competing Priorities: AI developers face a wide range of ethical concerns, and humanitarian-specific issues aren’t always at the top of the list.
- Open-Source Dilemma: Open-source AI models are freely available and adaptable, making it almost impossible to impose universal safeguards. How do we protect humanitarian principles when anyone can create a deepfake with minimal skills?
On top of that, there’s a shortage of experts who truly understand both AI development and the unique challenges of humanitarian work. This knowledge gap leaves the sector vulnerable to the growing sophistication of deepfakes.
The Open-Source Conundrum
The rise of open-source AI is a double-edged sword. It democratizes access to powerful tools, but also means that these tools can be used for harm. For humanitarian organizations, this presents urgent questions:
- How can we encourage ethical guidelines for the open-source community?
- Can we develop detection tools that keep up with rapidly evolving deepfake technology?
- How do we balance the benefits of open-source AI with the need to safeguard vulnerable populations?
As the Brookings report points out, deepfakes don’t just pose risks for organizations—they also jeopardize marginalized communities. If people start doubting the authenticity of evidence, it becomes harder for these communities to seek justice or protection. In crisis zones, this could mean life or death.
The Road Ahead: Moving from Concern to Action
The Trump-Taylor Swift deepfake incident may seem trivial on the surface, but it shines a light on critical vulnerabilities in the AI landscape. For humanitarian agencies, the stakes couldn’t be higher. Trust, neutrality, and the ability to deliver aid effectively are all at risk.
To address these challenges, we need to move beyond traditional approaches to information integrity. Engaging with AI developers—whether from major companies or the open-source community—must become a priority for humanitarian organizations. This isn’t optional anymore; it’s essential to preserving the very principles that make our work possible.
Yes, the challenge is daunting, but it’s not insurmountable. By fostering collaboration between technologists, ethicists, and humanitarian professionals, we can work toward AI systems that respect and protect the mission of aid work.
A Call for Action: Let’s Have the Dialogue Now
As we stand at this crossroads, the humanitarian sector must act. Key areas for discussion include:
- Developing ethical guidelines for the use (or non-use) of deepfakes by humanitarian organizations
- Creating rapid response protocols to counter misinformation in crisis zones
- Exploring the feasibility of a “Deepfake Equities Process” to protect humanitarian integrity
- Engaging with AI developers to create safeguards for the humanitarian sector
- Advocating for international agreements on the use of deepfakes in conflict situations
These conversations need to happen now, and they need to include everyone: humanitarian professionals, AI developers, ethicists, and policymakers. I urge you to share your thoughts, expertise, and concerns. How can we balance the opportunities AI presents with the serious risks posed by deepfakes? What safeguards should we be pushing for?
Let’s start the conversation today, before the next crisis hits.