Introduction
In the humanitarian sector, we routinely handle sensitive beneficiary data using various digital tools. We input information into Excel spreadsheets, send emails via Outlook, and store documents on SharePoint. These practices have become second nature, and we often overlook the implications for data security and privacy. However, as we embrace AI technologies, it’s crucial to reassess our approach to data sharing and the trust we place in service providers.
The Familiar Territory: Trusting Established Platforms
Consider our interactions with Microsoft’s suite of tools:
- Excel: We manage beneficiary data and program metrics.
- Outlook: We communicate sensitive information internally and externally.
- SharePoint: We store confidential documents and collaborate on projects.
By using these platforms, we implicitly trust Microsoft to protect our data, relying on their compliance with data protection regulations like GDPR and their robust security measures. This trust has been built over years of consistent service and adherence to privacy standards.
The Disconnect with AI Services
Despite this established trust, many organizations hesitate to use Microsoft’s AI services through Azure or other AI platforms like OpenAI’s ChatGPT. There’s a noticeable disconnect:
- Comfort with Traditional Tools: We trust providers with emails and documents without much hesitation.
- Skepticism Toward AI Services: We question the same providers when it comes to AI APIs and data processing, often due to concerns about data misuse or insufficient understanding of AI’s data handling practices.
The ChatGPT Conundrum: Free vs. Paid
Let’s delve into OpenAI’s ChatGPT as an example:
- Paid Version: Users can opt out of having their data used to train future models, offering greater control and aligning with our data protection needs.
- Free Version: Inputs may be used to train future models, so anything entered should be treated much as if it were shared publicly, which raises significant privacy concerns.
This dichotomy highlights the importance of understanding the terms of service and data policies associated with AI tools.
Rethinking Our Approach to Provider Trust
This situation prompts us to reevaluate our data-sharing practices and the trust we place in service providers, especially when adopting new technologies like AI.
1. Consistency in Trust
- Critical Examination: If we trust a provider with one form of data storage or communication, we should apply consistent criteria to their AI services. This involves scrutinizing their data protection measures, privacy policies, and compliance with relevant regulations.
- Avoiding Double Standards: Questioning AI services while trusting other platforms may indicate a lack of understanding or an inconsistent application of trust principles.
2. Understanding Data Usage Policies
- In-Depth Review: Thoroughly examine how different services—AI and non-AI—use and protect our data. This includes reading privacy policies, understanding data retention practices, and knowing who has access to the data.
- Informed Decisions: Use this knowledge to decide which services are appropriate for specific tasks, ensuring that data handling aligns with our organizational values and legal obligations.
3. Paid vs. Free Services
- Assessing Value and Risk: Recognize that paid AI services often offer enhanced data control and privacy features. While budget constraints are a reality, the potential risks associated with free services—such as data being used to train AI models—may outweigh the cost savings.
- Understanding the Business Model: When a service is free, often you are not the customer but the product. The provider may monetize your data, including sensitive beneficiary information, to improve their models or for other purposes. This underscores the importance of being cautious with free services.
4. Trusting the Data Process
- Evaluating Trustworthiness: Before sharing sensitive data, assess whether the data processing methods of the AI service are secure and trustworthy. If the service has robust data protection measures comparable to trusted tools like Word or Excel, and complies with regulations like GDPR, we can use it with confidence.
- Avoiding Unsafe Platforms: If the data process is not secure—due to unclear data policies, lack of compliance, or insufficient security measures—we should avoid inputting sensitive data altogether. This applies especially to free services where data handling practices may be less stringent.
5. Data Minimization
- Principle Application: Data minimization is always good practice, but it matters even more when we are uncertain how our data will be processed. Sharing only the information necessary for the task limits the harm if data is misused.
- Avoiding Identifiable Data: Limit personal or sensitive data when using AI services whose data processes you don’t fully trust. Even with trusted services it is prudent to minimize data exposure; with untrusted or free services it becomes critical (a minimal redaction sketch follows this list).
- Balancing Trust and Necessity: If we trust the data processing of an AI service (e.g., it’s secure, compliant, and transparent), we can be more confident in using it fully. However, we should still adhere to data minimization principles as a safeguard.
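To make data minimization concrete, here is a minimal sketch (in Python) of how free-text notes might be redacted before being sent to any external AI service. The patterns, the case-number format, and the `redact` helper are illustrative assumptions rather than a vetted tool; real beneficiary data would need patterns matched to local identifiers, and ideally human review.

```python
import re

# Illustrative patterns only; real data would need patterns for local
# ID formats, phone conventions, names, locations, etc.
PATTERNS = {
    "CASE_ID": re.compile(r"\bCASE-\d{4,}\b"),   # hypothetical case-number format
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before sharing text externally."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Follow-up for CASE-20417: call +254 712 345 678 or write to amina@example.org."
print(redact(note))
# Follow-up for [CASE_ID]: call [PHONE] or write to [EMAIL].
```

Even a simple pre-processing step like this keeps direct identifiers out of third-party systems while still letting staff use AI assistance on the surrounding text.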
Moving Forward: Building Trust in the AI Era
Provider trust isn’t binary; it’s a spectrum influenced by various factors, including transparency, accountability, and demonstrated commitment to data protection.
- Transparency and Accountability: Demand clarity from providers about data handling practices. Providers should be open about how they use data, especially in AI services where data may contribute to model training.
- Regular Audits and Assessments: Continuously monitor and review the services we use to ensure they still meet our standards. This could involve regular security audits or assessments of compliance with data protection regulations (a simple assessment record is sketched after this list).
- Collaboration with Providers: Engage in dialogue with service providers to express our needs and concerns, encouraging them to enhance their services in ways that align with our data protection requirements.
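One way to make these reviews repeatable is to record each assessment in a consistent, structured form. The sketch below (in Python) shows one possible record; the fields, the provider name, and the example values are assumptions for illustration, not a prescribed standard or an actual audit finding.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ServiceAssessment:
    """A lightweight record of one data-protection review of an external service."""
    provider: str
    service: str
    reviewed_on: date
    gdpr_compliant: bool            # based on the provider's published commitments
    training_opt_out: bool          # can we prevent our data from training models?
    data_retention: str             # the provider's stated retention period
    approved_for_sensitive_data: bool
    notes: str = ""

# Hypothetical example; the provider name and values are illustrative only.
assessment = ServiceAssessment(
    provider="ExampleAI",
    service="Chat API",
    reviewed_on=date(2024, 11, 1),
    gdpr_compliant=True,
    training_opt_out=True,
    data_retention="30 days",
    approved_for_sensitive_data=False,
    notes="Re-assess after the next terms-of-service update.",
)
print(assessment)
```

Keeping such records in one place makes it easier to notice when a provider’s terms change and a re-assessment is due.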
Conclusion
As AI becomes increasingly integrated into humanitarian work, we must evolve our approach to data sharing:
- Stay Informed and Proactive: Keep abreast of developments in AI and data protection, adapting our practices accordingly.
- Foster a Culture of Data Protection: Embed data protection principles into the organizational culture, ensuring that all staff recognize its importance.
- Prioritize Beneficiaries: Always consider the privacy and security of the communities we serve, ensuring that technological advancements do not come at their expense.
By critically assessing our trust in providers across all platforms and applying consistent standards, we can make informed decisions that balance innovation with responsibility.
Join the Conversation
I invite you to share your experiences and insights:
- Have you faced challenges in trusting AI services with sensitive data?
- What strategies has your organization implemented to navigate these concerns?
- How do you see provider trust evolving as AI continues to advance?
Your input can help us collectively enhance our practices and ensure we’re safeguarding the interests of those we aim to help.
Stay Connected
If you’re interested in exploring these topics further, consider joining our next informal discussion on AI in humanitarian programming. Together, we can navigate the complexities of technology in our sector and develop solutions that uphold our commitment to ethical and effective humanitarian aid.
Edited for clarity with the assistance of AI, but the content, thoughts, and arguments are solely from the author.