Artificial Intelligence (AI) offers Not-For-Profit (NFP) organisations the chance to amplify their reach and effectiveness through automation and collaboration. Unlike the for-profit sector, NFPs work toward missions grounded in fairness, equity, and trust, and many are asking questions about the ethical use of AI. Using AI ethically is not just a technical challenge but a moral imperative that flows from these values. By navigating the risks and opportunities with care, NFPs can use AI responsibly while staying true to their mission.
Imagine that your organisation has introduced AI to assist with productivity. Emails are being sent faster, tasks are being completed more quickly, and fundraisers are being engaged more effectively. However, a few team members have raised concerns about the use of AI:
"Are the messages it sends truly aligned with the organisation's values?"
"Who is accountable if the AI makes an unfair decision?"
To support ethical decision-making in this complex landscape, we’ve developed the CARE framework: Consequences, Accountability, Responsibility, and Explainability. This structured approach helps leaders evaluate AI’s impact and ensure its use aligns with their organisational values.
Why CARE matters
The CARE framework is more than an ethical checklist; it’s a practical tool for navigating the complexities of AI. By embedding these principles into your decision-making processes, your organisation can harness the potential of AI while upholding the values that define the NFP sector.
AI is neither inherently good nor bad—it’s a tool. The real question is how we, as human stewards, choose to wield it. Through frameworks like CARE, NFPs can make informed, ethical decisions that ensure AI serves the greater good.
The CARE Framework: Four pillars of ethical AI
1. Consequences: Understanding the Ripple Effects
Every action has consequences, and AI is no exception. The first step in ethical decision-making is to anticipate the positive and negative outcomes of deploying AI in your work.
Questions to ask:
- What benefits will this AI system deliver (e.g., efficiency, improved services)?
- Could it inadvertently cause harm, such as amplifying biases or excluding vulnerable groups?
- How might these impacts evolve over time?
By weighing these consequences, you can make informed choices that maximise benefits while minimising harm.
2. Accountability: Who Takes Ownership?
AI systems often operate in a space where responsibility can seem unclear. Accountability ensures someone is ready to step in when things go wrong.
Questions to ask:
- Who is accountable if the AI makes a mistake: the developer, the organisation, or both?
- Are there safeguards to review and correct errors?
Why it matters: Establishing clear accountability fosters trust and ensures swift action when issues arise.
3. Responsibility: Ethical Obligations to Stakeholders
Responsibility is about your organisation’s duty of care towards those affected by AI. NFPs, in particular, must consider how AI aligns with their mission to serve and protect their stakeholders.
Questions to ask:
- Does this use of AI uphold fairness, transparency, and equity?
- How might it impact vulnerable groups, and what safeguards are in place to prevent harm?
Why it matters: Taking responsibility for ethical considerations strengthens public confidence in your organisation.
4. Explainability: Building Transparency and Trust
For AI to be trusted, its decisions must be explainable. Stakeholders should be able to understand why and how AI systems make decisions.
Questions to ask:
- Can the reasoning behind AI decisions be easily communicated?
- How transparent are the data and processes used by the AI system?
Why it matters: Transparent systems foster trust and reduce misunderstandings, particularly in high-stakes areas like resource allocation or donor engagement.
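If your team documents its decisions digitally, the four pillars can double as a simple pre-deployment checklist. Below is a minimal Python sketch of such a record; the class, field names, and example content are illustrative assumptions, not part of any official CARE tooling.

```python
# A minimal sketch: recording a CARE assessment before an AI feature goes live.
# All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CareAssessment:
    feature: str                                             # the AI use case being assessed
    consequences: list[str] = field(default_factory=list)    # expected benefits and harms
    accountable_owner: str = ""                              # who steps in when things go wrong
    stakeholder_safeguards: list[str] = field(default_factory=list)  # protections for affected groups
    explainability_notes: str = ""                           # how decisions will be explained

    def is_ready_for_review(self) -> bool:
        """True only when every pillar has at least one documented answer."""
        return bool(
            self.consequences
            and self.accountable_owner
            and self.stakeholder_safeguards
            and self.explainability_notes
        )

# Example: assessing an AI-assisted donor email feature.
assessment = CareAssessment(
    feature="AI-drafted donor emails",
    consequences=["Faster responses", "Risk of off-brand or biased wording"],
    accountable_owner="Fundraising manager",
    stakeholder_safeguards=["A human approves every outgoing email"],
    explainability_notes="Drafts are labelled as AI-assisted and prompts are logged.",
)
print(assessment.is_ready_for_review())  # True
```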
Applying the CARE framework in real life
Consider a scenario where your organisation uses AI to match donors with projects. While the system may streamline the process and boost efficiency, what happens if it favours projects with more data, sidelining newer initiatives? Applying the CARE framework can help you identify these risks and implement safeguards, such as human oversight and diverse data sets.
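One way to implement that human-oversight safeguard is to route any match involving a data-poor project to a person instead of auto-approving it. The sketch below is a hypothetical illustration: the threshold, field names, and route_match function are assumptions, not a real matching system.

```python
# A minimal sketch of the human-oversight safeguard described above.
# The threshold, field names, and routing labels are illustrative assumptions.
MIN_DATA_POINTS = 50  # below this, a project's score is too uncertain to automate

def route_match(project: dict, match_score: float) -> str:
    """Send data-poor projects to a human reviewer rather than auto-matching,
    so newer initiatives are not silently sidelined."""
    if project["data_points"] < MIN_DATA_POINTS:
        return "human_review"  # a person weighs the match despite sparse data
    return "auto_match" if match_score >= 0.8 else "human_review"

# Example: a promising new project with little history still gets a human look.
print(route_match({"name": "Youth garden", "data_points": 12}, match_score=0.9))
# -> human_review
```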
Another example could involve AI in resource allocation during a disaster response. Rapid decision-making is critical, but what if the AI overlooks marginalised communities due to data gaps? By using the CARE framework, your organisation can balance speed with fairness, ensuring equitable outcomes.
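A simple safeguard here is to check the AI's output against an independently maintained list of communities, so a data gap surfaces as an explicit question rather than a silent omission. The sketch below assumes hypothetical allocation data and community names.

```python
# A minimal sketch of a coverage check on AI-driven allocations.
# Community names and the allocation format are illustrative assumptions.
def find_coverage_gaps(allocations: dict[str, int], known_communities: set[str]) -> set[str]:
    """Return communities the AI allocated nothing to."""
    covered = {name for name, amount in allocations.items() if amount > 0}
    return known_communities - covered

allocations = {"Riverside": 120, "Hilltop": 80}      # output from the AI system
communities = {"Riverside", "Hilltop", "Northgate"}  # from local knowledge, not the AI's data
print(find_coverage_gaps(allocations, communities))  # {'Northgate'}
```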
Before you start exploring how to integrate AI into your organisation, it's important to develop a policy that suits the organisation's mission and goals. We have a DIY ethical artificial intelligence (AI) policy template that you can use as a guide when crafting your own AI policy.
If you would like to explore the CARE framework in more detail, we have self-paced learning available, where you will learn how to put this framework into practice.