As artificial intelligence becomes more integrated into mental health care, clinicians and developers must ensure that these tools are used ethically, safely and equitably. The CARE-PES framework was developed to provide a practical roadmap for evaluating AI systems in clinical settings. It stands for Cultural sensitivity, Accessibility, Reliability, Empathy, Privacy, Equity and Synergy.

Cultural sensitivity: AI tools should be aware of cultural differences and avoid stereotyping or reinforcing harmful norms. They need to recognize that emotional expression, family structure and values vary widely across cultures.

Accessibility: Ensure that AI interfaces are usable by people with disabilities and by those with limited technology literacy. This includes plain language, alternative formats such as screen-reader-compatible text and captioned audio, and respectful, low-friction design.

Reliability: Clinicians must verify the accuracy of AI outputs and understand the limitations of predictive models. AI is a supplement to — not a substitute for — professional judgment.
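Verification can start small. The sketch below is one illustrative way, not a prescribed workflow, to compare a model's risk flags against clinician-adjudicated labels; the `predictions` and `outcomes` inputs are hypothetical:

```python
def spot_check(predictions, outcomes, threshold=0.5):
    """Compare model risk scores against clinician-adjudicated labels.

    predictions: probabilities in [0, 1] from the AI tool (hypothetical input)
    outcomes:    0/1 clinician judgments, in the same order
    """
    flagged = [p >= threshold for p in predictions]
    agreement = sum(f == o for f, o in zip(flagged, outcomes)) / len(outcomes)

    # In clinical use, missed true cases often carry the highest risk,
    # so sensitivity deserves attention alongside overall agreement.
    flags_on_true = [f for f, o in zip(flagged, outcomes) if o == 1]
    sensitivity = (sum(flags_on_true) / len(flags_on_true)
                   if flags_on_true else float("nan"))

    print(f"Agreement with clinicians: {agreement:.0%}")
    print(f"Sensitivity (true cases flagged): {sensitivity:.0%}")

# Toy example: two of four flags agree; one of two true cases is caught.
spot_check([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 1])
```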

Empathy: Even automated systems should foster a sense of being seen and understood. Responses should be supportive, nonjudgmental and emotionally attuned.

Privacy: Safeguard user data. Clients should know what information is collected, how it’s used and with whom it’s shared. Data collection and sharing should require explicit opt‑in consent, never opt‑out defaults.
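A minimal sketch of what opt-in by default can look like in code appears below; the `ConsentRecord` type and field names are hypothetical, not a real API or our production system. With no choices recorded, nothing is retained:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Explicit opt-in: with no choices recorded, nothing is collected."""
    client_id: str
    opted_in_fields: set = field(default_factory=set)

def collect(data: dict, consent: ConsentRecord) -> dict:
    """Retain only the fields the client has explicitly opted into."""
    return {k: v for k, v in data.items() if k in consent.opted_in_fields}

# The safe default is an empty consent record, which keeps nothing.
consent = ConsentRecord(client_id="c-001")
print(collect({"mood_score": 4, "session_notes": "..."}, consent))  # -> {}

consent.opted_in_fields.add("mood_score")
print(collect({"mood_score": 4, "session_notes": "..."}, consent))  # -> {'mood_score': 4}
```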

Equity: Evaluate AI models for bias across gender, race, sexuality, socioeconomic status and other identities. Mitigate disparities before deploying systems that may disadvantage marginalized populations.
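A subgroup audit is one concrete starting point. The sketch below is illustrative only (the record keys `group`, `predicted` and `actual` are assumptions, and this is not the methodology of our white paper); it compares positive-prediction and error rates across groups and reports a disparate-impact ratio:

```python
from collections import defaultdict

def subgroup_audit(records):
    """Compare positive-prediction and error rates across groups.

    Each record is a dict with illustrative keys:
    'group' (demographic label), 'predicted' (bool), 'actual' (bool).
    """
    stats = defaultdict(lambda: {"n": 0, "positives": 0, "errors": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["positives"] += r["predicted"]              # True counts as 1
        s["errors"] += r["predicted"] != r["actual"]

    for group, s in sorted(stats.items()):
        print(f"{group}: positive rate {s['positives'] / s['n']:.2f}, "
              f"error rate {s['errors'] / s['n']:.2f}")

    # Disparate-impact ratio: lowest group positive rate over the highest.
    # A common (and imperfect) rule of thumb flags ratios below 0.8.
    rates = [s["positives"] / s["n"] for s in stats.values()]
    print(f"Disparate-impact ratio: {min(rates) / max(rates):.2f}")

# Toy example with two groups:
subgroup_audit([
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "B", "predicted": False, "actual": True},
    {"group": "B", "predicted": False, "actual": False},
])
```

Rate parity is only one lens; per-group calibration and error types matter too, but even this level of measurement can surface gaps before deployment.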

Synergy: The ultimate goal of AI in mental health is to enhance human care, not replace it. Technology should support clinicians and clients in reaching deeper insights and stronger connections.

"Ethical AI doesn’t just avoid harm; it actively promotes healing, inclusion and cultural humility."

At The Human Equation, we use CARE-PES as a checklist when evaluating new tools. We believe that combining evidence‑based practices with ethical technology can widen access to care while preserving the heart of therapy: relationship. Learn more about our ethical AI initiatives in the AI Bias Testing white paper.