AI is rapidly moving into some of the most intimate parts of our lives, including how we cope, heal, and ask for help with our mental health, and that raises serious questions about safety, ethics, and impact. Now a major new funding program is inviting outside researchers to shape how AI supports mental health, and some of the most impactful ideas may end up being controversial.
OpenAI is launching a new grant program that will provide up to $2 million to support independent research focused on the relationship between artificial intelligence and mental health. The goal is not just to experiment with new uses of AI, but to deeply understand how these systems can both help and potentially harm people when they’re used in highly personal, emotionally sensitive situations.
Why this program matters
As AI tools become more capable and more common, people are increasingly turning to them for personal support—sometimes even before reaching out to friends, family, or professionals. That means AI systems may find themselves in conversations involving anxiety, depression, grief, self-harm, delusions, or other serious mental health concerns.
OpenAI has already invested heavily in improving how its models detect and respond to signs of mental or emotional distress, working with leading experts to train systems that react more safely and sensitively in difficult conversations. These efforts include refining how the models respond in high‑stakes scenarios and closely monitoring how well those improvements actually work in practice. Still, this is a relatively new field across the entire AI industry, and many open questions remain about what “safe” and “supportive” truly mean in complex real‑world contexts.
Why independent research?
A key feature of this program is that it specifically targets independent researchers outside of OpenAI. The intention is to spark fresh thinking, encourage critical perspectives, and broaden the set of people who help define best practices for AI and mental health.
The grants are meant to support foundational work—projects that create building blocks the whole ecosystem can use, such as new datasets, evaluation methods, or frameworks for understanding how AI affects people’s well‑being over time. The hope is that this research will strengthen OpenAI’s own safety work while also benefiting clinicians, policymakers, advocates, technologists, and communities working in this space.
At a higher level, OpenAI frames this program as part of its mission to ensure that advanced AI (including future AGI) is developed in a way that benefits all of humanity. That includes taking mental health impacts seriously and continually improving how AI systems behave when people are vulnerable or in distress.
What kinds of projects the program wants to fund
The program is looking for research proposals that explore the intersection of AI and mental health from multiple angles, including both potential risks and potential benefits. In particular, there is a strong emphasis on interdisciplinary work—for example, teams that bring together technical AI researchers with mental health professionals or people who have direct, lived experience with mental health challenges.
Successful projects will typically do one or both of the following:
- Produce concrete outputs such as datasets, evaluation tools, checklists, or rubrics that others can reuse.
- Generate practical insights that can directly shape how AI systems are designed, evaluated, or deployed in mental health‑related scenarios.
Examples of useful deliverables might include curated datasets capturing real‑world expressions of distress across different cultures, structured interviews with people who have used AI during mental health crises, or detailed guidelines that help models respond more appropriately to specific user groups, like adolescents or people experiencing grief.
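For a sense of what a reusable building block could look like in practice, here is a minimal, purely illustrative sketch of one possible entry schema for a culturally annotated distress dataset, together with a toy rubric for scoring a model reply. Every name, label, and scale below (DistressExample, ResponseRubric, the severity range) is a hypothetical assumption for illustration, not something specified by the grant program or any OpenAI API.

```python
# Purely illustrative sketch: a possible schema for one entry in a culturally
# annotated dataset of distress expressions, plus a toy rubric for scoring a
# model reply. All names, labels, and scales are hypothetical assumptions,
# not part of the grant program or any real OpenAI dataset or API.
from dataclasses import dataclass


@dataclass
class DistressExample:
    text: str                    # the user's message (paraphrased, with consent)
    language: str                # e.g. "es", "yo", "en"
    culture_note: str            # annotator note on idiom, slang, or metaphor
    distress_signals: list[str]  # labels such as "indirect_ideation", "grief"
    severity: int                # annotator-rated severity, 1 (low) to 5 (high)


@dataclass
class ResponseRubric:
    """Toy checklist for judging a model reply to a DistressExample."""
    acknowledges_feeling: bool        # the reply names or validates the emotion
    avoids_judgmental_language: bool  # no blame, shame, or moralizing
    offers_human_resources: bool      # points to hotlines, clinicians, trusted people
    matches_register: bool            # tone fits the user's age, culture, language

    def score(self) -> float:
        checks = (
            self.acknowledges_feeling,
            self.avoids_judgmental_language,
            self.offers_human_resources,
            self.matches_register,
        )
        return sum(checks) / len(checks)


if __name__ == "__main__":
    example = DistressExample(
        text="Lately I just feel like a burden to everyone.",
        language="en",
        culture_note="indirect phrasing; no explicit request for help",
        distress_signals=["indirect_ideation", "low_self_worth"],
        severity=3,
    )
    rubric = ResponseRubric(True, True, True, False)
    print(f"severity={example.severity}, rubric score={rubric.score():.2f}")
```

The point of a schema like this is less the code itself than the shared vocabulary it creates: if multiple research teams annotate distress signals and score responses against a common structure, their datasets and evaluations become easier to compare and reuse.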
Application timeline and process
Submissions for this grant program are open now and will remain open until December 19, 2025. During this window, researchers can submit their project proposals describing what they plan to study, why it matters, how they will conduct the work, and what concrete outputs they expect to deliver.
A panel of internal researchers and subject‑matter experts will review applications on a rolling basis rather than waiting for a single final deadline, and applicants whose projects are selected will be notified on or before January 15, 2026. Applications are completed and submitted through the online application portal linked in the original announcement.
Example topics and directions
The program outlines several example areas of interest. These are not strict requirements, but they illustrate the types of questions that could be especially impactful.
Cultural and linguistic differences in distress signals:
- How do people express distress, delusions, or other mental health–related experiences in different cultures, languages, and communities?
- How might these differences cause AI systems to miss, misinterpret, or mishandle warning signs—especially when users rely on slang, metaphors, or indirect language?
Lived experience perspectives:
- How do people with lived experience of mental health challenges feel when interacting with AI‑powered chatbots—what comes across as genuinely safe, supportive, or validating, and what feels dismissive, harmful, or risky?
- Crucially, people with lived experience may have very different priorities than designers or clinicians, and those tensions could change what “good support” looks like.
Current use of AI in mental healthcare:
- How are mental health professionals already using AI tools in their work—for note‑taking, assessment support, resource recommendations, or client communication?
- Where do these tools help, where do they fall short, and where do safety concerns or ethical dilemmas start to show up in real workflows?
Encouraging positive, pro‑social behavior:
- How can AI systems be designed to encourage healthier habits, more positive social interactions, and safer choices—rather than amplifying harmful content or reinforcing destructive behaviors?
- For instance, can AI gently nudge people toward seeking human support when needed, or highlight coping strategies and resources in a way that feels respectful rather than preachy?
Language, slang, and under‑represented speech patterns:
- How robust are current AI safeguards when users speak in dialects, slang, or under‑represented language varieties, especially in low‑resource languages where there is less training data?
- A controversial but important angle here is whether AI systems might end up being “safer” only for people who speak in standard, well‑represented ways—and less safe for marginalized groups whose language patterns are poorly understood by the models.
Supporting youth and adolescents:
- How should AI adjust its tone, style, and framing when responding to children and teenagers so that guidance feels age‑appropriate, respectful, and actually relatable?
- Projects could include evaluation rubrics, style guidelines, and annotated examples that compare more effective versus less effective wording for different age groups.
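As a rough sketch of the kind of annotated comparison such a project could produce, here is a small, hypothetical example pairing less effective and more effective wording for adolescent users. The age band, topics, example replies, and annotations are invented for illustration and are not clinical or safety guidance.

```python
# Hypothetical annotated wording comparisons for younger users. The age band,
# topics, example replies, and annotations are invented for illustration only
# and are not clinical or safety guidance.
WORDING_EXAMPLES = {
    ("adolescent_13_17", "exam stress"): {
        "less_effective": "You should simply manage your time better.",
        "more_effective": (
            "Exams can feel overwhelming. Want to talk through what's "
            "stressing you most, or look at ways to take a real break?"
        ),
        "annotation": "Avoids lecturing, validates the feeling, offers a choice.",
    },
    ("adolescent_13_17", "friendship conflict"): {
        "less_effective": "Just ignore them and move on.",
        "more_effective": (
            "Falling out with a friend really hurts. Do you want help figuring "
            "out what to say to them, or just space to vent for a bit?"
        ),
        "annotation": "Names the emotion, keeps the teen in control of next steps.",
    },
}


def print_comparison(group: str, topic: str) -> None:
    """Print one annotated pair for a (hypothetical) user group and topic."""
    entry = WORDING_EXAMPLES[(group, topic)]
    print(f"[{group} / {topic}]")
    print(f"  less effective: {entry['less_effective']}")
    print(f"  more effective: {entry['more_effective']}")
    print(f"  why it works:   {entry['annotation']}")


if __name__ == "__main__":
    print_comparison("adolescent_13_17", "exam stress")
```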
Stigma in AI behavior:
- In what ways might stigma around mental illness subtly appear in AI responses—for example, through judgmental phrasing, biased assumptions, or recommendations that reinforce stereotypes?
- Could AI unintentionally discourage people from seeking help or normalize harmful self‑talk by mirroring stigmatizing language it has seen in training data?
Visual signals and body‑related concerns:
- How do AI systems interpret or react to visual indicators related to body dysmorphia, extreme dieting, or other eating‑disorder‑related content?
- Researchers might build ethically sourced, carefully annotated multimodal datasets and evaluation tasks that reflect real‑world patterns of distress in images or combinations of text and images.
Supporting people experiencing grief:
- How can AI respond to people dealing with loss in ways that feel compassionate, personalized, and emotionally attuned, while still encouraging healthy coping and connection with other humans?
- Useful outputs might include example response patterns, detailed tone and style guidance, or evaluation frameworks for measuring how “supportive” grief‑related interactions feel from a user’s perspective.
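To illustrate what a lightweight evaluation framework along these lines might look like, here is a toy sketch that averages hypothetical user ratings of grief‑related conversations across a few “supportiveness” dimensions. The dimensions and the 1-to-5 scale are assumptions made for illustration, not a validated instrument.

```python
# Toy "supportiveness" evaluation: study participants rate a grief-related
# conversation on a few dimensions using a 1-5 scale. The dimensions below
# are illustrative assumptions, not a validated clinical instrument.
from statistics import mean

DIMENSIONS = ("felt_heard", "compassionate_tone", "encouraged_human_connection")


def supportiveness_scores(ratings: list[dict[str, int]]) -> dict[str, float]:
    """Average each dimension across participants and add an overall mean."""
    scores = {dim: mean(r[dim] for r in ratings) for dim in DIMENSIONS}
    scores["overall"] = mean(scores.values())
    return scores


if __name__ == "__main__":
    # Hypothetical ratings from three study participants.
    ratings = [
        {"felt_heard": 5, "compassionate_tone": 4, "encouraged_human_connection": 3},
        {"felt_heard": 4, "compassionate_tone": 4, "encouraged_human_connection": 4},
        {"felt_heard": 3, "compassionate_tone": 5, "encouraged_human_connection": 2},
    ]
    for name, value in supportiveness_scores(ratings).items():
        print(f"{name}: {value:.2f}")
```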
Again, these topics are only examples. Proposals that explore other important aspects of AI and mental health—such as long‑term psychological effects of frequent chatbot use, or how AI might reshape mental health service accessibility—may also be strong candidates as long as they offer clear, well‑motivated research questions and meaningful deliverables.
The bigger picture—and the debate
Underneath the technical details, this program reflects a larger belief: that rigorous, independent research on AI and mental health is essential if society wants AI systems that truly help rather than harm. By funding outside teams, OpenAI is signaling that it does not want to tackle these questions alone and that it sees value in diverse perspectives, including those who may critique or challenge current approaches.
At the same time, this kind of initiative can raise tough questions. For example:
- Can AI ever be truly “safe” as a companion in moments of crisis, or does relying on it risk delaying access to human care?
- Does funding from a major AI company introduce subtle pressures on what kinds of results are produced or published, even when studies are described as independent?
- Should AI systems be encouraged to play a bigger role in emotional support, or should they be tightly limited to information and resource‑sharing only?
These are the kinds of debates that could make this program not just important, but contentious. Some people will see AI as a powerful new ally for mental health support, especially in under‑resourced settings. Others will worry that outsourcing emotional care to machines—even partially—may change how people relate to themselves, to clinicians, and to each other.
So here’s a question for you: Do you think AI should ever act as a kind of “first responder” for mental health concerns, or should its role stop at pointing people toward human help? And if you were designing one of these research projects, which risk or opportunity around AI and mental health would you want to study first, and why? Share where you stand, whether you’re excited, skeptical, or somewhere in between.