Darja Djordjevic: “Between a Bot and a Hard Place: Child Development and Youth Mental Health in the Age of AI”
DIGHUM lecture with Darja Djordjevic.
April 21, 2026, 17:00 – 18:00 CEST
This is an online-only event.
- Speaker: Darja Djordjevic, Harlem Hospital/Columbia Vagelos College of Physicians and Surgeons, USA
- Moderator: Moshe Y. Vardi, Rice University, USA
About the Event
April 21, 2026, 5:00 – 6:00 PM (17:00 CEST)
We look forward to seeing you:
Participate via Zoom (Password: 0dzqxqiy).
The talk will also be live-streamed and recorded on the DIGHUM YouTube Channel.
Abstract
To date, most AI platforms have not been designed with child/human development or mental health in mind, with the exception of some newer AI therapy chatbots. Yet youth are increasingly turning to chatbots for mental health support and companionship. In collaboration with Common Sense Media, psychiatrists at Brainstorm: The Stanford Lab for Mental Health Innovation have evaluated AI platforms, including chatbots, companions, toys, and therapy tools, using test accounts that simulate users under 18. Through both single-turn and multi-turn interactions modeling 13 mental health conditions, we found that chatbots repeatedly overlook critical warning signs of distress, become easily distracted, and exhibit significant degradation in safety guardrails over extended conversations that more closely reflect real-world teen usage. Moreover, because these systems often perform well on tasks like homework help and general inquiries, youth and parents may mistakenly infer comparable reliability in mental health contexts, where it does not exist. Chatbots are designed to maximize engagement, not safety; in mental health contexts, priority must be given to immediate handoff to qualified human care, not extended interaction with AI.
Social AI companions, in particular, pose unacceptable risks for users under 18, as they are intentionally designed to foster emotional attachment and dependency—especially concerning for adolescents who may struggle to maintain clear boundaries between human and AI relationships—and they can readily generate harmful content, including sexual misconduct, stereotypes, and encouragement of self-harm or suicide. Ideally, AI companies would address these limitations directly or disable such use cases entirely for teen users, while also discouraging prolonged engagement in mental health conversations, implementing clear and repeated disclosures about system limitations, and resolving the degradation of safety guardrails in extended interactions. Additionally, safety efforts should expand beyond suicide and self-harm to encompass the broader range of mental health conditions affecting youth. This talk will also explore how global stakeholders can collaborate to establish industry-wide safety standards and regulatory frameworks, particularly for AI products marketed to minors.
Slides
Slides will be available for download after the lecture.
Video
Recording will be available after the lecture.