An Agentic LLM-Based Framework for Population-Scale Mental Health Screening
Quick Take
An agentic, LLM-based framework for mental health screening over large clinical datasets, with each pipeline stage run as a policy-governed agent.
Key Points
- Each pipeline stage is encapsulated as a LangChain agent governed by explicit policies.
- Validated stages are frozen so later adaptations cannot overwrite a working configuration without demonstrated improvement (see the sketch after this list).
- Demonstrated in a transcript-based depression detection proof of concept.
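The paper implements stages as LangChain agents; the framework-agnostic Python sketch below (class and function names are assumptions, not the authors' code) illustrates the freeze/rollback idea: a stage accepts a new configuration only when a cheap proxy evaluation beats the score of its currently locked configuration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical sketch: each pipeline stage keeps its best-known configuration
# and only accepts a replacement when a proxy evaluation shows an improvement.
@dataclass
class Stage:
    name: str
    config: Dict[str, Any]                 # current (validated) configuration
    proxy_score: float = float("-inf")     # score of the validated configuration
    locked: bool = False                   # frozen once validated

    def propose(self, new_config: Dict[str, Any],
                evaluate: Callable[[Dict[str, Any]], float]) -> bool:
        """Accept new_config only if the proxy evaluation improves on the
        locked configuration; otherwise roll back (keep the old one)."""
        candidate_score = evaluate(new_config)
        if self.locked and candidate_score <= self.proxy_score:
            return False                   # rollback: no demonstrated improvement
        self.config, self.proxy_score = new_config, candidate_score
        self.locked = True                 # freeze the newly validated configuration
        return True


# Toy usage: a retrieval stage is tuned and frozen, then a weaker proposal is rejected.
if __name__ == "__main__":
    retrieval = Stage("retrieval", config={"similarity": "cosine", "top_k": 5})
    cheap_proxy = lambda cfg: 0.81 if cfg.get("similarity") == "cosine" else 0.70
    retrieval.propose({"similarity": "cosine", "top_k": "dynamic"}, cheap_proxy)  # accepted, locked
    accepted = retrieval.propose({"similarity": "euclidean", "top_k": 10}, cheap_proxy)
    print(accepted, retrieval.config)      # False {'similarity': 'cosine', 'top_k': 'dynamic'}
```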
Abstract: Mental health disorders affect millions worldwide, and healthcare systems are increasingly overwhelmed by the volume of clinical data generated from electronic records, telemedicine platforms, and population-level screening programs. At the same time, the emergence of novel AI-based approaches in healthcare calls for intelligent frameworks capable of processing domain-specific unstructured clinical information while adapting to patient-specific needs. This paper proposes an agentic framework for building robust LLM-based pipelines, where each stage is encapsulated as a LangChain agent governed by explicit policies and proxy-guided evaluation. Stages are incrementally locked once validated, ensuring that later adaptations cannot overwrite configurations without demonstrated improvement. The proposed framework evolves from feature-level exploration, through proxy-based tuning and freeze/rollback mechanisms, to full orchestration by an Orchestrator Agent that coordinates preprocessing, retrieval, selection, diversity, threshold optimization, and decoding. A proof-of-concept in transcript-based depression detection demonstrates that the framework converges to stable configurations, such as cosine similarity, dynamic Top-k, and threshold 0.75, while controlling evaluation costs and avoiding regressions. These results highlight the potential of agentic AI to enable population-level mental health screening over large clinical datasets, addressing critical challenges in trustworthiness, reproducibility, and adaptability required in healthcare environments.
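The converged configuration reported in the proof of concept (cosine similarity, dynamic Top-k, threshold 0.75) can be read as a threshold-gated retrieval step. The sketch below is one plausible rendering of that setting, not the authors' implementation; the function names, the max_k cap, and the embedding setup are assumptions.

```python
import numpy as np

# Illustrative sketch: retrieve transcript passages whose cosine similarity to the
# query embedding exceeds a tuned threshold (0.75 in the paper's proof of concept),
# with a "dynamic Top-k" that adapts to how many candidates clear the threshold.
# Names and the interpretation of "dynamic Top-k" are assumptions.

def cosine_similarity(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of candidate vectors."""
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return c @ q

def dynamic_top_k_retrieve(query_emb: np.ndarray,
                           candidate_embs: np.ndarray,
                           threshold: float = 0.75,
                           max_k: int = 10) -> list:
    """Return indices of candidates above the similarity threshold,
    capped at max_k and ordered from most to least similar."""
    sims = cosine_similarity(query_emb, candidate_embs)
    above = np.flatnonzero(sims >= threshold)
    k = min(len(above), max_k)             # k adapts to the number of passing candidates
    return above[np.argsort(sims[above])[::-1][:k]].tolist()

# Toy usage with synthetic embeddings standing in for transcript-segment vectors.
rng = np.random.default_rng(0)
query = rng.normal(size=128)
noisy_matches = query + 0.3 * rng.normal(size=(3, 128))   # similar segments, clear 0.75
distractors = rng.normal(size=(47, 128))                  # unrelated segments
candidates = np.vstack([noisy_matches, distractors])
print(dynamic_top_k_retrieve(query, candidates))           # indices of the noisy matches
```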
| Comments: | 8 pages, conference paper presented at IEEE BigData 2025, Macau |
| Subjects: | Artificial Intelligence (cs.AI) |
| Cite as: | arXiv:2605.13046 [cs.AI] (or arXiv:2605.13046v1 [cs.AI] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.13046 (arXiv-issued DOI via DataCite, pending registration) |
Submission history
From: Giuliano Lorenzoni
[v1] Wed, 13 May 2026 06:08:43 UTC (174 KB)
— Originally published at arxiv.org