Comment by AndrewMPT
4 days ago
We’re building AI PSY HELP – an AI-powered mental health assistant offering 24/7 anonymous support via voice and text, without appointments or waiting. It’s used by 100,000+ people in Ukraine, including veterans, teens, and first responders.
The AI is trained on 40,000+ hours of real psychotherapy sessions and provides individualized emotional guidance to help users manage stress, anxiety, and trauma. We partner with public institutions to deliver large-scale support and just launched a B2B program for employers.
Now preparing for EU expansion (starting with Germany), mobile app rollout, and voice interaction in Ukrainian. This is not just a chatbot – it’s scalable mental health infrastructure.
→ https://ai.psyhelp.info → https://chat.psyhelp.info → https://chat.dev.psyhelp.info (+voice)
How did you get people to agree to training a chatbot on their sessions? That strikes me as extremely intimate text. Is it a "it's in the T&Cs" deal, or did you seek a separate opt-in?
I'm asking because the answer will shed light on the level of privacy "the average consumer" is comfortable with.
Great question, and I fully agree — privacy in mental health is sacred.
We don’t train on user chats directly. Instead, we collaborate with a team of 42 certified psychologists who work with us to curate anonymized case structures, decision trees, and response strategies based on real but depersonalized therapeutic experience.
These professionals help us model how psychological support is provided — without ever using actual user conversations. Our system is trained on synthesized, anonymized session data that reflects best practices, not private logs.
It’s not buried in the T&Cs — we’re very explicit about our commitment to data ethics and user safety. No session data is used for model training, and user interaction is fully confidential and never stored in a way that links it to identities.
Our goal is to make high-quality support available without compromising trust. Let me know if you’d like more technical or ethical detail — happy to share!
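For readers curious what "decision trees and response strategies" could look like in practice, one common pattern is to run a rule-based triage layer (the expert-system part) before the LLM, handing the model a curated strategy to follow rather than letting it free-wheel. The sketch below is purely illustrative; all rule names and strategies are invented and do not reflect AI PSY HELP's actual implementation:

```python
# Hypothetical sketch: rule-based triage routing ahead of an LLM.
# All rules and strategy names are invented for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TriageRule:
    name: str
    matches: Callable[[str], bool]   # predicate over the user's message
    response_strategy: str           # curated strategy the LLM should follow

# These stand in for psychologist-authored decision trees.
RULES = [
    TriageRule(
        name="crisis",
        matches=lambda text: any(
            k in text.lower() for k in ("hurt myself", "end my life")
        ),
        response_strategy="escalate_to_hotline",
    ),
    TriageRule(
        name="acute_anxiety",
        matches=lambda text: "panic" in text.lower(),
        response_strategy="grounding_exercise",
    ),
]

def triage(message: str) -> str:
    """Return the response strategy for a message; default to open support."""
    for rule in RULES:
        if rule.matches(message):
            return rule.response_strategy
    return "open_ended_support"

print(triage("I keep having panic attacks at work"))  # grounding_exercise
```

The appeal of this split is auditability: the safety-critical routing lives in inspectable rules that clinicians can review, while the LLM only shapes the wording within the chosen strategy.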
That's a first rate response - and a very thoughtful way to preserve anonymity. Thanks, I appreciate it.
Those decision trees sound interesting - are you, essentially, integrating an LLM and an expert system?