In the final part of our series of posts from our event in Warsaw, we dive into the ethical aspects of using AI, one of the most relevant and pressing topics in the digital mental well-being domain. Much of the debate on this issue centers on algorithmic bias, data privacy, the transformation of the doctor-patient dynamic, and related concerns. These concerns underscore the complexity of deploying AI in sensitive areas, where the stakes involve not just efficacy but a profound impact on individual dignity and equity. What do experts think about this? What should we keep in mind as we embark on this journey?

Embracing Technology in Mental Health Care

Tom Van Daele, Clinical Psychologist and Research Coordinator in Psychology and Technology at Thomas More University of Applied Sciences, addressed the perception that mental health professionals are hesitant to adopt technology. He acknowledged that some professionals are not tech-savvy and that ethical concerns play a significant role in this reluctance.

“There’s always people who are not tech savvy or don’t know how to make proper use of them. But on the other hand, ethics essentially comes into play. We dramatically fail to help all people in need. So I can only applaud anything that can add to more people seeking and finding some sort of support,” Van Daele said.

However, Van Daele also stressed the importance of caution when using AI in mental health. “It’ll do pretty impressive stuff for like 95% of the cases. But 5 or sometimes even 1% that worries me where it gets hallucinations or you get recommendations that are not fully substantiated or potentially even false. So, it’s important to carefully choose when and how and for what purpose you could use them.”

DŌBRA Solution: Magic Mirror

Tetiana Kochetkova, Product Owner and Co-Founder of Magic Mirror at DŌBRA, shared her insights on how her startup addresses these ethical concerns. Magic Mirror is a mobile app designed to facilitate decision-making and expand users’ perspectives by leveraging a hybrid approach that combines AI, professional profiling, and cognitive psychotherapy.

“Magic Mirror is built on the idea that the best partner for a person to think through problems is themselves. We see that negative self-image and negative self-talk impact a lot on our activities, and it’s harming us a lot,” Kochetkova explained. “We are trying to build some way to get into a partnership with myself—a person who I can trust and also do that in a safe space.”

To ensure ethical use of AI, Magic Mirror employs a three-tier AI ethics checkup: at the protocol level, verified by experts; at the data level, ensuring privacy and security; and at the ethics level, providing appropriate emergency responses and referrals to specialists as needed.

“AI in Magic Mirror serves as a ‘train for your thoughts,’ accelerating decision-making and improving self-image in a manner as modern transport makes movement faster and comfier. It’s designed to be a supportive tool that guides users to better understand and address their own mental health needs with a focus on mental well-being,” Kochetkova added.

Leveraging Technology to Combat Loneliness

Dr. Christina Spragg, Clinical Psychologist and Global Workplace Mental Wellness Consultant, highlighted the potential of technology to address issues like loneliness and grief.

“Technology, including AI, can help by facilitating online support groups where people with similar experiences can connect. For instance, someone in a small town who feels self-conscious about sharing personal issues can find support online while remaining anonymous. This is a powerful way to combat the loneliness and isolation exacerbated by our increasingly digital lives,” Spragg noted.

Ethical AI and Compassion in Mental Health

Van Daele further emphasized the importance of ethical AI in mental health care, particularly in providing compassionate responses. “People, on average, do want to be compassionate. It is however sometimes just difficult to find the right words. If you want to outsource this to AI, I think it’s perfectly fine if the alternative is that you won’t be giving a compassionate response.”

He noted the progress in destigmatizing mental health and the increasing willingness of people to seek help. “Over the last decade, we’ve seen an increase in the destigmatization of mental health. I think it was a very first important threshold for people to acknowledge ‘I’m not okay and I need help.’ The next step is the kind of support we can give those people who clearly indicate that they need some help.”


In conclusion, as we advance the integration of AI into mental well-being, we are reminded of both our capabilities and our responsibilities. The discussions from Warsaw provide crucial insights into the ethical deployment of AI technologies in mental health, emphasizing both the vast potential and the profound responsibilities we carry in shaping this evolving landscape.