AI Ethics: Developing AI for Mental Health
It seems like all we’ve been hearing about lately is Artificial Intelligence (AI). It is discussed and advertised everywhere we turn as companies race to develop products that ride this new trend. AI is developing at a rapid pace, seemingly overnight, and it is here to stay, whether we like it or not.
AI in Mental Health
AI has many potentially beneficial uses within mental health care, and it opens up possibilities for new interventions in areas with significant unmet needs. For example, AI can be particularly useful for early detection of mental health concerns, for reaching high-risk groups, and for providing care to individuals concerned about the social stigma of therapy or to those who mistrust the medical system.
AI can provide outreach to underserved populations, such as those living in isolated rural communities and other resource-poor areas. AI can also help us improve access to care by reducing the time-consuming processes of screening and admittance into the healthcare system. In short, AI can complement existing mental health services or provide an easily accessible entry point for those initially reluctant to pursue care.
AI for Children
Children have not been immune to the advances of AI. At home, children are exposed to and communicate with virtual assistants such as Google Assistant, Alexa, or Siri. AI is also advancing in the education sector through interactive instructional technologies. Social robots designed to support children with disabilities are growing in number as well. These robots may be particularly helpful for children on the Autism Spectrum who struggle with social interactions: non-judgmental and human-like, they foster social bonding, teach children about emotions, help them communicate, and help them learn to imitate actions and follow instructions. Social robots are also unendingly patient and can provide boundless care, unlike a human caregiver.
Ethical Concerns with AI
However, while AI is rapidly developing and becoming part of our daily lives, there has been little ethical guidance to shape AI design. This absence of explicit ethical standards is particularly troublesome in the mental health field, and especially when working with children. In particular, both patients and clinicians have concerns around trust, privacy, autonomy, and consent. There are currently no well-established standards on confidentiality, information privacy, or the secure management of collected data, even as AI gathers more and more personal health information and other user data.
Psychologists are bound by the ethical principle of nonmaleficence (“do no harm”), and AI within mental health must adhere to the same standard. AI tools should be subject to the same risk assessments and regulatory oversight as other medical devices before being approved for clinical use. Yet there is no existing guidance specific to the mental health field and AI’s best uses within it. Nor are there recommendations on how to train and prepare early-career clinicians to use such tools with patients.
This lack of guidelines is particularly concerning when working with children, who often understand AI differently than adults do. Patients may mistake AI for humans because they do not understand how an application or avatar works, especially if the patient is a child. Some children believe these devices have a real human inside, or they attribute human characteristics to the device. A 2022 study found that children shared more information with a humanoid robot with a childlike demeanor than with human interviewers. Children may feel safer confiding in a non-judgmental robot because they worry about getting in trouble for disclosing information to human adults.
This raises the important question of consent for children. Can children truly provide informed consent to AI if they do not fully understand what it is and all its complexities, given that we adults are still struggling to work out these issues ourselves? Who has access to the data that young children are sharing, and how is it being used?
As a clinician-led company, these concerns are top of mind at Kismet. We are committed to designing AI through an ethical lens and want to build AI that enhances human capabilities rather than replacing them. Given the lack of clear ethical guidelines around AI in children’s mental healthcare, our team is committed to staying up to date with research and recommendations specific to this topic as we develop our child-centric AI that recognizes children’s rights and needs.
Building Ethical AI for Children
Early identification of ethical issues can help us in the construction of AI for mental health. It’s clear that children’s input is key to developing systems that meet their needs and uphold their rights. A recent survey of hundreds of Dutch children found that children are able to articulate their own values and ethical boundaries around AI: human literacy, emotional intelligence, love and kindness, authenticity, human care and protection, autonomy, AI in service, and exuberance. These findings highlight that children can and should play an informative role in developing AI.
In that same survey, children themselves voiced the growing concern that AI will replace human care. In the mental health field, a legitimate worry is that AI will replace established services, resulting in fewer available human-driven services. This is particularly problematic in an already inequitable and overtaxed healthcare system. To provide patients with quality mental healthcare, we should identify ways to augment existing human mental health resources, not replace them.
At Kismet, we plan to do just that: we want to ensure that AI is used as an additional resource, not an excuse for reducing high-quality care by trained mental health professionals. We use the best tools and technologies for our services, and we want to use AI only when it is the best tool available to us. As a team, we are prioritizing ways to integrate AI with human-driven services, drawing on the strengths of both AI and real human knowledge and expertise. We also believe that while AI is in its early development phase, a qualified human mental health clinician must always supervise the AI tool.
Developing Kismet’s AI Guidelines
Because of the lack of standardized guidelines in our space, at Kismet we are actively collaborating with pediatric and AI experts to craft our own comprehensive ethical AI guidelines, grounded in best practices and our own research. We’re starting from UNICEF’s policy guidance on AI for children, which lays out requirements for child-centered AI:
- Support children’s development and well-being
- Ensure inclusion of and for children
- Prioritize fairness and non-discrimination for children
- Protect children’s data and privacy
- Ensure safety for children
- Provide transparency, explainability, and accountability for children
- Empower governments and businesses with knowledge of AI and children’s rights
- Prepare children for present and future developments in AI
- Create an enabling environment for child-centered AI
- Engage in digital cooperation
As a team, we’ve identified additional areas of concern that are especially important to us: combating bias and gaps in cultural competency in healthcare, minimizing healthcare provider burnout, and discouraging the spread of misinformation by inaccurate or under-tested models. Our goal is a final set of guidelines that ensures we are creating an AI tool that improves the quality and accuracy of, and access to, mental health care while avoiding harm.
We’ll keep you posted on our progress – watch the Kismet blog for more updates! And if you are an expert in AI ethics and want to collaborate, we’d love to be in touch! Shoot us an email at hello@kismethealth.com.