
Yorkville University recently marked an important milestone with the launch of The Practice Forum – a new professional learning series designed to help Master of Arts in Counselling Psychology (MACP) students, alumni, and mental health practitioners stay informed, reflective, and ready to navigate the rapidly evolving world of counselling and psychotherapy.
For its inaugural session, the Forum welcomed Candice Alder, M.Ed., RCC, a clinical counsellor, AI ethicist, and president of the BC Association for Clinical Counsellors (BCACC), for a dynamic 60-minute keynote titled AI in Therapy: Trends Shaping the Future of Counselling and Psychotherapy. A leading voice in the field, Candice authored Canada’s first ethical AI guidance for clinical practice (BCACC, 2025), serves as an IEEE CertifAIEd assessor, and brings two decades of experience supporting children, youth, and families impacted by trauma and abuse.
Watch the full discussion HERE!
Following her keynote, Candice joined Yorkville University professor Dr. Cindi Saj for a fireside chat, before the floor opened to a lively audience Q&A facilitated by Yorkville faculty members Dr. Deborah Seabrook, Dr. Avideh Najibzadeh, Dr. Carolyn Phillips, Dr. Amjed Abojedi, and Dr. Shaelyn Kraemer. Together, they unpacked the emerging technologies reshaping mental health care and the ethical considerations practitioners need to keep top of mind.
With time running short, several thoughtful audience questions went unanswered during the live session – but Candice has since taken the time to respond to them:
Canada is part of Five Eyes. In that sense, is data stored in Canada really protected from meddling or intrusion from the US government?
Five Eyes is an intelligence-sharing alliance among Canada, the US, the UK, Australia, and New Zealand that focuses primarily on counterterrorism, cybersecurity threats, espionage threats, serious organized crime, and foreign interference. It is not a mechanism for warrantless cross-border access, nor does it override Canadian privacy law. Domestic laws still protect Canadian data.
Won’t the ‘mindful’ use of AI still influence clinicians?
Yes, AI – like anything else – is bound to influence clinicians and their practice, in the same way that journal articles, peer-to-peer consultation, and supervision influence clinicians. This is not inherently a bad thing. The important piece to remember is that no matter what influences you as a clinician, it is ultimately your responsibility to think critically and use your clinical experience to guide any and all decisions.
At what point does the weight of systemic responsibilities outweigh what individuals can hold? Are the companies also responsible?
If a clinician feels that the weight of systemic AI responsibilities exceeds their capacity to hold them, then this is a clear sign that using AI in your practice is not for you…at least until you build your skill set and knowledge base. While it is reasonable to expect that companies will hold some level of responsibility for their product/service, a company is not responsible for you – as a clinician – utilizing it in your practice. Utilizing AI is not obligatory. However, if you choose to leverage the benefits of AI, you must also have developed your capacity to do so in an ethical and responsible fashion.
I am Latino, and most of the tools that listen to my speech make mistakes while transcribing it. How can we be sure Latino communities, for example, are included in AI training? And how can we be sure that is happening for all the other groups residing in Canada? We are, by definition, a multicultural nation.
There is no broadly available AI transcription service that never makes mistakes, regardless of the ethnicity or spoken language of the subject. This is precisely why clinicians must always review AI outputs for completeness and accuracy prior to finalization. When considering an AI transcription service for your clinical practice, it is always advisable to gather information from the service provider (on their website and/or by reaching out to them directly) to determine whether their service will meet your clinical needs. This includes finding out whether the AI was trained to recognize the languages, accents, and dialects of the populations that you serve.
It has been mentioned a number of times that the AIs are training, but no one has said anything around the ethics of how the data was obtained. As professionals, don’t we owe it to other professionals to make sure that the data is obtained ethically?
Absolutely! When considering an AI tool or service for use, this is a great question to ask the tool/service provider to inform your decision making.
Has there been any discussion about the use of LLMs that are locally stored on your encrypted computer to mitigate any concerns of data in transit?
Regardless of where the AI is hosted, when data is moving between one location and another it should always have appropriate data security features in place. This includes, but is not limited to, strong encryption, TLS (Transport Layer Security), and authentication and identity controls.
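As a purely illustrative sketch of what those safeguards look like in practice – assuming a hypothetical transcription endpoint, placeholder URLs, and a placeholder access token rather than any real vendor’s API – the Python snippet below sends data over HTTPS with certificate verification and an authentication header, and shows the equivalent call to a locally hosted model that keeps the data on the clinician’s own machine.

```python
import requests  # common HTTP client; verifies TLS certificates by default

# Placeholder values for illustration only – not real services.
HOSTED_URL = "https://ai-vendor.example.com/v1/transcribe"  # hypothetical hosted AI endpoint
LOCAL_URL = "http://127.0.0.1:8080/v1/transcribe"           # hypothetical locally hosted model
ACCESS_TOKEN = "REPLACE_WITH_YOUR_TOKEN"                    # authentication / identity control


def send_to_hosted_service(audio_bytes: bytes) -> dict:
    """Data in transit: HTTPS (TLS) encrypts the payload; verify=True rejects untrusted certificates."""
    response = requests.post(
        HOSTED_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},  # prove who is making the request
        data=audio_bytes,
        timeout=30,
        verify=True,  # the default, shown explicitly: refuse connections with invalid certificates
    )
    response.raise_for_status()
    return response.json()


def send_to_local_model(audio_bytes: bytes) -> dict:
    """With a locally hosted model, the data never leaves the machine, minimizing transit exposure."""
    response = requests.post(LOCAL_URL, data=audio_bytes, timeout=30)
    response.raise_for_status()
    return response.json()
```

The same principle applies whichever route you choose: encryption protects the data while it moves, and authentication controls who is allowed to send or receive it.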
Are there specific AI tools for minority populations, such as clients on the autism spectrum, that can support a counsellor’s competency?
This would be highly dependent on what precisely that counsellor is seeking support with. However, we are seeing the emergence of AI-augmented tools to support learning, and we can expect innovative applications to continue to emerge.
One opportunity for discussion I see missing in the March 2025 Guidelines is the impact of AI on the mental health of ourselves as practitioners, and also the impact on our clients. We’re talking about responsible use of a technology that we know, based on copious research, can have deleterious effects on cognition, mental health, and social skills in the people who use it – and that includes us as counsellors, as well as our clients, whether the use of AI is private or professional in our office. Given that we are mental health practitioners, isn’t this central to the conversation? Shouldn’t the guidelines include ethical guidance around the psychological and cognitive impacts of AI on both practitioners and clients?
The intention of the current guideline is to specifically support the ethical integration of AI into clinical practice. While it is outside the scope of this particular document to address the impact of AI on the mental health of users (practitioners or clients), the section on competency speaks to a small component of this. Tech anxiety is real and predates the emergence of AI, having shown up in previous industrial and technological revolutions. As such, if a clinician is experiencing tech anxiety – including to the degree that it negatively impacts their mental health – then this likely highlights gaps in understanding and skill that need to be addressed before use.
Anecdotally, it seems a lot of people are using AI as a sort of therapist. I’m wondering: a) How is this affecting therapy? and b) Is there concern that it’s affecting the business of therapists?
a) There is very little data on this at this time.
b) It is reasonable to anticipate that AI will impact the business of therapy. The best way to ensure that the coming changes align with your needs and values is to increase your own AI literacy and to support your clients in understanding the benefits and limitations of AI.
Have you heard of a plan for any professional organization (such as CCPA, BCACC, etc) to accredit AI service providers?
As far as I am aware, there is no plan in the works at this time to do so, nor is it something that I would anticipate anytime soon from professional associations. Not only would AI accreditation require specialized knowledge that is likely to be beyond the capacity of professional associations to obtain and maintain, but there is also no AI legislation in Canada at this time that would provide a baseline for that process. Certainly, accreditation does not require legislation, broadly speaking; however, in the case of AI it would be highly recommended. There are international accreditation-type processes that AI systems can voluntarily go through. For instance, as an IEEE-authorized CertifAIEd assessor of AI systems for ethical compliance, I regularly refer to the IEEE framework to do so (though it is not accreditation, per se).
This is a huge question, but how would one go about creating an AI policy for their practice? Everything we discussed is hugely helpful as guiding principles, but I’m not sure how we start putting them into practice.
AI policy development is going to be highly dependent upon several factors – for example, the size of the organization and the agreed-upon acceptable uses of AI by staff and/or clinicians, to name just two. Group practices and larger organizations that are unable or unwilling to develop the competency to do this internally would be well advised to bring in an AI ethics professional to support the process of developing regulatory compliance, risk tolerance, acceptable use cases, limitations, informed consent procedures, logging procedures, and all other aspects of policy development in this area that are relevant to the organization in question.
If AI is only as good as the data, how do organizations/service providers ensure that they are learning on reliable data?
Ask prospective AI tool and service providers about the training data used for their model. Ask specific questions, ask follow-up questions if you are not satisfied with the answers, and provide feedback (even if they do not ask for it).
I am an AI product maker, as well as a student in the MACP program. What kind of tools do you think therapists need the most? And how do I contact Dr. Alder after this presentation?
The kinds of tools that therapists need the most are highly dependent upon the kind of work the therapist is engaged in. Candice can be contacted at [email protected].
If our clients use AI for some counselling outside of our sessions, and this leaves them feeling distrustful of the counselling relationship, how could we intervene?
I would recommend getting curious about your clients’ use of AI as a proactive measure rather than a reactive one. However, in both scenarios, engaging in some supportive information-sharing about the limitations of general-purpose generative AI is a great place to start.
How do we reconcile the principle of fairness in AI with the need for these systems to engage in honest, rigorous truth-seeking? In some cases, these goals clearly conflict. For example, Google’s Gemini model recently generated historically inaccurate images (such as racially diverse Nazi soldiers) because fairness constraints overrode factual accuracy. How should counsellors understand and navigate this tension as AI tools enter therapeutic contexts?
It is extremely important to remember that AI models like ChatGPT, Claude, and Gemini are general-purpose generative AI models. They ARE NOT designed for use in mental health contexts – by either the clinician or the client – and should not be used as such.
What about environmental concerns regarding AI? I have heard that it uses a LOT of power…and water?
Yes, the environmental impact of AI is very real, and an ongoing topic of conversation in the field. AI is a new and rapidly evolving technology, and we are likely to see refinements in all aspects of its development, operation, and use over time.
Considering how quickly things are moving, and the robustness of the guidelines, have you encountered a company that actually meets these requirements?
Not every single aspect of the guideline is applicable in all AI use cases. For instance, providing confidence levels with AI outputs is not generally applicable to an AI transcription service. As for companies that meet the applicable recommendations from the guideline, broadly speaking, yes, they do exist.
What do you anticipate will be the most useful/common application of AI in the psychotherapy landscape in Canada?
This is a really tough question to answer, as AI capabilities and use cases are evolving quickly, and factors such as target population and the affordability of hardware are important to consider. However, one of the applications that I am most excited about is the integration of AI into VR- and AR-assisted therapies.
What do you know about current research on AI models specialized in Psychology? Any suggested authors?
Check out the work of Nell Watson – she is an AI powerhouse for whom I have deep appreciation.
Do you use AI, specifically ChatGPT, in your personal life? If you have kids (or if you did), do you/would you let them use it?
Yes, I definitely use AI in my personal life, and yes, I would allow children and youth to use AI in age-appropriate and supervised ways with clearly and consistently applied limits (no different from any other tech usage).
What happens if a client does not consent to their therapist using AI to work with them?
Then the therapist should not use AI in any aspect of their work with that client. A client not providing consent to the use of AI in service provision should not be a barrier to that client receiving services.
Is the Canadian Parliament working on AI regulations in our country?
The Artificial Intelligence and Data Act (AIDA) was brought forward by the Trudeau government and was effectively “dead in the water” when Parliament was prorogued in January 2025. It is believed that this is actively being worked on by the current government, given such indicators as the AI strategy for the federal public service launched in March 2025 and the public engagement initiative (and task force) launched in September 2025.
Is AI going to be a threat to our profession in the future?
I have long said that AI is a humanity-impacting technology, and exactly how it will impact life, communities, and sectors is largely unknown – much as no one truly foresaw how the printing press would change humanity. AI already poses threats to the mental health field (as much as it offers amazing opportunities), so it is reasonable to expect that this will continue. However, the only way to mitigate those threats is to build your AI literacy and get involved: ask tough questions of prospective mental health AI applications, provide feedback, educate clients, and so on. Technology and the companies that develop it will only shape our field if we allow them to!
If AI is out there for our field, how can we keep ourselves employed and not let AI take over the mental health field?
Build your AI literacy and get involved!
How do you know what error rate an AI system has?
If you are ever curious about error rates of AI systems, and you cannot find this information on their website, reach out directly to the company and ask.
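If you would rather spot-check accuracy yourself than rely solely on a vendor’s figures, one common measure for transcription tools is word error rate (WER). The sketch below is purely illustrative – the transcript strings are invented, and real vendors may report accuracy using different metrics – but it shows how a hand-corrected reference can be compared against an AI draft.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance, computed over words instead of characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting every reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Invented example: a clinician's corrected transcript vs. the AI's draft.
reference = "the client described feeling anxious before our weekly sessions"
hypothesis = "the client described feeling anxious for our weekly session"
print(f"Word error rate: {word_error_rate(reference, hypothesis):.0%}")  # 2 errors / 9 words ≈ 22%
```

Even a quick check like this on a few of your own de-identified samples can reveal whether a tool struggles with the accents and terminology your practice actually encounters.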