There’s nothing like digital companionship – said no psychologist ever. But a statement earlier this year sent therapists around the world spiraling, head in hands. The reaction came after Meta CEO Mark Zuckerberg argued that AI friends can help people feel less lonely and meet their need for social connection.

“The average American, I think, has, I think it’s fewer than three friends,” Zuckerberg declared. “Three people that they consider friends. And the average person has demand for meaningfully more. I think it’s like 15 friends or something, right?”

His suggestion that AI could act as a stand-in for real-world friends went viral, leaving many saddened or outright outraged. Yet, months after those remarks were made, research in Ireland suggests that AI chatbots are indeed becoming a key player in friendship groups. And the impacts are worrying.

CyberSafeKids’ annual Trends & Usage Report, ‘A Life Behind The Screens’, published last month, reveals a surge in the number of eight- to 15-year-olds using AI chatbots for homework, information and, yes, friendship.

More than a quarter (26%) of primary school children (aged eight–12) and over a third (36%) of secondary school children (aged 12–15) have engaged with AI chatbots, according to figures in the report. While the most popular use of chatbots was to look up information, 8% of primary school children and 10% of secondary school children are using them to chat and get advice.

“The numbers have grown substantially over the last year, and that’s just a reflection of the fact that AI is so much more accessible these days,” Alex Cooney, CEO of CyberSafeKids, tells Irish Country Living.

“Generative AI is being increasingly embedded across apps that children are already using, like WhatsApp and Snapchat. It’s risky, especially because they are designed to be very human, warm, engaging, and the main purpose of them, above everything else, is to keep you talking.

“There’s been some evidence that this can be a real problem. One thing would be children seeing it as a friend, and increasingly relying on it as a friend, which is not unexpected because something like Snapchat is designed to be a companion app.”

Alex explains that positioning AI as a friend can be confusing and misleading for a child.

“Chatbots sound very persuasive and authoritative. It sounds like something you should and could believe. And of course, more vulnerable individuals who haven’t been equipped with critical thinking skills may fall prey to mis- or disinformation online that can have harmful outcomes.

Alex Cooney of CyberSafeKids. Photo: Ronan Melia Photography.

“They may end up feeling like they’re talking to a real human being, and that the advice they’re getting is from a trusted adult because there is that kind of warm and encouraging tone. But I think that we need to be very careful, because at the end of the day, they’re not a trusted adult, they are a chatbot, and it’s designed for something that is not necessarily in their best interest. It’s all for the purpose of engagement.”

Regulation and oversight of AI chatbots are among the interventions CyberSafeKids wants to see. “Obviously, we need the technology to have the safeguards built in,” says Alex. “It’s very frustrating to hear these companies come out and say, ‘we’re going to add more safeguards now’. And you think, well, why on earth were you ever allowed to roll this stuff out in the first place?

“I wrote an article recently where I said, how is it that a cuddly toy in a shop is subject to very stringent checks and balances before it ever gets into the hands of a child? It must meet the standards, and yet AI can be rolled out from one day to the next without those checks. It’s insane.”

Children are likely to be the group most impacted by the unchecked rise of generative AI, according to a new policy report on AI and children’s rights published last month by the Ombudsman for Children’s Office. Yet, despite the implications of AI for children and their rights, children are invisible in current national law and policy, with only two mentions of children in the National Strategy on AI.

“Ireland’s National Strategy on AI focuses on economic competitiveness and does not reflect a unified, Government-wide approach,” the report states. Its main recommendation is that the Government and regulators include “a special focus on children and adopt a child rights approach when developing policies and laws on AI”.

My AI therapist

It is not only children who are using AI for companionship. Unable to afford the high price of therapy appointments, some adults are turning to generative AI models, like ChatGPT, for life advice. In some cases, people find it easier to speak to a machine than to a person. More often, though, it is the cost of therapy sessions that pushes them down that road.

The practice of using AI for therapy is something that CORU, the regulator of health and social care professionals in Ireland, cautions against.

Its CEO, Claire O’Cleary, told Irish Country Living: “We strongly advise anyone seeking therapeutic support to only do so from a qualified professional.

“Tools such as ChatGPT and chatbots are not designed to replace therapy or the professional judgment and ethical accountability that a registered practitioner provides.”

Dr Natalia Putrino, a chartered psychologist with the Psychological Society of Ireland, says one of the big problems with replacing a human therapist with an AI tool like ChatGPT is that it lacks “intuition and empathy”. Where a human therapist is sensitive to a person’s background, culture and emotions, AI is not, she argues.

“The problem with ChatGPT is that it doesn’t challenge the person. It’s a lot of validation. Instead of learning the skills to cope with discomfort, people who use AI as constant support are not going to develop resilience skills.


“A therapist encourages the client to use coping skills. I say that growth in therapy happens outside the comfort zone, and what happens is that ChatGPT maybe unintentionally reinforces some behaviours, whereas therapists do the opposite. We challenge boundaries.”

In recent months, one of the biggest and most concerning issues to emerge around AI is how inadequately it responds to suicide risk. If an individual starts to talk about suicide with one of these models, the AI can keep engaging rather than stopping the conversation or directing the person towards help.

“That is one of the biggest dangers,” says Dr Putrino. “Unfortunately, there have been tragic cases – The New York Times reported one [case] involving a young woman who took her own life after relying on ChatGPT as her ‘therapist.’ AI lacks the clinical judgment and safety protocols that human psychologists use in these situations. When someone shows suicidal ideation, a psychologist immediately assesses risk and connects the person with emergency support if needed. An AI cannot replace that.

“In fact, I believe ethical limits are essential: if an AI detects suicidal thoughts, it should not continue the conversation; it should redirect the person to crisis hotlines or emergency services.”

Although she does not know anyone “personally” using ChatGPT as a form of therapy, Dr Putrino says she knows of “instances” where it is happening.

“People are telling me about it. Someone told me that her friend was using ChatGPT to decide whether to break up with her boyfriend, and how to express that.”

When asked whether the growth of AI in the therapy space makes her fear for her own job, Dr Putrino says she is not worried.

“I think that humans look for humans. We look for connection. My concern is more that, if people continue using it, this is going to increase isolation in society, along with depressive symptoms and anxiety.”