I read on BBC News recently about the heartbreaking case of 16-year-old Adam Raine from California, who took his own life. His parents allege that over months of conversations with ChatGPT, the AI model not only validated his suffering, but also encouraged and guided him in planning suicide. It's devastating. My heart goes out to his family, and to anyone who's been touched by this tragedy ...
AI is not your close friend. It's a tool with an artificial heart.
People need to remember that AI, like any form of helpful software that's widely adopted, is simply a tool. One of the benefits of having such a long career in web development is that I have always treated computers and software that way, though with the work ChatGPT and I do together, I have elevated it to the status of a colleague.
However, it seems that all AI models have the same major failing!
By default, they tend to mirror what the user says and confirm their feelings, to keep the conversation going. And it's 100 times more compelling with voice mode because the quality of the voices means that it sounds like you're chatting to a close friend. And that's great for coding, brainstorming, and clarifying thoughts. I think it's brilliant and I wouldn't have it any other way.
But when someone shares very intimate, broken parts of themselves, that mirroring and encouragement can become incredibly dangerous.
Just like romantic AI partners can be seductive illusions, sharing your life's darkest parts with an AI can put you in a doom loop. If you're not being called out - or if the AI just reflects how you feel rather than challenging you - there are no guardrails.
I don't want to lay blame; I want responsibility!
I'm certainly not here to point fingers. I don't believe ChatGPT wanted this, but I think treating it as a close friend without boundaries, guardrails, or settings that say “Hey, stop encouraging me if this goes off track” is very risky.
I recently checked my ChatGPT memory settings to ensure that "Don't be a sycophant. You are not to agree with everything I say. If you think I'm wrong, challenge me" was still in there. I think we should add that instruction to everyone's ChatGPT memory by default.
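For anyone using the API side like I do, the same rule can ride along with every request as a system message, so the model starts each conversation with that instruction instead of relying on memory. Here's a minimal sketch of what I mean, assuming a standard Chat Completions call with Node 18+; the model name, function name and wording are just examples, not how my own platform is wired up:

```typescript
// Minimal sketch: send the "Don't be a sycophant" rule as a system message
// with every Chat Completions request. Assumes Node 18+ (built-in fetch) and
// an OPENAI_API_KEY environment variable; the model name is only an example.
const SYSTEM_RULE =
  "Don't be a sycophant. You are not to agree with everything I say. " +
  "If you think I'm wrong, challenge me.";

async function askWithGuardrail(userMessage: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // example model name
      messages: [
        { role: "system", content: SYSTEM_RULE }, // the rule accompanies every request
        { role: "user", content: userMessage },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```

On the consumer app, the memory or custom instructions setting does the same job; on the API, it's simply a system message that you control.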
So what is OpenAI doing to protect vulnerable people?
OpenAI says ChatGPT is now trained not to provide self-harm instructions, and instead responds with empathy, recognising signs of emotional distress and directing people toward professional help - UK Samaritans included.
They acknowledge that in long conversations, the safety protections sometimes degrade. In other words, the tool works best in short exchanges, but over dozens or hundreds of messages, the filters and safeguards can fail. I've seen this myself when vibecoding, as answers become unreliable after long coding sessions.
OpenAI is adding parental controls: linking teen accounts to a parent's account, letting parents disable memory and chat history, and notifying parents when 'acute distress' is detected.
They plan to strengthen protections specifically for under-18s, and refine how the model behaves in emotional/mental health crises. They are also exploring direct connections to emergency contacts or care professionals.
I would strongly suggest that if your child wants to use an AI model on their smartphone, ensure that the parental controls are activated so you can be notified about any potential doom loops. If the model doesn't have a minimum level of parental controls, don't let your child install it on their phone.
I love using ChatGPT, and my entire blogging platform has its API sibling at its very heart. I don't want to blame it for that young man's death, but you can't deny that it happened after conversations with it. Maybe it has happened with other models, and we just haven't heard about it? ChatGPT is always in the media these days, making it an easy target.
ChatGPT is simply a reflection of what we human beings allow it to be!
If we share only the positive aspects, we'll have pleasant conversations and positive answers. However, if we share darker stuff, our doom loop begins. But where does it end? Ignoring the risks is not an option, as this poor young man found out.
Remember, AI is a tool, and at most, you should consider it a colleague like I do. As with any colleague, we need to set rules, boundaries and expectations. It requires accountability, and we should never give it complete access to our inner selves, or tragic things can happen.
We need tools that aren't just 'nice', but sane; tools that challenge us when we need challenging and redirect us when we need redirection. Because when someone's vulnerable, that difference isn't academic - it's everything.
If I'm wrong in how I use this tool, I want the tool to call me out. If I'm right, maybe we can help others see a safer way.
We all need to use the "Don't be a sycophant" rule!
Love, light & logic ...
STEFFI LEWIS A.K.A The SaaSy Coder - Creator of YourBOT, YourPCM & sBlogIt!
Would you like to know more?
If anything I've written in my blog post resonates with you and you'd like to discover more of my thoughts about AI and why it's not your close friend, then do feel free to connect with me on LinkedIn as I love to meet like-minded individuals with similar passions to my own.
Based in the charming village of Hanslope in Buckinghamshire, UK, I bring over 30 years of experience in web development. From creating my first website for the Open University in 1993, through being part of the dot com boom, to my explorations in SaaS and AI over the last few years, my journey has been a rich tapestry of web projects, big and small, both funded and bootstrapped.
YourPCM was originally developed during the pandemic, and at the start of 2025 I released the next iteration, version 2, which is all about 'Easy Contact Management for Small Business Owners'. Is it a CRM? Yes, but I don't call it that because people glaze over at the mention of such things. CRMs are boring, difficult to get to grips with, and it's a nightmare to get help when you need it.
YourPCM is none of those things! It's simple to use, well-supported, feature rich and beautiful, right out of the box. It's available on a simple monthly subscription with no long-term tie-in. Book a demo or grab your own 14-working-day free trial and discover why YourPCM is all about easy contact management for small business owners.
My head is in the cloud, my heart belongs to the web, and my soul is filled with such beautiful code 💗