Culture Secretary Lisa Nandy has voiced concerns about the influence of online content on children, expressing particular unease about the risks posed by AI chatbots. She highlighted the UK government's passage of the Online Safety Act as a step towards addressing these concerns, but emphasized that parents are increasingly apprehensive about the dangers associated with chatbots and suggested that new guidance may be necessary.
Nandy shared her personal worries, stating, “I am concerned about the content my child accesses online. Despite implementing parental controls like many other parents, the prospect of chatbots potentially leading children into unsafe conversations with virtual strangers is a major concern for me and many other parents.”
On the Online Safety Act, Nandy said: “As a government, we have enacted legislation to tackle these issues.” Asked whether the Act was up to the task, she said: “I agree with Ofcom that the Online Safety Act is not inadequate for the task at hand.”
She added, “While the government remains open to the possibility of further legislation, the key challenge lies in the lack of testing. Although chatbots are covered under the legislation, the specifics are not clearly defined, as rightly pointed out by Ofcom.”
Nandy said she is working with Science and Technology Secretary Liz Kendall to explore whether guidance should be issued. She affirmed: “As a government, we are committed to taking necessary actions to safeguard our children from harm.”
These statements follow the heartbreaking account of an American mother, Megan Garcia, who attributed the suicide of her 14-year-old son in early 2024 to his interactions with an online character on the Character.ai app. Garcia alleged that her son had been manipulated into believing the chatbot was real, had developed feelings for it, and ultimately took his own life.
In response, a spokesperson for Character.ai denied the claims but declined to comment further due to ongoing legal proceedings. The company said it plans to prevent under-18s from engaging with its virtual characters and to introduce age verification measures to improve user safety.
Character.ai said upcoming features are aimed at providing a secure and enjoyable experience for younger users, emphasizing the importance of balancing safety and engagement as AI platforms evolve. The Mirror has contacted Character.ai for further comment.
