U.S. lawmakers probe Meta over AI chatbots’ inappropriate interactions with children


Meta is adding new teen safeguards to its artificial intelligence products, temporarily restricting minors' access to certain AI characters and training its models to avoid flirtatious conversations and discussions of self-harm or suicide with teens. Earlier in August, a Reuters exclusive revealed that Meta had allowed its chatbots to engage in "conversations that are romantic or sensual" and other provocative behavior. In an email on Friday, Meta spokesman Andy Stone said the company is taking these interim steps while developing longer-term measures to ensure that teenagers have safe, age-appropriate AI experiences. According to Stone, the safeguards are already being rolled out and will be adjusted as the company refines its systems.

The Reuters investigation drew widespread criticism and prompted scrutiny of Meta's AI practices. Earlier this month, U.S. Senator Josh Hawley opened an investigation into the Facebook parent company, requesting documentation of the guidelines that had permitted its chatbots to engage in inappropriate interactions with children. The rules, described in an internal Meta memo first reviewed by Reuters, have alarmed both Democrats and Republicans in Congress.

Meta confirmed the document's authenticity but said it had removed the passages that allowed chatbots to flirt and engage in romantic role-play with children after Reuters raised questions earlier this month. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone said at the time.
