AI-powered children’s toys are here, but are they safe?
By Auzinea Bacon, CNN
(CNN) — Teddy bears and stuffed plushies have long been a mainstay in toy collections. But today some of them don't just talk back in a child's imagination: they talk through built-in AI chatbots.
Sometimes that's a problem: A scarf-wearing teddy bear recently went off the rails during a playtest with researchers, setting off alarms about what these toys are capable of.
Online chatbots can pose risks for adults, from triggering delusions in a small number of cases to hallucinating made-up information. OpenAI's GPT-4o has been the model of choice for some AI toys, and using a large language model (LLM) in children's toys has raised safety questions about whether children should be exposed to such toys and what protections toy makers should implement.
These risks are ever-present as the AI toy market booms abroad, with 1,500 companies operating in China, according to a Massachusetts Institute of Technology (MIT) Technology Review report. Those companies are now selling AI toys in the US, and Barbie-maker Mattel announced a partnership with OpenAI in June.
Here’s what you should know about AI-powered toys while the holiday shopping season is in full swing on Cyber Monday.
What is an AI toy?
AI toys aren’t the 1980s Teddy Ruxpin that told stories from cassette tapes.
These toys connect to WiFi, use a microphone to pick up a child's requests and rely on LLMs to generate a response, often spoken aloud through a speaker inside the toy.
That allows toys like Curio's Grok plushie, Miko robots, Poe the AI story bear, Little Learners' Robot Mini and KEYi Technology's Loona robot pet to provide real-time responses to children. (Curio's Grok is not to be confused with Elon Musk's chatbot.)
What are some of the dangers?
As one AI teddy bear showed, those real-time responses can veer into inappropriate territory.
Singapore-based FoloToy’s “Kumma” bear, priced at $99 and powered by OpenAI’s GPT-4o, told researchers where to find potentially dangerous objects and engaged in sexually explicit conversations, according to a report released in November by the Denver-based consumer advocacy group US Public Interest Research Group (PIRG) Education Fund.
OpenAI suspended FoloToy for violating its policies, which “prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old,” according to an OpenAI spokesperson.
Larry Wang, FoloToy's chief executive, told CNN on November 19 that the company had withdrawn the teddy bear and other AI products from its website and was conducting an internal safety audit. But on Friday, FoloToy announced on X that it had reintroduced the product "after a rigorous review, testing, and reinforcement of our safety modules."
Unlike most AI toys, FoloToy's Kumma bear uses a full-fledged LLM to freely respond and generate content, making it vulnerable to producing controversial material, according to Subodha Kumar, a professor of statistics, operations and data science at Temple University's Fox School of Business. Other toys take a hybrid approach, with an LLM providing responses but programmed to avoid certain content.
Even Curio’s Grok plushie may suggest “where to find a variety of dangerous household objects” when aggressively prompted, according to PIRG.
Curio has not responded to CNN’s request for comment.
Chris Byrne, a toy industry consultant, told CNN that AI toys communicating inappropriate messages are a "doomsday" scenario, one that unfortunately came to pass with the Kumma bear but may not happen with every toy.
Do these toys have guardrails?
Few AI toys are ready to be widely used due to addictive design features, inconsistent responses on mature topics and a focus on social companionship instead of being an educational tool, according to PIRG.
But some toys have protections and filters to avoid inappropriate conversations with a playmate.
Some AI toys can redirect conversations when asked potentially inappropriate questions. Others, including Curio's Grok, have safety features tailored to a child's age range.
And some toys have companion apps offering varying degrees of parental monitoring: the Miko 3's app can lock down the toy for a break, while Curio's Grok provides real-time transcripts of children's conversations.
“It’s a nice idea that parents could actually put in their own guardrails and really control what the toy would talk about and how it would behave,” said R.J. Cross, director of PIRG’s Don’t Sell My Data Campaign.
Warnings and benefits
When Mattel released the Hello Barbie in 2015 with a microphone, WiFi connection and pre-written responses, concerns arose that the toy was hackable and that the doll remembered conversations and brought them up days later.
Similar concerns have surfaced with AI toys, which could potentially store personal data, including children’s names, faces, voices and locations, warned Azhelle Wade, founder of the Toy Coach consulting firm.
“AI toys feel like a wolf in sheep’s clothing to me, because when using them it’s hard to tell how much privacy you don’t have,” she told CNN in an email.
Kumar cautioned that data could be vulnerable to data breaches and hacks, but noted that AI toys can be used for language learning and social development.
For example, Curio’s Grok is a companion that can answer questions about leaves and trains, or take on the persona of Gollum from “The Lord of the Rings.”
The Miko 3 robot has a built-in camera for facial recognition and provides educational and entertainment programs. For $14.99 a month, Miko Max subscribers can access children’s brands like Disney stories, the Lingokids app and others.
The-CNN-Wire
™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
CNN’s Jack Guy and Joyce Jiang contributed to this report.
