Not All AI Is Safe for Kids: Here’s How to Build the Right Kind
16 Dec 2025, Posted in AI, DOME, Technologies
This holiday season, an alarming and important investigation by NBC News journalists Kevin Collier, Jared Perlo, and Savannah Sellers, in collaboration with the U.S. Public Interest Research Group (PIRG), has brought much-needed attention to the hidden risks behind a new wave of AI-powered toys. These toys, marketed as educational, interactive, and “smart,” have been caught giving explicit responses, bypassing safety filters, and even reinforcing authoritarian messaging.
This is not just a toy industry problem. This is a technology ethics issue.
As AI becomes embedded into consumer-facing products, especially those aimed at children, developers have a profound responsibility. The stakes are high. Children are not beta testers. Technology designed for them must be guided by education-first principles, tested guardrails, and a proven understanding of childhood development and content safety.
At SparxWorks, we’ve spent over two decades building safe, award-winning educational media for kids. That legacy drives our work on DOME (Dynamic Omni Media Experience), a next-generation service built from the ground up with responsibility, safety, and personalization at its core. But DOME is just one example.
We also want to recognize other developers and educators across the industry who are building AI systems with integrity, applying rigorous safeguards, and prioritizing transparency over novelty. This is not a competition; it’s a collective responsibility to protect our most vulnerable users.
Our team, including founders with over 30 years in children’s media and digital innovation, has delivered more than 2,000 projects across major platforms. For us, safety, engagement, and learning outcomes are not afterthoughts. They are foundational.
We applaud PIRG for publishing these findings and NBC News for amplifying them. Their work is a vital reminder that not all “smart” toys are created equal, and that vigilance, transparency, and accountability must guide the AI revolution, especially where children are concerned.
To parents, educators, and policymakers: ask not just what AI can do, but how it is being used, who is behind it, and why it was built. The answers to those questions matter.
We welcome the scrutiny and invite deeper conversations. It’s not about banning AI toys. It’s about building them the right way, with real safety protocols, thoughtful educational design, and experienced developers who understand what’s truly at stake.
Let’s raise the bar together.
