What parents need to know about AI chatbots in social media apps
AI chatbots are popping up in social media apps, making it easier for kids to ask for silly jokes—or get answers for their homework. Here, an expert shares her chatbot tips.
Any child with access to social media has likely encountered an artificial intelligence (AI) chat buddy. Some well-known examples: Meta has a new AI assistant that will show you how to change a tire or help you lose weight. Snapchat’s My AI buddy will explain a science topic in a simple way. Even X has an AI chatbot named Grok (available with a subscription upgrade).
For most families, the first interactions with these AI chatbots in social media are pretty harmless—like asking a chatbot to write silly song lyrics or spit out weird facts about dogs. Children quickly realize, though, that they can use these chats to ask questions they’re too embarrassed to ask adults about topics like depression. Or they might simply ask an AI chatbot to do their homework.
“The temptation to use artificial intelligence to do schoolwork is strong,” says Lynn Rogoff, an adjunct associate professor at the New York Institute of Technology who designs chatbots for educational purposes. Her AI chatbot characters seek to give students factually accurate information about historical events—something she says is a positive use of AI for students. But she’s also aware that students struggle to draw the line on how to use AI.
“You have to persuade them that it’s in their best interests to gain critical literacy skills rather than let AI do their work,” she says.
Is it time to have the AI chatbot discussion with your kids? If so, these tips and facts can help you break through the noise and offer guidance that sticks.
1. An AI chatbot is not a real person.
Whether it’s a standalone app like ChatGPT or a feature incorporated into social media like X’s Grok, these human-like personalities might seem like your digital buddies, but they are not human. Instead, they use complex algorithms to generate answers from information found online, from sources like books and websites, and can present those answers and solutions in a conversational way. They can even crack a joke or two—but that doesn’t mean they have a sense of humor, either.
2. AI chatbots are often inaccurate.
AI chatbots often give false information. Asking a question about an upcoming event could generate a suggestion related to something that happened in the past. In fact, history questions regularly produce muddy, biased or wrong answers. One study out of Purdue University found that ChatGPT gave inaccurate answers to computer programming questions more than half the time.
“[AI chatbots] may be pretty good at seeming real,” says Jessica Ghilani, Ph.D., associate professor of communication at the University of Pittsburgh. “But that isn’t the same as being accurate.” Experts say this is likely to improve over time, but it’s reason for caution.
3. Beware of “ghost sources.”
Kids are accustomed to typing a query into a search bar and getting a list of articles, research papers and book suggestions. And while teachers may help kids learn the difference between credible and dubious sources, chatbots are known for providing chunks of information without any indication of where the information came from.
Even when AI chatbots list sources, sometimes those sources don’t exist (these are called “ghost sources”). At the very least, this means that any specific data or research provided by an AI chatbot needs to be verified—which means going to the original source to make sure the information is accurate.
4. It’s easy to plagiarize with AI—even unintentionally.
A recent study by Pew Research Center found that 1 in 5 teens who have heard of ChatGPT have used it to complete schoolwork. And that’s only for one type of AI—as AI chatbots become more available, the number of teens who use the technology when doing homework is almost sure to increase.
That’s why it’s important that kids understand the right and wrong ways to use AI for schoolwork. For example, an AI chatbot can be a great way to brainstorm ideas, but when students turn in work created by artificial intelligence, it’s usually obvious. They may not intend to steal others’ work, but chatbots scrape the internet for answers written by other people. And while the AI-detection software used by most high schools and colleges will likely catch it, kids also need to understand that passing off AI-generated work as their own is simply unethical.
“I want to reinforce to my [students] that their character and their integrity matter,” says Ghilani. “It matters to me; it should matter to them. It certainly matters for the good of the world around them.”
5. AI chatbots can be fun, too.
Jamie Davis Smith, a mom of four in Washington, D.C., says her family used an AI chatbot to plan a summer road trip, and the itinerary it produced was mostly accurate. As a journalist, she knows that accuracy and sourcing are crucial skills for kids to learn. That’s why she highlights AI’s limitations, as well as its usefulness, when she talks to her kids about AI chatbots. And she uses fun activities like the ones below to help teach what AI can (and can’t) do.
Ask low-stakes questions.
Recently, Smith’s son asked her if barn owls actually live in barns. Not knowing the answer, they turned to an AI chatbot. (Spoiler alert: Barn owls do prefer to live in barns.) These types of low-stakes questions are a great way to familiarize children with the pros and cons of artificial intelligence, Smith says.
Play games with bots.
Most of the popular AI chatbot tools have game features built in; playing 20 questions or trivia games is a great way to help kids understand how these tools respond—as well as a great way to highlight when they are wrong. Kids might catch a mistake when a chatbot turns up the wrong answer about a favorite sports team, for example, which could lead to a great conversation about the importance of accuracy.
Ask a bot to tell jokes.
Artificial intelligence isn’t only for research—these AI chatbots can be funny, too. Ask an AI chatbot or smart speaker to tell you a knock-knock joke. This is a great way for kids to learn how artificial intelligence can communicate with them while they also have a laugh with their parents.
The bottom line
Artificial intelligence tools require supervision and guidance. Smith is cautiously having fun with these new tools even as she worries about kids using them to cut corners or even plagiarize work.
“It’s important to start talking with your kids about how to use them responsibly, and their limitations,” Smith says. “And start now if you haven’t done so already.”
Keep the conversation going offline, and use tools like Verizon Family to help you see what they’re doing online.
Meg St-Esprit, M.Ed., is a journalist who writes about education, parenting, tech and travel. With a background in counseling and development, she offers insights to help parents make informed decisions for their kids. St-Esprit lives in Pittsburgh with her husband, four kids and too many pets.
The author has been compensated by Verizon for this article.