How AI Can Impact Behavior, Particularly in Your Children
By: Kanan Levy
“Alexa, what is the weather today?” “Hey Google, play my favorite song.” “Alexa, stop.” “Hey Siri, shut up.”
These are just some of the phrases we say to our in-home smart assistants on a daily basis. While adults can generally recognize the difference between how we treat these artificial intelligence systems and how we treat actual living humans, there are concerns that today's children are not growing up with such a stark differentiation. If you are at least around 20 years old, in-home voice assistants were not around when you were growing up. If you needed to know the weather, you watched the news or looked it up. If you wanted to hear your favorite song, you got up and played it yourself on your phone, radio, or speaker. These actions are widely accepted, or normalized, in the technologically advanced world we live in.
The Anthropomorphization of Technology
What we are not used to, or prepared for, is the anthropomorphization of technology. Anthropomorphization occurs when we attribute human characteristics to a non-human object or being. This tendency is not new by any means. For generations, people have been anthropomorphizing non-human animals: we name our pets, groom them, talk to them, and so on. This has proven largely harmless, aside from the grief we experience when we lose them, a grief that, in extreme cases, can be comparable to losing a friend or family member.
Overall, this is a well-researched and well-understood behavioral response. What psychological researchers have not yet been able to sufficiently study are the impacts of anthropomorphizing artificial intelligence, especially when we grow up with it. Children have a harder time differentiating between the things they encounter in the world. This limitation, while developmentally normal, can be detrimental to their behavior and attitudes. In this blog, I stress that by being demanding and disrespectful toward technology (especially technology that talks back to us), children can come to believe that this is a normal way to treat other humans. One clear example of this worry is the increased risk of perpetuating gender stereotypes.
The Perpetuation of Gender Stereotypes
Historically, most can agree that the role of a woman was to be the caretaker while the role of a man was to be the provider. Within these roles were distinct responsibilities, whether cooking, for women, or bringing home a paycheck, for men. As the world progresses, both socially and politically, there has been a notable shift in the roles of men and women. Despite this, a woman's voice is the default in virtual assistants like Google Home, Amazon Alexa, and Apple's Siri. It is possible to change the gender of the voice, but having the default be a woman says something in itself. In childhood, the brain is malleable, and the implicit lessons children receive stick. It is therefore not progressive to perpetuate these norms, especially in the role of an assistant or caregiver to children. If children get used to issuing stark commands to female-sounding virtual assistants, such as “Alexa, what is the weather today?” or “Hey Siri, shut up,” how are they going to treat the women, and female caregivers, in their lives?
There is Still Hope
While everything I have written so far may seem hopeless and concerning, researchers are working to address these worries. Kurian (2024) suggests developing child-safe, child-centered designs for the AI children encounter, focusing on instilling empathy and reducing the promotion of harmful content or behavior. Beyond the research setting, parents and caregivers can make an effort to ensure their children remain respectful and display positive behaviors toward their peers. Make an effort to check in on your children and how they interact with the artificial intelligence you have in the home. Remind them to be nice, no matter what or whom they are talking to. Taking these steps can create a kinder and more understanding child. I am not asking you to let your child anthropomorphize the devices in your home (in fact, I discourage this). What I am suggesting is that you reward, or reinforce, respect and kindness and discourage rudeness. Even if this does not seem to be an issue yet, it has the potential to become one. You may not realize that the way your child treats artificial intelligence is the way they treat other kids on the playground when you are not around.
Final Words and Suggestions
As technology continues to advance and integrate more deeply into our homes, it’s essential that we don’t ignore the subtle yet powerful ways it can shape young minds. While adults are generally capable of distinguishing between artificial intelligence and real human interaction, children, whose brains are still developing, may not have the same clarity. When children grow up commanding AI assistants, especially ones with default female voices, with abrupt or disrespectful language, they may internalize these behaviors as socially acceptable, potentially reinforcing outdated gender norms and disrespectful communication styles. This is not a guaranteed outcome, but the risk is too significant to overlook.
Thankfully, we are not powerless! Research, such as that by Kurian (2024), highlights the importance of designing child-centered, empathetic AI systems that avoid promoting harm. Still, responsibility also lies with parents, caregivers, and educators. Simple actions like encouraging polite language with AI, discussing the difference between machines and people, and modeling respectful behavior can go a long way. By being proactive now, we help raise a generation that is not only tech-savvy but also emotionally intelligent and respectful, both to machines and, more importantly, to the humans around them. The future of AI is in our homes, so be sure your children engage with it responsibly.
References
Kurian, N. (2024). ‘No, Alexa, no!’: Designing child-safe AI and protecting children from the risks of the ‘empathy gap’ in large language models. Learning, Media and Technology, 1–14.