A view of two features of Instagram’s Teen Accounts, including the ability to set daily use limits and an option letting parents see whom their teens are messaging.
Provided by Meta
Instagram on Tuesday unveiled a round of changes that will make the accounts of millions of teenagers private, enhance parental supervision and set messaging restrictions as the default in an effort to shield kids from harm.
Meta said users under 16 will now need a parent’s approval to change the restricted settings, dubbed “Teen Accounts,” which filter out offensive words and limit who can contact them.
“It’s addressing the same three concerns we’re hearing from parents around unwanted contact, inappropriate contact and time spent,” said Naomi Gleit, Meta’s head of product, in an interview with NPR.
With all teen accounts switched to private, teens can only be messaged or tagged by people they follow. Content from accounts they don’t follow will be set to the most restrictive setting, and the app will send periodic screen time reminders under a revamped “take a break” feature.
Instagram, which is used by more than 2 billion people globally, has been under intensifying scrutiny over its failure to adequately address a broad range of harms, including the app’s role in fueling the youth mental health crisis and the promotion of child sexualization.
States have sued Meta over Instagram’s “dopamine manipulating” features that authorities say have led to an entire generation becoming hooked on the app.
In January, Meta chief executive Mark Zuckerberg stood up during a Congressional hearing and apologized to parents of kids who died of causes related to social media, like those who died by suicide following online harassment, a dramatic moment that underscored the escalating pressure the CEO has faced over child safety concerns.
The new features announced on Tuesday follow other child safety measures Meta has recently released, including in January, when the company said content involving self-harm, eating disorders and nudity would be blocked for teen users.
Instagram changes arrive as federal bill stalls
Meta’s push comes as Congress dithers on passing the Kids Online Safety Act, or KOSA, a bill that would require social media companies to do more to prevent bullying, sexual exploitation and the spread of harmful content about eating disorders and substance abuse.
The measure, championed by child safety advocates, passed the Senate but hit a snag in the House over concerns that the regulation would infringe on the free speech of young people.
If it passes, KOSA would be the first new Congressional legislation to protect kids online since the 1990s. Meta has opposed parts of the bill.
Jason Kelley with the Electronic Frontier Foundation said the new Instagram policies seem intended to head off the introduction of additional regulations at a time when bipartisan support has coalesced around holding Big Tech to account.
“This change is saying, ‘We’re already doing a lot of the things KOSA would require,’” Kelley said. “A lot of the time, a company like Meta does the legislation’s requirements on their own, so they wouldn’t be required to by law.”
New systems to detect teens who lie about age
Meta requires users to be at least 13 years old to create an account. Social media researchers, however, have long noted that young people can lie about their age to get on the platform and may have multiple fake accounts, known as “finstas,” to avoid detection by their parents.
Officials at Meta say they have built new artificial intelligence systems to detect teens who lie about their age.
This is in addition to working with British company Yoti, which analyzes someone’s face from their photos and estimates an age. Meta has partnered with the company since 2022.
Since then, Meta has required teens to prove their age by submitting a video selfie or a form of identification. Now, Meta says, if a young person tries to set up a new account with an adult birthdate, the company will place them in the protected teen settings.
In January, the Washington Post reported that Meta’s own internal research found that few parents used parental controls, with under 10 percent of teens on Instagram using the feature.
Child safety advocates have long criticized parental controls, which other companies like TikTok, Snapchat and Google also make available, because they put the onus on parents, rather than the companies, to assume responsibility for safety on the platform.
While parental supervision on Instagram still requires both a teen and parent to opt in, the new policies add a feature that allows parents to see who their teens have been recently messaging (though not the content of the messages) and what subjects they are exploring on the app.
Balancing parental supervision with teen free expression
Meta is hoping to avoid one worrisome situation: Someone who is not a parent finding a way to oversee a teen’s account.
“If we determine a parent or guardian is not eligible, they are blocked from the supervision experience,” Meta wrote in a white paper about Tuesday’s new child safety measures.
But misuse is still possible among rightful parents, said Kelley with the Electronic Frontier Foundation. He said if parents are abusive or try to prevent their kids from searching for information about their political beliefs, religion or sexual identity, having more options to snoop could cause trouble.
“I think that definitely could lead to a lot of problems, especially for young people in abusive households who may require them to have these parentally supervised accounts, and young people who are exploring their identities,” Kelley said. “In already-problematic situations, it could raise the risk for young people.”
Meta points out that parents will be limited to viewing about three dozen topics that their teens are interested in, including things like outdoor activities, animals and music. Meta says the topic-viewing is less about parents surveilling kids and more about learning about a child’s curiosities.
Still, some of the new Instagram features for teens are aimed at filtering sensitive content out of the app’s Explore page and Reels, the app’s short-form video service.
Teens have long mastered ways of avoiding detection by algorithms. Kelley points out that many use what’s known as “algospeak,” or ways of evading automated take-down systems, like writing “unalive” to refer to a death, or “corn” as a way of discussing pornography.
“Kids are savvy, and algospeak will continue to evolve,” Kelley said. “It will continue to be an endless cat-and-mouse game.”