Instagram is starting to look more like television, a move that might make some parents happy but ultimately shows that tech companies are getting closer to complete victory in their campaign to capture as much attention as possible.
The company just announced a new default content setting for teen accounts that promises to show teen users only content that is “similar to what they would see in a PG-13 movie.” (There are also new settings that offer rough equivalents of PG and R-rated content for teens, though parents must approve the change.) On top of that, Instagram is exploring the idea of launching a TV app so you can watch Reels on the big screen in your living room.
These developments dovetail nicely with the argument Derek Thompson made a few days before Instagram’s announcement: “It’s all TV.” Citing a Federal Trade Commission filing, he notes that only 7 percent of the time users spend on Instagram involves consuming content from people they know. Meanwhile, podcasts are on Netflix and AI can create an infinite loop of crap to tap into your consciousness. “Digital media, powered by the serum of algorithmic transmissions, have become a super-television: more images, more videos, more isolation,” Thompson writes.
A brief history of how television rots our brains
Old television used to be extremely tame, thanks to a combination of technological limitations, federal regulations, and social norms. Channels were few because broadcast spectrum was scarce, and because spectrum was scarce, the federal government created an agency nearly a century ago to control the airwaves: the Federal Communications Commission.
In the early days of the medium, there was still much fear that television was ruining American minds, especially young ones. Broadcaster Edward R. Murrow condemned the rise of entertainment television as “the real opium of the people” in a 1957 interview with Time. A few years later, in 1961, Newton Minow gave his first speech as chairman of the FCC, describing television as a “vast wasteland…a procession of game shows, totally unbelievable family comedies, blood and thunder, mayhem, violence, sadism, murder, Western bad men, Western good men, private detectives, gangsters, more violence and cartoons.” This guy would have hated TikTok.
The bad things Minow pointed out were especially bad because kids could see them whenever they sat in front of a screen. Over time, the FCC came to control which types of content could be broadcast during certain hours. Obscene content was illegal on television outright, but beginning in 1978, some profane or indecent material was permitted between 10 p.m. and 6 a.m., when children were presumably asleep. (You can thank George Carlin for that.) That amounted to an early form of age verification, which, as Instagram’s announcement makes clear, remains a problem on the internet. It also seems unsolvable.
Still, protecting children seems to be the only bipartisan motivation for regulating today’s super television. Whether it’s social media’s controversial contribution to the youth mental health crisis or the “unacceptable risks” that AI chatbots pose to children and teenagers, politicians have plenty of reasons to impose new regulations on platforms that have become the 21st-century equivalent of broadcasters. Senators Richard Blumenthal and Marsha Blackburn, co-sponsors of the Kids Online Safety Act (KOSA), recently began campaigning to push the bill through the Senate (again) before the end of the year.
Meanwhile, things are changing rapidly. When new AI-powered feeds such as OpenAI’s Sora and Meta’s Vibes are taken into account, it becomes clear that digital media (or super television, if you prefer) has a vast wasteland problem of its own.
The mirage of an age-appropriate internet
Banning certain types of content is difficult when no single government agency polices the airwaves or, these days, the tubes that keep us online. So the preferred path to regulation appears to be creating three internets: one for children under 13, one for teenagers, and one for adults. A PG, a PG-13, and an R-rated internet, if you will.
Doing this successfully requires verifying ages, and the current state of age verification is a disaster. In the past three years, 25 states have passed laws requiring websites with adult content, specifically pornography, to verify users’ ages. That is the R-rated internet. Several of these states also require age verification for social media platforms, which, under the Children’s Online Privacy Protection Act (COPPA), already face limits on serving users under 13. That is the PG-13 internet. Presumably, PG versions of websites would add further protections, including the ability to disable addictive algorithms, as New York recently proposed.
By the way, online age verification is really difficult. For the most part, to confirm someone’s age, you need to confirm their identity. Free speech advocates warn that strict age requirements will prevent anonymous adults from accessing content protected by the First Amendment. Civil liberties groups say age verification presents a huge security risk, which seems like a reasonable concern after a recent attack on an age verification company exposed the data of 70,000 Discord users. High-tech age verification methods, such as using artificial intelligence to estimate a user’s age based on their activity or facial recognition to guess it based on their appearance, are still unproven. Above all, kids can figure out how to get around age verification systems, whether by lying about their birthday or using virtual private networks (VPNs).
If we look back at the golden age of television, when game shows and bad language were the big dangers, we can see how much the stakes have changed. Digital media runs on algorithms so sophisticated that not even the people who wrote the code fully understand how they work. Platforms like Instagram and TikTok are interactive and deliberately addictive. Use of these products has been linked to depression, anxiety, and self-harm.
If the three-internet strategy works, it would be an improvement for parents who want their children to have an age-appropriate online experience. There would likely even be positive side effects, such as better privacy protections, which are a hallmark of existing laws protecting children online. Heck, it might even be useful for those of us who simply want to avoid accidentally seeing a murder on our phones.
Creating feeds that are safer for kids, movie rating style or otherwise, is one step toward making feeds safer for everyone. Or, at least, it’s proof that Instagram and its competitors are capable of doing so.
A version of this story was also published in the User Friendly newsletter. Sign up here so you don’t miss the next one!