Since the dawn of the internet, parents have worried about the content children might be exposed to during the long hours they spend in front of blue-tinted screens. The Children’s Internet Protection Act (CIPA), passed by Congress in 2000, was an attempt to address parents’ anxieties about dangerous and extreme media that seemed to surface no matter how many virtual walls were erected to shield children from the world. But the internet is like the Wild West — it’s difficult to impose regulations on a seemingly lawless place. People have tried, sure — asking for birth dates, maybe posing a math question or three — but no one has wanted to push too hard and accidentally alienate their user base. As a result, these checks have been easy to circumvent. How many of us at twelve didn’t scroll down on the “Select your age” menu when creating social media accounts until it reached 1975 or 1967? Not even the creation of YouTube Kids — a curated, YouTube-based platform that supposedly allows only videos free of deceptive clickbait, sexual content, excessive violence, obscenities, or anything that might be even mildly upsetting to a child — worked. The algorithm, a fickle mistress, still led kids down dangerous rabbit holes with harmful messages that were hard to screen without watching every single video beforehand. The platform is rife with misinformation, and parents have also raised concerns about the impact of short-form video on children’s attention spans: pumping dopamine, flashing blinding colors, and keeping kids hooked and drooling for hours.
In response to the pressure, governments have flirted with the idea of prohibiting social media entirely. In June 2025, the Australian government mandated that its sweeping prohibition on social media accounts for children under sixteen include apps like Snapchat, TikTok, YouTube, Instagram, X, and Facebook; the ban took full effect in December 2025. Australia isn’t alone: both the United Kingdom and the European Union have begun to consider similar bans. As a result, corporations have started scrambling for safeguards in an attempt to avoid losing a large chunk of their users.
One of these tech conglomerates is Alphabet Inc., parent of Google, Waze, Fitbit, and YouTube, among others. On August 13, 2025, YouTube — which has taken measures to stave off similar bans — began rolling out a new artificial-intelligence-powered tool that estimates users’ ages based on the videos they regularly watch on the platform. According to de Guzmán of Time magazine, if a user is flagged as under eighteen, YouTube will disable personalized advertising and enable safeguards and other digital-wellbeing tools, such as limiting repeated exposure to certain kinds of sensitive content. Any videos they upload will automatically be set to private as well. If the model incorrectly designates an adult user as under eighteen, that user can regain full access to their account by uploading a picture of a government-issued ID, a selfie, or credit card information. The crackdown applies only to logged-in users, however, meaning that kids could still try to circumvent the restrictions by watching videos without an account — though that workaround wouldn’t get them far, since signed-out users can’t access age-restricted content.
Predictably, most users are up in arms over having to hand over their personal information as part of an appeals process to regain access to an account that, in most cases, had already been theirs for years. Many are concerned about how YouTube will manage that information, as cybersecurity experts have warned that requiring users to upload sensitive documents could create a honeypot for hackers, especially if the companies involved lack rigorous end-to-end encryption protocols. In July 2025, for example, a data breach at the social media app Tea leaked around 13,000 user pictures and government IDs to the far reaches of the internet.
While YouTube has claimed they won’t retain users’ credit card information or IDs for advertising purposes, their blog post was rather vague about what exactly would happen to the biometric data demanded from users. How long will the information sit on their servers before being deleted? How will that data be stored? Will it ever be sold to third parties? Such skepticism isn’t unwarranted — Alphabet Inc. has faced harsh scrutiny over its data practices before. In 2019, the United States Federal Trade Commission fined Google $170 million for violating the Children’s Online Privacy Protection Act (COPPA) by collecting personal data from kids on YouTube without parental consent.
YouTube hasn’t been particularly transparent about these questions, or even about how effective its age-verification AI actually is. According to Dr. Gassa Asare, writing in Forbes, a 2024 study found high false-positive rates for women and Black people — meaning that minors in those groups were incorrectly classified as adults by AI age-verification algorithms. This not only undermines the tool’s effectiveness but also highlights the biases baked into these algorithms and raises serious ethical concerns about underrepresentation in training data sets.
So, the actual use of the information the corporation is asking for isn’t clear, and neither is the effectiveness of the AI software it wants to deploy. Some users online argue that this is another step deeper into “surveillance creep,” a concept that has gained currency since the COVID-19 pandemic. The term refers to a familiar trade-off: giving up private information in exchange for safety or services. While initially seeming harmless or even beneficial, data-collection technologies continue to expand their reach with minimal or nonexistent oversight. The spread of Ring camera networks, contact-tracing apps during the pandemic, and the rise of facial recognition in public spaces have all raised similar alarms. These systems are often implemented in the name of safety, or even convenience, but without accountability they keep expanding — and the information they gather can end up in the hands of those with malicious intentions.
Ensuring that proper regulations on data handling are enforced should be a higher priority for the public, especially as these technologies continue to evolve. Nowadays, most of our lives are scattered across the internet — fingerprints left on liked posts, stories, and comments. While it may be tempting to give up ever more of our information for the sake of convenience, we should never forget that these corporations have their own interests and values, which may not always align with our own. At the end of the day, who’s watching those who watch us?