
California’s bid to curb the influence of social media on children cleared a major legal hurdle this week when a federal appeals court largely upheld the state’s “Protecting Our Kids from Social Media Addiction Act.”
The ruling marks one of the most consequential legal developments yet in the battle between Silicon Valley and state governments over algorithmic content delivery. (Source: Reuters)
The law, signed in 2024, requires platforms to restrict algorithmically generated “addictive feeds” for minors unless parents explicitly consent.
In practice, that means companies such as TikTok, Instagram, and YouTube must disable algorithmic recommendations by default for users under 18, serving only chronological feeds unless a parent opts in.
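To make the default-off mechanic concrete: nothing below comes from the statute or any platform's actual systems; it is a minimal, purely illustrative sketch in which the names (User, isMinor, selectFeedMode, parentalOptIn) are hypothetical inventions.

```typescript
// Hypothetical sketch of SB 976-style feed gating. All names and
// structures here are illustrative assumptions, not any platform's
// real API or the law's actual compliance mechanism.

type FeedMode = "algorithmic" | "chronological";

interface User {
  id: string;
  birthDate: Date;
  parentalOptIn: boolean; // verified parental consent on file (assumed field)
}

// A user is a minor if they are under 18 at the time of the request.
function isMinor(user: User, now: Date = new Date()): boolean {
  const eighteenthBirthday = new Date(user.birthDate);
  eighteenthBirthday.setFullYear(eighteenthBirthday.getFullYear() + 18);
  return now.getTime() < eighteenthBirthday.getTime();
}

// Default rule as the article describes it: minors receive a
// chronological feed unless a parent has explicitly opted in to
// algorithmic recommendations.
function selectFeedMode(user: User): FeedMode {
  if (isMinor(user) && !user.parentalOptIn) {
    return "chronological";
  }
  return "algorithmic";
}
```

The design point the law turns on is visible in the sketch: the restrictive mode is the default state, and the algorithmic mode requires an affirmative, recorded parental opt-in rather than the reverse.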
At the bill’s signing, Governor Gavin Newsom defended the measure as a way to protect families, warning that “Every parent knows the harm social media addiction can inflict on their children – isolation from human contact, stress and anxiety, and endless hours wasted late into the night.” (Source: Governor of California)
The tech trade group NetChoice, which represents giants including Meta and Google, challenged the law on First Amendment grounds.
The group argued that California was effectively dictating how platforms curate and present speech, striking at the heart of editorial discretion protected by the Constitution.
The appellate court rejected most of NetChoice’s claims, siding with California’s argument that protecting children from manipulative design features constitutes a compelling state interest.
Attorney General Rob Bonta praised the outcome, saying the decision showed that “companies have blatantly shown us that they are willing to use addictive design features, including algorithmic feeds and notifications at all hours of the day and night, to target children and teens, solely to increase their profits.” (Source: California Office of the Attorney General)
Still, the judges did pare back the law slightly, invalidating the requirement that platforms hide “likes” and comments by default.
The panel concluded that this measure was not the “least restrictive means” of advancing the state’s interest, suggesting California went too far in dictating interface design.
At its core, the case highlights a fundamental conflict in the digital age: protecting children’s welfare and reinforcing parental authority versus safeguarding free speech and platform autonomy.
California presented the law as a tool to restore parental control over minors’ online experiences, while NetChoice argued it intruded unconstitutionally on editorial discretion.
Bonta emphasized that the ruling affirms the state’s resolve, noting that “through the passage of SB 976, California’s elected representatives sent a strong message: It’s time to put families in control.” He underscored his office’s commitment to enforcing and defending the statute going forward.
The decision establishes that algorithmic design may be subject to regulation, setting a potential model for other states.
Social media companies will now face the operational challenge of implementing technical modifications and parental consent systems within an already complex global regulatory landscape.
Supporters view the ruling as a significant step toward holding platforms accountable for the psychological impact of their design practices. Critics caution, however, that divergent state laws could create a fragmented regulatory environment, one likely to draw eventual review by the U.S. Supreme Court.
The ruling underscores the gap left by Congress’s failure to enact comprehensive child online safety legislation. In that vacuum, states such as California, Utah, and Arkansas are advancing their own measures, creating a patchwork of differing state rules.
As Governor Newsom noted when signing SB 976, the law targets features that “feed destructive habits.” Whether a national framework emerges or the Supreme Court ultimately defines the constitutional limits of state authority remains unresolved.


