
Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI

An anonymous reader quotes a report from Gizmodo: OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year.

But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.
Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.

  • by SlashbotAgent ( 6477336 ) on Thursday April 02, 2026 @07:16AM (#66073674)

    "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.

    Could be. Hmmm. Could be.

  • human vs slop (Score:5, Interesting)

    by ZiggyZiggyZig ( 5490070 ) on Thursday April 02, 2026 @07:22AM (#66073676)

I've read several times, here and elsewhere online, that the global push for ID verification online is meant to allow AI firms to differentiate between slop and content made by real humans. But I don't understand: what will prevent ID-authenticated humans from posting slop, only now under their own names? To me this argument is kind of bogus, although I understand why there is a global push for the end of online anonymity -- more likely because of the new rise of fascism and the need to control the masses...

    • Re:human vs slop (Score:4, Interesting)

      by CEC-P ( 10248912 ) on Thursday April 02, 2026 @09:01AM (#66073782)
The threat of AI shutting down every account you own and not letting you shop anywhere or search for anything prevents you from doing anything it doesn't want you to.
    • Re:human vs slop (Score:5, Insightful)

      by alexgieg ( 948359 ) <alexgieg@gmail.com> on Thursday April 02, 2026 @09:03AM (#66073784)

      The main pusher has been Meta. They want age verification everywhere because it (mostly) allows distinguishing real humans from bots, including AI bots. From what I read, no idea whether this is accurate or not, they want that because of ads. Bots don't generally buy products, so showing them ads reduces click-through metrics, thus ad revenue.

      AI companies I don't know. For Altman, World might be a driving factor, but I imagine a more important factor is regulatory capture. The more roadblocks to competition billion- and trillion-dollar incumbent companies manage to add to their markets, the less competition from new entrants unable to afford compliance.

How does age verification for bots work? I think bots should be walled out because they are too young and don't have the maturity to understand what they're seeing. Especially OpenAI bots.

The main ways age verification is being done are by following instructions while a video of one's face is recorded, by submitting a photo of a legally valid state or national ID the system knows how to process, or by submitting valid credit card data. A bot can do any of those, and it's relatively cheap for one-off cases, but it gets very expensive for any kind of mass use:

          * Listening to instructions and generating a real-time video of an adult face that follows them requires a lot of processing power.
          * An ID can

    • by SumDog ( 466607 )
Facebook is tightly tied to the government: it launched the day after DARPA shut down LifeLog and was originally funded by Peter Thiel. It's always been intended as a global surveillance system. OpenAI also has ties to the US government and many of the same Peter Thiel-backed entities.

      OpenAI benefits from a global control grid. You know what China has with surveillance and companies providing individual person credit scores? The US and EU governments want that, but automated and on steroids. These people ar
    • by allo ( 1728082 )

      That's just a conspiracy theory. The truth is much simpler: You can make money using age verification.

And AI companies can even promise things like "We preserve your privacy, because our AI matches behavior patterns -- no need to upload your ID." And behind the scenes they sell your behavior data to identify you on other websites ...

  • by rossdee ( 243626 ) on Thursday April 02, 2026 @07:24AM (#66073678)

Yes, an AI shouldn't be allowed on the Internet until it's 18 years old.

  • Two changes (Score:5, Funny)

    by Hentes ( 2461350 ) on Thursday April 02, 2026 @07:26AM (#66073680)

I could accept Worldcoin-based authentication with two minor changes: instead of an iris scan, it should use the more modern, Ig Nobel-winning rectal print technology [slashdot.org], and instead of a creepy orb, Sam Altman would have to personally sketch the prints with a broken pencil.

  • His grimy little hands are everywhere.

    • I've been told he wants to turn me gay and Jewish, but that scares me a lot less than building a panopticon and turning the eyepiece over to Trumpistas.

    • by alexgieg ( 948359 ) <alexgieg@gmail.com> on Thursday April 02, 2026 @09:07AM (#66073790)

      Soros hasn't been the Soros of tech, or anything, for a long time. He's one billionaire doing advocacy and lobbying for liberal causes, while all the others, individually and put together, are nowadays doing advocacy and lobbying for conservative causes. If anything, he's currently the lone underdog fighting an uphill battle against impossible odds.

  • Liability (Score:5, Interesting)

    by Dan East ( 318230 ) on Thursday April 02, 2026 @07:42AM (#66073688) Journal

    It absolves them of liability. If there is a law they have to validate age (even if it is ineffective and easily worked around by minors), and they are doing whatever silly thing they need to do to be compliant, then they have shielded themselves from liability.

    By being involved in the process they can steer things to something easy and affordable to implement on their end. Make it work the way they want to (scan an ID, have AI look at their face, DNA test, measure their height - whatever method they're specifically wanting to do is why they are funding this and pushing for it).

    • Re:Liability (Score:5, Interesting)

      by DarkOx ( 621550 ) on Thursday April 02, 2026 @08:09AM (#66073706) Journal

All of that is true, but I think it is far more about barriers to entry. For all the talk about the need for these massive datacenters, a lot of, maybe most of, the use cases for the frontier models that are actually worth $$, like code assistants, are rapidly falling into the range where what OpenAI is selling just isn't needed. Qwen is not as good as GPT, but it is close; a Mac Studio maybe can't pump out tokens quite as fast as an API hosted on OpenAI's infrastructure, but it is knocking on the door (for one-human-consumer applications).

Is there going to be a market for hosted models? Of course; not many are going to want to on-prem the LLMs running the chatbots on their websites. But a lot of companies will want to on-prem their RAG tools and anything handling data they care about protecting.

At one point Microsoft people were saying workstations were over, that developers, engineers (not in the software sense), and architects (not in the software sense) were going to use Azure-hosted VDIs... Yeah, have not seen that. Yes, I know it's possible, and someone here will tell us how wonderful their thin-client virtual desktop experience is, but the lion's share of these professionals that I encounter, anyway, are still buying workstations (or near-workstation pro-line Macs). Point is, people are going to want to run their GenAI workloads locally, and they very nearly can. The free and "open" models combined with affordable, performant hardware are going to eat OpenAI's lunch in a huge slice of the market.

Unless -- they could somehow make it impossible to distribute and bundle these things for compliance reasons... Then they'd have a nice little moat that would be difficult to cross.

    • Re:Liability (Score:5, Interesting)

      by alexgieg ( 948359 ) <alexgieg@gmail.com> on Thursday April 02, 2026 @09:15AM (#66073804)

      even if it is ineffective and easily worked around by minors

Australia is at the forefront of not allowing that to work for long. Their age-verification enforcement agency is actively monitoring every single trick kids use to bypass verification and updating its compliance rules to force companies to block those loopholes one by one.

For example, they've recently started threatening fines against websites that allow users to raise their stated age above the threshold after previously reporting they were younger, that allow a user to keep submitting photos over and over until one is accepted as above the threshold, and that accept known videogame characters as photos of real people.

The game of cat and mouse will continue, and there will always be techniques that work, but they will become harder and harder, as well as more and more hidden, since revealing them in public where the authorities can also learn of them will trigger their banning. At some point it'll become so hard to bypass for anyone but the most dedicated teens that the enforcers expect most will simply give up such attempts and accept living under the imposed restrictions. Some will bypass them regardless, but as long as the percentage is tiny, the law will be considered a success from the enforcers' perspective.

      • I am curious how they identify kids that are using VPNs to non-hostile countries to then do what they want on the Internet. Are they blocking all VPNs? Are they decrypting all traffic leaving Australia? I can't imagine that the law is anything other than the most minor annoyance to minors.

        Or are you folks keeping your kids uneducated and ignorant over there like they are in the USA?

VPN usage can be detected via deep packet inspection, as China shows. In China, the government is aware of all VPN usage and lets it slide, or blocks it, as it sees fit. In Xinjiang they even went after VPN users to demand a look into their mobile devices to check whether they had forbidden content there, not out of need but as an intimidation tactic, an explicit "we know who you are, and where to find you" warning to all inhabitants so they wouldn't feel empowered by the mere fact the government is allowing

    • It absolves them of liability. If there is a law they have to validate age (even if it is ineffective and easily worked around by minors), and they are doing whatever silly thing they need to do to be compliant, then they have shielded themselves from liability.

      By being involved in the process they can steer things to something easy and affordable to implement on their end. Make it work the way they want to (scan an ID, have AI look at their face, DNA test, measure their height - whatever method they're specifically wanting to do is why they are funding this and pushing for it).

I'm not a lawyer, but that's not how compliance works, at all. Complying with regulatory requirements only protects you from the government, and narrowly. It should in no way absolve you of anything liability-related. Doing the minimum required age verification and then knowing you have minors where you shouldn't should leave you at risk the same way as fully complying with anti-money-laundering rules while knowing you work with criminals and likely criminally obtained money, or complying with auto safety reg

Trying to hide their involvement while pumping $billions into lobbying. No surprise that OpenAI is doing much the same. Bet: so are Google, Microsoft, Apple, and the other tech giants -- they just haven't been caught yet.

    The question is why? Why do the tech giants want to force ID checks in order to use basic service, or even to log into your own computer?

    • Re:Like Meta (Score:4, Interesting)

      by leonbev ( 111395 ) on Thursday April 02, 2026 @10:54AM (#66073922) Journal

      The "why" is pretty easy to understand:

      1) It makes them look like responsible citizens to government officials, who will now be more willing to turn a blind eye to their privacy raping default "privacy" settings. Who knows, it might even help with the permitting process to plop a new data center somewhere.

2) It adds a barrier to entry for startups and open source projects that can't afford an army of lawyers to ensure they're meeting the specific age regulations for every US state and country.

3) It allows OpenAI/Meta to stop wasting time and effort trying to upsell users who don't have a credit card or bank account.

    • by DarkOx ( 621550 )

Speaking as someone who does think we need stronger age and locality verification on the internet, I too find the whole thing unseemly.

There are plenty of good reasons to want to know if someone is over the age of majority, whatever that is defined to be wherever they are, and what laws the other party to your interaction may or may not be subject to in terms of jurisdiction.

      I also believe this is achievable while preserving some degree of privacy/anonymity. States could as part of issuing IDs for example prov

  • This is about forcing their competitors (google, tiktok, meta, twitter, etc) to invest in validation of users identity before allowing public posting on those services.

Meanwhile, OpenAI has no public posting capability and is not required to authenticate users' ages. It is also mostly pay-for-play, and one of the top user groups of AI is kids under 18.

So, either Google/Meta socials validate users -- which won't happen -- and users gravitate to AI for 'time spent'. It's a win-win for AI services.

  • by rsilvergun ( 571051 ) on Thursday April 02, 2026 @08:49AM (#66073748)
    So this was always about AI slop. The problem these companies are facing is that AI slop is infesting the internet. It's starting to infect their data sets. It's becoming difficult to tell programmatically who's a real person and who is a slop bot.

    This is an existential threat for both the AI companies who need real humans to train from and the social media companies who need clean data sets to sell to law enforcement and advertisers and corporations and governments.

If that data isn't clean, none of these people have a product, because you're the product, and if you're mixed in 80/20 with highly sophisticated bots, that data is going to become real worthless real fast.

So this not only improves their ability to track you, but it lets them know you're a real person whose data can go into the set.
    • I heard, on a regular-people radio show, some commentary on the Sora thing. (They first had to explain what LLMs were, to give you an idea of the audience.) Even a lot of regular people now are getting tired of the slop. It's just not interesting to look at day-in and day-out.

      On the flipside, I do know 1 person who seems to gravitate heavily toward the slop and repost it on a daily basis. He seems genuinely drawn to it. He has a learning disability.

      In many ways, these times are really separating the wheat f

      • That's interesting - I also know someone that gravitates towards and shares AI slop daily, that has mental disabilities. Worst part is, lately if we tell him it's AI (because he'll share stuff like "10 unexplained videos" and directly ask what we think it is, or ask why something happened in a video, or how something was done) he gets defensive, saying things like "not everything is AI!" He can't understand that because he's engaging with AI slop, the algorithm shows him more AI slop, so his feed is all AI
  • Big players *love* regulation, as long as it's red tape but doesn't actually interfere with the business. It's a fixed cost, which they can spread out over their large operations while it strangles the smaller competition that might be a problem.

  • Maybe they're using this to go after local models?
  • For a fee, of course. The whole AI thing is heading the way of tulips. And they need a revenue source* to backfill that hole before anyone notices.

    *AI was best positioned as a tool for developers and a back end for smarter search. A small market in the final analysis. But if we can charge every user a little bit each month to get onto the MSN network (nee Internet), we can still do OK.

Honestly, Sam Altman and company do not deserve your trust at this point. Bogus company valuations built on intellectual property and copyright theft should have been your first clue. Same goes for most of the other Magnificent Seven in general. It's funny that Anthropic is issuing Claude takedowns for a product that was trained on the rest of the internet predominantly without permission. "Theft and infringement for me, but not for thee."

  • If you can't supervise your children, then turn them over to state care.

    Your kid is your responsibility.
