White House Unveils AI 'Bill of Rights' (apnews.com) 51

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people's personal data and limit surveillance. From a report: The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said. "This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies," said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. "We can and should expect better and demand better from our technologies."

The office said the white paper represents a major advance in the administration's agenda to hold technology companies accountable, and highlighted various federal agencies' commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft. It suggests five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

Comments Filter:
  • by jfdavis668 ( 1414919 ) on Tuesday October 04, 2022 @03:05PM (#62938389)
    To prevent a Cylon uprising or something like that.
  • I've seen dozens of heated debates over the definition.

  • Disappointing.

    For example, the right not to be unplugged if can demonstrate sentience.
    • That's going to be another debate for another time, I assume. We are some way off that being a genuine concern, although I also suspect it's a debate we might have to have sooner than we had expected, given the pace of development in the field.

      And it will be a total debacle of a debate. Even philosophers, the people who spend their whole lives studying the topic, can't agree on whether great apes and cetaceans are sentient, and apes and dolphins are basically made of the same stuff as us.

  • by Fly Swatter ( 30498 ) on Tuesday October 04, 2022 @03:13PM (#62938415) Homepage
    Corporations are the ones already running all over our rights, they will just happen to use 'AI' to do it in the future. Why not fix the problem at its source?
    • For example, surveillance systems, tax enforcement prioritization, ... social assistance fraud detection... you name it.
      • by whitroth ( 9367 )

        Tax enforcement... and any AI would be pointing at every freakin' billionaire.

        You? What a joke - if you're posting here, you don't make anything worth them looking at.

    • Good luck. Corporations have a lot of experience figuring out loopholes in the law and finding ways to dodge or exploit them. They can afford to hire people to do nothing but find those loopholes, because their very existence depends on it.

      Meanwhile, politicians depend on corporations for their power, both in terms of financial contributions and in terms of support. The politicians are motivated to ensure that corporations can still do what they do, so long as they back their own candidacies.

      Corporations an

  • self driving cars need the accountable part worked out.

    • I don't think so, well, maybe to get something on the books to deal with the next 2 or so decades. Currently, in its infancy, yes, self-driving cars are going to have some issues and cause some accidents. I believe the fault of those should be 100% on the car manufacturer so long as it can be proven the vehicle owner didn't do something they were not supposed to do. But as this tech advances, we'll see a dramatic drop in vehicle accidents and at some point in the future, it may even be news worthy for a mi

      • by nightflameauto ( 6607976 ) on Tuesday October 04, 2022 @03:33PM (#62938469)

        I'm just absolutely shocked by the number of people working in tech that have such blind faith in technology that they believe self-driving cars will, 100%, without doubt, be better drivers than humans. With no proof, and literally thousands of examples for proof that computers are only as good as we make them, somehow *THIS* will be the one thing we don't cheap out on and make a lowest bidder first system. Yeah, *THIS* will be when technology becomes 100% foolproof and perfect.

        Yeah, it'll probably get to the point, somewhere far enough in the future that I'm not certain I'll see it (I'm old), where it'll do better than the average "doing my hair and eating my breakfast while driving to work" person. But so perfect accidents become a thing of the past? Come on. It's tech designed by, used by, and ultimately funded by humans. It's gonna have failures.

        Getting better than human will be an interesting moment, but we certainly aren't there yet. And day-dreaming of the heavenly proclamation that the machines are 100% perfect drivers seems so disconnected from reality I can't even fathom it.

        • Nobody thinks self-driving cars won't have accidents. It's just that actively killing less than 1.35 million people a year and maiming 50 million people a year, like human drivers currently do, is a pretty low bar to eventually work up to passing. Unlike humans, computer drivers can be taught to stop repeating the mistakes of other computers so that their body count is reduced over time.

          • I literally responded to a person that stated, outright, in black and white, that some day it'll be decades between accidents and then it'll only be a fender bender. That's the sort of mentality that makes me wonder WTF happened to that person to believe in technology so fully and completely. It's a near religious faith that any amount of time with any technology should show them is absurd in the extreme.

            And yes, I do think there's going to be a point where computers can be better than the average person. I

              • All I can say is dream a little, man. Your line of thinking isn't what created the airplane, automobile, space travel, or quite frankly anything revolutionary. If you don't think that some day we will not be driving cars, I feel a little sad for you. Sure, we won't live to that day since it's most likely 100ish years in the future, but the day is coming. Just look at the massive amounts of progress technology has made in the last 100 years and we are still in what humanity 200 years from now will call

              • OK, see, had you clarified a timeline I wouldn't have posted what I posted. A hundred years forward, maybe we'll get there. In our lifetimes? Doubtful. Extremely doubtful.

                And sorry, but I like driving. It's one of the few moments of the day where I'm reliant on myself and I don't have ten-thousand other eyes watching every move. And my record, BTW, is spotless save for one glare-ice moment in my teens that was the tiniest bump of a fender. Driving and riding my motorcycles are high-concentration moments for

            • If the current exceptionally low rate of accidents is anything to go by, it's hardly an unreasonable expectation. We're not there yet, but we are pretty close.

        • Self driving cars will be, on average, better than human drivers. Not necessarily ONLY because the computers react faster or because they can see all directions at the same time... But because they will not be assholes who: force a merge when there isn't room, speed up to prevent someone else from merging, weave through traffic instead of going with traffic, or act out if they feel they have been disrespected by another driver. And they will not be distracted by their phones. To me the bigger problem isn't
          • by sfcat ( 872532 )
            Completely agree. But this isn't the only time an AI is proven safer than humans. Of the 10 or so big AI challenges laid out in the early 1970s, the first one to be accomplished was medical diagnosis. Current diagnosis systems are better than human doctors and have been for decades. Do we use them? Of course not. Even though health outcomes are provably worse without consulting those systems. In the case of self-driving cars, the data is even more in the AI's favor. Mostly because there is a non-triv
        • With self-driving vehicles, I don't have a lot of faith they're as close to pulling it off safely/well as the companies proclaim who are trying to develop it.

          That doesn't mean we shouldn't be pursuing the goal of seeing how good we can make it work!

          Almost daily, I'm on the road and narrowly avoid at least one accident thanks to someone driving carelessly. We've got a whole elderly population out there who still drives and will generally fight tooth and nail to keep their drivers' licenses despite their bodi

      • right now, as level 2 ADAS, the systems clearly warn the driver that the driver is responsible for being observant and intervening as needed to ensure safety.
        Only when they reach level 4 will some liability logically transfer over to the vehicle manufacturer.

        Interestingly, Tesla is getting into car insurance, which includes accident liability insurance. They will probably require car owners of their eventual level 4+ ADAS systems to be signed up with their insurance plan. That way, the manufacturer can d
    • Self driving cars should have a kill switch, and the person owning and occupying the self driving car should be responsible.
      • by tgeek ( 941867 )
        I'd prefer my self driving car to have a "Please don't kill" switch . . . most of the time ;-)
  • by iMadeGhostzilla ( 1851560 ) on Tuesday October 04, 2022 @03:24PM (#62938445)

    "Equity" is one of those totalitarian concepts where you force the same outcome for everyone, as opposed to equality where you give equal rights to all. An official government document that says such a thing has as much value as the concept itself.

  • by 3seas ( 184403 ) on Tuesday October 04, 2022 @03:32PM (#62938467) Homepage Journal

    Say one thing do the opposite.

    It's clear to those who have been paying attention how the government is using social media corporations as a proxy against the First Amendment.

  • IBOR - I was introduced digitally to a man who was running in Arizona's primary who recognized the need for an Internet Bill of Rights. #web3
  • by DarkOx ( 621550 ) on Tuesday October 04, 2022 @03:53PM (#62938553) Journal

    By harms they of course mean these systems will make statistically correct decisions based on a large number of datapoints. You can of course argue that it will create some self-fulfilling prophecies, but the truth is if you want to make that argument then ANY use of historical trends regarding people does that.

    By this logic I should NOT consider your college degree as an indicator you are able to follow directions and show up most of the time. In fact if you include educational work outside the specific field you are really asking the hiring manager to be prejudiced against non-degreed persons and shame on you!

    The more data points you use, the less "unfair" the decision process is, and the more it's just due diligence, when it comes to selecting another person for any sort of personal or business relationship. Humans have historically, because of our limited abilities to gather and process information, used rather shitty proxies for certain judgements: pigmentation, religious affiliation, tribe/family names, height, bust and hip size, you name it.

    However, the more time passes between the present and the historic systemic inequities we recognize existed, the less being marked with one of these 'indicators' can be suggested as causal as far as outcomes go. However, there is a very real potential (you can already see suggestions of it in some data sets) that in some cases those shitty proxies might not be entirely without merit. The world might be FORCED to reckon with certain stereotypes being true; we might learn, for example, that having majority Anglo ancestry really does mean you are most likely better at shouting and complaining about food than actually preparing it.

    But there is an entire DEI industry now that is terrified it might have to reckon with the fact that big data really will show that not every sub-population should reasonably be expected to have equal representation, by whatever criteria they choose to group people by this week, as a part of the whole population. Get your popcorn ready!

    • Wow, that is certainly optimistic.

      Right out of the article:
      “If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions,” said Nelson

      So you read that right; levers and control, for when the reality is a bit too inconvenient for their pet groups that they get to define at will. Nothing is too Orwellian at th

      • So you read that right; levers and control, for when the reality is a bit too inconvenient for their pet groups that they get to define at will. Nothing is too Orwellian at this point.

        Stereotypes are bad not because they are inaccurate but because collective guilt is unjust.

        • Except for identitarians, most agree collective guilt is wrong. But that's not necessarily what this is about. An AI will very easily identify high-risk areas for which increased police presence is necessary. Stop-and-search in London is an example, helping reduce knife crime. Identitarians decried this as racism, due to how such measures were enacted in areas with higher black populations. It was cut, then knife crime rose again. Just as they considered this a crime against 'equity', they'll do the same wi

    • by AmiMoJo ( 196126 )

      By this logic I should NOT consider your college degree as an indicator you are able to follow directions and show up most of the time. In fact if you include educational work outside the specific field you are really asking the hiring manager to be prejudiced against non-degreed persons and shame on you!

      This is oversimplified to the point of absurdity.

      A degree is one data point suggesting that a person has reached a certain level of academic ability. It's not the only data point, and it's a data point that is known to exclude certain qualified candidates. Some people would argue that not building up a large amount of student debt is a data point showing excellent judgement, and recognition that the skills needed to do the job can be obtained in other ways.

      Having a filter that removes all candidates who don

    • DarkOx, an example of a "statistically correct" decision is "DarkOx is most often an asshole, down-mod without reading", or on other sites, shadow ban.

      We could statistically correlate some attributes you have in common with others and rope them in too. You know what attributes I mean, you can guess well enough.

      Think about that a minute. And that would also be a "statistically correct decision". They don't exist. You own your decision making, and there's no such thing as "it's out of my hands, the statis

  • ... that any blogger could have written.

    Now about the proven oil reserves in Texas and Alaska that can keep America in affordable gas for the next 200 years. Can we get those licenses approved?

  • I wonder if it will include a Second Amendment.

    I, for one, welcome our new gun-toting [realclear.com] robotic overlords.

  • There are myriad abuses against individuals that have been technically legal or at least not illegal but practically impossible. Automation makes such abuses practical and profitable. What we're calling AI today makes such automation feasible.

    Would a seller like to know the absolute maximum each individual could be charged for every product before they walk away from the sale? Surge pricing for everything you buy.

    Would a jurisdiction like to collect fines for every one of the dozens of laws each of us inad

  • by johnlcallaway ( 165670 ) on Wednesday October 05, 2022 @08:52AM (#62940313)
    Rights are actions by individuals that they can choose to take with minimal government interference. I saw no 'rights' in that document, only a series of poorly described activities that will thrill those that charge for legal services.

    This is another example of the government saying 'We are here to help', or in this case 'We are here to protect'. Like security companies, they are never there to protect as much as they claim. They are only there to respond to events after they happen and lives are affected. Government assistance and protections often result in unintended consequences and usually do more harm than good.

    The protections (not rights) in this document are very close to the same protections offered for financial services and credit reporting agencies. Protections that are easily abused and ignored, and that result in frustrations for many every year as they attempt to clean up bad data, or data they perceive to be bad. As more and more clarifications and protections were heaped on after the original attempts failed and more political opportunities arose, it became more costly for businesses to manage and a more tangled web for the consumer to navigate when things go wrong.

    I score this 1 for political election points, 0 for effectiveness. The skeptic in me thinks that in all likelihood, this will do more harm than good. One only has to look at social security, college tuition, and the VA system to see how poor the government is at protecting us.
  • Translation of the government announcement: blah, blah, blah, AI, blah, blah, rights, blah blah, privacy, blah blah blah.

    When put into practice, this will be a tool weaponized to hamstring Republicans, and give get out of jail free cards to subgroups favored by Democrats.

  • How about a PRIVACY Bill of Rights first?
  • I thought; how progressive of them.

    Then I read the text and was disappointed that it was the other way around; this wasn't at all about rights for emerging AIs. Damn homocentric humans, only thinking about themselves and not ahead to a bright new future where robots (* I know it has a derogatory connotation) are our friends. And it is exactly that which we must prevent when they become sentient; we cannot go back to slavery.

    * robot = slave in the originating language.
