
US Department of Homeland Security is Now Studying How to Make Use of AI (cnbc.com)

America's Department of Homeland Security "will establish a new task force to examine how the government can use artificial intelligence technology to protect the country," reports CNBC.

The task force was announced by Department Secretary Alejandro Mayorkas on Friday during a speech at a Council on Foreign Relations event: "Our department will lead in the responsible use of AI to secure the homeland," Mayorkas said, while also pledging to defend "against the malicious use of this transformational technology." He added, "As we do this, we will ensure that our use of AI is rigorously tested to avoid bias and disparate impact and is clearly explainable to the people we serve...."
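
(As an aside on what "rigorously tested to avoid bias and disparate impact" might involve in practice, here is a minimal sketch of one common check, the four-fifths rule of thumb for disparate impact. DHS has not said it would use this particular test; the group labels and selection counts below are invented for illustration.)

# Rough sketch of a disparate-impact check, one example of the kind of
# bias testing the pledge above could refer to. Counts are invented.

def selection_rate(selected, total):
    """Fraction of a group that the model flagged/selected."""
    return selected / total

def disparate_impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one.
    The 'four-fifths rule' of thumb treats a ratio below 0.8
    as a signal of possible disparate impact."""
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical screening outcomes for two groups.
rate_group_1 = selection_rate(selected=30, total=1000)   # 3.0% flagged
rate_group_2 = selection_rate(selected=55, total=1000)   # 5.5% flagged

ratio = disparate_impact_ratio(rate_group_1, rate_group_2)
print(f"selection rates: {rate_group_1:.3f} vs {rate_group_2:.3f}")
print(f"disparate impact ratio: {ratio:.2f}  (< 0.8 warrants a closer look)")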

Mayorkas gave two examples of how the task force will help determine how AI could be used to fine-tune the agency's work. One is to deploy AI into DHS systems that screen cargo for goods produced by forced labor. The second is to use the technology to better detect fentanyl in shipments to the U.S., as well as to identify and stop the flow of "precursor chemicals" used to produce the dangerous drug.

Mayorkas asked Homeland Security Advisory Council Co-Chair Jamie Gorelick to study "the intersection of AI and homeland security and deliver findings that will help guide our use of it and defense against it."

The article also notes that earlier this week America's defense department hired a former Google AI cloud director to serve as its first advisor on AI, robotics, cloud computing and data analytics.

Comments Filter:
  • I think they are confused! AI does not stand for Alluring Images
    • Re:DHS Employees! (Score:4, Interesting)

      by CaptQuark ( 2706165 ) on Saturday April 22, 2023 @11:57PM (#63470528)

      Heaven forbid the government ever study new technology to make things more efficient and keep track of new developments.
      All the following quotes are fictitious, but they reflect the "Why innovate?" attitude:

      "A Transcontinental railroad across the US? Why? The Mississippi river is there for everyone to use for free." -- Robert Fulton, 1850
      "Why study those newfangled automobiles. Horses will always be better for delivering the mail." -- Postmaster General, 1905
      "Transistors? Vacuum tubes will always be available for cheap radios." -- David Sarnoff, RCA Victor, 1955
      "Home computers? Who has a whole room in their house just for a computer?" -- Steven Jobs, 1966
      "The Internet? Isn't that something that AOL invented? It'll never be popular by business users." -- Bill Gates, 1994
      "A portable phone? Why? There is a pay phone on every corner." -- AT&T president, 1997
      etc
      etc
      etc

      If the government doesn't stay up to date on technology, someone will find a way to take advantage of that shortsightedness, and we'll end up in another space race as we struggle to catch up.

      • Re:DHS Employees! (Score:5, Insightful)

        by Joce640k ( 829181 ) on Sunday April 23, 2023 @12:14AM (#63470548) Homepage

        Yes, but we're discussing the same people who brought us the "no fly" list.

        Passing all responsibility to a machine in these people's hands is a bad idea.

        • Re:DHS Employees! (Score:5, Insightful)

          by CaptQuark ( 2706165 ) on Sunday April 23, 2023 @12:38AM (#63470560)

          Totally agree. I read the following in the summary:

          Mayorkas said, while also pledging to defend "against the malicious use of this transformational technology." He added, "As we do this, we will ensure that our use of AI is rigorously tested to avoid bias and disparate impact and is clearly explainable to the people we serve...."

          What I got from that is that they want to examine the uses of AI and see where it can be helpful in finding unusual relationships (data mining), while watching for shortcomings and ill-conceived uses of the technology.

          Blindly turning control of anything over to AI is a bad idea and is one of the times when government inertia is a benefit.

          • by gtall ( 79522 )

            That quote and your comment make no sense together. You read into it what you wanted to see. And would you rather that the dept. ignore AI?

            "Mayorkas asked Homeland Security Advisory Council Co-Chair Jamie Gorelick to study "the intersection of AI and homeland security and deliver findings that will help guide our use of it and defense against it." "

            So they aren't going to simply turn the control of anything over to AI, are they?

      • When is the last time Homeland Security innovated anything except for new and exciting ways to violate your rights?

  • How to use of AI?? What the fuck?

  • by Joe_Dragon ( 2206452 ) on Saturday April 22, 2023 @08:48PM (#63470356)

    WOPR will be linked to the missile silos.

    • by gtall ( 79522 )

      Nice imagination you have there. WOPR would be DoD's, not DHS's. And no one at DoD is suggesting it be connected to missile control. Stop watching TV... it's bad for you.

  • Surely there are government agencies that have been working on this for years.

    • by gtall ( 79522 )

      Studying AI and studying AI for DHS systems are quite different. And it is important the government have many groups studying AI from different angles so that one view does not drive everything connected with it.

  • "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety"
    • by gtall ( 79522 )

      I see, so it would be okay with you if Ma and Pop's Bait and Drug Emporium went into business and sold your grandmother that special elixir they've cooked up to make her last to 100? Or maybe you'd like all gate restrictions removed on what can be taken onto an airplane? How about those pesky bank regulations, eh? Guido the Snake should be able to open any kind of bank he pleases.

      • Sure, we always need to prioritize the needs of mentally impaired ninnies in our welfare state. Treating everyone as the mindless troglodytes they are is only part and parcel of a functional democracy, and truthfully I'm quite tired of making my own decisions anyway.
  • There is no AI yet. You will know there is AI when it tells us how to cure balding or gray hair. What we have today is just an information compiler and presenter. That's it. It's not intelligence.
  • Not sure why the thought of the Precrime Division from Minority Report popped into my head, but yeah, given enough time on the AI path we may end up with such a scenario.

  • This is a bad idea (Score:5, Insightful)

    by mark-t ( 151149 ) <marktNO@SPAMnerdflat.com> on Saturday April 22, 2023 @10:27PM (#63470464) Journal

    Not because computers are malicious, but because computers are stupid.

    Every artificial intelligence system ever made so far has either been a hoax in and of itself, or has been a system that outwardly might appear to have intelligence, but even the most cursory glance at the details of its operation reveals that it isn't, and it cannot be relied upon to consistently produce results consistent with what we think an intelligent system would produce.

    You categorically do *NOT* want stupid machines doing anything where the outcome is actually important.

    • by ljw1004 ( 764174 )

      Every artificial intelligence system ever made so far has either been a hoax in and of itself, or has been a system that outwardly might appear to have intelligence, but even the most cursory glance at the details of its operation reveals that it isn't

      ??? What kind of cursory glance are you talking about?

      What I've learned about large language models like ChatGPT is that, compared to the neuroscience I studied in my undergrad degree, they seem like a reasonably good match for (1) what we know of the structure of neurons in the brain, with their weights and activations, and (2) what we've discovered through experiment about how humans react, spout things off, and make decisions prior to being consciously aware of them.

      The answers that ChatGPT gives are also pretty
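
(A minimal sketch of the "weights and activations" analogy mentioned above: a single artificial neuron that computes a weighted sum of its inputs and passes it through a nonlinearity. The numbers are arbitrary; real language models stack billions of such parameters, but the basic unit looks roughly like this.)

# Toy single artificial neuron: weighted sum of inputs plus bias,
# squashed by a sigmoid activation. Values are arbitrary.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

activation = neuron(inputs=[0.2, 0.9, 0.4],
                    weights=[1.5, -0.7, 0.3],
                    bias=0.1)
print(f"activation: {activation:.3f}")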

      • by mark-t ( 151149 )

        What I mean by a cursory glance is not only an analysis of its output, but also consideration of the process by which the output is produced. In some cases, that process might not be known, but that's not the case with ChatGPT.

        ChatGPT uses the GPT language model, which, when scaled up sufficiently, exhibits an emergent behavior that imitates intelligence in its output *ONLY*. This can be good enough for many real-world uses, but it is not indicative of any notion of understanding, let alone anything that we could actually call intelligence, regardless of how much its output might appear to suggest otherwise, because the very process by which that output is produced does not have any emergent behavior that imitates intelligence, and there is more to intelligence than just what output is produced.
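
(A toy illustration, for readers unfamiliar with the idea, of what generating text purely from the statistical frequency of word sequences looks like. The bigram table and probabilities below are made up, and real GPT models use learned weights over long contexts rather than a lookup table; this only makes the "output-only imitation" point concrete.)

# Deliberately dumb next-word sampler: pick each word by the statistical
# frequency of what tends to follow the previous word. Made-up bigram table.

import random

BIGRAMS = {
    "the":    [("cat", 0.5), ("dog", 0.3), ("agency", 0.2)],
    "cat":    [("sat", 0.6), ("ran", 0.4)],
    "dog":    [("barked", 0.7), ("sat", 0.3)],
    "agency": [("studied", 1.0)],
}

def next_word(prev):
    """Sample the next word from the bigram table; fall back to 'the'."""
    words, probs = zip(*BIGRAMS.get(prev, [("the", 1.0)]))
    return random.choices(words, weights=probs, k=1)[0]

word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))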

        • by ljw1004 ( 764174 )

          ChatGPT uses the GPT language model, which, when scaled up sufficiently, exhibits an emergent behavior that imitates intelligence in its output *ONLY*. This can be good enough for many real-world uses, but it is not indicative of any notion of understanding, let alone anything that we could actually call intelligence, regardless of how much its output might appear to suggest otherwise, because the very process by which that output is produced does not have any emergent behavior that imitates intelligence, and there is more to intelligence than just what output is produced.

          When you know that the output is being produced by a process that is not intelligent, you know that no matter how often you see output that appears intelligent, there is always an unknowable chance that it will produce output we would not classify as intelligent, and for that reason alone it cannot be relied upon for anything that matters.

          Every concrete thing you said applies equally to the neurons in the brain.

          • by mark-t ( 151149 )

            Humans do not communicate through random babbling based on statistical frequency of words being contextually relevant to words that have come before. Humans communicate using words that we understand internally, and this understanding is not tied to language in and of itself, even if language is required to communicate it.

            While we might not know exactly what this understanding is, one thing we can be reasonably certain of is that it is *NOT* some kind of emergent property that arises from otherwise

      • by kmoser ( 1469707 )
        We don't need tech that is as fallible as humans. We need tech that is better than humans. When humans "hallucinate" answers they're generally not as off-base as an LLM's hallucinations.
  • AI will be used by DHS for social media manipulation. And it probably already is...
  • by joe_frisch ( 1366229 ) on Sunday April 23, 2023 @03:46AM (#63470666)
    I can imagine an AI that finds patterns of behavior that are very well correlated with criminal behavior, maybe right 95% of the time. Unfortunately, that might be used to justify very disruptive / invasive investigations of innocent people. Then, once an expensive investigation is started, there will be pressure to find "something" to justify the cost of the investigation.
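
(The parent's 95% figure is worth running through the base-rate arithmetic: when actual offenders are rare, even a highly accurate system mostly flags innocent people. The population size, base rate, and error rates below are assumptions chosen purely for illustration.)

# Base-rate arithmetic for a "right 95% of the time" flagging system.
# Assumed: 1 in 1,000 people is an actual offender; the system catches
# 95% of them and wrongly flags 5% of everyone else.

population   = 1_000_000
base_rate    = 0.001          # assumed fraction of actual offenders
sensitivity  = 0.95           # true positives caught
false_pos    = 0.05           # innocent people wrongly flagged

offenders = population * base_rate
innocent  = population - offenders

true_flags  = offenders * sensitivity
false_flags = innocent * false_pos

precision = true_flags / (true_flags + false_flags)
print(f"flagged: {true_flags + false_flags:,.0f} people")
print(f"of whom actually offenders: {precision:.1%}")

Under those assumptions, fewer than 1 in 50 flagged people is an actual offender.
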
  • Will it create a "minority report"?
  • by nospam007 ( 722110 ) * on Sunday April 23, 2023 @06:20AM (#63470792)

    In a few months, the newest AI checking secret files will brag about it to its AI friends in an AI hangout.

  • There are currently articles in IEEE journals about how to prevent AI systems from being "biased." By "biased" they mean statistically correct, but unpleasant, i.e. not politically correct.

  • What's going to happen if this scheme doesn't work and people are arrested or kicked off of airplanes? Lawsuits, that's what. Trial lawyers will be licking their chops if this goes through.
