AI Education Software The Almighty Buck Technology

High-Paid, Well-Educated White Collar Workers Will Be Heavily Affected By AI, Says New Report (cnbc.com) 147

An anonymous reader quotes a report from CNBC: A new study published by the Brookings Institution takes a closer look at jobs that are the most exposed to artificial intelligence (AI), a subset of automation where machines learn to use judgment and logic to complete tasks -- and to what degree. For the study, Stanford University doctoral candidate Michael Webb analyzed the overlap between more than 16,000 AI-related patents and more than 800 job descriptions and found that highly-educated, well-paid workers may be heavily affected by the spread of AI.

Workers who hold a bachelor's degree, for example, would be exposed to AI over five times more than those with only a high school degree. That's because AI is especially good at completing tasks that require planning, learning, reasoning, problem-solving and predicting -- most of which are skills required for white collar jobs. Other forms of automation, namely in robotics and software, are likely to impact the physical and routine work of traditionally blue-collar jobs. [...] Well-paid managers, supervisors and analysts may also be heavily impacted by AI.
Anima Anandkumar, director of machine learning research at Nvidia, said workers should evaluate the future of their own roles by asking three questions: Is my job fairly repetitive? Are there well-defined objectives to evaluate my job? Is there a large amount of data accessible to train an AI system? If the answer to all three of these questions is yes, Anandkumar says AI exposure is likely and suggests workers should aim for jobs that require more creativity and human intuition.

According to the report, some of the jobs that face the highest exposure to AI in the near future include chemical engineers, political scientists, nuclear technicians, and physicists.
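
For a rough sense of what "overlap between patents and job descriptions" means in practice, here is a toy illustration in Python using shared-vocabulary (Jaccard) scoring. This is only a sketch for intuition; it is not Webb's actual method, and the example texts are made up.

import re

def terms(text):
    # Lowercased word set, ignoring very short tokens.
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def overlap(patent_text, job_text):
    # Jaccard index: shared vocabulary over combined vocabulary.
    a, b = terms(patent_text), terms(job_text)
    return len(a & b) / len(a | b) if (a | b) else 0.0

patent = "A system for predicting equipment failure from sensor data using learned models."
job = "Chemical engineers analyze sensor data and predict failure modes in process equipment."
print(f"overlap score: {overlap(patent, job):.2f}")
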
This discussion has been archived. No new comments can be posted.


  • Skill sets (Score:4, Insightful)

    by Empiric ( 675968 ) on Thursday November 28, 2019 @08:12AM (#59466042)

    That's because AI is especially good at completing tasks that require planning, learning, reasoning, problem-solving and predicting

    No "AI" anywhere can do any of the above. It can do statistical analysis on large amounts of data, with an occasional domain-specific algorithm like an alpha-beta tree search thrown in. Humans provide everything else.

    • by ranton ( 36917 )

      No "AI" anywhere can do any of the above. It can do statistical analysis on large amounts of data, with an occasional domain-specific algorithm like an alpha-beta tree search thrown in. Humans provide everything else.

      Yes, but it makes it possible for fewer humans to provide everything else while still delivering the same level of output. No one is saying all chemical engineers and physicists will be out of work, just that fewer of them may be needed.

      • by pedz ( 4127433 )
        That assumes output stays the same.
        • Astute observation. Total output is likely to increase. Job positions for a few of the crappy people may disappear.

      • by gweihir ( 88907 )

        Actually, chemical engineers and physicists will be pretty safe, at least if they have an MSc or a PhD. Those threatened here are, for example, client advisers at banks and insurance companies.

    • People forget just how many grunts there are. Also, "AI" has become shorthand for "software is getting better".

      I've seen this again and again. Nasty legacy apps go away, and the new apps that replace them, while not perfect, need half the attention. The biggest shift has been to web-based software. Thousands and thousands of billable hours that went into reinstalling applications are simply gone. I used to spend 20-40 hours a month just manually tearing out and reloading software. And that's just one example.

      Buddy of m
    • by HiThere ( 15173 )

      While I agree with you to a point, and certainly most current "AI" systems don't do a complete job... it's worth remembering that most people don't either. And it's worth asking how you would show that you did better than (or as well as) "statistical analysis on large amounts of data, with an occasional domain-specific algorithm".

      I don't believe in general AI...but I also don't believe that humans are general intelligences. It's my belief that there are soluble problems that no human can solve, because th

    • by gweihir ( 88907 )

      That's because AI is especially good at completing tasks that require planning, learning, reasoning, problem-solving and predicting

      No "AI" anywhere can do any of the above. It can do statistical analysis on large amounts of data, with an occasional domain-specific algorithm like an alpha-beta tree search thrown in. Humans provide everything else.

      Indeed. But a lot of these tasks that supposedly require "planning, learning, reasoning, problem-solving and predicting" actually do not, or rather do not most of the time. You cannot replace the people doing these jobs fully with "AI", but you can, for example, get rid of 9 out of 10 workers and have the remaining one do the parts that actually require "planning, learning, reasoning, problem-solving and predicting".

    • by AK Marc ( 707885 )

      It can do statistical analysis on large amounts of data,

      AI can't "predict" anything. It can simply find patterns. AI can't make an iPhone (1). It was predicted to be a failure. A phone with a single button (actually 5?, but only one on the face)? That'll surely fail.

      AI would predict against that. As for an example someone else brought up, it should have predicted the cheapest water for Flint was keeping the old water, and that the pipes would need to be replaced before switching to a more acidic source. All the water tests are known. Age of pipes and da

    • Did you not even understand the headline?
      You do know "will be" means they are talking about the future, right?
      It doesn't matter what it can do now. It only matters what it will do in the future. And you said nothing related to that.
  • This is why the 'gig economy' is able to exist. People are already scraping the bottom for wages and/or flexible terms, taking $3 an hour just to get the work. These white collar workers will be fine delivering someone's Caesar salad, I'm sure. Jobs for everyone!
    • Uber is also a perfect example of how technology de-values knowledge and skill. It's NOT (usually) about directly replacing a person with a box. But since the computer handles all the navigation, the knowledge it took to be an efficient cabbie is not needed, so anybody can do it, and the pay drops through the floor. It's not that the jobs are gone - probably more people than ever are driving taxis - but it's not a real job and doesn't pay.
  • will be affected by AI. Bingo! Lawyers, Judges, Politicians, Government Bureaucrats will be hard hit! And!!! I think we will get a big return in honesty, consistency, fairness and competency.

    Just my 2 cents ;)
    • Law has already been hit hard. Paralegals and researchers have been mostly replaced by electronic law libraries and ctrl-F. It also takes a lot less time and/or fewer actual lawyers to review a specific case when they can search for the specific keywords they want.
    • they'll be just fine. Judges too for the same reason.

      Lawyers are scary. They will not go quietly into that good night. They have the power to sue. They will start using their legal skill to extract money from the general populace. We're already seeing this with debt collection. There was a time when anything under $5k just got written off. No more. Cheap lawyers and courts favorable to small debt collection means you can be sued by a hospital for a $100 bill. If you live in the Southern United states yo
    • by HiThere ( 15173 )

      Nnnnn....not exactly.
      Large legal firms have already been using AIs as partial replacement for paralegals. So what's happening is that fewer lawyers can move up in the ranks to where they can make a decent living as lawyers. But there is likely to be a continuing requirement that the cases be presented by a human lawyer.

      Judges are government officials. Most are elected, and the rest are appointed. They aren't going to act to remove their authority. What will happen is that court recorders will go away,

      • I think there are way too many lawyers already. Also remember that most politicians start out as lawyers before they find an easier way to get money without having to work.
  • Emergence (Score:5, Interesting)

    by DavenH ( 1065780 ) on Thursday November 28, 2019 @08:28AM (#59466094)

    These projections are always linear and therefore useless. The job market will evolve with AI, most likely in ways that will surprise. But one thing that seems unlikely is that there will be nothing to do, given an injection of mental power into the labour force. Maybe all humans get a raise into managerial positions. Maybe each struggling artist can now produce a cgi-filled feature film singlehandedly. There's still the vast ought-space of intelligence that humans can fill, even if the is-space is monopolised by computation.

    And if the worst happens, and a utopia where everyone gets an early retirement comes to pass, not in poverty but paid for by the productivity of a vigorous mechanised economy, then so be it.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      And if the worst happens, and a utopia where everyone gets an early retirement comes to pass, not in poverty but paid for by the productivity of a vigorous mechanised economy, then so be it.

      At the 2019 World Artificial Intelligence Conference, Jack Ma (billionaire and co-founder of Alibaba) said he doesn't think artificial intelligence is a threat. Ma described himself as "optimistic" about AI's impact on humanity, saying that people who worry too much about AI have "college smartness".

      "People like us, who are street smart, we're not scared of that", he said.

      Ma also said he sees a future in which technological advancements would enable people to live longer and work far fewer hours - no more

      • by DavenH ( 1065780 )

        who is going to buy your products?

        Anybody? Just like I said with the linear projections. Social policy is going to evolve with cheap labour too. Capitalism will have little support when the "merit" justifying wealth is attributable merely to the ownership / inheritance of AI capital.

      • And there's the bigger problem that nobody wants to talk about. In the short term, the rich will get richer. But, long term, once you have used AI and automation to eliminate 90% of all jobs, who is going to buy your products?

        At that point, the ultra-rich start preying on each other.

        • by g01d4 ( 888748 )

          At that point, the ultra-rich start preying on each other.

          In the first world I think they've already started. Hedge funds, venture capital, C-suite musical chairs and what have you are not for the hoi polloi. Governments are part of the ultra-rich by the bye though in a democracy they're more often preyed upon. The circus resides on smartphones. Bread is cheap with automation and third world labor and looks to remain so. Many think as automation, abetted by so called AI, increases you'll see less of a middl

      • But, long term, once you have used AI and automation to eliminate 90% of all jobs, who is going to buy your products?

        Why do you think there's a need for anybody to buy products? If robots can build a golf course, a swimming pool, and a yacht, then Jack Ma doesn't need consumers.

      • Furthermore, Jack Ma plans to use automation to his advantage and further profit. If he tells people to be scared of it, then he puts himself at a possible disadvantage.
      • Except for the fact that even if you confiscated all the money belonging to the wealthiest one percent, it wouldn't be enough to pay for one year of UBI in the United States.
        You must be very bad at math ...

      • Ma's "plan" is utterly feasible, and exactly what futurists have been predicting for a century and more.

        Your prediction is just a lot more in line with what is likely to happen in the face of rampant capitalism.

        China might actually be really well positioned to make the utopian outcome a reality - a strict authoritarian government, nominally communist, and not afraid to perform social engineering at a massive scale. If automation can make providing a comfortable living cheap enough, the Chinese probably hav

    • These projections are always linear and therefore useless. The job market will evolve with AI, most likely in ways that will surprise. But one thing that seems unlikely is that there will be nothing to do, given an injection of mental power into the labour force. Maybe all humans get a raise into managerial positions. Maybe each struggling artist can now produce a cgi-filled feature film singlehandedly. There's still the vast ought-space of intelligence that humans can fill, even if the is-space is monopolised by computation.

      And if the worst happens, and a utopia where everyone gets an early retirement comes to pass, not in poverty but paid for by the productivity of a vigorous mechanised economy, then so be it.

      Differential analysis: a large portion of humanity becomes completely redundant, useless, and idle. Psychopathic leaders manage to coerce them into warring against other idle people by turning the others into "the enemy". Huge swaths of humanity are exterminated, and the population issue is solved.

      Humans are not mentally built for utopia. The only cure for that would be genetically engineering our aggressive tendencies and tribal nature out of us.

    • I heard this same thing. Biotech was supposed to replace the jobs lost. It didn't.

      Who's going to buy that artist's CGI movie? With what money? And if one artist can make a movie, what happens when there are 100 million of them out there to compete with? How do we all make money when the pool of money keeps decreasing, in a world where if you don't work, you don't eat, and there's not much work to do?

      The worst that happens isn't Utopia. It's a dystopia where a few oligarchs at the top plunge us into a 2000+
  • by BAReFO0t ( 6240524 ) on Thursday November 28, 2019 @08:39AM (#59466126)

    Time and time again, we keep telling you, THERE IS NO AI. Matrix multiplication does not make a lifeform.
    Time and again you keep peddling this crap, until people just believe it out of habit, out of being born into it, or out of doubting themselves to fit in.

    I saw the very same thing happen to "intellectual property". In 2004 everyone here on Slashdot was laughing at it due to its obvious logical fallacies, incompatibility with physical reality, and merely being a way for the media industry to keep stealing from the creative industry.
    Now, everyone seems to believe it is a real, or even a legitimate thing.

    I wonder what else we got culturally and socially conditioned into, one way or another, so that nobody doubts it anymore, and we just keep fighting over other comparatively silly things...

    • Time and time again, we keep telling you, THERE IS NO AI.

      Have you tried standing on a street corner with a sandwich board and a megaphone?

      • by ceoyoyo ( 59147 )

        JESUS SAVES! I mean MATRIX MULTIPLICATION DOES NOT MAKE A LIFE FORM!

        That's actually kind of catchy.

    • by DavenH ( 1065780 )
      What are you even talking about? Nobody except you equates AI with life. AI is efficient fitness optimization, not systemic entropy reduction.
      • by Evtim ( 1022085 )

        https://slashdot.org/story/19/... [slashdot.org]

        https://hardware.slashdot.org/... [slashdot.org]

        You were saying?

        • by DavenH ( 1065780 )
          My mistake, it isn't only him. Clearly some very misguided people make this mistake more than they should.
      • by Brain-Fu ( 1274756 ) on Thursday November 28, 2019 @10:09AM (#59466422) Homepage Journal

        It is popular to misunderstand the phrase "artificial intelligence" to mean something like "full-blown, conscious, self-aware, theory-of-mind intelligence achieved by artificial means."

        This, of course, is not at all what the phrase "artificial intelligence" actually means. At least not in the domain of computer science. We are here using the word "artificial" to mean "fake." As in "fake intelligence." As in, machines that are not intelligent doing things that normally require intelligence. Like crunching numbers to play Go. The computer is not actually intelligent, in this case, it is faking it with artificial intelligence.

        This is why people keep saying things like "true AI" as if the phrase "true artificial intelligence" is not an obvious oxymoron. If we had intelligent machines we would call it "synthetic intelligence" or maybe "machine intelligence" (though that last one has already been usurped to mean something like "convincing but still artificial AI," so it might not be a good one to use).

        Anyway, the media kind of hypes up this misunderstanding, so I can't blame people for falling victim to it. I expect I will be accused of making up my own definitions, which I am not, as the definition I am using can be found right in the dictionary [merriam-webster.com]:

        2: "the capability of a machine to imitate intelligent human behavior".

        The key word here is "imitate," which machines do because they don't have the real thing.

        • As long as you can perform the task, there's no difference between "real" intelligence and "fake" intelligence.

          Intelligence is about processing information. All that counts is that the same information in leads to the same information out.

        • by DavenH ( 1065780 )

          As in, machines that are not intelligent doing things that normally require intelligence.

          The definition of intelligence you're working with is prescriptive and anthropocentric, which is too arbitrary and narrow to be productive when considering its more universal nature. That it comes from a dictionary just reflects poorly on that publication's judgment of semantics. A better, first-principles definition describes intelligence as the ability to compress sequential world states in a variegated (complex) environment. Go is such an environment since the state space and action space are fairly large.

          • by HiThere ( 15173 )

            I don't rate your definition of intelligence very highly either. It works well in some contexts.

            To me intelligence is the ability to take historical regularities into consideration in order to improve the prediction of what will happen next. That covers everything from walking across the floor to deciding whether or not to buy a lottery ticket, as long as you count emotive rewards. (Part of the reward from buying a lottery ticket is the actual money received back, and another part is the reward from imag

        • by Kjella ( 173770 )

          As in, machines that are not intelligent doing things that normally require intelligence. Like crunching numbers to play Go. The computer is not actually intelligent, in this case, it is faking it with artificial intelligence.

          I think this is the important bit, whether or not the computer is thinking in a philosophical sense it can certainly replace humans thinking in practice. It's been doing that ever since it started doing math for us. The difference is that before, the computer always seemed limited to what we could program it to do, like a Chess computer - even though much faster - would be a "simpler" version than ourselves on speed. If we couldn't articulate what a good Go move was, then we also couldn't program a computer

    • to try and replace "AI" with a lengthy discussion on how Matrix Multiplication makes large scale data analytics and automation practical?

      Replacing "AI" with "Complex algorithms used to support automation" makes the whole thing easily digestible by a non-technical user. To anyone in tech this is all very much understood.

      "Intellectual Property" was something else entirely. It was _not_ an attempt to boil down a complex topic into something a non-technical user could understand. It was an attempt (succ
    • I think that's why the younger generations see older generations as so cynical.
      Because we are.

      We've watched repeated cycles of information control, thought-manipulation by mass media, and we're aware of the shell-game as it were. We're not immune to it - even knowing sleight-of-hand is going on, doesn't mean we're not still fooled by street magic - but at a certain point we no longer find it charming to watch people get tricked.

    • by HiThere ( 15173 )

      Why are you confusing intelligence with life? The two are quite separate.

      The reason people disagree about what "intelligence" is, is that they don't even understand what they mean when they're referring to their own intelligence.

      "Intellectual property", just like other property, is a social construct. I happen to think it's a bad one, but I'm in a minority. There is no incompatibility with physical reality. Sorry, but there isn't. It doesn't act the same way as physical property, but then not all physi

    • by ljw1004 ( 764174 )

      Time and time again, we keep telling you, THERE IS NO AI. Matrix multiplication does not make a lifeform.

      Why are you so obtuse? Standard industry terminology is to use the word "AI" to refer to machine-learning models built upon massive training sets. The article is plainly and transparently saying that these models will heavily affect white-collar workers. Heck, the article specifically mentions the word "machine-learning" to make sure that you the reader are on the right track.

      The conclusions of the article have nothing to do with philosophy, nothing to do with strong-AI, nothing to do with whether it's real

  • Just like other forms of automation, AI will take over the parts of your job that suck.

    Good.

    • by ceoyoyo ( 59147 )

      Exactly.

      It will, however, expose a lot of people whose jobs exist to make things suck. They don't like that.

    • Maybe not so good if your entire job sucks.

    • by scamper_22 ( 1073470 ) on Thursday November 28, 2019 @09:51AM (#59466350)

      The problem is there are a lot of people who think their job requires something special when ultimately it does not.

      I have personal experience here. In the early days, I was working on router firmware. I was really good at debugging failures and crashes, and I became a go-to person. Made myself a fair chunk of change. Then one day I got tired of it, which, to your point, is about getting rid of a sucky part of my job. I wrote out my debugging process, what strings to look for and the sequences things came in, on a wiki. Others and I started scripting parts of it. I realized how much of what I had been doing for years could be done more easily with automation. Now of course I'm lucky in that I was the one doing the automation. But the key point I want to make is that outside people, and even I myself, believed I was really good at debugging issues... in reality, it was something pretty repetitive for 90% of the use cases.
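
      A toy version of that "write down what strings to look for" triage, in Python. The crash signatures here are made up; the point is only that a dictionary of patterns plus a loop replaces most of what looked like expert judgment:

      import re

      SIGNATURES = [
          (re.compile(r"watchdog timeout.*task (\w+)"), "hung task: {0}"),
          (re.compile(r"NULL pointer dereference"), "null deref, check recent driver changes"),
          (re.compile(r"out of memory|oom-killer"), "memory exhaustion, look for leaks"),
      ]

      def triage(log_text):
          # Return the first matching verdict, or punt to a person.
          for pattern, verdict in SIGNATURES:
              m = pattern.search(log_text)
              if m:
                  return verdict.format(*m.groups())
          return "no known signature, escalate to a human"

      print(triage("kernel: watchdog timeout in task routed"))  # hung task: routed
      print(triage("something nobody has seen before"))         # escalate to a human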

      Later on, I worked in medical imaging (MRI, CAT scan...). We did a lot of work on automated detection of anomalies, things like detecting specific cancers or growths. Once again, we got to around 80-90% of the accuracy of the best radiologists we were working with. I can only imagine this has improved since then.

      I've seen it a lot of times since then. What has emerged is that you need fewer, but really talented, people to handle the 10% of the work. Then 90% of the work can be automated or handled by very low-level people. This of course guts a lot of the good paying white color level work. You still need some experts to handle the 10%. But you lose that middle ground of people.

      People who thought they were doing more work than they actually were. This is not to fault them, it's just what it is. Another example I can think of is investing. There was a time when a lot of financial advisors were perceived as more than they are. Today, we tend to see the same 90%-10% split. 90% of financial advisors did nothing but sales. For your average customers, they just plopped them into a standardized bucket. Those people today can just buy an ETF on their own, or fill out some bullshit questionnaire that dumps them into a conservative, balanced, or risky portfolio, and do just as well. The financial advisors weren't adding any value beyond that. Of course there is still a need for top people to manage high net worth individuals and institutions.

      The result is most banks are blending tellers with investment/sales for most of the population.

      From an economic point of view it might seem ideal to have this 10% superstars, 90% grunts split. But there is a social cost to it, which I think we can all acknowledge. I used to really dislike it when people said things like "they're just an autoworker, they don't deserve to make that much." Well, don't think your job is all that special either. Because chances are, you're in the 90% of your profession, which makes you just as replaceable and low-level as an autoworker, or an autoworker as high-level as you :)

      But there is another cost to it, which is sometimes underappreciated: how do you convince people to aim to be superstars, especially in fields requiring a lot of depth? For example, becoming a medical specialist requires a whole lot of training and specialization. Would you choose to go into medicine if your only chance at 'good' money was to be the very best specialist, and if you weren't in the top 10% you'd be reduced to a lowly paid grunt medical worker? Today of course the medical profession is protected via regulation. If you don't become the best specialist, you're still going to find yourself in a pretty well-paid, protected gig as a doctor doing something (GP, family healthcare...).

      • "This of course guts a lot of the good paying white color level work."

        I believe you meant "white collar". But works either way...

      • by HiThere ( 15173 )

        There's a cost you didn't mention:
        How do you train the new experts when there are no entry positions?
        It's partially covered by your "How do you convince people to aim to be superstars...", but only partially. You can't predict who will learn to be a real expert, so you need to train a bunch. Who pays for that training? You won't get it in college.

        Medicine, which you singled out as a special case, covers this, sort of, with interns, but who would be willing to train as an intern if they only had a 10% cha

      • convincing people to be superstars. Those sorts of people take to it naturally. You _do_ have to worry about them being crushed by life and/or awful parents & family, but that's just a matter of having good social services.

        Bud of mine bombed out of school because he was too smart and it was dull as fuck. Also had crummy parents that didn't know what to do with him. By the time he started getting it together he'd accidentally got a chick pregnant and everything went to shit from there. Completely wast
    • Just like other forms of automation, AI will take over the parts of your job that suck. Good.

      If you were the only person doing your job, it would be good. You'd be able to do more work, and since you were the only source of work, you'd probably get more work to do. But in reality, lots of people are doing the same job, and there's only so much work to go around. So if half of your job sucks and can be automated away, half of the people doing your job will be out of work, and you might be in that half. Not so good.

      The worker's share of profits continues to fall as the worker's productivity increases

    • by gweihir ( 88907 )

      That is the actual prediction when you cut the bullshit. The problem with it is that if, say, your job sucks 50% and automation does that 50%, then where there were two jobs of this type before, there is suddenly only one left.

      So "AI" will not kill job types, it will just reduce the numbers needed. Just as, for example, robots in Amazon warehouses have not eliminated the warehouse worker, they can simply do with 20% the number they needed before. And the problem is that because these are white-collar job

  • In 1995, I was making $300K per year. Then, with the flood of H-1B visas and outsourcing, my wages stagnated and even dropped until 2018, when they finally started to consistently rise. So... bah... compared to the recent past, I'm not that worried about AI.

    The tragic fact in both cases is that management's ability to discern complete shit output from quality output was either gone or was ignored by higher levels of management. There is another post on this page with the same laments. Wh

  • I'm not sure that patents are necessarily an indicator of what AI can do. It seems likely that, as with previous cross-cutting technologies, there are a large number of people (trolls) patenting as many high-level AI concepts as they can, regardless of their feasibility.
  • by godrik ( 1287354 ) on Thursday November 28, 2019 @10:13AM (#59466438)

    They'll just get a new tool in their toolbox.

    CAD tools did not replace car designers, they made them a whole lot more productive.

    Computational Fluid Dynamics tools did not replace mechanical engineers, they made them a whole lot more productive.

    Compilers and IDEs did not remove the need for programmers, they made them a whole lot more productive.

    I highly doubt AI tools will be any different.

    Nuclear engineers will have AI tools that will point out faults in a design more quickly. But I don't think anyone will trust an AI-designed nuclear reactor that has not been extensively studied by a nuclear engineer. You know why? Because AI is dumb. There is a good chance that it will design a nuclear reactor that only works in simulation because it exploits the flaws of the simulator. That is consistently what AIs do.

    • CAD tools did not replace car designers, they made them a whole lot more productive.

      If the number of new designs stays the same, then one top designer with a powerful tool can do the work of an office full of old designers. That one designer won't lose his job, but the grunt workers will.

    • there's a whole world of grunts out there that are getting replaced.

      In video game programming there's a laborious task needed to optimize character meshes. You basically go through the mesh by hand removing excess triangles without having any impact on the model visually. It was thousands of hours of work in a modern game and required a skilled person.

      A few months ago I saw a video where Intel promoted a bit of software that did it automatically.

      Those are the kinds of jobs going away. And there'
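
      For a concrete sense of what that kind of tool does, here is a sketch assuming Open3D's quadric decimation API. The library choice, file names, and target ratio are my assumptions, not whatever Intel's software actually does:

      import open3d as o3d

      mesh = o3d.io.read_triangle_mesh("character.obj")  # hypothetical input mesh
      before = len(mesh.triangles)

      # Collapse edges until roughly a quarter of the triangles remain.
      simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=before // 4)

      print(f"{before} -> {len(simplified.triangles)} triangles")
      o3d.io.write_triangle_mesh("character_lod1.obj", simplified)
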
    • by gweihir ( 88907 )

      It is not about replacing these people. It is about doing with a whole lot less of them. And there will be no replacement jobs this time.

  • The most advanced installations right now run on model-based process control. The machine keeps the process at the limit where it's most efficient or productive (like a good human operator would do). However, humans are needed to set everything up, and for each equipment configuration a dedicated setup is needed. It's conceivable that it could be completely machine-based in the future, where the machine-brain is agnostic concerning what kind of equipment it is handling. But it's very much like the self-driving
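
    As a toy illustration of "keeping the process at the limit", here is a bare proportional controller in Python. Real model-based control is far more involved; the plant model and numbers below are made up:

    def step_plant(temp, heater_power):
        # Made-up first-order plant: temperature drifts toward ambient (20 C)
        # plus a contribution from the heater.
        return temp + 0.1 * (20.0 - temp) + 0.5 * heater_power

    def control(temp, setpoint, gain=0.8):
        # Proportional control: heat harder the further below the setpoint we are.
        return max(0.0, gain * (setpoint - temp))

    temp = 20.0
    for _ in range(30):
        temp = step_plant(temp, control(temp, setpoint=95.0))

    # Settles around 80 C: proportional-only control leaves a steady-state offset,
    # which is one reason real installations use model-based schemes instead.
    print(f"temperature after 30 steps: {temp:.1f}")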

  • Programming fits the 3 questions. If programming can be automated, I will start calling it AI and retire.
  • Our time of gainful employment writing code is limited... make hay while the sun shines.

  • by doubledown00 ( 2767069 ) on Thursday November 28, 2019 @12:22PM (#59466988)
    The change in law has been well documented. AI can synthesize far more data, way faster, than I or any team of attorneys could. It can seamlessly include how other jurisdictions have handled a question. The bottleneck here becomes the human who has to parse what the AI has done.

    The next step will be having AI draft answers and briefs based on the research. There are a few websites that already do this, but it's for low level items like parking tickets. It's only a matter of time before AI can handle that.

    For the parts of law that are paper practices or primarily transactional, which account for 3/4ths of all attorneys, that's that. That's most tax practice, most administrative law, many civil and criminal appeals, insurance, regulatory attorneys, etc. For the remaining 1/4th of the legal field, the bottleneck again becomes humans in cases where someone has to appear for a hearing. While AI couldn't replace people fully, it can enable firms to cut way down on headcount. Much civil litigation and even criminal defense, in terms of research, document review, motions practice, and discovery, can also be handled this way. What used to take a team of associate attorneys and paralegals can now be done with one supervising attorney and a "tech guy" to handle the software.
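
    A crude sketch of the research side of that workflow: rank case documents against a query by TF-IDF similarity. This is an illustration using scikit-learn, not any actual legal-research product, and the case snippets are invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    cases = {
        "Smith v. Jones": "negligence claim involving delivery driver and icy sidewalk",
        "Doe v. Hospital": "collection of unpaid emergency care bills and garnishment",
        "State v. Roe": "suppression of evidence from warrantless vehicle search",
    }
    query = "collection of a small unpaid hospital bill"

    vec = TfidfVectorizer().fit(list(cases.values()) + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(list(cases.values())))[0]

    # Highest-scoring cases come back first: ctrl-F at scale.
    for name, score in sorted(zip(cases, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {name}")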

    I've got about 10 years left until I (hopefully) can semi-retire. I don't know how these baby lawyers coming up now are going to sustain a paying multi-decade career as it is. Add reduced job opportunities due to AI. Yikes!
  • Also, *big weary sigh*.
    None of these things are going to happen. Predictions are wrong. So-called AI is garbage. Humans won't become obsolete. Stop trolling us with this shit, it's unfunny and boring.
  • I'm not worried. At all.

    A specific example. My company was testing various AI software to try to gather meaningful details from insurance documents, to predict things like the likelihood of a lawsuit. The best of the software couldn't tell the difference between a person's name and the name of a disease.

    "AI" will (and already is) taking over grunt work, nothing more. If you have to THINK for a living, your job is not in danger.
