Canada

Canada's Major News Organizations Band Together To Sue OpenAI (toronto.com) 39

A broad coalition of Canada's major news organizations, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, is suing tech giant OpenAI, saying the company is illegally using news articles to train its ChatGPT software. From a report: It's the first time all of a country's major news publishers have come together in litigation against OpenAI. The suit, filed in Ontario's Superior Court of Justice Friday morning, seeks punitive damages, disgorgement of any profits made by OpenAI from using the news organizations' articles, and an injunction barring OpenAI from using any of the news articles in the future.

"Journalism is in the public interest. OpenAI using other companies' journalism for their own commercial gain is not. It's illegal," said a joint statement from the media organizations, which are represented by law firm Lenczner Slaght.

Comments Filter:
  • I sincerely hope that OpenAI gets pushed into bankruptcy and all research on AI halts. I hate AI.

    • Re: (Score:1, Flamebait)

      When did you realize you were old?

      • When I was born. But really, why all this naive infatuation with barely functional AI with no use?

        • Yup, everyone is just infatuated with something that's utterly useless, barely functional, and worthless. Billions of dollars are changing hands and entire industries are being disrupted by something that has no functionality whatsoever. You're the smartest of all the people.

  • I have to agree with him. We certainly shouldn't be dedicating entire nuclear power plants to spewing BS. It's "dazzle them with brilliance or baffle them with bullshit". I suppose if you like AI, you are the latter. No, I'm not a boomer.
  • I also think that AI is useful, but the argument "people are investing huge amounts in it, so it must be sensible" has a rather mixed track record.

    • Re:I sincerely hope (Score:4, Interesting)

      by SirSlud ( 67381 ) on Friday November 29, 2024 @06:49PM (#64980215) Homepage

      I love AI. It's very useful for a variety of tasks (and no, not the "write me an essay" tasks). I get the feeling a lot of people really are not using some of the state-of-the-art stuff today - it's fucking astounding how useful it is at doing menial tasks for you using plain old casual language. The best models today make inferences (well, predictions, of course) that are truly impressive, and useful for doing a variety of things. I really get the sense that people who think AI is useless or stupid used something a year or two ago, or tried Gemini or ChatGPT a few times and went "AI's kinda dumb", and at the time they probably would not have been wrong. But the state of the art is moving that quickly. Scary too, for sure.

      That said, it's just straight up wrong - ethically and, I contend, legally - that they use the commercial product of other companies, including newspapers, without any form of compensation, full stop. I sincerely hope these sorts of efforts lay further groundwork for making these companies pay for the material they train their models on. People sincerely undervalue the importance of proper investigative reporting.

      • by SirSlud ( 67381 )

        Also, I'm a relatively older senior C++ programmer, so it's not like I don't have skin in the game, or that I'm some kid who thinks all this shit is neat and that old people should "get over it." I'm that older guy.

        I have been sold on the power of LLMs over the last few months. It is extremely useful. It saves me a ton of time in my work to focus on only the stuff I do better than it does - which is plenty and makes me feel relatively secure in the future of my line of work. But I really don't have anything to pro

        • by seoras ( 147590 )

          56 years old here.
          AI and OpenAI are the best things that have happened to me in recent years.
          My productivity as a pro software dev has gone through the roof.
          That, and selling AI as SaaS in my software, has given me a new revenue stream that is much loved by users.

          "I hate AI"... because? Enlighten us.

    • Before you get your pitchforks out, there is actually ONE use I've seen that is valid:

      AI Scambaiters: O2 creates AI Granny to waste scammers' time [youtube.com]

      I've already seen jokes about scammers being Artificial Intelligence since Grandma dAIsy can expand its training. The scammers can't. /s

    • Re:I sincerely hope (Score:4, Interesting)

      by geekmux ( 1040042 ) on Friday November 29, 2024 @07:10PM (#64980247)

      The argument here essentially complains about OpenAI making (commercial) gain by ingesting the news from these sources.

      If that were a sound argument, we should shut down every popular YouTuber who does the same damn thing.

      You got a problem with a third party taking your output meant for mass consumption and regurgitating it for profit? Punish everything and everyone who does that, or maybe realize your argument is bullshit.

      Sniff Test: If Canada owned OpenAI, would they be suing?

      • So true. It's like the news publishers are saying "we want you to read our news but we don't want your computers to read our news".

        Can't have it both ways boys!

        To the news media: Just focus on getting back to the basic tenets of the Fourth Estate, instead of plastering by-lines on press releases and pitching them as "news", or publishing dross with click-bait headlines and calling it "journalism".

        There is only one industry that is responsible for the decline in the fortunes of the news media -- and it's the n

      • The "commercial gain" trope just makes the reasoning more understandable to most people, but it's actually irrelevant. If somebody copied the news and reproduced it on his own site for free (consider an independently wealthy individual who doesn't need to make any money), it would still be just as illegal due to copyright infringement.

        A copyright holder has unfettered discretion as to what constitutes an authorized copy. If they want to consider copying news for AI training unauthorized, that's their righ

  • there is another lawsuit! We are going to make everything a $billion lawsuit!
    • there is another lawsuit! We are going to make everything a $billion lawsuit!

      When you really think about it, lawyers are in marketing. Fucking top notch sales staff to keep growing year over year like that.

      When your industry takes 40% off the top, why not make everything a $$$billion lawsuit.

  • People are getting sick of the blatant way the news is lying and manipulating, so their future coffers seem to be at risk. They went after Facebook and got dropped, so now they are looking for somewhere else to shit.

  • by upuv ( 1201447 ) on Friday November 29, 2024 @08:09PM (#64980333) Journal

    Right now "AI" is at a stage where we can't cleanly draw a correlation between an original creative work and the derivative stew of AI output.

    If you are old enough to remember when DJing first really hit the scene, one of the biggest problems was crediting the original authors of the source works. Those source works were sampled, manipulated and mixed into a derivative result. We still see court cases today where so-and-so is suing someone else over derivative works, and in those cases only a handful - single-digit numbers - of source tracks were merged into a new output.

    AI, on the other hand, takes millions of sources at a time to make a derivative work (i.e. training). It's no longer even possible to list all the sources that contributed to a result, let alone say how much this one or that one contributed.

    So what we are seeing is litigation targeting not the outputs, but rather the massive soup before the output: the LLM. An LLM that hasn't produced anything is still fair game for litigation at the moment. So there are all sorts of things being targeted by litigation:
    - The uncredited use of sources.
    - The damage to businesses.
    - The monopolies that are inevitable as a result of the crushing volumes of content required for AI.
    - The dangerous potential for harm from AI, e.g. wrong medical recommendations, bad financial advice, outright lying and hallucinations.

    We are very much in the early days of this. People only move at a certain speed. In 10 years, the accepted legal stance on AI as it stands today will be in place. However, AI will have advanced so drastically by then that the damage will be done. Right now AI is 100% about greed; 0% has been put into altruism. Corporations have no mandate at all to improve average people's lives.

    Should the governments of the world demand that X% of AI computation resources be put towards solving actual problems and improving the standard of living for all? I think so. Obviously, certain measures would have to be put in place, essentially to ensure the effectiveness of that computation expenditure; otherwise corps will just direct AI at nonsense problems. A computational tax, if you like.

    This AI tax would then be used to effectively compensate those who have been marginalised by AI. Compensation in the form of new opportunities for those impacted, rather than money. Let people be creative again in new ways with AI assistance. Let AI help ensure that this is possible.

    Because it is very unlikely we can litigate our way to a new normal with AI consuming all human output without compensation.

  • by Baron_Yam ( 643147 ) on Friday November 29, 2024 @10:07PM (#64980453)

    It used to be that the Fourth Estate was the gatekeeper of public information. You waited until after dinner and listened to what the news anchor had to say on television, or you read what made it into the morning paper the next day.

    The people involved were somewhat, but not entirely, 'establishment'. In a Western democracy, they'd happily chase a good story that threatened the careers of a powerful person, but they generally bought in to Western values and social stability and that influenced what stories they chose to tell and how they told them.

    Today, though... journalistic ethics don't really exist. They publish what their billionaire owner lets them publish, and an awful lot of it is deliberate propaganda. What isn't is 'click-bait'. Either way, it's rarely what you need to know to be an informed citizen. The Internet has dropped the signal:noise ratio through the floor and precious few even care about the signal any longer.

    Preventing the rise of AI isn't about protecting honest journalism. It's about protecting the jobs of the people writing articles. The war for truth is lost. It got burned to the ground, and they're just fighting over who gets to keep working the ashes.

  • Are they aware that law is one of the domains where LLMs are having a significant impact? I wouldn't be surprised if an OpenAI robo-lawyer demolished the human lawyers, discredited the case and made fun of the organizations behind it.
