
Science Fiction and Fantasy Writers Take Aim At AI Freeloading (torrentfreak.com) 73

An anonymous reader quotes a report from TorrentFreak: Members of the Science Fiction and Fantasy Writers Association have no trouble envisioning an AI-centered future, but developments over the past year are reason for concern. The association takes offense when AI models exploit the generosity of science fiction writers, who share their work without DRM and free of charge. [...] Over the past few months, we have seen a variety of copyright lawsuits, many of which were filed by writers. These cases target ChatGPT's OpenAI but other platforms are targeted as well. A key allegation in these complaints is that the AI was trained using pirated books. For example, several authors have just filed an amended complaint against Meta, alleging that the company continued to train its AI on pirated books despite concerns from its own legal team. This clash between AI and copyright piqued the interest of the U.S. Copyright Office which launched an inquiry asking the public for input. With more than 10,000 responses, it is clear that the topic is close to the hearts of many people. It's impossible to summarize all opinions without AI assistance, but one submission stood out to us in particular; it encourages the free sharing of books while recommending that AI tools shouldn't be allowed to exploit this generosity for free.

The submission was filed by the Science Fiction and Fantasy Writers Association (SFWA), which represents over 2,500 published writers. The association is particularly concerned with the suggestion that its members' works can be used for AI training under a fair use exception. SFWA sides with many other rightsholders, concluding that pirated books shouldn't be used for AI training, adding that the same applies to books that are freely shared by many Science Fiction and Fantasy writers. [...] Many of the authors strongly believe that freely sharing stories is a good thing that enriches mankind, but that doesn't automatically mean that AI has the same privilege if the output is destined for commercial activities. The SFWA stresses that it doesn't take offense when AI tools use the works of its members for non-commercial purposes, such as research and scholarship. However, turning the data into a commercial tool goes too far.

AI freeloading will lead to unfair competition and cause harm to licensing markets, the writers warn. The developers of the AI tools have attempted to tone down these concerns but the SFWA is not convinced. [...] The writers want to protect their rights but they don't believe in the extremely restrictive position of some other copyright holders. They don't subscribe to the idea that people will no longer buy books because they can get the same information from an AI tool, for example. However, authors deserve some form of compensation. SFWA argues that all stakeholders should ultimately get together to come up with a plan that works for everyone. This means fair compensation and protection for authors, without making it financially unviable for AI to flourish.
"Questions of 'how' and 'when' and 'how much money' all come later; first and foremost the author must have the right to say how their work is used," their submission reads.

"So long as authors retain the right to say 'no' we believe that equitable solutions to the thorny problems of licensing, scale, and market harm can be found. But that right remains the cornerstone, and we insist upon it," SFWA concludes.
Comments Filter:
  • by ranton ( 36917 ) on Thursday December 14, 2023 @08:18AM (#64081073)

    They don't subscribe to the idea that people will no longer buy books because they can get the same information from an AI tool, for example. However, authors deserve some form of compensation.

    While I believe AI should be able to train on copyrighted content without permission from the copyright owner, this comment made me think about how the AI trainer gained access to the copyrighted material. If the AI trainer is using a copyrighted book, it seems reasonable to expect the AI trainer to pay for a copy of that book, just like a human reader would have to buy the book to read it. Then again, you can donate a book you bought to someone else to read, so once a book has been digitized into a training set, I also believe that training set should be able to be made public for the purpose of AI training (or other research purposes).

    If a training set was created with originally pirated content, I believe that is a problem where the courts should get involved. At least without new legislation to make this issue more clear.

    • Just like how a human reader would have to buy the book to read it.

      If you go to a library, you can read books for free.

      • by iAmWaySmarterThanYou ( 10095012 ) on Thursday December 14, 2023 @08:32AM (#64081087)

        But you can't copy it which is what these things are doing as we saw in the article 2 weeks ago where researchers were able to get an LLM to cough up original copyright content -and- PII in bulk from its training data.

        You can -read- all you want but you can't -use- that material in your own works without permission from the creators.

        • Re: (Score:2, Flamebait)

          by drinkypoo ( 153816 )

          You can -read- all you want but you can't -use- that material in your own works without permission from the creators.

          There's an explicit carve-out in the law specifically for this purpose, so it's not clear what you're complaining about. Copyright is not a natural right, in its current form it was created by man for the purposes of profit. The earliest known copyright law was in the ports of Alexandria, and while it did facilitate profit (as people would pay to visit the libraries) it also existed for the purpose of the preservation of knowledge, as if modern copyright law required the submission of your work to the libra

          • This isn't Ancient Greece.

            Please provide the legal reference to the carve-out in US copyright law you claim allows for-profit corporations to copy and use copyrighted material in their own projects in bulk for profit.

            • Why are you asking me when there are dozens or hundreds of articles on this subject that go over the whole thing?

              https://termly.io/resources/ar... [termly.io]

              What is it with you? I don't fucking get it. As someone who was active and creating dynamic websites before public web search engines existed I saw the potential right away and have been successfully finding things out with them ever since. What is your damage?

              • That is not a legal reference.

                What is it with me? As if you're the only person here who pre-dates the web?

                What is with me is you claimed there is a legal carve out. I asked for a reference. Should be easy to find. Still waiting.

                Maybe if you didn't say wrong things you wouldn't get challenged so often.

              • by whitroth ( 9367 )

                So, are you saying you were stealing stuff that was under copyright?

            • by cpt kangarooski ( 3773 ) on Thursday December 14, 2023 @11:35AM (#64081623) Homepage

              There's several, but the one relevant to this discussion is fair use, at 17 USC 107. While the for-profit nature of the use is a factor to be considered in the fair use analysis, that does not, by itself, indicate that the use is unfair.

              I would suggest taking a look at Authors Guild, Inc. v. Google, Inc., 804 F.3d 202 (2d Cir. 2015) for a similar project in which a for-profit corporation copied and used copyrighted materials in their own project, in bulk, for profit, and was held not to have infringed copyright in the process.

              Given that AI training is not some sort of magic compression that breaks Shannon's Law, the use of works to train software is, if anything, even more fair than Google Book Search, which does copy entire works and store them permanently, even if they are not fully made available to the user.
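
              (As a rough back-of-envelope illustration of that capacity argument - all of the numbers below are hypothetical round figures, not anything from the article or the case:)

                  # Can a model "store" its training set verbatim?
                  # All numbers are made-up round figures for illustration only.
                  params = 70e9              # assumed model size: 70B parameters
                  bits_per_param = 16        # assumed 16-bit weights
                  train_tokens = 10e12       # assumed training corpus: 10T tokens

                  model_bits = params * bits_per_param
                  bits_per_token = model_bits / train_tokens

                  print(f"Model capacity: {model_bits / 8 / 1e9:.0f} GB")
                  print(f"Capacity per training token: {bits_per_token:.2f} bits")
                  # English text needs on the order of 10+ bits per token to store
                  # losslessly, so at a fraction of a bit per training token the model
                  # cannot be holding verbatim copies of most of what it saw; only
                  # heavily repeated passages are candidates for memorization.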

        • by Rei ( 128717 )

          Did you actually read the paper, or just a headline blurb? Because I read the paper [slashdot.org]. It's not what it was hyped up to be.

          • In fairness, it's pretty hard to generate the volume of posts that he does if you have to actually stop and read something, then think about what to write.

        • by B0mb1tll ( 8539805 ) on Thursday December 14, 2023 @08:53AM (#64081137)

          In a sense you CAN use material you read in the libraries. One idea begets another, etc. So in a sense, we are always building upon the material others developed before us.

          The question is, are these LLMs synthesizing NEW material based on previous material?
          That isn't what the writers are claiming, if I read the article correctly. It seems the LLMs are taking whole portions of copyrighted material and essentially plagiarizing.

          I just wanted to point out the two possibilities: synthesizing new material (not what the article is talking about) and outright plagiarizing.

          Which of the two is happening the most?

          • Re: (Score:2, Flamebait)

            by DarkOx ( 621550 )

            I enjoy Sci-Fi and Fantasy novels.

            I will say that a lot of them are 'really unremarkable' to the point that you might not be able to distinguish the plot in one from another, if you change the names of the characters and places around.

            While many of them make fine 'airplane books', much of the genre (and it's true of other genres as well) is very derivative. While there is no way that the current crop of LLMs could generate something that would be even halfway decent from a single prompt, there is a good ch

            • by Geoffrey.landis ( 926948 ) on Thursday December 14, 2023 @10:26AM (#64081347) Homepage

              I will say that a lot of them are 'really unremarkable' to the point that you might not be able to distinguish the plot in one from another, if you change the names of the characters and places around.

              I'd say you're reading the wrong books. In response to a critic stating "90% of science fiction is junk," Theodore Sturgeon replied, "90% of everything is junk." (*). This is now known as Sturgeon's law: Ninety percent of everything is junk.

              --
              * footnote: in later conversations, Sturgeon said that the actual word used by the critic was not "junk", but a different word that was not allowed to be printed in the early 1960s, a four-letter word for fecal matter.

            • by whitroth ( 9367 )

              Bullshit. You're just making excuses for theft. If you want to read rehashed crap, printed - I won't say written - by a chatbot, you're obviously not reading anything that challenges you, or is more than eye candy.

              Oh, and yes, I'm a writer, and a member of SFWA.

          • by AmiMoJo ( 196126 )

            There was recently a little plagiarism scandal on YouTube, where it turned out that some creators were just reading bits of articles and blogs that they googled. The occasional paragraph from a book made it in too.

            They did a bit of rewording sometimes, but it was still quite obvious what the source was.

            There is a big difference between learning and creating an original work, and rephrasing what you read. AI seems to mostly do the latter. Ripping off two opposing opinions doesn't make it any better.

        • Humans too can copy those things in their memory... surely you can provide verbatim quotes from books or poems.

        • But you can't copy it which is what these things are doing as we saw in the article 2 weeks ago where researchers were able to get an LLM to cough up original copyright content -and- PII in bulk from its training data.

          I'll believe you when Google's search engine is shut down for copyright infringement.

          You can -read- all you want but you can't -use- that material in your own works without permission from the creators.

          What copyright law requires permission for is performing/producing derivatives of fixed works. Copyright does not extend to the underlying information that comprises a work, nor to the production of transformative works. You can't copyright facts, ideas, concepts, styles, etc., any more than a phone book can copyright the phone numbers in its pages.

          • These LLMs have already been proven to consume and spit out full verbatim copies of copyrighted works not just concepts based on the consumed materials.

            • These LLMs have already been proven to consume and spit out full verbatim copies of copyrighted works not just concepts based on the consumed materials.

              I suspect everyone agrees LLMs are trained in part on copyrighted works and that they are entirely capable of recalling portions of, or entire, works. I certainly believe this to be true because I've witnessed it with my own eyes. LLMs are able to recite well-known copyrighted lyrics just as easily as they are able to recite the US Constitution.

              Lesser-known works they can perhaps paraphrase or give you the gist of, yet most likely they would not be able to recall them with great fidelity and are more likely to fill in the gaps wit

          • by tlhIngan ( 30335 ) <slashdot AT worf DOT net> on Thursday December 14, 2023 @03:06PM (#64082229)

            But you can't copy it which is what these things are doing as we saw in the article 2 weeks ago where researchers were able to get an LLM to cough up original copyright content -and- PII in bulk from its training data.

            I'll believe you when Google's search engine is shut down for copyright infringement.

            Except what Google does is substantially transformative - you don't use Google to access copyrighted works. You use Google to help you find copyrighted works. Google then dutifully points you to where the copyrighted work exists, showing you a snippet to help you identify if it's potentially relevant.

            Now, the fact that Google has a cached copy of the page you can view is more tenuous, but it is generally accepted, as Google needs it for internal purposes and doesn't claim it as an original work of Google's.

            I mean, the SF writers do have valid concerns. If we change "writers" to "programmers" and "books" to "software", then we have a similar situation.

            There are AIs that can browse through free code repositories - after all, the whole F/OSS movement makes lots of source code readily available to be "read" by an AI. And a lot of those AIs can spit out new source code - you can ask ChatGPT to help you write some simple Arduino code, or C code, or Commodore 64 BASIC code, or whatever.

            The question becomes - what is the copyright status of that code that was generated? And even more perplexing, what is its license?

            Would an LLM be a way to end-run around licenses like the GPL - if I have an LLM ingest Linux and other GPL code, and have it spit out code for a driver, did I just obtain the driver for free?

            And before you answer that - realize that if the output of the LLM is not copyrightable, I don't care what the license is - because if copyright law doesn't apply to the output, the GPL becomes toothless. The GPL requires copyright law. It works especially well with "all rights reserved". That's why it's "copyleft" - it offers a software license that is an alternative to standard copyright. So without copyright protection, users of the AI generated code are no longer bound by the GPL.

            So the same concerns apply - could Microsoft ingest the source code to Ubuntu and then use it in Windows?

            And remember, if you say copyright law applies to the AI output because of the input, then the same goes for written works, art and other works that AI models are generating - they too are copyrighted by the original copyright holders.

            But if they aren't, then hey, free workaround for GPL.

            You actually can't have it both ways.

            • Except what Google does is substantially transformative - you don't use Google to access copyrighted works. You use Google to help you find copyrighted works.

              I sometimes use Google to access copyrighted works via the Google cache because the source site is being blocked by various filters or the data has changed. Sometimes all I am after is the little blurb responsive to keywords that appears in the link and I never click it. Many times I can't find the text that caused me to click in the document itself. I use the Internet archive for the same reasons to retrieve copyrighted information that has changed or typically been withdrawn.

              I wouldn't use an LLM to gi

        • by RobinH ( 124750 )
          Did anyone ever determine if the PII generated by the LLM was actually true personally identifiable information, or just something that it generated that looked like PII?
        • by Joviex ( 976416 )

          But you can't copy it which is what these things are doing as we saw in the article 2 weeks ago where researchers were able to get an LLM to cough up original copyright content -and- PII in bulk from its training data.

          One example of a crappy LLM and you think all AI engines are copy-pasta. First, even the crappy ones aren't copy-pasta. Please educate yourself.

        • The article 2 weeks ago said researchers could get it to recite the source material; however, just because you can get it to state SOME of the source material verbatim doesn't mean that is any different from how people read books. I am sure most people can sing song lyrics or recite poems word for word if they have read them enough times.

          The way I see it, if it stores a statistical model, then if you get it to repeat something enough times, the probability that the output will match what is stored gets quite high. Just like if you get enou
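
          As a toy illustration of that statistical-model point (Python, word-level bigram counts, made-up example text): a line duplicated many times in the "training data" dominates every transition, so greedy sampling recites it verbatim, while the unique lines are effectively lost.

            # Toy bigram "language model": count word-to-word transitions,
            # then generate greedily by always picking the likeliest next word.
            from collections import Counter, defaultdict

            corpus = (
                ["call me ishmael some years ago never mind how long"] * 50  # duplicated line
                + ["call me tomorrow about the meeting",
                   "call me when the report is ready"]                       # unique lines
            )

            bigrams = defaultdict(Counter)
            for line in corpus:
                words = line.split()
                for a, b in zip(words, words[1:]):
                    bigrams[a][b] += 1

            def greedy(start, length=9):
                out = [start]
                for _ in range(length):
                    nxt = bigrams[out[-1]]
                    if not nxt:
                        break
                    out.append(nxt.most_common(1)[0][0])  # likeliest continuation
                return " ".join(out)

            print(greedy("call"))
            # -> "call me ishmael some years ago never mind how long"
            # The 50x-duplicated line wins every transition and is recited word
            # for word; the two unique lines cannot be recovered this way.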

      • The rules are different in different countries, but libraries pay authors, sometimes by the number of loans, sometimes just for the purchase. And libraries are often financed with public funds.

        I sure hope that I will be able to read new original fiction written by creative humans in the future.
      • by whitroth ( 9367 )

        Yes, asshole, but you can't copy it or use it. They're taking it, without paying for it, and copying it - I mean, what do you think "training" is?

    • Re: (Score:2, Flamebait)

      by Rei ( 128717 )

      They use web crawls, just like Google.
      Automated processing of the data from those web crawls to provide new transformative products and services is legal, just like it is for Google.
      Which is why these anti-AI suits keep failing.

      Also, this whole "authors deserve to be compensated" thing is based on some misconceptions. First off, that you can actually trace back a generation to a specific individual (unless it's like some major thing that's all over the internet and the user specifically asked about that thi

      • by whitroth ( 9367 )

        Wrong. You seem to have missed where they used a data dump of *pirated* works of thousands of authors. That makes it receiving stolen goods, on top of copyright violation.

        • by Rei ( 128717 )

          You seem to have missed where it's perfectly legal for them to possess copyrighted works for the purpose of automatic processing to create new products and services, as has been the case for ages, and is a key part of how many major internet companies function. "Pirated" requires the notion of copyright being violated, but copyright does not protect against automated processing.

          • by whitroth ( 9367 )

            What part of "pirated" do you not understand? These were NOT posted to the Web, they were ebooks stolen and posted in a data dump.

            • by Rei ( 128717 )

              What part of "pirated" do you not understand?

              The part where you think it applies to something that is perfectly legal for you to download and possess.

    • Should any university student be found to have used a book without legally purchasing it or borrowing it from the library, any credits, certifications, diplomas or degrees given should be withdrawn, and the student should hence be barred from the industry in which they utilize said skills.

    • They don't subscribe to the idea that people will no longer buy books because they can get the same information from an AI tool, for example. However, authors deserve some form of compensation.

      While I believe AI should be able to train on copyrighted content without permission from the copyright owner, this comment made me think about how the AI trainer gained access to the copyrighted material.

      Just to be clear, the article we're discussing [torrentfreak.com] is about AIs being trained on stories and novels that the authors have put on the internet for human readers to read for free.

      There are a number of SF writers (and at least one major SF publisher) who hold as an article of faith that putting their material on the web for free is not only a social good, but helps their material be seen, and hence is in the long run good for their visibility and sales. The article is complaining that the AI trainers are abusing t

      • by whitroth ( 9367 )

        Geoffrey, you know perfectly well that's not the case. They trained them on the pirated data dump.

        Yes, I'm a SFWA member also.

        • Geoffrey, you know perfectly well that's not the case. They trained them on the pirated data dump.

          Both.

          From the article [torrentfreak.com] in question:

          Many of the authors strongly believe that freely sharing stories is a good thing that enriches mankind, but that doesn’t automatically mean that AI has the same privilege if the output is destined for commercial activities. ...
          “The current content-scraping regime preys on that good-faith sharing of art as a connection between human minds and the hard work of building a common culture. The decision to publish creative work online to read and share for free ... is

          • by whitroth ( 9367 )

            Yes, but stories published online are still under copyright.

            One novel out - 11,000 Years, and my next, Becoming Terran, is due early next year (they're just sending out the ARCs). All straight sf, not mil-sf.

  • ... on AI and writing. Hennen is one of Germany's leading fantasy authors and roughly at rank 50 globally. I bumped into him at a fantasy con a few months back and had met him occasionally in the decades prior. I also know one of his editors, who is a dance partner of mine.

    We had adjacent rooms at the hotel near the fantasy con we both attended and got stuck in the hallway talking and nerding out for 90 minutes.

    He's been a prolific and successful writer for a few decades and is mentally preparing for AI to eventually take over and he being the one prompting it and then proofreading the output. He's got enough bestseller material for a bot to train on and copy his style of writing, so that part would be covered.

    We both agreed that making a living off writing could soon be a thing of the past, even for a considerable portion of the very few authors who can actually live off their trade.

  • by WaffleMonster ( 969671 ) on Thursday December 14, 2023 @08:56AM (#64081149)

    "first and foremost the author must have the right to say how their work is used"

    No you don't. You don't have the right to tell anyone what they can or can't do with the information learned from your works. Copyright is an exclusive grant of authority over production / performances of fixed works and their derivatives. Nothing more.

    • by nightflameauto ( 6607976 ) on Thursday December 14, 2023 @09:04AM (#64081171)

      "first and foremost the author must have the right to say how their work is used"

      No you don't. You don't have the right to tell anyone what they can or can't do with the information learned from your works. Copyright is an exclusive grant of authority over production / performances of fixed works and their derivatives. Nothing more.

      This is one of those things about being a creator that a lot of creators don't seem to understand. You create, and however you go about distribution - whether it be free downloads from a website, sold downloads, sold books, sold paintings, etc. - that's the end of your control over the content of your creation. It can be sold again by the original purchaser. It can be bought by anyone from any legal outlet. It's out there. It's outside your control.

      People scared that AI is going to take your ideas and build on them? What about other authors? I know damn good and well that I incorporate ideas from all the books I've read when I write, along with all the TV and movies I've watched. Why should AI not be allowed this avenue of "learning" so long as they do so legally? If you offer free content? AI should be allowed to "read" it. If you offer it for sale? The developers should have to purchase a copy to allow the AI to read it. So long as the AI doesn't copy-pasta (or copy-pastay, as my coworker says), there's no true copyright infringement.

      I do think taking non-publicly-available content for training material should be punished / data removed, but if it's out there? It's out there for everyone and everything.

      • You can also hoard other people's work in a database and claim it as your own, then call it constitutional privilege when someone questions your claims... It used to work that way, at least.
      • "That's the end of your control over the content of your creation. It can be sold again by the original purchaser. It can be bought by anyone from any legal outlet. It's out there."

        -1, strawman.

        Sure, you can't control what they do with the original physical instance that they purchased. But that is not what is being claimed.

        You can even make duplicates. But that is not what is being claimed.

        The claim is that duplicates are not just made but distributed. While this claim may not in the end be proven, it is a claim where, if proven, the rights holder would have control over what the purchaser does.

        • "That's the end of your control over the content of your creation. It can be sold again by the original purchaser. It can be bought by anyone from any legal outlet. It's out there."

          -1, strawman.

          Sure, you can't control what they do with the original physical instance that they purchased. But that is not what is being claimed.

          You can even make duplicates. But that is not what is being claimed.

          The claim is that duplicates are not just made but distributed. While this claim may not in the end be proven, it is a claim where, if proven, the rights holder would have control over what the purchaser does.

          Sure, but that's covered by normal copyright laws and isn't, at all, something that needs a special-circumstance law to cover. If AI is shown to be copy-pasting a lot of text? Yeah, that's a problem. If they can prove it, they'd have a real case. But the concept that "they might copy us maybe" is already out there for humans, and is covered by the laws that exist.

    • by MobyDisk ( 75490 )

      A thorny issue is how to handle when the AI reproduces copyrighted content directly, or generates derived work from the content. At that point, is it distribution, and thus copyright infringement? Does the AI need to be trained on fair use, so that it only reproduces small sections?

      • Does the AI need to be trained on fair use, so that it only reproduces small sections?

        If it's "simply" not overtrained then it can only reproduce small sections. But the quotes are around "simply", of course, because it's not simple to avoid that. A training set that is not carefully curated is going to contain a bunch of duplicates. Perhaps AI could be used to solve that problem too, though: characterizing content, storing a structured and scored description, then comparing descriptions to determine which works to check for duplication.

        The goal is not to reproduce existing copyrighted works as they can
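
        A minimal sketch of that "characterize and compare" idea, assuming a simple shingle-plus-Jaccard fingerprint (the 5-word shingles and the 0.8 threshold are arbitrary choices; real training pipelines typically use MinHash/LSH to make this scale):

          # Flag near-duplicate documents before training by comparing
          # fingerprints built from overlapping 5-word "shingles".
          from itertools import combinations

          def shingles(text, n=5):
              words = text.lower().split()
              return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

          def jaccard(a, b):
              return len(a & b) / len(a | b) if a | b else 0.0

          def near_duplicates(docs, threshold=0.8):
              sigs = {name: shingles(text) for name, text in docs.items()}
              pairs = []
              for x, y in combinations(sigs, 2):
                  score = jaccard(sigs[x], sigs[y])
                  if score >= threshold:
                      pairs.append((x, y, round(score, 2)))
              return pairs

          docs = {
              "a": "the quick brown fox jumps over the lazy dog near the river bank",
              "b": "the quick brown fox jumps over the lazy dog near the river bend",
              "c": "an entirely different passage about something else altogether here",
          }
          print(near_duplicates(docs))  # [('a', 'b', 0.8)] - flag for deduplication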

  • by Rei ( 128717 ) on Thursday December 14, 2023 @09:05AM (#64081175) Homepage

    ... to the Luddites. I mean, I know 'Luddite' today is mainly just used as a generic insult for technophobes, but I mean in the literal sense, the early 18th century textile workers named after the fictional wrecker "Ned Ludd".

    They spent their entire lives honing their craft, creating their own styles, their "art". Then their managers started buying power looms, using them to produce their workers' styles (of a quality that the workers considered inferior) en masse, and laying off the excess workers, only needing people to tend the machines.

    They were FURIOUS. They thought it both grossly immoral and illegal and that the managers were destroying the economy and countless people's lives for their own benefit. They broke windows. They burned down buildings. They sent death threats. They carried out actual attacks. It was practically an insurrection.

    But in the end, the industrial revolution was an unambiguously good thing by pretty much every metric. Poverty plunged. Hunger plunged. Homelessness plunged. Mean hours worked per week, previously rising, started dropping. The jobs freed up led to a surge of people working in the arts, science, medicine, etc., and led to explosions of advancement in those fields. Despite popular misconceptions today, quality of life unambiguously surged forward thanks to the industrial revolution.

    • Both things were true, in that a lot of those people went hungry, and the industrial revolution has released a lot of pollution which has killed orders of magnitude more people, and (oh yeah, by the way) has also caused AGW which is working on killing still more orders of magnitude more people.

      The message of the Luddites, which from your post you clearly failed to understand, was that technology in service to a few will destroy us. They were right. It is in the process of doing so. Technology must be employed in the service of The People, not a few rich people who don't care if the world burns (or actively want it to because they want to meet Jesus.)

      You can't just say this process has enhanced the quality of our lives without also looking at the way it's detracted from the same, without being an empty-headed cheerleader for progress. Progress is not only inevitable but even desirable, but progress for the sake of profit destroys human lives and livelihoods.

      Which is all to say that the authors are both right and wrong. In a world in which your value to society is measured by the number of dollars you collect, and the people who collect the fewest don't have their basic needs met they're right to be worried about their income stream. And this, once again, was the message of the Luddites that you're not hearing.

      • Both things were true, in that a lot of those people went hungry, and the industrial revolution has released a lot of pollution which has killed orders of magnitude more people, and (oh yeah, by the way) has also caused AGW which is working on killing still more orders of magnitude more people.

        The message of the Luddites, which from your post you clearly failed to understand, was that technology in service to a few will destroy us. They were right. It is in the process of doing so. Technology must be employed in the service of The People, not a few rich people who don't care if the world burns (or actively want it to because they want to meet Jesus.)

        You can't just say this process has enhanced the quality of our lives without also looking at the way it's detracted from the same, without being an empty-headed cheerleader for progress. Progress is not only inevitable but even desirable, but progress for the sake of profit destroys human lives and livelihoods.

        Which is all to say that the authors are both right and wrong. In a world in which your value to society is measured by the number of dollars you collect, and the people who collect the fewest don't have their basic needs met they're right to be worried about their income stream. And this, once again, was the message of the Luddites that you're not hearing.

        I think this is a hurdle that humanity either has to figure out how to cross, or we will forever be stuck in these cycles of upheaval that, when viewed from the future may be seen as a good thing, but at the time caused poverty, death, illness, and mass psychological trauma among all but a few. A few who massively benefit from it at the same moment in time. Our focus on greed first, profit above all, is not a positive way to move humanity forward. Yes, a few ultra-elite get to live much, MUCH better lives,

    • by Rei ( 128717 )

      ** early 1800s, not 18th century.

    • by whitroth ( 9367 )

      "Mean hours worked per week started dropping"? How are you coming up with crap like that? Until unions *forced* them, working hours in factories and mills and mines were often 16 hours/day.

  • What do you think is going to happen? Nothing!
  • This is the creative equivalent of taking the blueprints for the latest iPhone and making a very similar set of competing products. Try infringing Apple's IP and you'll be sued out of existence. Try that on with schematics that you manage to obtain and you'll probably end up with a lengthy prison sentence. With engineering projects it's always been easy to copy IP because it has to be precisely documented, so legal protection is very well established. Now AI has made it possible to effectively document and
  • I feel like this concern is overblown. Writers are scared of AI taking away their source of income and jobs. The natural response is to try to interfere with the progress of AI via lawsuits, etc.

    But this is the "white collar" equivalent of the idea of the factory workers rioting and getting baseball bats out to smash up the robots.

    I mean, yes, the authors have a legal leg to stand on if they find the developers of the AI systems used a pirated copy of one of their books to train their systems. But we all kn

    • by Tailhook ( 98486 )

      To date, AI has been awful at mastering ANY of the arts

      You can find AI generated images appearing in media right now.

      DALL-E et al. are going to wipe workaday "graphics artists" right out. And fast.

  • Just build a website that crawls the web searching for sentences you've written everywhere. Find enough in one work, sue for copyright violation. Enough of those and you can make bank off people stealing from you.
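
    (A very rough sketch of that idea, assuming exact sentence matching is enough; the URL and filename are placeholders, and crawling, politeness rules, and fuzzy matching are all left out:)

      # Check a fetched page for verbatim sentences from your own work.
      import re
      import urllib.request

      def sentences(text):
          # naive sentence split; keep only sentences long enough to be distinctive
          return {s.strip().lower() for s in re.split(r"[.!?]+", text) if len(s.split()) >= 6}

      def matches_on_page(url, my_sentences):
          html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
          page_text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
          return [s for s in my_sentences if s in page_text]

      my_work = open("my_novel.txt", encoding="utf-8").read()  # hypothetical input file
      hits = matches_on_page("https://example.com/suspect-page", sentences(my_work))
      print(f"{len(hits)} sentences from my work found on that page")
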
  • It's impossible to summarize all opinions without AI assistance.

    Impossible! NO possibility. Not with all the time and people and money. Simply undoable without AI. Got it? Now steep in your new victimhood of impossibility, loser.

  • If readers don't care whether their awesome story that costs less to buy came from a human or an AI, then it will be hard for anyone to listen to these voices. Micah
