
How An AI-Written 'Star Wars' Story Created Chaos at Gizmodo (msn.com) 91

G/O Media is the owner of top sites like Gizmodo, Kotaku, Quartz, and the Onion. Last month they announced "modest tests" of AI-generated content on their sites — and it didn't go over well within the company, reports the Washington Post.

Soon the Deputy Editor of Gizmodo's science fiction section io9 was flagging 18 "concerns, corrections and comments" about an AI-generated story by "Gizmodo Bot" on the chronological order of Star Wars movies and TV shows. "I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with," James Whitbrook told the Post in an interview. "If these AI [chatbots] can't even do something as basic as put a Star Wars movie in order one after the other, I don't think you can trust it to [report] any kind of accurate information." The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable... Merrill Brown, the editorial director of G/O Media, wrote that because G/O Media owns several sites that cover technology, it has a responsibility to "do all we can to develop AI initiatives relatively early in the evolution of the technology." "These features aren't replacing work currently being done by writers and editors," Brown said in announcing to staffers that the company would roll out a trial to test "our editorial and technological thinking about use of AI."

"There will be errors, and they'll be corrected as swiftly as possible," he promised... In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is "eager to thoughtfully gather and act on feedback..." The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation...

Earlier this week, Lea Goldman, the deputy editorial director at G/O Media, notified employees on Slack that the company had "commenced limited testing" of AI-generated stories on four of its sites, including A.V. Club, Deadspin, Gizmodo and The Takeout, according to messages The Post viewed... Employees quickly messaged back with concern and skepticism. "None of our job descriptions include editing or reviewing AI-produced content," one employee said. "If you wanted an article on the order of the Star Wars movies you ... could've just asked," said another. "AI is a solution looking for a problem," a worker said. "We have talented writers who know what we're doing. So effectively all you're doing is wasting everyone's time."

The Post spotted four AI-generated stories on the company's sites, including io9, Deadspin, and its food site The Takeout.

At least two of those four stories had to be corrected after publication.
  • by JBMcB ( 73720 ) on Sunday July 09, 2023 @09:56PM (#63672639)

    For a website that deals with technology, they don't seem to understand technology.

    Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out. It's literally a statistical model choosing words that seem to go together nicely. Sometimes those words are correct. Sometimes they aren't. All the model knows is that certain sets of words are more likely to be related to Star Wars than Star Trek, or Star Search, or star fish. Unless you are checking where it's grabbing stuff from in the debug log, you have no idea where it's getting its information from.
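    That word-by-word sampling can be sketched in a few lines of Python (a toy illustration with a made-up vocabulary and made-up probabilities, not any real model's code):

```python
import random

# Toy next-token distribution: the "model" only knows that certain
# words are statistically likely to follow "Star" -- it has no idea
# what any of them mean.
next_word_probs = {
    "Wars": 0.5,
    "Trek": 0.3,
    "Search": 0.1,
    "fish": 0.1,
}

def sample_next_word(probs):
    """Pick the next word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print("Star", sample_next_word(next_word_probs))
```

    Sometimes the sampled word is the "right" one, sometimes it isn't; nothing in the loop checks facts, only frequencies.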

    • by 93 Escort Wagon ( 326346 ) on Sunday July 09, 2023 @10:07PM (#63672661)

      For a website that deals with technology, they don't seem to understand technology.

      While the site's writers occasionally demonstrate less understanding of technology than I'd like to see, it's the people making these decisions who demonstrably have no understanding of the technology.

    • Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out.

      An understanding can be faked. The issue here is a question of new ideas vs rehashing of something that is understood by someone else. This latter case should be possible for an AI-based bot and also makes up a good 90% of shit on the internet today.

      You can: a) List all the Star Wars movies. Understand chronological means to sort by order of the story rather than release date. Apply your thought and knowledge of the plot lines, ... or
      b) Use your language model trained on a website written by someone with th

      • Understanding (Score:5, Insightful)

        by JBMcB ( 73720 ) on Monday July 10, 2023 @07:17AM (#63673315)

        An understanding can be faked.

        Maybe, but large text models aren't even faking. They simply don't understand. If a large text model got the chronology of Star Wars right, it's because somewhere in its text model there is a correct chronology of the movies that somebody wrote. There might be a wrong one in there, too. You might randomly get one or the other.

        A couple of weeks ago I asked ChatGPT how many baby elephants could fit on an average-sized adult human's lap. It said it wasn't sure, as elephants and humans come in different sizes. While that's a true statement, it's the wrong answer. If you take a four-year-old child to the zoo and show them a baby elephant, then point to someone sitting on a bench and ask them how many would fit on their lap, they'd say none. The reason GPT didn't understand is that nobody has written down that information and put it in a book or on the internet, because nobody *needs* to write it down. It can be deduced by a young child. However, GPT does not understand. The only reason it understands size at all is that it was primed with a human-generated ontology of concepts and keywords, so it understands you are asking about a common property of two physical objects.

        • so it understands you are asking about a common property of two physical objects.

          It *knows* about the concept. It doesn't understand it.

        • Maybe, but large text models aren't even faking. They simply don't understand. If a large text model got the chronology of Star Wars right, it's because somewhere in it's text model there is a correct chronology of the movies that somebody wrote. There might be a wrong one in there, too. You might randomly get one or the other.

          That was exactly my point. But the reality is that if you can weed out the false one (and in many cases even if you can't), LLMs are actually quite useful in faking this understanding. Much of what we commit to text (or image) is nothing more than something else regurgitated or repackaged. It stands to reason that an LLM very much could write a story on the chronology of Star Wars, and then there's the question of statistics and cost. Is it cheaper to pay people to manually do everything and get it right 99% of the

    • Dude, you just don't understand AI. It's going to replace lawyers and programmers. I read a lot of Twitter and that makes me an expert on AI.
    • If the product of AI is only faintly less good than the shit spewed by their cadre of barely-educated (or, ironically, overeducated) writers such that the correction of such text to their minimal publishing standards is in total less effort than having all the humans in the first place, well, I hope those humans' resumes are up to date.

    • Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out.

      Just don't know how to square this with reality. I run models feeding them a bunch of unique knowledge and instructions in plain English, the computer fans spin up .. some time goes by and out comes the reply. The answer may not always be right yet given degrees of freedom involved I find "absolutely no understanding" to be akin to asserting the earth is flat. Something somewhere had to have applied enough knowledge across a number of domains to be able to answer my unique questions and form a coherent E

  • Show me the AI request and I'll show you where you messed up...
    • by youn ( 1516637 )

      Sure, you can always provide a better prompt, but at this point a lot of the content generated is hit or miss, particularly as the prompts get large and complex.

      In my experience, even when you prompt for a specific thing, explicitly specifying a must-have parameter, it will sometimes randomly ignore everything and generate something completely unrelated. It can provide good starting points, but if you are looking for accuracy, the technology is not there yet.

      At the end of the day, AI will generate very convin

    • by Calydor ( 739835 ) on Monday July 10, 2023 @01:49AM (#63672905)

      Is it better to spend two hours fabricating the perfect prompt for the AI to write what you want it to, or to spend two hours just writing it yourself?

      • Is it better to spend two hours fabricating the perfect prompt for the AI to write what you want it to, or to spend two hours just writing it yourself?

        You're begging the question. It does not take two hours to generate the perfect prompt. In many cases that prompt is generated in minutes. The final question then is: does it take you 1 hour and 50 minutes to check and publish the result or not. If the answer is no, you've won using AI.

        The wife is currently doing this for exams. Rather than sitting staring blankly for 10 minutes trying to come up with a creatively worded exam question she just smashes a prompt into ChatGPT "write a creative exam question to

    • by narcc ( 412956 ) on Monday July 10, 2023 @02:42AM (#63672959) Journal

      Show me the AI request and I'll show you where you messed up...

      Is it where they decided to use an LLM for anything other than personal entertainment?

    • by XXongo ( 3986865 )

      Show me the AI request and I'll show you where you messed up...

      You have to stir the pile. https://xkcd.com/1838/ [xkcd.com]

  • by sound+vision ( 884283 ) on Sunday July 09, 2023 @10:04PM (#63672659) Journal

    In the AI's defense, the Star Wars movies are a mess in release order. I considered myself a fan, having seen the first 6 or 7 of them and completed a few of the games, like Knights of the Old Republic.

    Last one I saw was one of the factory-farmed Disney-era movies in 2019. I thought I knew, going in, where it fit chronologically in the series. Yet I was totally lost; it turns out there was another movie or two that snuck in somewhere before what I thought was the last one, chronologically in story-time. (I know there were like 10 that snuck in between in terms of real-world release order, as in, in the Year of our Lord, or the Common Era.)

    I think the only people who could make sense of it are people who are literally paid, as their job, to track these movies continually every month of the year. Like a Gizmodo writer. Or possibly those weirdos who paid $8,000 to stay in Disney's Star Wars hotel.

  • Someone discovered that generative AI makes stuff up? Is this news?
  • > list the star wars movies in chronological order

    Here is the list of Star Wars movies in chronological order:

    Episode I: The Phantom Menace (1999)
    Episode II: Attack of the Clones (2002)
    Episode III: Revenge of the Sith (2005)
    Episode IV: A New Hope (1977)
    Episode V: The Empire Strikes Back (1980)
    Episode VI: Return of the Jedi (1983)
    Episode VII: The Force Awakens (2015)
    Episode VIII: The Last Jedi (2017)
    Episode IX: The Rise of Skywalker (2019)
    Rogue One: A Star Wars Story (2016)
    Solo: A Star Wars Story (2018)
    Pl

    • My ordering:

      Episode I: The Phantom Menace (1999)
      Episode II: Attack of the Clones (2002)
      Episode III: Revenge of the Sith (2005)
      Solo: A Star Wars Story (2018)
      Rogue One: A Star Wars Story (2016)
      Episode IV: A New Hope (1977)
      Episode V: The Empire Strikes Back (1980)
      Episode VI: Return of the Jedi (1983)
      Episode VII: The Force Awakens (2015)
      Episode VIII: The Last Jedi (2017)
      Episode IX: The Rise of Skywalker (2019)
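      The release-order vs. story-order distinction is easy to make concrete in a few lines of Python (the in-universe ordering keys below follow the fan ordering above, not any official canon):

```python
# Each entry: (title, release_year, in_universe_order).
# The in_universe_order numbers are one fan's convention, not canon.
movies = [
    ("Episode IV: A New Hope", 1977, 6),
    ("Episode V: The Empire Strikes Back", 1980, 7),
    ("Episode VI: Return of the Jedi", 1983, 8),
    ("Episode I: The Phantom Menace", 1999, 1),
    ("Episode II: Attack of the Clones", 2002, 2),
    ("Episode III: Revenge of the Sith", 2005, 3),
    ("Episode VII: The Force Awakens", 2015, 9),
    ("Rogue One: A Star Wars Story", 2016, 5),
    ("Episode VIII: The Last Jedi", 2017, 10),
    ("Solo: A Star Wars Story", 2018, 4),
    ("Episode IX: The Rise of Skywalker", 2019, 11),
]

# Same data, two different sort keys: release year vs. story position.
by_release = sorted(movies, key=lambda m: m[1])
by_story = sorted(movies, key=lambda m: m[2])

for title, year, _ in by_story:
    print(f"{title} ({year})")
```

      Sorting on the release year reproduces the bot's list; sorting on the story-position key reproduces the chronological list above.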

    • Shouldn't it be:
      Star Wars (1977)
      Episode V: The Empire Strikes Back (1980)
      Episode IV: A New Hope (1981) - first of a long list of edits, I suppose
      Episode VI: Return of the Jedi (1983)
      Episode I: The Phantom Menace (1999)
      Episode II: Attack of the Clones (2002)
      Episode III: Revenge of the Sith (2005)
      Episode VII: The Force Awakens (2015)
      Rogue One: A Star Wars Story (2016)
      Episode VIII: The Last Jedi (2017)
      Solo: A Star Wars Story (2018)
      Episode IX: The Rise of Skywalker (2019)

      • If you count Episode IV from 1981, then I think you should count all the Special Editions, like the 1997 version that featured important content changes.

      • I would go something like this for current audiences (with a lot of time to burn, and also this is Slashdot, so let's have a good ol' fashioned Star Wars rant)

        Rogue One (this is mostly because if they aren't familiar with star wars, you should start them on something modern, otherwise watch whenever)
        Episode IV (also this transition between a 2016 and 1977 film is funny)
        Episode V
        Episode I
        Parts of Episode II (I just find this one hard to sit through... sorry)
        Episode III
        Solo
        Episode VI (end on a high note)

        Skip t

  • by mkwan ( 2589113 ) on Sunday July 09, 2023 @11:59PM (#63672793)

    It's the editor's job to correct mistakes in submitted stories. The question is whether the cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.

    But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?

    • It's the editor's job to correct mistakes in submitted stories.

      There are mistakes, and then there's fundamental failure of comprehension in the underlying article. An editor's job is not to go and re-research everything the writer did and confirm correctness of fundamental pieces of information. An editor's job is to check the article flows coherently, is readable, and follows the basic rules for spelling and grammar.

    • by PJ6 ( 1151747 )

      It's the editor's job to correct mistakes in submitted stories. The question is whether cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.

      But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?

      Look at the problems with the editors here, you have to wonder if they're paid anything at all.

    • But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?

      Future editors will be AIs too.

    • It's the editor's job to correct mistakes in submitted stories. The question is whether cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.

      But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?

      Editors edit, true. But there's a MASSIVE difference between getting a decent, factual story to go through and line-edit, and getting a word-salad mess of failure that you're better off re-writing from the ground up, which is pretty much what these AI text generators are doing with any subject that needs to have actual, real-life, factual information integrated into the prose.

    • "It's the editor's job to correct mistakes in submitted stories."

      But when the editor has to correct so many mistakes that he's basically rewriting the story, it's time to wonder if you hired the right writer.

  • by dromgodis ( 4533247 ) on Monday July 10, 2023 @03:01AM (#63672977)

    The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation...

    This is what news reporting is turning into. Emoji stats.

    Yay!

    • This sort of thing gives insight into what Gizmodo is.

      Trash, mostly.

    • This is what news reporting is turning into. Emoji stats.

      No, this is what life is turning into. The news is just reporting what actually happened in a Slack conversation. Forget complaining about the news, the question you need to be asking is why the fuck we are in a professional workplace using an animated fucking poop emoji to communicate.

  • Isn't this stuff the holy grail? Imagine an AI continuously spouting clickbait articles and videos on any trendy topic. People are too stupid to tell the difference and already click on all the garbage put together by underpaid people in India.

    Zero employees, automatic cash generation.

  • by gtall ( 79522 ) on Monday July 10, 2023 @04:13AM (#63673075)

    It seems to me there is a paradox at the heart of using LLMs for articles that are thought of as social constructs. In order to get the "social" part in there, the LLMs need lots of what can seem like irrelevant data. So they are caught "lying". If the input is restricted to only curated previous stories, then they will produce output so banal that no one would read it. And this won't prevent the lying either.

    AI in the guise of LLM seems to be a solution in search of a problem. PHBs read about it and get dollar signs in their eyes. AI restricted to a small well-defined domain of discourse, say protein folding, can be quite useful.

  • "We have talented writers who know what we're doing."
    They have their writers spy on them? Creepy.

  • The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable...

    What better place to publicly test new tech?

  • https://archive.li/K4BJ7 [archive.li]

    and the current page which has been updated:
    https://gizmodo.com/a-chronolo... [gizmodo.com]

    • by XXongo ( 3986865 )
      Thanks; I was looking for that... but I get an error message on that link. Maybe the problem's on my end, but could you check for a typo?
      • by ac22 ( 7754550 )

        I checked again, and both are working fine at my end ... some ISPs block archive.xyz style links, unfortunately. A VPN will probably resolve the issue, if you are sufficiently motivated :)

  • An LLM just generates text that looks like text a human wrote. The more mundane the topic is, the more easily the LLM can mimic human text. And you can't find anything more mundane than pop culture articles. But just because I can mimic dolphin squeaks, and even pick statistically likely responses when they squeak back at me, doesn't mean I understand what they or I am saying, nor does it mean that what I'm saying is true.
  • by skinfaxi ( 212627 ) on Monday July 10, 2023 @07:38AM (#63673351) Journal
    Quora used to be a place you could get interesting answers from real people, often ones with genuine expertise in their fields. Now they have grafted AI on top, which will answer your questions just like you'd expect an AI to - with random bullshit strung together. It can't even be called lying, it's literally just random shit. It gave me a list of all the albums that Frank Zappa and Spark collaborated on, for instance. In reality, of course, none of that is true.

    Now I keep running into other websites that, after reading a couple of sentences, are obviously written by AI. "Thermoplastics are made of molecules." A year of AI scraping AI will reward us with a WWW that is just a sea of grey goo.

    It scares the hell out of me that they are planning to use this for treating human health problems.

  • Hey ChatGPT, write me a story about Star Wars. With a Wookie who's really a Furry, Leia in the metal bikini, Sarlacc tentacles, and Natalie Portman in hot grits.

    • Using your exact prompt, the wise Panzer of the lake responded thusly:

      Once upon a time in a galaxy far, far away, a thrilling adventure unfolded within the universe of Star Wars. In this particular tale, our heroes found themselves facing an unusual series of events that would challenge their skills and test their resolve.

      The story begins on the forested moon of Endor, where the Rebel Alliance has successfully destroyed the second Death Star and defeated the Emperor. Amidst the celebration, a peculiar Wooki

  • by zmollusc ( 763634 ) on Monday July 10, 2023 @08:05AM (#63673407)

    With all the problems of the world, overpopulation, famine, pollution, resource depletion etc, AI shows there is still some optimism left.
      "We are building a machine which can do the same things as a human consciousness, only cheaper and faster, despite not knowing how human consciousness works!"
    It reminds me of all the sceptics and nay-sayers who said that alchemists could not transmute elements and were proved wrong when the alchemists eventually built particle accelerators.

  • "There will be errors, and they'll be corrected as swiftly as possible," he promised... In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is "eager to thoughtfully gather and act on feedback..." The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation..

    A few years back this would have been in a parody article about the trends of the day making it into the business world. Now? Here it is. Right there. This is the level of discourse in the business world today. I'm shocked it was only two poop emojis. Must be one classy place to work.

  • This seems like an unnecessary slam of the writers at Gizmodo. They work really hard to maintain an industry leading level of incompetence.

  • The other two were opinion pieces or a top 10 list ... there are no corrections to be done, but whose opinion is it ...?

  • It doesn't matter, because Star Wars is invariant under all permutations. The AI correctly recognized this fact, showing greater intelligence than the editor.

  • I think this story is actually a pretty interesting hidden lesson in how language models produce output.

    If you ask an LLM for an ordered list of Star Wars movies, there's a very high chance it will start to make use of many, many watch order lists - which are often not at all chronological, just a person's opinion on which order Star Wars movies should be watched in for maximum dramatic effect.

    Expanding on that, it makes you realize that any subject that may have a lot of opinions, you are probably going to

  • If so, then what

    and two poop emoji, according to screenshots of the Slack conversation...

    the fsck is it doing having "poop emojis" in their character set?
