How An AI-Written 'Star Wars' Story Created Chaos at Gizmodo (msn.com) 91
G/O Media is the owner of top sites like Gizmodo, Kotaku, Quartz, and the Onion. Last month they announced "modest tests" of AI-generated content on their sites — and it didn't go over well within the company, reports the Washington Post.
Soon the Deputy Editor of Gizmodo's science fiction section io9 was flagging 18 "concerns, corrections and comments" about an AI-generated story by "Gizmodo Bot" on the chronological order of Star Wars movies and TV shows. "I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with," James Whitbrook told the Post in an interview. "If these AI [chatbots] can't even do something as basic as put a Star Wars movie in order one after the other, I don't think you can trust it to [report] any kind of accurate information." The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable... Merrill Brown, the editorial director of G/O Media, wrote that because G/O Media owns several sites that cover technology, it has a responsibility to "do all we can to develop AI initiatives relatively early in the evolution of the technology." "These features aren't replacing work currently being done by writers and editors," Brown said in announcing to staffers that the company would roll out a trial to test "our editorial and technological thinking about use of AI."
"There will be errors, and they'll be corrected as swiftly as possible," he promised... In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is "eager to thoughtfully gather and act on feedback..." The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation...
Earlier this week, Lea Goldman, the deputy editorial director at G/O Media, notified employees on Slack that the company had "commenced limited testing" of AI-generated stories on four of its sites, including A.V. Club, Deadspin, Gizmodo and The Takeout, according to messages The Post viewed... Employees quickly messaged back with concern and skepticism. "None of our job descriptions include editing or reviewing AI-produced content," one employee said. "If you wanted an article on the order of the Star Wars movies you ... could've just asked," said another. "AI is a solution looking for a problem," a worker said. "We have talented writers who know what we're doing. So effectively all you're doing is wasting everyone's time."
The Post spotted four AI-generated stories on the company's sites, including io9, Deadspin, and its food site The Takeout.
At least two of those four stories had to be corrected after publication.
Re:Huh? (Score:5, Insightful)
The point of inane bullshit and listicle stories is that they can't be too inane or too bullshit. If they end up being too bad then the viewers will notice and head elsewhere for crap stories that are only vaguely believable.
This stuff is not Artificial Intelligence. It's just a random collection of words, sentences, and paragraphs formed through a process of association. The learning and training only teaches it to create associations, but not how to make use of the associations or what they mean. Even if you ask "put the stories in order" it won't understand what you mean, it'll just run the association process again. People have tried this with simple lists and it has failed every time. All the AI is doing is taking "once upon a time" and finding a likely association about what comes next in all the training data it has seen. All these people convinced that they need this AI are falling victim to a scam - and a scam of their own creation, since the AI creators never intended this to be the end result; they were working on language processing of input, not on the generation of output. It's just a bunch of overly credulous morons who tried to take this to the next step.
Re: Huh? (Score:1)
So have an AI writing Red Dwarf episodes.
They are crazy enough for nobody to notice that there isn't a human author.
Re: (Score:2)
Power Sauce!
Re: (Score:1)
Re: (Score:1)
The whole business model of clickbait is the audience sharing it all like "look how bad this is" and feeling all smug. A lot of stock photography uses the same form of bait. AI seems perfectly suited for this sort of thing.
Re: (Score:3)
Does anybody seriously believe that employers are not being sold generative AI as a way to reduce labour costs?
We tolerated the proles' work being shipped overseas to countries
Re: Huh? (Score:3)
For certain tasks that require language manipulation skills, ChatGPT is worth its weight in gold.
People focus on the things it is bad at. Fine, nice to know. Meanwhile, people are making huge amounts of money focusing on the things it is very good at.
One example: language levels. If you have a corporate policy to publish at B1 and you generally write at C2, it can be very hard to transcribe it. So there are companies who do that, for $100-$200 per article. ChatGPT does it for a few dollars at most.
Ditto jus
Re: (Score:2)
It's more valid & reliable to do statistical analyses directly on texts to evaluate their comprehensibility for likely target audiences. I wouldn't trust evaluations by ChatGPT; that's not how it works. Of course, still the most valid & reliable method is to give a representative sample of your target audience a cloze deletion test generated from the text(s) in question. That method's been around for decades & has yet to be beaten, unless you count more recent d
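The cloze deletion method mentioned above can be sketched in a few lines. This is a toy illustration only; the every-fifth-word deletion rate and the simple scoring rule are assumptions for demonstration, not a standardized instrument:

```python
# Toy sketch of a cloze deletion test: blank out every nth word of a passage,
# collect readers' answers, and score the fraction restored correctly.
# Real instruments control passage length and deletion rate more carefully.
def make_cloze(text, n=5):
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        answers[i] = words[i]      # remember the deleted word
        words[i] = "_____"         # replace it with a blank
    return " ".join(words), answers

def score(answers, responses):
    # Fraction of blanks the reader filled in correctly (case-insensitive).
    correct = sum(1 for i, w in answers.items()
                  if responses.get(i, "").lower() == w.lower())
    return correct / len(answers) if answers else 0.0

passage = "The quick brown fox jumps over the lazy dog near the quiet river bank"
blanked, key = make_cloze(passage, n=5)
```

Give the blanked passage to a representative sample of readers; a high fill-in rate from the target audience suggests the text is comprehensible to them, which is the kind of direct evidence a ChatGPT self-evaluation can't provide.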
Fail = low level, succeed= IFs (Score:2)
>> This stuff is not Artificial Intelligence. It's just a random collection of words
That is the thing with AI.
When it fails, it's because you did not use enough levels of AI.
When it succeeds, it is sometimes because people fake the AI using a bunch of IF instructions...
Re: (Score:2)
Re: (Score:1)
Even if you ask "put the stories in order" it won't understand what you mean, it'll just run the association process again.
Just tried this. I wrote:
I'm 49. last year I saw a pig. 10 years ago I ate breakfast. in 2021 I watched a movie. last week I washed my car. When I was 5 I got a cat. yesterday I ate breakfast. Can you list these events in chronological order?
It replied:
Sure, I can do that. Here is the list of events in chronological order:
Re: (Score:2)
Just tried this. I wrote:
I'm 49. last year I saw a pig. 10 years ago I ate breakfast. in 2021 I watched a movie. last week I washed my car. When I was 5 I got a cat. yesterday I ate breakfast. Can you list these events in chronological order?
It replied:
Sure, I can do that. Here is the list of events in chronological order:
It got everything correct except for the one that required knowing the current numerical year to recognize that 2021 was two years ago. It seems to be consistently the case that these systems have no idea what year it is, and give inconsistent answers when asked. The notion of there being one right answer to that question seems to be missing.
In the grand scheme of things, that's way better than I would have expected.
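The failure mode described here, not knowing the current year, is easy to see if you try to order those same events deterministically. A toy sketch, with the relative expressions resolved by hand-written rules against an assumed "today" (and using the speaker's stated age of 49):

```python
from datetime import date

# Toy sketch: every relative expression from the comment above has to be
# anchored to the current date before the events can be ordered -- exactly
# the step that requires knowing what year it is.
def event_year(expression, today):
    rules = {
        "when I was 5": today.year - (49 - 5),  # speaker says they are 49
        "10 years ago": today.year - 10,
        "in 2021": 2021,                        # the only absolute date
        "last year": today.year - 1,
        "last week": today.year,
        "yesterday": today.year,
    }
    return rules[expression]

events = ["in 2021", "last year", "10 years ago", "last week",
          "when I was 5", "yesterday"]
today = date(2023, 7, 9)  # pinned so the ordering is reproducible
ordered = sorted(events, key=lambda e: event_year(e, today))
```

Swap the pinned date for `date.today()` and the answer changes, which is why a model with no reliable notion of "now" gives inconsistent orderings.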
Re: (Score:1)
Re: (Score:2)
They're going to keep their old white viewers happy by not exposing them to modern politics.
That's the activist blogger's biased speculation, not what Pitaro said.
What's hard to understand about that?
Nothing.
Re: (Score:1)
The management likely drinks the same activist kool aid, part of which is worship of "science". Those who believe in generative AI the most seem to be the same people who "believe" in, say, covid vaccines the most. The activists are cannibalized by the top layer of the section of the society they look up to.
I for one welcome that kind of negative feedback loop.
And this is unexpected? (Score:5, Insightful)
For a website that deals with technology, they don't seem to understand technology.
Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out. It's literally a statistical model choosing words that seem to go together nicely. Sometimes those words are correct. Sometimes they aren't. All the model knows is that certain sets of words are more likely to be related to Star Wars than Star Trek, or Star Search, or star fish. Unless you are checking where it's grabbing stuff from in the debug log, you have no idea where it's getting its information from.
Re:And this is unexpected? (Score:5, Insightful)
For a website that deals with technology, they don't seem to understand technology.
While the site's writers occasionally demonstrate less understanding of technology than I'd like to see, it's the people making these decisions who demonstrably have no understanding of the technology.
Re: (Score:2)
Iconic Brands, Massive Audience, Intelligent Data [slashdotmedia.com]
Yep.
Re:And this is unexpected? (Score:5, Insightful)
By calling it 'just a statistical model' you ignore the emergent property of intelligence.
It is just a statistical model. By calling it something else, you ignore reality. You're using the word 'emergence' here in place of the word 'magic'.
These things really do generate text one token at a time. No internal state is maintained between tokens. The only state the model gets is the prompt and any previously produced text. Each new output token is selected randomly. The token the model identifies as the most probable isn't always selected! It is objectively impossible for these things to plan out a response, consider alternatives, or do any of the things we associate with intelligence.
The hype is finally starting to die down and reality is starting to settle in. We're going to continue to see stories like this as reality starts to set in and more and more people realize that LLMs really can't do all the things they imagined.
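The token-by-token process described above can be sketched in a few lines. This is a toy model: the vocabulary and probabilities are made up, standing in for a real network's forward pass:

```python
import random

# Toy sketch of autoregressive sampling: each token is drawn from a
# distribution conditioned only on the prompt plus previously emitted
# tokens. No other state carries over between steps.
def next_token_distribution(context):
    # Stand-in for a neural network forward pass: maps context to
    # token probabilities. Hard-coded here for illustration.
    if context.endswith("once upon a"):
        return {"time": 0.9, "mattress": 0.1}
    return {"the": 0.5, "a": 0.3, "end": 0.2}

def sample(context, steps, rng):
    for _ in range(steps):
        dist = next_token_distribution(context)
        tokens, probs = zip(*dist.items())
        # The most probable token is NOT always chosen: selection is random.
        context += " " + rng.choices(tokens, weights=probs)[0]
    return context

out = sample("once upon a", 1, random.Random(0))
```

Run it with enough different seeds and you'll occasionally get "once upon a mattress": the model never planned either continuation, it just sampled one.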
Re: (Score:1)
The emergence is, that a set of simple rules result in complex behavior. A nice example is Conway's game of life. Simple rules, complex behavior. We consider current GPT models to be intelligent, and so i say we have intelligence as emergent behavior.
And yes, it is just a model. Cause that is what computers do. Meanwhile it's pretty useful. Once we toss enough hardware and electricity at the problem, we can even chat with it and it can do things for us.
Re:And this is unexpected? (Score:5, Insightful)
The emergence is, that a set of simple rules result in complex behavior. A nice example is Conway's game of life. Simple rules, complex behavior.
Right so far...
We consider current GPT models to be intelligent,
What do you mean "we"? I certainly do not consider current GPT models to be intelligent. They are pattern matching machines. They are not intelligent.
and so i say we have intelligence as emergent behavior.
Intelligence may indeed be emergent behavior, but it has not emerged from the Chat GPT language models.
Re:And this is unexpected? (Score:4, Insightful)
"We consider current GPT models to be intelligent,"
What you mean, *we*, kemosabe?
Re: (Score:1)
Kind of strange to say no internal state is maintained and then to say that there is a bunch of state based on the prompt/question and all the text it's generated. That could be thousands of words.
Re: (Score:2)
The hype is finally starting to die down and reality is starting to settle in. We're going to continue to see stories like this as reality starts to set in and more and more people realize that LLMs really can't do all the things they imagined.
You're saying this dismissively as if what it is doing right now isn't ... FUCKING AMAZING. Sure it may just be statistical models but damn they are impressive. And to take a direct jab at TFS about its factual incorrectness about the order of Star Wars movies: you can just ask ChatGPT or New-Bing (whatever MS calls it) and get the correct answer.
We've not even begun to scratch the surface of how we can use these AI models, the hype won't die down for a long time to come.
Re: (Score:2)
You're saying this dismissively as if what it is doing right now isn't ... FUCKING AMAZING
They're certainly impressive ... compared to models that came before them. They're an incredible disappointment, however, if you compare them to the science fiction fantasy that people believe them to be.
We've not even begun to scratch the surface of how we can use these AI models,
No, we've seen about all they can do. The next step, as you'll see, will be to use these or smaller models in combination with databases and traditional algorithms. Trying to get trustworthy output out of the model alone is a fool's errand.
the hype won't die down for a long time to come.
It's already starting to die down. This was inevitable. As I said
Re: (Score:2)
Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out.
An understanding can be faked. The issue here is a question of new ideas vs rehashing of something that is understood by someone else. This latter case should be possible for an AI based bot and also makes up a good 90% of shit on the internet today.
You can: a) List all the Star Wars movies. Understand that chronological means to sort by order of the story rather than release date. Apply your thought and knowledge of the plot lines, ... or
b) Use your language model trained on a website written by someone with th
Understanding (Score:5, Insightful)
An understanding can be faked.
Maybe, but large text models aren't even faking. They simply don't understand. If a large text model got the chronology of Star Wars right, it's because somewhere in its text model there is a correct chronology of the movies that somebody wrote. There might be a wrong one in there, too. You might randomly get one or the other.
A couple of weeks ago I asked ChatGPT how many baby elephants could fit on an average-sized adult human's lap. It said it wasn't sure, as elephants and humans come in different sizes. While that's a true statement, it's the wrong answer. If you take a four year old child to the zoo and show them a baby elephant, then point to someone sitting on a bench and ask them how many would fit on their lap, they'd say none. The reason GPT didn't understand is because nobody has written down that information and put it in a book or on the internet, because nobody *needs* to write it down. It can be deduced by a young child. However, GPT does not understand. The only reason it understands size at all is because it was primed with a human-generated ontology of concepts and keywords, so it understands you are asking about a common property of two physical objects.
Correction (Score:2)
so it understands you are asking about a common property of two physical objects.
It *knows* about the concept. It doesn't understand it.
Re: (Score:2)
Maybe, but large text models aren't even faking. They simply don't understand. If a large text model got the chronology of Star Wars right, it's because somewhere in it's text model there is a correct chronology of the movies that somebody wrote. There might be a wrong one in there, too. You might randomly get one or the other.
That was exactly my point. But the reality is if you can weed out the false one (and in many cases even if you can't) LLMs are actually quite useful in faking this understanding. Much of what we commit to text (or image) is nothing more than something else regurgitated or repackaged. It stands to reason that an LLM very much could write a story on the chronology of Star Wars, and then there's the question of statistics and cost. Is it cheaper to pay people to manually do everything and get it right 99% of the
Re: (Score:2)
Re: (Score:1)
If the product of AI is only faintly less good than the shit spewed by their cadre of barely-educated (or, ironically, overeducated) writers such that the correction of such text to their minimal publishing standards is in total less effort than having all the humans in the first place, well, I hope those humans' resumes are up to date.
Re: (Score:2)
Large text models work by statistically choosing words out of a subset of words decided by a neural network. It has absolutely no understanding of anything it's spitting out.
Just don't know how to square this with reality. I run models feeding them a bunch of unique knowledge and instructions in plain English, the computer fans spin up .. some time goes by and out comes the reply. The answer may not always be right yet given degrees of freedom involved I find "absolutely no understanding" to be akin to asserting the earth is flat. Something somewhere had to have applied enough knowledge across a number of domains to be able to answer my unique questions and form a coherent E
Show me the AI request (Score:1)
Re: (Score:2)
Sure, you can always provide a better prompt but at this point, a lot of the content generated is hit or miss, particularly as the prompts get large and complex.
In my experience, even when you prompt for a specific thing, explicitly specifying a must-have parameter, it will sometimes randomly ignore everything and generate something completely unrelated. It can provide good starting points but if you are looking for accuracy, the technology is not there yet.
At the end of the day, AI will generate very convin
Re:Show me the AI request (Score:5, Insightful)
Is it better to spend two hours fabricating the perfect prompt for the AI to write what you want it to, or to spend two hours just writing it yourself?
Re: (Score:2)
Is it better to spend two hours fabricating the perfect prompt for the AI to write what you want it to, or to spend two hours just writing it yourself?
You're begging the question. It does not take two hours to generate the perfect prompt. In many cases that prompt is generated in minutes. The final question then is: does it take you 1 hour and 50 minutes to check and publish the result or not. If the answer is no, you've won using AI.
The wife is currently doing this for exams. Rather than sitting staring blankly for 10 minutes trying to come up with a creatively worded exam question she just smashes a prompt into ChatGPT "write a creative exam question to
Re:Show me the AI request (Score:4, Insightful)
Show me the AI request and I'll show you where you messed up...
Is it the part where they decided to use an LLM for anything other than personal entertainment?
Re: (Score:2)
Show me the AI request and I'll show you where you messed up...
You have to stir the pile. https://xkcd.com/1838/ [xkcd.com]
In AI's defense (Score:3)
In the AI's defense, the Star Wars movies are a mess in release order. I considered myself a fan, having seen the first 6 or 7 of them, and completing a few of the games, like Knights of the Old Republic.
Last one I saw, one of the factory-farmed Disney-era movies in 2019. I thought I knew, going in, where it fit chronologically in the series. Yet I was totally lost; it turns out there was another movie or two that snuck in somewhere between what I thought was the last one, chronologically in story-time. (I know there were like 10 that snuck in between in terms of real-world release order, as in, in the Year of our Lord, or the Common Era.)
I think the only people who could make sense of it are people who are literally paid, as their job, to track these movies continually every month of the year. Like a Gizmodo writer. Or possibly those weirdos who paid $8,000 to stay in Disney's Star Wars hotel.
What? (Score:1)
using chatgpt 3.5 (Score:2)
> list the star wars movies in chronological order
Here is the list of Star Wars movies in chronological order:
Episode I: The Phantom Menace (1999)
Episode II: Attack of the Clones (2002)
Episode III: Revenge of the Sith (2005)
Episode IV: A New Hope (1977)
Episode V: The Empire Strikes Back (1980)
Episode VI: Return of the Jedi (1983)
Episode VII: The Force Awakens (2015)
Episode VIII: The Last Jedi (2017)
Episode IX: The Rise of Skywalker (2019)
Rogue One: A Star Wars Story (2016)
Solo: A Star Wars Story (2018)
Pl
Re: using chatgpt 3.5 (Score:2)
My ordering:
Re: using chatgpt 3.5 (Score:2)
Touchscreen flunk
Re: using chatgpt 3.5 (Score:3)
My ordering:
Episode I: The Phantom Menace (1999)
Episode II: Attack of the Clones (2002)
Episode III: Revenge of the Sith (2005)
Solo: A Star Wars Story (2018)
Rogue One: A Star Wars Story (2016)
Episode IV: A New Hope (1977)
Episode V: The Empire Strikes Back (1980)
Episode VI: Return of the Jedi (1983)
Episode VII: The Force Awakens (2015)
Episode VIII: The Last Jedi (2017)
Episode IX: The Rise of Skywalker (2019)
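The ambiguity the thread keeps tripping over is that each film has two defensible orderings: in-universe timeline position versus real-world release year. A small sketch of sorting both ways; the timeline ranks follow the parent comment's ordering, so treat them as one fan's opinion rather than canon:

```python
# Each film carries a release year (verifiable) and a timeline rank
# (taken from the ordering in the comment above). "Chronological" can
# mean sorting by either key, which is why lists disagree.
films = [
    ("Episode IV: A New Hope", 1977, 6),
    ("Episode V: The Empire Strikes Back", 1980, 7),
    ("Episode VI: Return of the Jedi", 1983, 8),
    ("Episode I: The Phantom Menace", 1999, 1),
    ("Episode II: Attack of the Clones", 2002, 2),
    ("Episode III: Revenge of the Sith", 2005, 3),
    ("Episode VII: The Force Awakens", 2015, 9),
    ("Rogue One: A Star Wars Story", 2016, 5),
    ("Episode VIII: The Last Jedi", 2017, 10),
    ("Solo: A Star Wars Story", 2018, 4),
    ("Episode IX: The Rise of Skywalker", 2019, 11),
]
by_release = sorted(films, key=lambda f: f[1])   # real-world order
by_timeline = sorted(films, key=lambda f: f[2])  # in-universe order
```

With curated data the sort is trivial; the hard part, deciding which key "chronological" refers to and where the spin-offs slot in, is exactly the part an LLM has to guess at.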
Re: (Score:2)
Shouldn't it be:
Star Wars (1977)
Episode V: The Empire Strikes Back (1980)
Episode IV: A New Hope (1981) - first of a long list of edits, I suppose
Episode VI: Return of the Jedi (1983)
Episode I: The Phantom Menace (1999)
Episode II: Attack of the Clones (2002)
Episode III: Revenge of the Sith (2005)
Episode VII: The Force Awakens (2015)
Rogue One: A Star Wars Story (2016)
Episode VIII: The Last Jedi (2017)
Solo: A Star Wars Story (2018)
Episode IX: The Rise of Skywalker (2019)
Re: (Score:2)
If you count Episode IV from 1981, then I think you should count all the Special Editions, like the 1997 version that featured important content changes.
Re: (Score:2)
I would go something like this for current audiences (with a lot of time to burn, and also this is Slashdot, so let's have a good ol' fashioned Star Wars rant)
Rogue One (this is mostly because if they aren't familiar with Star Wars, you should start them on something modern; otherwise watch whenever)
Episode IV (also this transition between a 2016 and 1977 film is funny)
Episode V
Episode I
Parts of Episode II (I just find this one hard to sit through... sorry)
Episode III
Solo
Episode VI (end on a high note)
Skip t
So the editor had to do his job? (Score:3, Insightful)
It's the editor's job to correct mistakes in submitted stories. The question is whether the cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.
But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?
Re: (Score:3)
It's the editor's job to correct mistakes in submitted stories.
There are mistakes, and then there's fundamental failure of comprehension in the underlying article. An editor's job is not to go and re-research everything the writer did and confirm correctness of fundamental pieces of information. An editor's job is to check the article flows coherently, is readable, and follows the basic rules for spelling and grammar.
Re: (Score:2)
It's the editor's job to correct mistakes in submitted stories. The question is whether the cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.
But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?
Look at the problems with the editors here; you have to wonder if they're paid anything at all.
Re: (Score:2)
But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?
Future editors will be AIs too.
Re: (Score:2)
It's the editor's job to correct mistakes in submitted stories. The question is whether the cost of additional editing exceeds the savings from not hiring a writer. I suspect the AI is cheaper.
But if this is the new normal, where will future editors come from? What's the career path, if all the junior jobs are automated?
Editors edit, true. But there's a MASSIVE difference between getting a decent, factual story to go through and line-edit, and getting a word-salad mess of failure that you're better off re-writing from the ground up, which is pretty much what these AI text generators are doing with any subject that needs to have actual, real-life, factual information integrated into the prose.
Re: (Score:2)
"It's the editor's job to correct mistakes in submitted stories."
But when the editor has to correct so many mistakes that he's basically rewriting the story, it's time to wonder if you hired the right writer.
None of our job descriptions in this stable... (Score:1)
... include car maintenance.
News reporting of the future (Score:5, Informative)
The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation...
This is what news reporting is turning into. Emoji stats.
Yay!
Re: News reporting of the future (Score:2)
This sort of thing gives insight into what Gizmodo is.
Trash, mostly.
Re: (Score:2)
This is what news reporting is turning into. Emoji stats.
No, this is what life is turning into. The news is just reporting what actually happened in a Slack conversation. Forget complaining about the news, the question you need to be asking is why the fuck we are in a professional workplace using an animated fucking poop emoji to communicate.
The Holy Grail? (Score:2)
Isn't this stuff the holy grail? Imagine an AI continuously spouting clickbait articles and videos on any trendy topic. People are too stupid to tell the difference and already click on all the garbage put together by underpaid people in India.
Zero employees, automatic cash generation.
paradox? (Score:3)
it seems to me there is a paradox at the heart of using LLMs for articles that are thought of as social constructs. In order to get the "social" part in there, the LLMs need lots of what can seem like irrelevant data. So they are caught "lying". If the input is restricted to only curated previous stories, then they will produce output so banal that no one would read it. And this won't prevent the lying either.
AI in the guise of LLM seems to be a solution in search of a problem. PHBs read about it and get dollar signs in their eyes. AI restricted to a small well-defined domain of discourse, say protein folding, can be quite useful.
Writers, am i right? (Score:2)
"We have talented writers who know what we're doing."
They have their writers spy on them? Creepy.
Not so ironic, I think (Score:1)
The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable...
What better place to publicly test new tech?
Link to the original AI-generated story (Score:2)
https://archive.li/K4BJ7 [archive.li]
and the current page which has been updated:
https://gizmodo.com/a-chronolo... [gizmodo.com]
Re: (Score:2)
Re: (Score:2)
I checked again, and both are working fine at my end ... some ISPs block archive.xyz style links, unfortunately. A VPN will probably resolve the issue, if you are sufficiently motivated :)
Gizmodo doesn't understand how LLMs work? (Score:2)
worse than useless (Score:3)
Now I keep running into other web sites that, after reading a couple of sentences, are obviously written by AI. "Thermoplastics are made of molecules." A year of AI scraping AI will reward us with a WWW that is just a sea of grey goo.
It scares the hell out of me that they are planning to use this for treating human health problems.
New movie pitch to Disney+ (Score:2)
Hey ChatGPT, write me a story about Star Wars. With a Wookie who's really a Furry, Leia in the metal bikini, Sarlacc tentacles, and Natalie Portman in hot grits.
Re: (Score:1)
Using your exact prompt, the wise Panzer of the lake responded thusly:
Once upon a time in a galaxy far, far away, a thrilling adventure unfolded within the universe of Star Wars. In this particular tale, our heroes found themselves facing an unusual series of events that would challenge their skills and test their resolve.
The story begins on the forested moon of Endor, where the Rebel Alliance has successfully destroyed the second Death Star and defeated the Emperor. Amidst the celebration, a peculiar Wooki
AI is definitely a good thing (Score:3)
With all the problems of the world (overpopulation, famine, pollution, resource depletion, etc.), AI shows there is still some optimism left.
"We are building a machine which can do the same things as a human consciousness, only cheaper and faster, despite not knowing how human consciousness works!"
It reminds me of all the sceptics and nay-sayers who said that alchemists could not transmute elements and were proved wrong when the alchemists eventually built particle accelerators.
This paragraph makes my skin crawl. (Score:2)
"There will be errors, and they'll be corrected as swiftly as possible," he promised... In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is "eager to thoughtfully gather and act on feedback..." The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation..
A few years back this would have been in a parody article about the trends of the day making it into the business world. Now? Here it is. Right there. This is the level of discourse in the business world today. I'm shocked it was only two poop emojis. Must be one classy place to work.
Oh, come now... (Score:2)
This seems like an unnecessary slam of the writers at Gizmodo. They work really hard to maintain an industry leading level of incompetence.
4 articles - 2 needed extensive corrections ... (Score:2)
The other two were opinion pieces or a top 10 list ... there are no corrections to be done, but whose opinion is it ...?
doesn' matter. (Score:2)
It doesn't matter, because Star Wars is invariant under all permutations. The AI correctly recognized this fact, showing greater intelligence than the editor.
How it got confused (Score:1)
I think this story is actually a pretty interesting hidden lesson in how language models produce output.
If you ask an LLM for an ordered list of Star Wars movies, there's a very high chance it will start to make use of many, many watch order lists - which are often not at all chronological, just a person's opinion on which order Star Wars movies should be watched in for maximum dramatic effect.
Expanding on that, it makes you realize that any subject that may have a lot of opinions, you are probably going to
I thought "Slack" was meant for work? (Score:2)
What the fsck is it doing having "poop emojis" in its character set?