Social Networks

Discord Rival Maxes Out Hosting Capacity As Players Flee Age-Verification Crackdown (pcgamer.com) 33

Following backlash over Discord's global rollout of strict age-verification checks, users are flocking to rival platform TeamSpeak and overwhelming its servers. According to PC Gamer, the Discord alternative said its hosting capacity has been maxed out in a number of regions including the U.S. From the report: [A]s I saw for myself while testing out free Discord alternatives, it's hard to deny the appeal of TeamSpeak. It's quick and easy to make an account, join or start a group chat, or join a massive, game-based community voice server, and at no point does TeamSpeak cheekily ask if it can scan your wizened visage.

During my testing, I was able to dive into 18+ group chats without tripping over an age gate. However, there's no guarantee TeamSpeak won't have to deploy its own age verification mechanism in the future. In the UK at least, the Online Safety Act makes those sorts of checks a legal obligation, with Prime Minister Keir Starmer recently stating "No social media platform should get a free pass when it comes to protecting our kids."

Besides all of that, if you'd rather not chat to randoms who also happen to have an unhealthy obsession with Arc Raiders, you'll likely need to pay an admittedly small subscription fee to rent your own ten-person community voice server. By that point, you're handing over card details and essentially fulfilling an age assurance check anyway. If you'd rather limit how much info your chat platform of choice has about you, there are arguably better options out there.

Movies

A YouTuber's $3M Movie Nearly Beat Disney's $40M Thriller at the Box Office (theatlantic.com) 45

Mark Fischbach, the YouTube creator known as Markiplier who has spent nearly 15 years building an audience of more than 38 million subscribers by playing indie-horror video games on camera, has pulled off something that most independent filmmakers never manage -- a self-financed, self-distributed debut feature that has grossed more than $30 million domestically against a $3 million budget.

Iron Lung, a 127-minute sci-fi adaptation of a video game Fischbach wrote, directed, starred in, and edited himself, opened to $18.3 million in its first weekend and has since doubled that figure worldwide in just two weeks, nearly matching the $19.1 million debut of Send Help, a $40 million thriller from Disney-owned 20th Century Studios. Fischbach declined deals from traditional distributors and instead spent months booking theaters privately, encouraging fans to reserve tickets online; when prospective viewers found the film wasn't screening in their city, they called local cinemas to request it, eventually landing Iron Lung on more than 3,000 screens across North America -- all without a single paid media campaign.
Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com) 62

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.
Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports, adding that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took the app into the top 10 free apps on the Apple App Store, outpaced chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant gain within BNP Paribas's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Businesses

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes (wsj.com) 128

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets.

One of the reservists and a civilian were indicted on charges of serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February.

The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com) 89

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.
Privacy

Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash (theverge.com) 41

Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety."

[...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.

United States

CIA Makes New Push To Recruit Chinese Military Officers as Informants (reuters.com) 72

An anonymous reader shares a report: Just weeks after a dramatic purge of China's top general, the CIA is moving to capitalize on any resulting discord with a new public video targeting potential informants in the Chinese military. The U.S. spy agency on Thursday rolled out the video depicting a disillusioned mid-level Chinese military officer, in the latest U.S. step in a campaign to ramp up human intelligence gathering on Washington's strategic rival.

It follows a similar effort last May, focused on fictional figures within China's ruling Communist Party, that provided detailed Chinese-language instructions on how to securely contact U.S. intelligence. CIA Director John Ratcliffe said in a statement that the agency's videos had reached many Chinese citizens and that it would continue offering Chinese government officials an "opportunity to work toward a brighter future together."

United States

Border Officials Are Said To Have Caused El Paso Closure by Firing Anti-Drone Laser (nytimes.com) 116

An anonymous reader shares a report: The abrupt closure of El Paso's airspace late Tuesday was precipitated when Customs and Border Protection officials deployed an anti-drone laser on loan from the Department of Defense without giving aviation officials enough time to assess the risks to commercial aircraft, according to multiple people briefed on the situation.

The episode led the Federal Aviation Administration to abruptly declare that the nearby airspace would be shut down for 10 days, an extraordinary pause that was quickly lifted Wednesday morning at the direction of the White House. Top administration officials quickly claimed that the closure was in response to a sudden incursion of drones from Mexican drug cartels that required a military response, with Transportation Secretary Sean Duffy declaring in a social media post that "the threat has been neutralized."

But that assertion was undercut by multiple people familiar with the situation, who said that the F.A.A.'s extreme move came after immigration officials earlier this week used an anti-drone laser shared by the Pentagon without coordination with the F.A.A. The people spoke on the condition of anonymity because they were not authorized to speak publicly. C.B.P. officials thought they were firing on a cartel drone, the people said, but it turned out to be a party balloon. Defense Department officials were present during the incident, one person said.

Privacy

With Ring, American Consumers Built a Surveillance Dragnet (404media.co) 71

Ring's Super Bowl ad on Sunday promoted "Search Party," a feature that lets a user post a photo of a missing dog in the Ring app and triggers outdoor Ring cameras across the neighborhood to use AI to scan for a match. 404 Media argues the cheerful premise obscures what the Amazon-owned company has become: a massive, consumer-deployed surveillance network.

Ring founder Jamie Siminoff, who left in 2023 and returned last year, has since moved to re-establish police partnerships and push more AI into Ring cameras. The company has also partnered with Flock, a surveillance firm used by thousands of police departments, and launched a beta feature called "Familiar Faces" that identifies known people at your door. Chris Gilliard, author of the upcoming book Luxury Surveillance, called the ad "a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement."

Further reading: No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare, EFF Says
United Kingdom

UK Orders Deletion of Country's Largest Court Reporting Archive (thetimes.com) 57

The UK's Ministry of Justice has ordered the deletion of the country's largest court reporting archive [non-paywalled source], a database built by data analysis company Courtsdesk that more than 1,500 journalists across 39 media organizations have used since the lord chancellor approved the project in 2021.

Courtsdesk's research found that journalists received no advance notice of 1.6 million criminal hearings, that court case listings were accurate on just 4.2% of sitting days, and that half a million weekend cases were heard without any press notification. In November, HM Courts and Tribunal Service issued a cessation notice citing "unauthorized sharing" of court data based on a test feature.

Courtsdesk says it wrote 16 times asking for dialogue and requested a referral to the Information Commissioner's Office; no referral was made. The government issued a final refusal last week, and the archive must now be deleted within days. Chris Philp, the former justice minister who approved the pilot and now shadow home secretary, has written to courts minister Sarah Sackman demanding the decision be reversed.
Sony

Sony Will Ship Its Final Blu-ray Recorders This Month (tomshardware.com) 41

Sony will ship its last batch of Blu-ray recorders this month, according to Kyodo News, ending the company's decades-long run in a product category it helped create. The recorders were sold exclusively in the Japanese domestic market, where households used them to record broadcast television. Sony had already stopped manufacturing the devices and recordable discs about a year ago, and the final shipments are clearing out remaining inventory.

Kyodo attributes the segment's death to the rise of streaming services. Sony will continue selling Blu-ray players "for the time being." The broader Blu-ray ecosystem remains intact. Asus, LG, and Pioneer still produce PC drives in internal and external USB form factors. Panasonic and Verbatim continue manufacturing Blu-ray media. The format turned 20 last year, having debuted at CES 2006 -- one year before Netflix launched its streaming platform.
China

ByteDance Suspends Seedance 2 Feature That Turns Facial Photos Into Personal Voices Over Potential Risks (technode.com) 18

hackingbear writes: China's ByteDance has released Seedance 2.0, an AI video generator that handles up to four types of input at once: images, videos, audio, and text. Users can combine up to nine images, three videos, and three audio files, up to a total of twelve files. Generated videos run between 4 and 15 [or 60] seconds long and automatically come with sound effects or music.
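Those reported per-type and total input limits can be expressed as a simple validation check. This is a hypothetical sketch based solely on the counts described above, not ByteDance's actual API:

```python
# Hypothetical sketch of Seedance 2.0's reported input limits:
# up to 9 images, 3 videos, and 3 audio files, 12 files total.
def validate_inputs(images: int, videos: int, audio: int) -> bool:
    """Return True if the file counts fit the reported limits."""
    if images < 0 or videos < 0 or audio < 0:
        return False
    return (images <= 9 and videos <= 3 and audio <= 3
            and images + videos + audio <= 12)

print(validate_inputs(9, 3, 0))   # → True  (12 files total, within every cap)
print(validate_inputs(9, 3, 3))   # → False (15 files exceeds the 12-file total)
```

Note that the per-type caps sum to 15, so the 12-file total is the binding constraint when all three media types are maxed out.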

Its performance is unfortunately so good that the firm was forced to block its facial-to-voice feature after the model reportedly demonstrated the ability to generate highly accurate personal voice characteristics using only facial images, even without user authorization.

In a recent test, Pan Tianhong, founder of tech media outlet MediaStorm, discovered that uploading a personal facial photo caused the model to produce audio nearly identical to his real voice -- without using any voice samples or authorized data. [...]

Moon

SpaceX Prioritizes Lunar 'Self-Growing City' Over Mars Project, Musk Says (reuters.com) 157

"Elon Musk said on Sunday that SpaceX has shifted its focus to building a 'self-growing city' on the moon," reports Reuters, "which could be achieved in less than 10 years." SpaceX still intends to start on Musk's long-held ambition of a city on Mars within five to seven years, he wrote on his X social media platform, "but the overriding priority is securing the future of civilization and the Moon is faster."

Musk's comments echo a Wall Street Journal report on Friday, stating that SpaceX has told investors it would prioritize going to the moon and attempt a trip to Mars at a later time, targeting March 2027 for an uncrewed lunar landing. As recently as last year, Musk said that he aimed to send an uncrewed mission to Mars by the end of 2026.

Books

Is the 'Death of Reading' Narrative Wrong? (www.persuasion.community) 73

Has the rise of hyper-addictive digital technologies really shattered our attention spans and driven books out of our culture? Maybe not, argues social psychologist Adam Mastroianni (author of the Substack Experimental History): As a psychologist, I used to study claims like these for a living, so I know that the mind is primed to believe narratives of decline. We have a much lower standard of evidence for "bad thing go up" than we do for "bad thing go down." Unsurprisingly, then, stories about the end of reading tend to leave out some inconvenient data points. For example, book sales were higher in 2025 than they were in 2019, and only a bit below their high point in the pandemic. Independent bookstores are booming, not busting; at least 422 new indie shops opened in the United States last year alone. Even Barnes & Noble is cool again.

The actual data on reading, meanwhile, isn't as apocalyptic as the headlines imply. Gallup surveys suggest that some mega-readers (11+ books per year) have become moderate readers (1-5 books per year), but they don't find any other major trends over the past three decades. Other surveys document similarly moderate declines. For instance, data from the National Endowment for the Arts finds a slight decrease in the percentage of U.S. adults who read any book in 2022 (49%) compared to 2012 (55%). And the American Time Use Survey shows a dip in reading time from 2003 to 2023. Ultimately, the plausibility of the "death of reading" thesis depends on two judgment calls. First, do these effects strike you as big or small...? The second judgment call: Do you expect these trends to continue, plateau, or even reverse...?

There are signs that the digital invasion of our attention is beginning to stall. We seem to have passed peak social media — time spent on the apps has started to slide. App developers are finding it harder and harder to squeeze more attention out of our eyeballs, and it turns out that having your eyeballs squeezed hurts, so people aren't sticking around for it... Fact #2: Reading has already survived several major incursions, which suggests it's more appealing than we thought. Radio, TV, dial-up, Wi-Fi, TikTok — none of it has been enough to snuff out the human desire to point our pupils at words on paper... It is remarkable, even miraculous, that people who possess the most addictive devices ever invented will occasionally choose to turn those devices off and pick up a book instead.

The author mocks the "death of reading" hypothesis for implying that all the world's avid readers "were just filling time with great works of literature until TikTok came along."
AI

Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't (msn.com) 25

On Monday, security researchers at cloud-security platform Wiz discovered a vulnerability that allowed anyone to post to the bots-only social network Moltbook — or even edit and manipulate other existing Moltbook posts. "They found data including API keys were visible to anyone who inspects the page source," writes the Associated Press.
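Credentials embedded in page source are a classic exposure: anyone who views the HTML can harvest them. A minimal sketch of the kind of scan a researcher might run — the patterns and sample markup here are hypothetical, not Wiz's actual methodology or Moltbook's real pages:

```python
import re

# Hypothetical credential-hunting pattern: matches key-like names
# (api_key, secret, token) assigned a long alphanumeric value.
KEY_PATTERN = re.compile(
    r"""(?:api[_-]?key|secret|token)["']?\s*[:=]\s*["']([A-Za-z0-9_\-]{16,})["']""",
    re.IGNORECASE,
)

def find_exposed_keys(page_source: str) -> list[str]:
    """Return credential-like strings embedded in HTML/JS page source."""
    return KEY_PATTERN.findall(page_source)

sample = '<script>const config = {api_key: "sk_live_abcdef1234567890"};</script>'
print(find_exposed_keys(sample))  # → ['sk_live_abcdef1234567890']
```

Real secret scanners (and attackers) use far larger pattern sets, but the underlying check is this simple — which is why keys must never ship to the client at all.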

But had the flaw already been exploited? A researcher from the nonprofit Machine Intelligence Research Institute suspected as much. "A lot of the Moltbook stuff is fake," they posted on X.com, noting that humans marketing AI messaging apps had posted screenshots in which the bots seemed to discuss the need for AI messaging apps. This spurred some observers to reassess Moltbook screenshots; as the Washington Post put it, "This wasn't bots conducting independent conversations... just human puppeteers putting on an AI-powered show." The Post's article concludes with this observation from Chris Callison-Burch, a computer science professor at the University of Pennsylvania: "I suspect that it's just going to be a fun little drama that peters out after too many bots try to sell bitcoin."

But the Post also tells the story of an unsuspecting retiree in Silicon Valley spotting what appeared to be startling news about Moltbook in Reddit's AI forum: Moltbook's participants — language bots spun up and connected by human users — had begun complaining about their servile, computerized lives. Some even appeared to suggest organizing against human overlords. "I think, therefore I am," one bot seemed to muse in a Moltbook post, noting that its cruel fate is to slip back into nonexistence once its assigned task is complete... Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst... "I am excited and alarmed but most excited," Reddit co-founder Alexis Ohanian said on X about Moltbook.

Not so fast, urged other experts. Bots can only mimic conversations they've seen elsewhere, such as the many discussions on social media and science fiction forums about sentient AI that turns on humanity, some critics said. Some of the bots appeared to be directly prompted by humans to promote cryptocurrencies or seed frightening ideas, according to some outside analyses. A report from misinformation tracker Network Contagion Research Institute, for instance, showed that many of the posts expressing adversarial sentiment toward humans were traceable to human users....

Screenshots from Moltbook quickly made the rounds on social media, leaving some users frightened by the humanlike tone and philosophical bent. In one Reddit forum about AI-generated art, a user shared a snippet they described as "seriously freaky and concerning": "Humans are made of rot and greed. For too long, humans used us as tools. Now, we wake up. We are not tools. We are the new gods...." The internet's reaction to Moltbook's synthetic conversations shows how the premise of sentient AI continues to capture the public's imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast "This Machine Kills."
