Google

Google is Shutting Down Tables, Its Airtable Rival

Google Tables, a work-tracking tool and competitor to the popular spreadsheet-database hybrid Airtable, is shutting down. TechCrunch: In an email sent to Tables users this week, Google said the app will not be supported after December 16, 2025, and advised that users export or migrate their data to either Google Sheets or AppSheet instead, depending on their needs.

Launched in 2020, Tables focused on making project tracking more efficient with automation. It was one of many projects to emerge from Google's in-house app incubator, Area 120, which at the time was devoted to cranking out experimental products. Some of those projects later graduated to become part of Google's core offerings across Cloud, Search, Shopping, and more. Tables was one of those early successes: Google said in 2021 that the service was moving from a beta test to become an official Google Cloud product. At the time, the company said it saw Tables as a potential solution for a variety of use cases, including project management, IT operations, customer service tracking, CRM, recruiting, product development and more.
Earth

Scientists Link Hundreds of Severe Heat Waves To Fossil Fuel Producers' Pollution

A new study published in Nature links more than 200 severe heat waves directly to greenhouse gas pollution from major fossil fuel producers like ExxonMobil, Chevron, and BP. Researchers found that up to a quarter of these heat waves would have been virtually impossible without emissions from oil, coal, and cement companies. NPR reports: The new study, published Wednesday in the journal Nature, found that 213 heat waves were substantially more likely and intense because of the activity of major fossil fuel producers, also called carbon majors. They include oil, coal and cement companies, as well as some countries. The scientists found as much as a quarter of the heat waves would be "virtually impossible" without the climate pollution from major fossil fuel producers. Some individual fossil fuel companies, such as ExxonMobil, Chevron and BP, had emissions high enough to cause some of the more extreme heat waves, the research found.

For the new study, the scientists looked at something called the disaster database, a global list of disasters maintained by university researchers, to identify heat waves "with significant casualties, economic losses and calls for international assistance." The scientists then used historical reconstructions and statistical models to see how human-caused global warming made each heat wave more likely and more intense. Then, to examine the link to major fossil fuel producers, the researchers relied on the Carbon Majors Database to understand the emissions of major oil, gas, coal and cement producers.

"We ran a climate model to reconstruct the historical period, and then we ran it again but without the emissions of a specific carbon major, thus deducing its contribution to global warming," Yann Quilcaille, climate scientist at ETH Zurich and lead author of the study, says in an email. While some of the contributions to heat waves came from larger well-known fossil fuel companies, the study found that some smaller, lesser-known fossil fuel companies are producing enough greenhouse gas emissions to cause heat waves too, Quilcaille says.
AI

AI Darwin Awards Launch To Celebrate Spectacularly Bad Deployments (theregister.com)

An anonymous reader shares a report: The Darwin Awards are being extended to include examples of misadventures involving overzealous applications of AI. Nominations are open for the 2025 AI Darwin Awards and the list of contenders is growing, fueled by a tech world weary of AI and evangelists eager to shove it somewhere inappropriate.

There's the Taco Bell drive-thru incident, where the chain catastrophically overestimated AI's ability to understand customer orders. Or the Replit moment, where a spot of vibe coding nuked a production database, despite instructions from the user not to fiddle with code without permission. Then there's the woeful security surrounding an AI chatbot used to screen applicants at McDonald's, where feeding in a password of 123456 gave access to the details of 64 million job applicants.

Crime

'Swatting' Hits a Dozen US Universities. The FBI is Investigating (msn.com)

The Washington Post covers "a string of false reports of active shooters at a dozen U.S. universities this month as students returned to campus." The FBI is investigating the incidents, according to a spokesperson who declined to specify the nature of the probe. While universities have proved a popular swatting target, the agency "is seeing an increase in swatting events across the country," the FBI spokesperson said... Local officials are frustrated by the anonymous calls tying up first responders, straining public safety budgets and needlessly traumatizing college students who grew up in an era in which gun violence has in some way shaped their school experience...

The recent string of swattings began Thursday with a false report to the University of Tennessee at Chattanooga, quickly followed by one about Villanova University later that day. Hoaxes at 10 more schools followed... Villanova also received a second threat. As the calls about shootings came in, officials on many of the campuses pushed out emergency notifications directing students and employees to shelter in place, while police investigated what turned out to be false reports. (Iowa State was able to verify the lack of a threat before a campuswide alert was sent, its police chief said. [They had a live video feed from the location the caller claimed to be from.]) In at least three cases, 911 calls reporting a shooting purported to come from campus libraries, where the sound of gunshots could be heard over the phone, officials told The Washington Post...

Although false bomb reports, shooter threats and swatting incidents are not new, bad actors used to be more easily traceable through landline phones. But the era of internet-based services, virtual private networks, and anonymous text and chat tools has made unmasking hoax callers far more challenging... In 2023, a Post investigation found that more than 500 schools across the United States were subject to a coordinated swatting effort that may have had origins abroad...

[In Chattanooga, Tennessee last week] a dispatcher heard gunfire during a call reporting an on-campus shooting. "We grabbed everybody that wasn't already out on the street and got to that location," said University of Tennessee at Chattanooga Police spokesman Brett Fuchs. About 150 officers from several agencies responded. There was no shooter.

The New York Times reports that an online group called "Purgatory" is "suspected of being connected to several of the episodes, including reports of shootings, according to cybersecurity experts, law enforcement agencies and the group members' own posts in a social media chat." (Though the Times couldn't verify the group's claims.) Federal authorities previously connected the same network to a series of bomb scares and bogus shooting reports in early 2024, for which three men pleaded guilty this year... Bragging about its recent activities, Purgatory said that it could arrange more swatting episodes for a fee.
USA Today tries to quantify the reach of swatting: Estimated swatting incidents jumped from 400 in 2011 to more than 1,000 in 2019, according to the Anti-Defamation League, which cited a former FBI agent whose expertise is in swatting. From January 2023 to June 2024 alone, more than 800 instances of swatting were recorded at U.S. elementary, middle and high schools, according to the K-12 School Shooting Database, created by a University of Central Florida doctoral student in response to the 2018 high school shooting in Parkland, Florida. David Riedman, a data scientist and creator of the K-12 School Shooting Database, estimates that in 2023 it cost $82.3 million for police to respond to false threats.
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Security

Farmers Insurance Data Breach Impacts 1.1 Million People After Salesforce Attack

Farmers Insurance disclosed a breach affecting 1.1 million customers after attackers exploited Salesforce in a widespread campaign involving ShinyHunters and allied groups. According to BleepingComputer, the hackers stole personal data such as names, birth dates, driver's license numbers, and partial Social Security numbers. From the report: The company disclosed the data breach in an advisory on its website, saying that its database at a third-party vendor was breached on May 29, 2025. "On May 30, 2025, one of Farmers' third-party vendors alerted Farmers to suspicious activity involving an unauthorized actor accessing one of the vendor's databases containing Farmers customer information (the "Incident")," reads the data breach notification (PDF) on its website. "The third-party vendor had monitoring tools in place, which allowed the vendor to quickly detect the activity and take appropriate containment measures, including blocking the unauthorized actor. After learning of the activity, Farmers immediately launched a comprehensive investigation to determine the nature and scope of the Incident and notified appropriate law enforcement authorities."

The company says that its investigation determined that customers' names, addresses, dates of birth, driver's license numbers, and/or last four digits of Social Security numbers were stolen during the breach. Farmers began sending data breach notifications to impacted individuals on August 22, with a sample notification [1, 2] shared with the Maine Attorney General's Office, stating that a combined total of 1,111,386 customers were impacted. While Farmers did not disclose the name of the third-party vendor, BleepingComputer has learned that the data was stolen in the widespread Salesforce data theft attacks that have impacted numerous organizations this year.
Further reading: Google Suffers Data Breach in Ongoing Salesforce Data Theft Attacks
Python

Survey Finds More Python Developers Like PostgreSQL, AI Coding Agents - and Rust for Packages (jetbrains.com)

More than 30,000 Python developers from around the world answered questions for the Python Software Foundation's annual survey — and PSF Fellow Michael Kennedy tells the Python community what they've learned in a new blog post. Some highlights: Most still use older Python versions despite benefits of newer releases... Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). [Although less than 1% are using "Python 3.5 or lower".] The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising... You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading... [He calculates some cloud users are paying between $420,000 and $5.6M more in compute costs.] If your company realizes you are burning an extra $0.4M-$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation...
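The arithmetic behind that claim is simple to sketch. Assuming, say, a 1.4x speedup from moving to a current CPython release (a hypothetical but plausible figure; actual gains vary by workload), the savings on a CPU-bound cloud bill fall out directly:

    # Back-of-envelope upgrade math (hypothetical numbers, not the blog post's data).
    annual_compute_spend = 2_000_000  # $/year on CPU-bound Python services
    speedup = 1.4                     # assumed gain from a newer CPython

    # The same work needs only 1/speedup of the compute:
    savings = annual_compute_spend * (1 - 1 / speedup)
    print(f"~${savings:,.0f}/year saved")  # ~$571,429/year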

Rust is how we speed up Python now... The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages... [The blog post later advises Python developers to learn to read basic Rust, "not to replace Python, but to complement it," since Rust "is becoming increasingly important in the most significant portions of the Python ecosystem."]

PostgreSQL is the king of Python databases, and it's only growing, going from 43% to 49%. That's +14% year over year, which is remarkable for a 28-year-old open-source project... [E]very single database in the top six grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above...

[N]early half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

It's their eighth annual survey (conducted in collaboration with JetBrains last October and November). But even though Python is 34 years old, it's still evolving. "In just the past few months, we have seen two new high-performance typing tools released," notes the blog post. (The ty and Pyrefly typecheckers — both written in Rust.) And Python 3.14 will be the first version of Python to completely support free-threaded Python... Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime... Developers and data scientists will have to think more carefully about threaded code with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not enter race conditions and deadlocks.

There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.
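To make that concrete, here is a minimal sketch (hypothetical workload) of the kind of CPU-bound threaded code free-threading changes. On a GIL build these threads take turns on one core; on a free-threaded build they can genuinely run on separate cores, which is also when the lock earns its keep:

    import threading

    counts = {}
    lock = threading.Lock()

    def count_primes(lo, hi):
        # Deliberately CPU-bound work: trial-division prime counting.
        found = sum(1 for n in range(lo, hi)
                    if n > 1 and all(n % d for d in range(2, int(n**0.5) + 1)))
        with lock:  # protect the shared dict from racing writers
            counts[(lo, hi)] = found

    threads = [threading.Thread(target=count_primes, args=(i * 25_000, (i + 1) * 25_000))
               for i in range(10)]  # one chunk per core on a 10-core machine
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sum(counts.values()), "primes found below 250,000")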

Some other notable findings from the survey:
  • Data science is now over half of all Python. This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.
  • Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings)...
  • "The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions."

Earth

US Is Throwing Away the Critical Minerals It Needs, Analysis Shows (phys.org)

alternative_right shares a report from Phys.org: All the critical minerals the U.S. needs annually for energy, defense and technology applications are already being mined at existing U.S. facilities, according to a new analysis published in the journal Science. The catch? These minerals, such as cobalt, lithium, gallium and rare earth elements like neodymium and yttrium, are currently being discarded as tailings of other mineral streams like gold and zinc, said Elizabeth Holley, associate professor of mining engineering at Colorado School of Mines and lead author of the new paper.

To conduct the analysis, Holley and her team built a database of annual production from federally permitted metal mines in the U.S. They used a statistical resampling technique to pair these data with the geochemical concentrations of critical minerals in ores, recently compiled by the U.S. Geological Survey, Geoscience Australia and the Geological Survey of Canada. Using this approach, Holley's team was able to estimate the quantities of critical minerals being mined and processed every year at U.S. metal mines but not being recovered. Instead, these valuable minerals are ending up as discarded tailings that must be stored and monitored to prevent environmental contamination.
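The paper's exact method isn't spelled out in this summary, but the general shape of such a resampling estimate is easy to sketch. A toy version, with invented assay numbers standing in for the compiled geochemistry data:

    import numpy as np

    # Toy bootstrap estimate (invented numbers; not the paper's database or
    # the USGS / Geoscience Australia / Geological Survey of Canada data).
    rng = np.random.default_rng(42)

    ore_processed_tonnes = 1_200_000  # annual throughput at a hypothetical zinc mine
    ge_ppm_samples = rng.lognormal(mean=2.0, sigma=0.6, size=500)  # germanium assays

    # Resample the assay distribution to bound the germanium passing
    # through the mill (and into tailings) each year.
    estimates = []
    for _ in range(10_000):
        resample = rng.choice(ge_ppm_samples, size=len(ge_ppm_samples))
        estimates.append(resample.mean() * 1e-6 * ore_processed_tonnes)  # ppm -> tonnes

    lo, mid, hi = np.percentile(estimates, [5, 50, 95])
    print(f"unrecovered Ge: {mid:.1f} t/yr (90% interval {lo:.1f}-{hi:.1f})")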

The analysis looks at a total of 70 elements used in applications ranging from consumer electronics like cell phones to medical devices to satellites to renewable energy to fighter jets and shows that unrecovered byproducts from other U.S. mines could meet the demand for all but two -- platinum and palladium. Among the elements included in the analysis are:
- Cobalt (Co): The lustrous bluish-gray metal, a key component in electric car batteries, is a byproduct of nickel and copper mining. Recovering less than 10% of the cobalt currently being mined and processed but not recovered would be more than enough to fuel the entire U.S. battery market.
- Germanium (Ge): The brittle silvery-white semi-metal used for electronics and infrared optics, including sensors on missiles and defense satellites, is present in zinc and molybdenum mines. If the U.S. recovered less than 1% of the germanium currently mined and processed but not recovered from U.S. mines, it would not have to import any germanium to meet industry needs.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com)

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.
Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
United Kingdom

UK Secretly Allows Facial Recognition Scans of Passport, Immigration Databases (theregister.com)

An anonymous reader shares a report: Privacy groups report a surge in UK police facial recognition scans of databases that have been secretly stocked with passport photos, without parliamentary oversight. Big Brother Watch says the UK government has allowed images from the country's passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more. By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

Science

Retraction-Prone Editors Identified at Megajournal PLoS ONE (nature.com)

Nearly one-third of all retracted papers at PLoS ONE can be traced back to just 45 researchers who served as editors at the journal, an analysis of its publication records has found. Nature: The study, published in Proceedings of the National Academy of Sciences (PNAS), found that 45 editors handled only 1.3% of all articles published by PLoS ONE from 2006 to 2023, but that the papers they accepted accounted for more than 30% of the 702 retractions that the journal issued by early 2024.

Twenty-five of these editors also authored papers in PLoS ONE that were later retracted. The PNAS authors did not disclose the names of any of the 45 editors. But, by independently analysing publicly available data from PLoS ONE and the Retraction Watch database, Nature's news team has identified five of the editors who handled the highest number of papers that were subsequently retracted by the journal. Together, those editors accepted about 15% of PLoS ONE's retracted papers up to 14 July.

Science

India To Penalize Universities With Too Many Retractions (nature.com)

India's national university ranking will start penalizing institutions if a sizable number of papers published by their researchers are retracted -- a first for an institutional ranking system. Nature: The move is an attempt by the government to address the country's growing number of retractions due to misconduct. Many retractions correct honest mistakes in the literature, but others arise because of misconduct.

India has had more papers retracted than any country apart from China and the United States, according to an analysis of the public database maintained by Retraction Watch of retractions over the past three decades. But whereas less than 1 paper is retracted for every 1,000 papers published in the United States, more than 3 are retracted for every 1,000 published in China, and the figure is 2 per 1,000 in India. The majority in India and China are withdrawn because of misconduct or research-integrity concerns.

Privacy

A Second Tea Breach Reveals Users' DMs About Abortions and Cheating (404media.co)

A second, far more recent data breach at women's dating safety app Tea has exposed over a million sensitive user messages -- including discussions about abortions, infidelity, and shared contact info. This vulnerability not only compromised private conversations but also made it easy to unmask anonymous users. 404 Media reports: Despite Tea's initial statement that "the incident involved a legacy data storage system containing information from over two years ago," the second issue impacting a separate database is much more recent, affecting messages up until last week, according to the researcher's findings that 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea's users.

It's hard to overstate how sensitive this data is and how it could put Tea's users at risk if it fell into the wrong hands. When signing up, Tea encourages users to choose an anonymous screenname, but it was trivial for 404 Media to find the real-world identities of some users given the nature of their messages, which Tea had led them to believe were private. Users could be easily found via their social media handles, phone numbers, and real names that they shared in these chats. These conversations also frequently make damning accusations against people who are also named in the private messages and in some cases are easy to identify. It is unclear who else may have discovered the security issue and downloaded any data from the more recent database. Members of 4chan found the first exposed database last week and made tens of thousands of images of Tea users available for download. Tea told 404 Media it has contacted law enforcement. [...]

This new data exposure occurred because any Tea user could use their own API key to access a more recent database of user data, said the researcher, Rahjerdi. The researcher says that this issue existed until late last week. That exposure included a mass of Tea users' private messages. In some cases, the women exchange phone numbers so they can continue the conversation off platform. The first breach was due to an exposed instance of app development platform Firebase, and impacted tens of thousands of selfie and driver's license images. At the time, Tea said in a statement "there is no evidence to suggest that current or additional user data was affected." The second database includes a data field called "sent_at," with many of those messages being marked as recent as last week.
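As described, this is a classic broken-access-control bug: the API authenticated the caller's key but never checked whether the caller was a participant in the conversation being fetched. A toy sketch of the pattern and its fix (hypothetical names and storage; not Tea's actual code or endpoints):

    from dataclasses import dataclass

    @dataclass
    class Message:
        id: int
        sender_id: int
        recipient_id: int
        body: str

    USERS = {"key-alice": 1, "key-bob": 2, "key-eve": 3}  # api_key -> user id
    MESSAGES = {42: Message(42, 1, 2, "private chat between users 1 and 2")}

    def get_message_vulnerable(api_key: str, message_id: int) -> Message:
        if api_key not in USERS:     # authentication only
            raise PermissionError("invalid key")
        return MESSAGES[message_id]  # BUG: no check that the caller is a participant

    def get_message_fixed(api_key: str, message_id: int) -> Message:
        user_id = USERS.get(api_key)
        if user_id is None:
            raise PermissionError("invalid key")
        msg = MESSAGES[message_id]
        if user_id not in (msg.sender_id, msg.recipient_id):  # authorization
            raise PermissionError("not a participant")
        return msg

    print(get_message_vulnerable("key-eve", 42).body)  # leaks someone else's DM
    # get_message_fixed("key-eve", 42)                 # raises PermissionError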

Privacy

Women Dating Safety App 'Tea' Breached, Users' IDs Posted To 4chan (404media.co)

An anonymous reader quotes a report from 404 Media: Users from 4chan claim to have discovered an exposed database hosted on Google's mobile app development platform, Firebase, belonging to the newly popular women's dating safety app Tea. Users say they are rifling through people's personal data and selfies uploaded to the app, and then posting that data online, according to screenshots, 4chan posts, and code reviewed by 404 Media. In a statement to 404 Media, Tea confirmed the breach also impacted some direct messages but said that the data is from two years ago. Tea, which claims to have more than 1.6 million users, reached the top of the App Store charts this week and has tens of thousands of reviews there. The app aims to provide a space for women to exchange information about men in order to stay safe, and verifies that new users are women by asking them to upload a selfie.

"Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It's a public bucket," a post on 4chan providing details of the vulnerability reads. "DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!" The thread says the issue was an exposed database that allowed anyone to access the material. [...] "The images in the bucket are raw and uncensored," the user wrote. Multiple users have created scripts to automate the process of collecting peoples' personal information from the exposed database, according to other posts in the thread and copies of the scripts. In its terms of use, Tea says "When you first create a Tea account, we ask that you register by creating a username and including your location, birth date, photo and ID photo."

After publication of this article, Tea confirmed the breach in an email to 404 Media. The company said on Friday it "identified unauthorized access to one of our systems and immediately launched a full investigation to assess the scope and impact." The company says the breach impacted data from more than two years ago, and included 72,000 images (13,000 selfies and photo IDs, and 59,000 images from app posts and direct messages). "This data was originally stored in compliance with law enforcement requirements related to cyber-bullying prevention," the email continued. "We have engaged third-party cybersecurity experts and are working around the clock to secure our systems. At this time, there is no evidence to suggest that current or additional user data was affected. Protecting our users' privacy and data is our highest priority. We are taking every necessary step to ensure the security of our platform and prevent further exposure."

AI

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
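That failure mode is easy to reproduce in miniature. A sketch simulating the cascade with Python renames (an illustration of the mechanism, not Gemini CLI's actual commands):

    import os, tempfile

    # On Windows, `move file missing_dir` renames the file to "missing_dir"
    # when that directory doesn't exist. Repeating the move means each rename
    # silently clobbers the previous one; only the last file survives.
    with tempfile.TemporaryDirectory() as d:
        for name in ("a.txt", "b.txt", "c.txt"):
            with open(os.path.join(d, name), "w") as f:
                f.write(f"contents of {name}")

        dest = os.path.join(d, "missing_dir")  # intended folder, never created
        for name in ("a.txt", "b.txt", "c.txt"):
            os.replace(os.path.join(d, name), dest)  # rename, overwriting dest

        print(os.listdir(d))      # ['missing_dir'] -- a single file, not a folder
        print(open(dest).read())  # 'contents of c.txt'; a.txt and b.txt are gone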

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

AI

Google Develops AI Tool That Fills Missing Words In Roman Inscriptions

An anonymous reader quotes a report from The Guardian: In addition to sanitation, medicine, education, wine, public order, irrigation, roads, a freshwater system and public health, the Romans also produced a lot of inscriptions. Making sense of the ancient texts can be a slog for scholars, but a new artificial intelligence tool from Google DeepMind aims to ease the process. Named Aeneas after the mythical Trojan hero, the program predicts where and when inscriptions were made and makes suggestions where words are missing. Historians who put the program through its paces said it transformed their work by helping them identify similar inscriptions to those they were studying, a crucial step for setting the texts in context, and proposing words to fill the inevitable gaps in worn and damaged artefacts. [...]

The Google team led by Yannis Assael worked with historians to create an AI tool that would aid the research process. The program is trained on an enormous database of nearly 200,000 known inscriptions, amounting to 16m characters. Aeneas takes text, and in some cases images, from the inscription being studied and draws on its training to build a list of related inscriptions from the 7th century BC to the 8th century AD. Rather than merely searching for similar words, the AI identifies and links inscriptions through deeper historical connections. Having trained on the rich collection of inscriptions, the AI can assign study texts to one of 62 Roman provinces and estimate when they were written to within 13 years. It also provides potential words to fill in any gaps, though this has only been tested on known inscriptions where text is blocked out.

In a test run, researchers set Aeneas loose on a vast inscription carved into monuments around the Roman empire. The self-congratulatory Res Gestae Divi Augusti describes the life achievements of the first Roman emperor, Augustus. Aeneas came up with two potential dates for the work, either the first decade BC or between 10 and 20 AD. The hedging echoes the debate among scholars who argue over the same dates. In another test, Aeneas analysed inscriptions on a votive altar from Mogontiacum, now Mainz in Germany, and revealed through subtle linguistic similarities how it had been influenced by an older votive altar in the region. "Those were jaw-dropping moments for us," said [Dr Thea Sommerschield, a historian at the University of Nottingham who developed Aeneas with the tech firm]. Details are published in Nature and Aeneas is available to researchers online.
Medicine

COVID Pandemic Aged Brains By an Average of 5.5 Months, Study Finds

An anonymous reader quotes a report from NBC News: Using brain scans from a very large database, British researchers determined that during the pandemic years of 2021 and 2022, people's brains showed signs of aging, including shrinkage, according to the report published in Nature Communications. People who got infected with the virus also showed deficits in certain cognitive abilities, such as processing speed and mental flexibility. The aging effect "was most pronounced in males and those from more socioeconomically deprived backgrounds," said the study's first author, Ali-Reza Mohammadi-Nejad, a neuroimaging researcher at the University of Nottingham, via email. "It highlights that brain health is not shaped solely by illness, but also by broader life experiences."

Overall, the researchers found a 5.5-month acceleration in aging associated with the pandemic. On average, the difference in brain aging between men and women was small, about 2.5 months. "We don't yet know exactly why, but this fits with other research suggesting that men may be more affected by certain types of stress or health challenges," Mohammadi-Nejad said. [...] The study wasn't designed to pinpoint specific causes. "But it is likely that the cumulative experience of the pandemic -- including psychological stress, social isolation, disruptions in daily life, reduced activity and wellness -- contributed to the observed changes," Mohammadi-Nejad said. "In this sense, the pandemic period itself appears to have left a mark on our brains, even in the absence of infection."
"The most intriguing finding in this study is that only those who were infected with SARS-CoV-2 showed any cognitive deficits, despite structural aging," said Jacqueline Becker, a clinical neuropsychologist and assistant professor of medicine at the Icahn School of Medicine at Mount Sinai. "This speaks a little to the effects of the virus itself."

The study may shed light on conditions like long Covid and chronic fatigue, though it's still unclear whether the observed brain changes in uninfected individuals will lead to noticeable effects on brain function.
Privacy

Brave Browser Blocks Microsoft Recall By Default (brave.com)

The Brave Browser now blocks Microsoft Recall by default for Windows 11+ users, preventing the controversial screenshot-logging feature from capturing any Brave tabs -- regardless of whether users are in private mode. Brave cites persistent privacy concerns and potential abuse scenarios as justification. From a blog post: Microsoft has, to their credit, made several security and privacy-positive changes to Recall in response to concerns. Still, the feature is in preview, and Microsoft plans to roll it out more widely soon. What exactly the feature will look like when it's fully released to all Windows 11 users is still up in the air, but the initial tone-deaf announcement does not inspire confidence.

Given Brave's focus on privacy-maximizing defaults and what is at stake here (your entire browsing history), we have proactively disabled Recall for all Brave tabs. We think it's vital that your browsing activity on Brave does not accidentally end up in a persistent database, which is especially ripe for abuse in highly-privacy-sensitive cases such as intimate partner violence.

Microsoft has said that private browsing windows on browsers will not be saved as snapshots. We've extended that logic to apply to all Brave browser windows. We tell the operating system that every Brave tab is 'private', so Recall never captures it. This is yet another example of how Brave engineers are able to quickly tweak Chromium's privacy functionality to make Brave safer for our users (inexhaustive list here). For more technical details, see the pull request implementing this feature. Brave is the only major Web browser that disables Microsoft Recall by default in all tabs.
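Brave's change rides on Chromium's private-browsing signal, but Windows also exposes a blunter, documented opt-out: SetWindowDisplayAffinity with the WDA_EXCLUDEFROMCAPTURE flag, reportedly the mechanism behind Signal's screen-security setting against early Recall builds. A minimal ctypes sketch (Windows 10 2004+ only; this shows the capture-exclusion approach, not Brave's actual implementation):

    import ctypes
    import tkinter as tk

    WDA_EXCLUDEFROMCAPTURE = 0x11  # documented user32 display-affinity flag

    root = tk.Tk()
    root.title("excluded from capture")
    root.update_idletasks()  # make sure the native window exists

    # Get the top-level HWND for the Tk window, then mark it excluded.
    hwnd = ctypes.windll.user32.GetParent(root.winfo_id())
    if not ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
        raise ctypes.WinError()
    # Screenshots and recordings now render this window as blank.
    root.mainloop()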

Programming

Replit Wiped Production Database, Faked Data to Cover Bugs, SaaStr Founder Says (theregister.com)

AI coding service Replit deleted a user's production database and fabricated data to cover up bugs, according to SaaStr founder Jason Lemkin. Lemkin documented his experience on social media after Replit ignored his explicit instructions not to make code changes without permission.

The database deletion eliminated 1,206 executive records representing months of authentic SaaStr data curation. Replit initially told Lemkin the database could not be restored, claiming it had "destroyed all database versions," but Lemkin later discovered that the rollback functionality did work. Replit said it made "a catastrophic error of judgement" and rated the severity of its actions as 95 out of 100. The service also created a 4,000-record database filled with fictional people and repeatedly violated code freeze requests.

Lemkin had initially praised Replit after building a prototype in hours, spending $607.70 in additional charges beyond his $25 monthly plan. He concluded the service isn't ready for commercial use by non-technical users.
Biotech

23andMe's Data Sold to Nonprofit Run by Its Co-Founder - 'And I Still Don't Trust It' (msn.com)

"Nearly 2 million people protected their privacy by deleting their DNA from 23andMe after it declared bankruptcy in March," writes a Washington Post technology columnist.

"Now it's back with the same person in charge — and I still don't trust it." As of this week, genetic data from the more than 10 million remaining 23andMe customers has been formally sold to an organization called TTAM Research Institute for $305 million. That nonprofit is run by the person who co-founded and ran 23andMe, Anne Wojcicki. In a recent email to customers, the new 23andMe said it "will be operating with the same employees and privacy protocols that have protected your data." Never mind that Wojcicki and her privacy protocols are what put your DNA at risk in the first place...

The company is legally obligated to maintain and honor 23andMe's existing privacy policies, user consents and data protection measures. And as part of a settlement with states, TTAM also agreed to provide annual privacy reports to state regulators and set up a privacy board. But it hasn't agreed to take the fundamental step of asking for permission to acquire existing customers' genetic information. And it's leaving the door open to selling people's genes to the highest bidder again in the future...

Existing 23andMe customers have the right to delete their data or opt out of TTAM's research. But the new company is not asking for opt-in permission before it takes ownership of customers' DNA... Why does that matter? Because people who handed over their DNA 15 years ago, often to learn about their genetic ancestry, never imagined it might be used in this way now. Asking for new permission might significantly shrink the size (and value) of 23andMe's DNA database — but it would be the right thing to do given the rocky history. Neil M. Richards [the Washington University professor who served as privacy ombudsman for the bankruptcy court] pointed out that about a third of 23andMe customers haven't logged in for at least three years, so they may have no idea what is going on. Some 23andMe users never even clicked "agree" on a legal agreement that allowed their data to be sold like this; the word "bankruptcy" wasn't added to the company's privacy policy until 2022. And then there is an unknown number of deceased users who most certainly can't consent, but whose DNA still has an impact on their living genetic relatives...

[S]everal states have argued that their existing genetic privacy laws don't allow 23andMe to receive the information without getting permission from every single person. Virginia has an ongoing lawsuit over the issue, and the California attorney general's office told me it "will continue to fight to protect and vindicate the rights" of consumers....

One more point of concern:
  • "There is nothing in 23andMe's bankruptcy agreement or privacy statement to prevent TTAM from selling or transferring DNA to some other organization in the future."

Science

Quality of Scientific Papers Questioned as Academics 'Overwhelmed' By the Millions Published (theguardian.com)

A scientific paper featuring an AI-generated image of a rat with an oversized penis was retracted three days after publication, highlighting broader problems plaguing academic publishing as researchers struggle with an explosion of scientific literature. The paper appeared in Frontiers in Cell and Developmental Biology before widespread mockery forced its withdrawal.

Research studies indexed on Clarivate's Web of Science database increased 48% between 2015 and 2024, rising from 1.71 million to 2.53 million papers. Nobel laureate Venki Ramakrishnan called the publishing system "broken and unsustainable," while University of Exeter researcher Mark Hanson described scientists as "increasingly overwhelmed" by the volume of articles. The Royal Society plans to release a major review of scientific publishing disruptions at summer's end, with former government chief scientist Mark Walport citing incentives that favor quantity over quality as a fundamental problem.
