Privacy

'TotalRecall Reloaded' Tool Finds a Side Entrance To Windows 11 Recall Database (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Two years ago, Microsoft launched its first wave of "Copilot+" Windows PCs with a handful of exclusive features that could take advantage of the neural processing unit (NPU) hardware being built into newer laptop processors. These NPUs enable AI and machine learning features that run locally rather than in the cloud, theoretically enhancing security and privacy. One of the first Copilot+ features was Recall, which promised to track all your PC usage via screenshots to help you remember your past activity. But as originally implemented, Recall was neither private nor secure; it stored its screenshots, plus a giant database of all user activity, in totally unencrypted files on the user's disk, making it trivial for anyone with remote or local access to grab days, weeks, or even months of sensitive data, depending on the age of the user's Recall database.

After journalists and security researchers discovered and detailed these flaws, Microsoft delayed the Recall rollout by almost a year and substantially overhauled its security. All locally stored data would now be encrypted and viewable only with Windows Hello authentication; the feature now did a better job detecting and excluding sensitive information, including financial information, from its database; and Recall would be turned off by default, rather than enabled on every PC that supported it. The reconstituted Recall was a big improvement, but having a feature that records the vast majority of your PC usage is still a security and privacy risk. Security researcher Alexander Hagenah was the author of the original "TotalRecall" tool that made it trivially simple to grab the Recall information on any Windows PC, and an updated "TotalRecall Reloaded" version exposes what Hagenah believes are additional vulnerabilities.

The problem, as detailed by Hagenah on the TotalRecall GitHub page, isn't with the security around the Recall database, which he calls "rock solid." The problem is that, once the user has authenticated, the system passes Recall data to another system process called AIXHost.exe, and that process doesn't benefit from the same security protections as the rest of Recall. "The vault is solid," Hagenah writes. "The delivery truck is not." The TotalRecall Reloaded tool uses an executable file to inject a DLL file into AIXHost.exe, something that can be done without administrator privileges. It then waits in the background for the user to open Recall and authenticate using Windows Hello. Once this is done, the tool can intercept screenshots, OCR'd text, and other metadata that Recall sends to the AIXHost.exe process, which can continue even after the user closes their Recall session.

"The VBS enclave won't decrypt anything without Windows Hello," Hagenah writes. "The tool doesn't bypass that. It makes the user do it, silently rides along when the user does it, or waits for the user to do it." A handful of tasks, including grabbing the most recent Recall screenshot, capturing select metadata about the Recall database, and deleting the user's entire Recall database, can be done with no Windows Hello authentication. Once authenticated, Hagenah says the TotalRecall Reloaded tool can access both new information recorded to the Recall database as well as data Recall has previously recorded.
"We appreciate Alexander Hagenah for identifying and responsibly reporting this issue. After careful investigation, we determined that the access patterns demonstrated are consistent with intended protections and existing controls, and do not represent a bypass of a security boundary or unauthorized access to data," a Microsoft spokesperson told Ars. "The authorization period has a timeout and anti-hammering protection that limit the impact of malicious queries."
AI

OpenAI's Big Codex Update Is a Direct Shot At Claude Code (theverge.com) 5

OpenAI is updating Codex with more agent-like capabilities, positioning it as a more direct rival to Anthropic's Claude Code. Some of the new features include the ability to operate macOS desktop apps, browse the web inside the app, generate images, use new workplace plug-ins, and remember useful context from past tasks. The Verge reports: Codex will now be able to operate desktop apps on your computer, OpenAI says in a blog post announcing the update. It can work in the background, meaning it won't interfere with your own work in other apps, and multiple agents can work in parallel. For developers, OpenAI says "this is helpful for testing and iterating on frontend changes, testing apps, or working in apps that don't expose an API." The feature will start rolling out to Codex desktop app users signed in with ChatGPT today and will initially be limited to macOS. OpenAI did not indicate a timeline for when use will expand to other operating systems. EU users will also have to wait, it said, adding that the update will roll out to users there "soon."

Codex is also getting the ability to generate and iterate on images with gpt-image-1.5, new plug-ins for tools like GitLab, Atlassian Rovo, and Microsoft Suite, and native web browsing through an in-app browser, "where you can comment directly on pages to provide precise instructions to the agent." OpenAI also said it will be easier to automate tasks, with users able to re-use existing conversation threads and Codex now able to schedule future work for itself and wake up automatically to continue a long-term task. Codex will also be getting a memory feature allowing it to remember useful context from past experience, such as personal preferences, corrections, and information that took time to gather. OpenAI said it hopes the opt-in feature, which will be released as a preview, will help future tasks complete faster and to a quality that previously required detailed custom instructions. The personalization features will roll out to Enterprise, Edu, and EU users "soon."

Government

Google, Pentagon Discuss Classified AI Deal (reuters.com) 19

An anonymous reader quotes a report from Reuters: Alphabet's Google is negotiating an agreement with the Department of Defense that would allow the Pentagon to deploy its Gemini AI models in classified settings, the Information reported on Thursday, citing two people with direct knowledge of the discussions. The two parties are discussing an agreement that would allow the Pentagon to use Google's AI for all lawful uses, according to the report.

During the negotiations, Google has proposed additional language in its contract with the department to prevent its AI from being used for domestic mass surveillance or autonomous weapons without appropriate human control, the Information reported. The Pentagon will continue to deploy frontier AI capabilities through strong industry partnerships across all classification levels, a Pentagon official said, without confirming any talks with Google.

AI

Anthropic Rolls Out Claude Opus 4.7, an AI Model That Is Less Risky Than Mythos 40

Anthropic released Claude Opus 4.7, calling it its strongest generally available model and an improvement over Opus 4.6 in areas like software engineering, instruction-following, tool use, and agentic coding. But the company says it is "less broadly capable" than the restricted Claude Mythos Preview, "which Anthropic rolled out to a select group of companies as part of a new cybersecurity initiative called Project Glasswing earlier this month," reports CNBC. From the report: The launch of Claude Opus 4.7 on Thursday comes after Anthropic launched Claude Opus 4.6 in February. Anthropic said the new model outperforms Claude Opus 4.6 across many use cases, including industry benchmarks for agentic coding, multidisciplinary reasoning, scaled tool use and agentic computer use, according to a release. Anthropic said it experimented with efforts to "differentially reduce" Claude Opus 4.7's cyber capabilities during training.

The company encouraged security professionals who are interested in using the model for "legitimate cybersecurity purposes" to apply through a formal verification program. Claude Opus 4.7 is available across all of Anthropic's Claude products, its application programming interface and through cloud providers Microsoft, Google and Amazon. The new model is the same price as Claude Opus 4.6, Anthropic said.
Technology

Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required (uploadvr.com) 51

An anonymous reader quotes a report from UploadVR: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four are Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, and Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise.

Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb.

To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.

Robotics

Boston Dynamics' Robot Dog Can Now Read Gauges, Spot Spills, and Reason (ieee.org) 91

Boston Dynamics has integrated a Google DeepMind model into its robot dog Spot, giving it more autonomous reasoning for industrial inspections like spotting spills and reading gauges. Spot can also now recognize when to call on other AI tools. IEEE Spectrum reports: Boston Dynamics is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is announcing that its quadruped robot Spot is now equipped with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that brings usability and intelligence to complex tasks.

[T]he focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot is now able to autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding what's going on in the environment around it.
"Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," Marco da Silva, vice president and general manager of Spot at Boston Dynamics, says in a press release. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously."

You can watch a demo of Spot's new capabilities on YouTube.
AI

Cal.com Is Going Closed Source Because of AI 93

Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.

[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."

While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."
Businesses

Snapchat Blames AI As It Cuts 1,000 Jobs 43

Snap is laying off about 1,000 employees, or 16% of its workforce, while closing 300 open roles as it tries to cut costs and push toward profitability with more AI-driven efficiency. "While these changes are necessary to realize Snap's long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers," CEO Evan Spiegel wrote in a memo, which was included in the company's 8-K filing (PDF). "We have already witnessed small squads leveraging AI tools to drive meaningful progress across several important initiatives." The Verge reports: The changes are expected to save Snap $500 million by the second half of 2026. Snap had about 5,261 full-time employees as of December 2025, and now joins the growing list of tech companies that have already announced significant layoffs this year, including Meta, Amazon, Oracle, GoPro, and Jack Dorsey's Block.

"Last fall, I described Snap as facing a crucible moment, requiring a new way of working that is faster and more efficient, while pivoting towards profitable growth," Spiegel wrote. "Over the past several months, we have carefully reviewed the work required to best serve our community and partners, and made tough choices to prioritize the investments we believe are most likely to create long-term value."
Businesses

Struggling Shoe Retailer Allbirds Pivots To AI, Stock Explodes More Than 700% 76

Allbirds made a surprise announcement this morning: it's pivoting from sustainable shoes to AI compute infrastructure, rebranding as NewBird AI after selling its brand assets and closing its U.S. full-price stores. The move sent shares soaring more than 700%. CNBC reports: The move boosted shares of the minuscule market cap company -- it was valued at about $21 million at Tuesday's close -- by more than 700%. The shares, which were under $3 a day ago, jumped to above $17. [...] The new company, which expects to be called NewBird AI, announced a deal to raise up to $50 million in funding, expected to close in the second quarter of 2026. Allbirds announced a deal with American Exchange Group to sell its intellectual property and other assets for $39 million last month. "The Company will initially seek to acquire high-performance, low-latency AI compute hardware and provide access under long-term lease arrangements, meeting customer demand that spot markets and hyperscalers are unable to reliably service," the company said in the announcement.
The Internet

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out (404media.co) 48

alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works.

The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore it when a user asks to opt out of cookie tracking. California has stringent, well-defined privacy legislation thanks to its California Consumer Privacy Act (CCPA), which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking.

According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight."
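The non-compliance pattern the audit describes is mechanical enough to check programmatically. As a rough sketch (the function name and header representation here are our own, not webXray's tooling): the client signals the opt-out with a `Sec-GPC: 1` request header, and a violating server nonetheless answers with a `Set-Cookie` for an advertising cookie such as `IDE`.

```javascript
// Minimal sketch of the check described in the audit: did the server set an
// advertising cookie even though the browser sent the GPC opt-out signal?
// requestHeaders: plain object of lowercase request header names to values.
// responseHeaders: array of [name, value] pairs (Set-Cookie can repeat).
function flagsGpcViolation(requestHeaders, responseHeaders) {
  // "sec-gpc: 1" is how the browser encodes the opt-out on the wire.
  const sentOptOut = (requestHeaders["sec-gpc"] || "").trim() === "1";

  const setCookies = responseHeaders
    .filter(([name]) => name.toLowerCase() === "set-cookie")
    .map(([, value]) => value);

  // "IDE" is the advertising cookie named in the audit; a real auditor
  // would match a broader list of known ad cookies.
  const setsAdCookie = setCookies.some(
    (cookie) => cookie.split("=")[0].trim() === "IDE"
  );

  return sentOptOut && setsAdCookie;
}
```

For example, `flagsGpcViolation({ "sec-gpc": "1" }, [["Set-Cookie", "IDE=abc123; SameSite=None"]])` flags a violation, while the same response with no opt-out signal sent does not.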

The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent, and its non-compliance was more comprehensive. "Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking data which contains no GPC check at all.
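To illustrate the guard the audit says is missing: browsers expose the GPC signal to page scripts as the boolean `navigator.globalPrivacyControl`, so a compliant tracking snippet could consult it before firing. This is a hypothetical sketch of such a gate, written against a passed-in navigator-like object so the logic stands alone; it is not Meta's actual pixel code.

```javascript
// Decide whether a tracking pixel may fire, honoring Global Privacy Control.
// `nav` is a navigator-like object; in a browser you'd pass `navigator`.
function shouldFireTracker(nav) {
  // The GPC proposal exposes a boolean: an explicit `true` means the user
  // has opted out. Anything else (false, or undefined on browsers without
  // GPC support) is treated as no opt-out signal.
  return nav.globalPrivacyControl !== true;
}
```

The point of the audit's finding is that the snippet publishers are told to install performs no such check at all, so the pixel fires regardless of the user's stated preference.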

Chrome

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills' 22

Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users with their language set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash (/) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome.

The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google.

The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.
Social Networks

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says (reuters.com) 54

UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said.

[...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

AI

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else 64

An anonymous reader quotes a report from TechCrunch: AI experts' and the public's opinions on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per Ipsos data cited in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.
Apple

Apple AI Glasses Will Rival Meta's With Several Styles, Oval Cameras (bloomberg.com) 56

Bloomberg's Mark Gurman reports that Apple is developing display-free AI smart glasses aimed at rivaling Meta's Ray-Bans, with multiple frame styles, a distinctive oval camera design, and tight iPhone integration. "The idea is to unveil the product at the end of 2026 or early the following year, with the actual release coming in 2027," writes Gurman. From the report: Like Meta's offering, Apple's glasses will be designed to handle everyday uses: capturing photos and videos, syncing with a smartphone for editing and sharing, handling phone calls, listening to notifications, playing music, and enabling hands-free interaction via a voice assistant. In Apple's case, that assistant will be a significantly upgraded Siri coming in iOS 27. The glasses are part of a broader, three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. Each device is designed to leverage computer vision to interpret the user's surroundings and feed contextual awareness into Siri and Apple Intelligence. That will enable features like improved turn-by-turn map directions and visual reminders.

When Apple typically enters a new product category, it offers clear advantages over what's currently available. We saw this with the original iPod, iPhone, iPad and Apple Watch -- and, even though it was a flop, the Vision Pro. That approach won't be as obvious with Apple's upcoming foldable iPhone, but we should see it on full display with the glasses. According to employees working on the project, Apple's strategy is to outdo competitors by tightly integrating the glasses with the iPhone and offering a higher-end build. While Meta relies heavily on partner EssilorLuxottica SA for frames, Apple is unsurprisingly planning to go it alone in terms of design. That also should set it apart from Alphabet Inc.'s Google and Samsung Electronics Co., which are leaning on Warby Parker.

Apple's design team has whipped up at least four different styles and plans to launch some or all of them, I'm told, as well as many color options. The latest units are made from a high-end material called acetate, which is known to be more durable and luxurious than the standard plastic used by many brands. Here are the designs in testing:
- A large rectangular frame, reminiscent of Ray-Ban Wayfarers
- A slimmer rectangular design, similar to the glasses worn by Apple Chief Executive Officer Tim Cook
- Larger oval or circular frames
- A smaller, more refined oval or circular option

Crime

FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman's SF Mansion (sfchronicle.com) 26

The FBI searched the Texas home of a 20-year-old man accused of throwing a Molotov cocktail at Sam Altman's San Francisco residence. Authorities say the suspect also made threats at OpenAI's headquarters, and reports indicate he had written extensively about fears over AI and opposition to AI executives.

The suspect reportedly authored a Substack blog and was a member of the Discord server PauseAI, an activist group focused on banning the development of the most powerful AI models to protect the public. In one post, they wrote: "These machines have already shown themselves to be unaligned with the interest of the people creating them. Models have often been found lying, cheating on tasks, and blackmailing their own creators whenever convenient; let alone the broader question of aligning them to whatever general 'human interest' may be." The Houston Chronicle reports: The search happened hours before the Justice Department charged 20-year-old Daniel Moreno-Gama with possession of an unregistered firearm and damage and destruction of property by means of explosives. An FBI spokesperson on Monday morning confirmed agents were executing a search warrant in Spring, but provided no other information.

Around the same time, FOX News reported the search was being conducted at the home of Daniel Moreno-Gama, 20, who last week was arrested by San Francisco police on suspicion of attempted murder, making criminal threats, and possession of a destructive device. The charges were first reported by the Associated Press. When Moreno-Gama was arrested Friday, he was carrying a document that "identified views opposed to Artificial Intelligence (AI) and the executives of various AI companies," the Associated Press reported. Moreno-Gama has no criminal history in Harris or Montgomery counties, according to public records. [...] Agents had left the cul-de-sac by 1 p.m. It was unclear if they removed any items from the house.
Another incident occurred outside Sam Altman's residence early Sunday morning. "Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reports The San Francisco Standard, citing reports from the local police department. Two suspects were arrested and booked for negligent discharge.

UPDATE: The suspect has been charged with attempted murder.
AI

Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings 91

According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.
AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Programming

Will Some Programmers Become 'AI Babysitters'? (linkedin.com) 150

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

AI

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development (msn.com) 162

Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post: Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God."

"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...

Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said.

Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.

"Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University."
Crime

Sam Altman's Home Targeted a Second Time, Two Suspects Arrested (sfstandard.com) 44

"Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reports The San Francisco Standard, citing reports from the local police department:

The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge... [The person in the passenger seat] put their hand out the window and appeared to fire a round on the Lombard side of the property, according to a police report on the incident, which cited surveillance footage and the compound's security personnel, who reported hearing a gunshot. The car then fled, and a camera captured its license plate, which later led police to take possession of the vehicle, according to the report... A search of the residence by officers turned up three firearms, according to police.
The incident follows Friday's arrest of a man who allegedly threw a Molotov cocktail at Altman's house. The San Francisco Standard also notes that in November, "threats from a 27-year-old anti-AI activist prompted the lockdown of OpenAI's San Francisco offices." Sam Kirchner, whose whereabouts have been unknown since Nov. 21, was in the midst of a mental health crisis when he threatened to go to the company's offices to "murder people," according to callers who notified police that day.