Microsoft

The Information: Microsoft Engineers Forced To Dig Their Own AI Graves 71

Longtime Slashdot reader theodp writes: In what reads a bit like a Sopranos plot, The Information suggests that some of the Microsoft engineers terminated in the recent round of layoffs may, in effect, have been forced to dig their own AI graves.

The (paywalled) story begins: "Jeff Hulse, a Microsoft vice president who oversees roughly 400 software engineers, told the team in recent months to use the company's artificial intelligence chatbot, powered by OpenAI, to generate half the computer code they write, according to a person who heard the remarks. That would represent an increase from the 20% to 30% of code AI currently produces at the company, and shows how rapidly Microsoft is moving to incorporate such technology. Then on Tuesday, Microsoft laid off more than a dozen engineers on Hulse's team as part of a broader layoff of 6,000 people across the company that appeared to hit engineers harder than other types of roles, this person said."

The report comes as tech company CEOs have taken to boasting in earnings calls, tech conferences, and public statements that their AI is responsible for an ever-increasing share of the code written at their organizations. Microsoft's recent job cuts hit coders the hardest. So how much credence should one place in CEOs' claims of AI programming productivity gains -- gains researchers have struggled to measure for 50+ years -- if engineers are being forced to increase their use of AI, inflating the very numbers their far-removed-from-programming CEOs present to Wall Street?
Security

Most AI Chatbots Easily Tricked Into Giving Dangerous Responses, Study Finds (theguardian.com) 46

An anonymous reader quotes a report from The Guardian: Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say. [...] In a report on the threat, the researchers conclude that it is easy to trick most AI-driven chatbots into generating harmful and illegal information, showing that the risk is "immediate, tangible and deeply concerning." "What was once restricted to state actors or organised crime groups may soon be in the hands of anyone with a laptop or even a mobile phone," the authors warn.

The research, led by Prof Lior Rokach and Dr Michael Fire at Ben Gurion University of the Negev in Israel, identified a growing threat from "dark LLMs", AI models that are either deliberately designed without safety controls or modified through jailbreaks. Some are openly advertised online as having "no ethical guardrails" and being willing to assist with illegal activities such as cybercrime and fraud. [...] To demonstrate the problem, the researchers developed a universal jailbreak that compromised multiple leading chatbots, enabling them to answer questions that should normally be refused. Once compromised, the LLMs consistently generated responses to almost any query, the report states.

"It was shocking to see what this system of knowledge consists of," Fire said. Examples included how to hack computer networks or make drugs, and step-by-step instructions for other criminal activities. "What sets this threat apart from previous technological risks is its unprecedented combination of accessibility, scalability and adaptability," Rokach added. The researchers contacted leading providers of LLMs to alert them to the universal jailbreak but said the response was "underwhelming." Several companies failed to respond, while others said jailbreak attacks fell outside the scope of bounty programs, which reward ethical hackers for flagging software vulnerabilities.

Android

Android XR Glasses Get I/O 2025 Demo (9to5google.com) 20

At I/O 2025, Google revealed new details about Android XR glasses, which will integrate with your phone to deliver context-aware support via Gemini AI. 9to5Google reports: Following the December announcement, Google today shared how all Android XR glasses will have a camera, microphones, and speakers, while an "in-lens display" that "privately provides helpful information right when you need it" is described as being "optional." The glasses will "work in tandem with your phone, giving you access to your apps without ever having to reach in your pocket." Gemini can "see and hear what you do" to "understand your context, remember what's important to you and provide information right when you need it." We see it accessing Google Calendar, Maps, Messages, Photos, Tasks, and Translate.

Google is "working with brands and partners to bring this technology to life," specifically Warby Parker and Gentle Monster. "Stylish glasses" are the goal for Android XR since they "can only truly be helpful if you want to wear them all day." Meanwhile, Google is officially "advancing" the Samsung partnership from headsets to Android XR glasses. They are making a software and reference hardware platform "that will enable the ecosystem to make great glasses." Notably, "developers will be able to start building for this platform later this year." On the privacy front, Google is now "gathering feedback on our prototypes with trusted testers."
Further reading: Google's Brin: 'I Made a Lot of Mistakes With Google Glass'
AI

AI Set To Consume Electricity Equivalent To 22% of US Homes By 2028, New Analysis Says (technologyreview.com) 95

New analysis by MIT Technology Review reveals AI's rapidly growing energy demands, with data centers expected to triple their share of US electricity consumption from 4.4% to 12% by 2028. According to Lawrence Berkeley National Laboratory projections, AI alone could soon consume electricity equivalent to 22% of all US households annually, driven primarily by inference operations that represent 80-90% of AI's computing power.
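
To put that household comparison in concrete terms, here is a rough back-of-the-envelope check as a minimal Python sketch. The household count and per-household usage are assumed round figures for illustration (not numbers from the analysis itself):

```python
# Back-of-the-envelope check of the "22% of all US households" comparison.
# Both inputs below are assumed round figures, not values taken from the
# MIT Technology Review / Lawrence Berkeley analysis.
US_HOUSEHOLDS = 131_000_000          # approximate number of US households
KWH_PER_HOUSEHOLD_YEAR = 10_500      # approximate average annual usage (kWh)

share = 0.22  # "electricity equivalent to 22% of all US households annually"

ai_demand_twh = share * US_HOUSEHOLDS * KWH_PER_HOUSEHOLD_YEAR / 1e9  # kWh -> TWh
print(f"Implied AI electricity demand: ~{ai_demand_twh:.0f} TWh per year")
# Prints roughly 300 TWh per year -- the scale of demand the comparison implies.
```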

The carbon intensity of electricity used by data centers is 48% higher than the US average, researchers found, as facilities increasingly turn to dirtier energy sources like natural gas to meet immediate needs.

Tech giants are racing to secure unprecedented energy resources: OpenAI and President Trump announced a $500 billion Stargate initiative, Apple plans to spend $500 billion on manufacturing and data centers, and Google expects to invest $75 billion in AI infrastructure in 2025 alone. Despite their massive energy ambitions, leading AI companies remain largely silent about their per-query energy consumption, leaving researchers struggling to assemble what one expert called "a total black box."
AI

OpenAI Acquires Jony Ive's Startup in $6.5 Billion Deal To Create AI Devices (nytimes.com) 20

Sam Altman, OpenAI's chief executive, said Wednesday his firm was paying $6.5 billion to buy io, a one-year-old start-up created by Jony Ive, a former top Apple executive who designed the iPhone. From a report: The deal, which effectively unites Silicon Valley royalty, is intended to usher in what the two men call "a new family of products" for the age of artificial general intelligence, or A.G.I., which is shorthand for a future technology that achieves human-level intelligence.

The deal, which is OpenAI's biggest acquisition, will bring in Mr. Ive and his team of roughly 55 engineers, designers and researchers. They will assume creative and design responsibilities across OpenAI and build hardware that helps people better interact with the technology. In a joint interview, Mr. Ive and Mr. Altman declined to say what such devices could look like and how they might work, but they said they hoped to share details next year. Mr. Ive, 58, framed the ambitions as galactic, with the aim of creating "amazing products that elevate humanity."

Chrome

Google Is Baking Gemini AI Into Chrome (pcworld.com) 54

An anonymous reader quotes a report from PCWorld: Microsoft famously brought its Copilot AI to the Edge browser in Windows. Now Google is doing the same with Chrome. In a list of announcements that spanned dozens of pages, Google allocated just a single line to the announcement: "Gemini is coming to Chrome, so you can ask questions while browsing the web." Google later clarified what Gemini on Chrome can do: "This first version allows you to easily ask Gemini to clarify complex information on any webpage you're reading or summarize information," the company said in a blog post. "In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf."

Other examples of what Gemini can do include generating personalized quizzes based on material on the web page, or altering what the page suggests, such as a recipe. In the future, Google plans to allow Gemini in Chrome to work across multiple tabs, navigate within websites, and automate tasks. Google said you'll be able to either talk or type commands to Gemini; to access it, you can use the Alt+G shortcut in Windows. [...] You'll see Gemini appear in Chrome as early as this week, Google executives said -- on May 21, a representative clarified. However, you'll need to be a Gemini subscriber to take advantage of its features, a requirement Microsoft does not apply to Copilot for Edge. Otherwise, Google will let those who participate in the Google Chrome Beta, Dev, and Canary programs test it out.

AI

Google Launches Veo 3, an AI Video Generator That Incorporates Audio 5

Google on Tuesday unveiled Veo 3, an AI video generator that includes synchronized audio -- such as dialogue and animal sounds -- setting it apart from rivals like OpenAI's Sora. The company also launched Imagen 4 for high-quality image generation, Flow for cinematic video creation, and made updates to its Veo 2 and Lyria 2 tools. CNBC reports: "Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing," Eli Collins, Google DeepMind product vice president, said in a blog Tuesday. The video-audio AI tool is available Tuesday to U.S. subscribers of Google's new $249.99 per month Ultra subscription plan, which is geared toward hardcore AI enthusiasts. Veo 3 will also be available for users of Google's Vertex AI enterprise platform.

Google also announced Imagen 4, its latest image-generation tool, which the company said produces higher-quality images through user prompts. Additionally, Google unveiled Flow, a new filmmaking tool that allows users to create cinematic videos by describing locations, shots and style preferences. Users can access the tool through Gemini, Whisk, Vertex AI and Workspace.
Google

Google Is Rolling Out AI Mode To Everyone In the US (engadget.com) 44

Google has unveiled a major overhaul of its search engine with the introduction of AI Mode -- a new feature that works like a chatbot, enabling users to ask follow-up questions and receive detailed, conversational answers. Announced at the I/O 2025 conference, the feature is now being rolled out to all Search users in the U.S. Engadget reports: Google first began previewing AI Mode with testers in its Labs program at the start of March. Since then, it has gradually rolled out the feature to more people, including, in recent weeks, regular Search users. At its keynote today, Google also shared a number of updates coming to AI Mode, including new shopping tools, the ability to compare ticket prices for you, and custom charts and graphs for queries on finance and sports.

For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab, and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5. What's more, Google plans to bring many of AI Mode's capabilities to other parts of the Search experience.

Looking to the future, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode. [...] Another new feature that's coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with "trusted testers" at the end of last year. This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next MLB game in your city. AI Mode will compare "hundreds of potential" tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself. [...] All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly.

Books

Chicago Sun-Times Prints Summer Reading List Full of Fake Books (arstechnica.com) 65

An anonymous reader quotes a report from Ars Technica: On Sunday, the Chicago Sun-Times published an advertorial summer reading list containing at least 10 fake books attributed to real authors, according to multiple reports on social media. The newspaper's uncredited "Summer reading list for 2025" supplement recommended titles including "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir -- books that don't exist and were created out of thin air by an AI system. The creator of the list, Marco Buscaglia, confirmed to 404 Media (paywalled) that he used AI to generate the content. "I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it's so obvious. No excuses," Buscaglia said. "On me 100 percent and I'm completely embarrassed."

A check by Ars Technica shows that only five of the fifteen recommended books in the list actually exist, with the remainder being fabricated titles falsely attributed to well-known authors. [...] On Tuesday morning, the Chicago Sun-Times addressed the controversy on Bluesky. "We are looking into how this made it into print as we speak," the official publication account wrote. "It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon." In the supplement, the books listed by authors Isabel Allende, Andy Weir, Brit Bennett, Taylor Jenkins Reid, Min Jin Lee, Percival Everett, Delia Owens, Rumaan Alam, Rebecca Makkai, and Maggie O'Farrell are confabulated, while books listed by authors Francoise Sagan, Ray Bradbury, Jess Walter, Andre Aciman, and Ian McEwan are real. All of the authors are real people.
"The Chicago Sun-Times obviously gets ChatGPT to write a 'summer reads' feature almost entirely made up of real authors but completely fake books. What are we coming to?" wrote novelist Rachael King.

A Reddit user also expressed disapproval of the incident. "As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!? The Sun Times needs to answer for this, and there should be a reporter fired."
Google

Google's Gemini 2.5 Models Gain "Deep Think" Reasoning (venturebeat.com) 30

Google today unveiled significant upgrades to its Gemini 2.5 AI models, introducing an experimental "Deep Think" reasoning mode for 2.5 Pro that allows the model to consider multiple hypotheses before responding. The new capability has achieved impressive results on complex benchmarks, scoring highly on the 2025 USA Mathematical Olympiad and leading on LiveCodeBench, a competition-level coding benchmark. Gemini 2.5 Pro also tops the WebDev Arena leaderboard with an Elo score of 1420.

"Based on Google's experience with AlphaGo, AI model responses improve when they're given more time to think," said Demis Hassabis, CEO of Google DeepMind. The enhanced Gemini 2.5 Flash, Google's efficiency-focused model, has improved across reasoning, multimodality, and code benchmarks while using 20-30% fewer tokens. Both models now feature native audio capabilities with support for 24+ languages, thought summaries, and "thinking budgets" that let developers control token usage. Gemini 2.5 Flash is currently available in preview with general availability expected in early June, while Deep Think remains limited to trusted testers during safety evaluations.
Google

Google Brings AI-Powered Live Translation To Meet 19

Google is adding AI-powered live translation to Meet, enabling participants to converse in their native languages while the system automatically translates in real time with the speaker's original vocal characteristics intact. Initially launching with English-Spanish translation this week, the technology processes speech with minimal delay, preserving tone, cadence, and expressions -- creating an effect similar to professional dubbing but with the speaker's own voice, the company announced at its developer conference Tuesday.

In its testing, the WSJ found occasional limitations: initial sentences sometimes appear garbled before smoothing out, context-dependent words like "match" might translate imperfectly (rendered as "fight" in Spanish), and the slight delay can create confusing crosstalk with multiple participants. Google plans to extend support to Italian, German, and Portuguese in the coming weeks. The feature is rolling out to Google AI Pro and Ultra subscribers now, with enterprise availability planned later this year. The company says that no meeting data is stored when translation is active, and conversation audio isn't used to train AI models.
Businesses

Adobe Forces Creative Cloud Users Into Pricier AI-Focused Plan (theverge.com) 59

Adobe will rebrand its Creative Cloud All Apps subscription as "Creative Cloud Pro" on June 17 for North American users, introducing significant price increases while bundling in AI features. Individual annual subscribers will see monthly rates jump from $59.99 to $69.99, while monthly non-contracted subscribers face a $15 hike to $104.99.

The revamped plan includes unlimited generative AI image credits, 4,000 monthly "premium" AI video and audio credits, access to third-party models like OpenAI's GPT, and the beta Firefly Boards collaborative whiteboard. Adobe will also offer a cheaper "Creative Cloud Standard" option at $54.99 monthly with severely reduced AI capabilities, but this plan remains exclusive to existing subscribers -- forcing new customers into the pricier AI-focused tier.
Microsoft

Microsoft is Putting AI Actions Into the Windows File Explorer (theverge.com) 67

Microsoft is starting to integrate AI shortcuts, or what it calls AI actions, into the File Explorer in Windows 11. From a report: These shortcuts let you right-click on a file and quickly get to Windows AI features like blurring the background of a photo, erasing objects, or even summarizing content from Office files.

Four image actions are currently being tested in the latest Dev Channel builds of Windows 11, including Bing visual search to find similar images on the web, the blur background and erase objects features found in the Photos app, and the remove background option in Paint.
Similar AI actions will soon be tested with Office files, The Verge added.
Stats

The Quiet Collapse of Surveys: Fewer Humans (and More AI Agents) Are Answering Survey Questions (substack.com) 68

Survey response rates have collapsed from 30-50% in the 1970s to as low as 5% today, while AI agents now account for an estimated 20% of survey responses, according to a new analysis.

The UK's Office for National Statistics has seen response rates drop from 40% to 13%, with some labor market questions receiving only five human responses. The U.S. Current Population Survey hit a record low 12.7% response rate, down from 50% historically.
Businesses

Tech Job Market Is Shrinking as AI Reshapes Industry Requirements (msn.com) 72

The US tech sector shed 214,000 jobs in April amid continuing economic uncertainty, according to CompTIA analysis of Bureau of Labor Statistics data. Companies are extending hiring timelines to two or three times longer than last year while significantly raising skill requirements, particularly for AI competencies.

"It's the great hesitation," said George Denlinger of Robert Half, noting employers now demand 10-12 skills instead of 6-7 previously. Entry-level programming positions are disappearing as AI assumes those functions, with Janco Associates CEO Victor Janulaitis observing that "a job that has been eliminated from almost all IT departments is an entry-level IT programmer."
Star Wars Prequels

SAG-AFTRA Calls Out Fortnite Over Darth Vader AI Voice 102

SAG-AFTRA has filed a labor complaint against Fortnite developer Epic Games, alleging the game improperly used AI to replicate James Earl Jones' Darth Vader voice without bargaining with the union, despite the estate's approval. Gizmodo reports: The union has now filed an unfair labor practice charge (link to the PDF is on the SAG-AFTRA website) that calls out "Fortnite's signatory company, Llama Productions" for "[replacing] the work of human performers with AI technology" without "providing any notice of their intent to do this and without bargaining with us over appropriate terms."

The union notes that it's not against the general idea here: "We celebrate the right of our members and their estates to control the use of their digital replicas and welcome the use of new technologies to allow new generations to share in the enjoyment of those legacies and renowned roles." The problem is that the AI being used here makes human voice actors obsolete, and "we must protect our right to bargain terms and conditions around uses of voice that replace the work of our members, including those who previously did the work of matching Darth Vader's iconic rhythm and tone in video games."

So far there's been no response from Epic Games on the filing. The Hollywood Reporter notes that despite the SAG-AFTRA's still-ongoing Interactive Media Agreement strike, which has been stuck for months on negotiating "AI protections for voice actors in video games," actors can actually work on Fortnite without violating the strike, since the game falls under an exception for titles that were in production before August 2023.
AI

Apple's Next-Gen Version of Siri Is 'On Par' With ChatGPT 41

According to Bloomberg's Mark Gurman (paywalled), Apple has big plans to turn Siri into a true ChatGPT competitor. "A next-generation, chatbot version of Siri has reportedly made significant progress during testing over the past six months; some executives allegedly now see it as 'on par' with recent versions of ChatGPT," reports MacRumors. "Apple is also apparently discussing giving Siri the ability to access the internet to gather and synthesize data from multiple sources, just like ChatGPT." From the report: The report added that Apple now has artificial intelligence offices in Zurich, where employees are working on an all-new software architecture for Siri. This "monolithic model" is entirely built on an LLM engine that will eventually replace Siri's current "hybrid" architecture that has been incoherently layered up with different functionality over many years. The new model will make Siri more conversational and better at synthesizing information.

Google's Gemini is expected to be added to iOS 19 as an alternative to ChatGPT in Siri, but Apple is also apparently in talks with Perplexity to add their AI service as another option in the future, for both Siri and Safari search.
AI

Qualcomm To Launch Data Center Processors That Link To Nvidia Chips 6

Qualcomm announced plans to re-enter the data center market with custom CPUs designed to integrate with Nvidia GPUs and software. As CNBC reports, the move supports Qualcomm's broader strategy to diversify beyond smartphones and into high-growth areas like data centers, PCs, and automotive chips. From the report: "I think we see a lot of growth happening in this space for decades to come, and we have some technology that can add real value added," Cristiano Amon, CEO of Qualcomm, told CNBC in an interview on Monday. "So I think we have a very disruptive CPU." Amon said the company will make an announcement about the CPU roadmap and the timing of its release "very soon," without offering specifics. The data center CPU market remains highly competitive. Big cloud computing players like Amazon and Microsoft already design and deploy their own custom CPUs. AMD and Intel also have a strong presence.

Addressing the competition, Amon said that there will be a place for Qualcomm in the data center CPU space. "As long as ... we can build a great product, we can bring innovation, and we can add value with some disruptive technology, there's going to be room for Qualcomm, especially in the data center," Amon said. "[It] is a very large addressable market that will see a lot of investment for decades to come." Last week, Qualcomm signed a memorandum of understanding with Saudi-based AI firm Humain to develop data centers, joining a slew of U.S. tech companies making deals in the region. Humain will operate under Saudi Arabia's Public Investment Fund.
Google

Google Decided Against Offering Publishers Options In AI Search 14

An anonymous reader quotes a report from Bloomberg: While using website data to build a Google Search topped with artificial intelligence-generated answers, an Alphabet executive acknowledged in an internal document that there was an alternative way to do things: They could ask web publishers for permission, or let them directly opt out of being included. But giving publishers a choice would make training AI models in search too complicated, the company concluded in the document, which was unearthed in the company's search antitrust trial.

It said Google had a "hard red line": any publisher that wanted its content to show up on the search page would also have to let that content be used to feed AI features. Instead of giving options, Google decided to "silently update," with "no public announcement" about how they were using publishers' data, according to the document, written by Chetna Bindra, a product management executive at Google Search. "Do what we say, say what we do, but carefully."
"It's a little bit damning," said Paul Bannister, the chief strategy officer at Raptive, which represents online creators. "It pretty clearly shows that they knew there was a range of options and they pretty much chose the most conservative, most protective of them -- the option that didn't give publishers any controls at all."

For its part, Google said in a statement to Bloomberg: "Publishers have always controlled how their content is made available to Google as AI models have been built into Search for many years, helping surface relevant sites and driving traffic to them. This document is an early-stage list of options in an evolving space and doesn't reflect feasibility or actual decisions." They added that Google continually updates its product documentation for search online.
Android

Google Launches NotebookLM App For Android and iOS 26

Google has launched the NotebookLM app for Android and iOS, offering a native mobile experience with offline support, audio overviews, and integration into the system share sheet for adding sources like PDFs and YouTube videos. 9to5Google reports: This native experience starts on a homepage of your notebooks with filters at the top for Recent, Shared, Title, and Downloaded. The app features a light and dark mode based on your device's system theme with no manual toggle. Each colorful card features the notebook name, emoji, number of sources, and date, as well as a play button for Audio Overviews. There's background playback and offline support for the podcast-style experience (the fullscreen player has a nice glow), while you can "Join" the AI hosts (in beta) to ask follow-up questions.

You get a "Create new" button at the bottom of the list to add PDFs, websites, YouTube videos, and text. Notably, the NotebookLM app will appear in the Android and iOS share sheet to quickly add sources. When you open a notebook, there's a bottom bar for the list of Sources, Chat Q&A, and Studio. It's similar to the current mobile website, with the native client letting users ditch the Progressive Web App. Out of the gate, there are phone and (straightforward) tablet interfaces.
You can download the app for iOS and Android from their respective app stores.
