Windows

Huawei To Pivot To Linux, HarmonyOS as Microsoft Windows License Expires 37

Huawei will no longer be able to produce or sell Windows-based PCs as Microsoft's supply license to the Chinese tech company expires this month, according to Chinese tech site MyDrivers. The restriction comes as Huawei remains on the U.S. Department of Commerce's Entity List, requiring American companies to obtain special export licenses to conduct business with the firm.

Richard Yu, executive director of Huawei's consumer business unit, said the company is preparing to pivot to alternative operating systems. Huawei had previously announced plans to abandon Windows for future PC generations. The Chinese tech giant will introduce a new "AI PC" laptop in April running its own Kunpeng CPU and HarmonyOS, alongside a MateBook D16 Linux Edition, its first Linux-based laptop.
Social Networks

Bluesky Proposes 'New Standard' When Scraping Data for AI Training (techcrunch.com) 52

An anonymous reader shared this article from TechCrunch: Social network Bluesky recently published a proposal on GitHub outlining new options it could give users to indicate whether they want their posts and data to be scraped for things like generative AI training and public archiving.

CEO Jay Graber discussed the proposal earlier this week, while on-stage at South by Southwest, but it attracted fresh attention on Friday night, after she posted about it on Bluesky. Some users reacted with alarm to the company's plans, which they saw as a reversal of Bluesky's previous insistence that it won't sell user data to advertisers and won't train AI on user posts.... Graber replied that generative AI companies are "already scraping public data from across the web," including from Bluesky, since "everything on Bluesky is public like a website is public." So she said Bluesky is trying to create a "new standard" to govern that scraping, similar to the robots.txt file that websites use to communicate their permissions to web crawlers...

If a user indicates that they don't want their data used to train generative AI, the proposal says, "Companies and research teams building AI training sets are expected to respect this intent when they see it, either when scraping websites, or doing bulk transfers using the protocol itself."
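For comparison, the robots.txt mechanism that Graber invoked lets a website declare per-crawler permissions in a plain text file at its root. A site opting out of AI-training crawls while still allowing ordinary search indexing might publish something like the following (GPTBot and Google-Extended are real AI-crawler user-agent tokens; the file itself is a generic sketch, and compliance is voluntary). Bluesky's proposal would extend the same opt-out idea down to individual users and their posts:

```text
# robots.txt, served at https://example.com/robots.txt
# Block known AI-training crawlers; allow everything else.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```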

Over on Threads someone had a different wish for our AI-enabled future. "I want to be able to conversationally chat to my feed algorithm. To be able to explain to it the types of content I want to see, and what I don't want to see. I want this to be an ongoing conversation as it refines what it shows me, or my interests change."

"Yeah I want this too," posted top Instagram/Threads executive Adam Mosseri, who said he'd talked about the idea with VC Sam Lessin. "There's a ways to go before we can do this at scale, but I think it'll happen eventually."
AI

Google's AI 'Co-Scientist' Solved a 10-Year Superbug Problem in Two Days (livescience.com) 48

Google collaborated with Imperial College London and its "Fleming Initiative" partnership with Imperial NHS, giving their scientists access to "a powerful new AI" built with Gemini 2.0 and "designed to make research faster and more efficient," according to an announcement from the school. And the results were surprising...

"José Penadés and his colleagues at Imperial College London spent 10 years figuring out how some superbugs gain resistance to antibiotics," writes LiveScience. "But when the team gave Google's 'co-scientist' — an AI tool designed to collaborate with researchers — this question in a short prompt, the AI's response produced the same answer as their then-unpublished findings in just two days." Astonished, Penadés emailed Google to check if they had access to his research. The company responded that it didn't. The researchers published their findings [about working with Google's AI] Feb. 19 on the preprint server bioRxiv...

"What our findings show is that AI has the potential to synthesise all the available evidence and direct us to the most important questions and experimental designs," co-author Tiago Dias da Costa, a lecturer in bacterial pathogenesis at Imperial College London, said in a statement. "If the system works as well as we hope it could, this could be game-changing; ruling out 'dead ends' and effectively enabling us to progress at an extraordinary pace...."

After two days, the AI returned suggestions, one being what they knew to be the correct answer. "This effectively meant that the algorithm was able to look at the available evidence, analyse the possibilities, ask questions, design experiments and propose the very same hypothesis that we arrived at through years of painstaking scientific research, but in a fraction of the time," Penadés, a professor of microbiology at Imperial College London, said in the statement. The researchers noted that using the AI from the start wouldn't have removed the need to conduct experiments but that it would have helped them come up with the hypothesis much sooner, thus saving them years of work.

Despite these promising findings and others, the use of AI in science remains controversial. A growing body of AI-assisted research, for example, has been shown to be irreproducible or even outright fraudulent.

Google has also published the first test results of its AI 'co-scientist' system, according to Imperial's announcement, which adds that academics from a handful of top universities "asked a question to help them make progress in their field of biomedical research... Google's AI co-scientist system does not aim to completely automate the scientific process with AI. Instead, it is purpose-built for collaboration to help experts who can converse with the tool in simple natural language, and provide feedback in a variety of ways, including directly supplying their own hypotheses to be tested experimentally by the scientists."

Google describes their system as "intended to uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and tailored to specific research objectives...

"We look forward to responsible exploration of the potential of the AI co-scientist as an assistive tool for scientists," Google adds, saying the project "illustrates how collaborative and human-centred AI systems might be able to augment human ingenuity and accelerate scientific discovery."
Intel

Intel's Stock Jumps 18.8% - But What's In Its Future? (msn.com) 47

Intel's stock jumped nearly 19% this week. "However, in the past year through Wednesday's close, Intel stock had fallen 53%," notes Investor's Business Daily: The appointment of Lip-Bu Tan as CEO is a "good start" but Intel has significant challenges, Morgan Stanley analyst Joseph Moore said in a client note. Those challenges include delays in its server chip product line, a very competitive PC chip market, lack of a compelling AI chip offering, and over $10 billion in losses in its foundry business over the past 12 months. There is "no quick fix" for those issues, he said.
"There are things you can do," a Columbia Business School associate professor tells the Wall Street Journal in a video interview, "but it's going to be incremental, and it's going to be extremely risky... They will try to be competitive in the foundry manufacturing space," but "it takes very aggressive investments."

Meanwhile, TSMC is exploring a joint venture where they'd operate Intel's factories, even pitching the idea to AMD, Nvidia, Broadcom, and Qualcomm, according to Reuters. (They add that Intel "reported a 2024 net loss of $18.8 billion, its first since 1986," and talked to multiple sources "familiar with" talks about Intel's future.) Multiple companies have expressed interest in buying parts of Intel, but two of the four sources said the U.S. company has rejected discussions about selling its chip design house separately from the foundry division. Qualcomm has exited earlier discussions to buy all or part of Intel, according to those people and a separate source. Intel board members have backed a deal and held negotiations with TSMC, while some executives are firmly opposed, according to two sources.
"They say Lip-Bu Tan is the best hope to fix Intel — if Intel can be fixed at all," writes the Wall Street Journal: He brings two decades of semiconductor industry experience, relationships across the sector, a startup mindset and an obsession with AI...and basketball. He also comes with tricky China business relationships, underscoring Silicon Valley's inability to sever itself from one of America's top adversaries... [Intel's] stock has lost two-thirds of its value in four short years as Intel sat out the AI boom...

Manufacturing chips is an enormous expense that Intel can't currently sustain, say industry leaders and analysts. Former board members have called for a split-up. But a deal to sell all or part of Intel to competitors seems to be off the table for the immediate future, according to bankers. A variety of early-stage discussions with Broadcom, Qualcomm, GlobalFoundries and TSMC in recent months have failed to go anywhere, and so far seem unlikely to progress. The company has already hinted at a more likely outcome: bringing in outside financial backers, including customers who want a stake in the manufacturing business...

Tan has likely no more than a year to turn the company around, said people close to the company. His decades of investing in startups and running companies — he founded a multinational venture firm and was CEO of chip design company Cadence Design Systems for 13 years — provide indications of how Tan will tackle this task in the early days: by cutting expenses, moving quickly and trying to turn Intel back into an engineering-first company. "In areas where we are behind the competition, we need to take calculated risks to disrupt and leapfrog," Tan said in a note to Intel employees on Wednesday. "And in areas where our progress has been slower than expected, we need to find new ways to pick up the pace...."

Many take this culture reset to also mean significant cuts at Intel, which already shed about 15,000 jobs last year. "He is brave enough to adjust the workforce to the size needed for the business today," said Reed Hundt, a former Intel board member who has known Tan since the 1990s.

AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

Facebook

After Meta Blocks Whistleblower's Book Promotion, It Becomes an Amazon Bestseller (thetimes.com) 39

After Meta convinced an arbitrator to temporarily prevent a whistleblower from promoting her book about the company (titled Careless People), the book climbed to the top of Amazon's best-seller list. And the book's publisher Macmillan released a defiant statement that "The arbitration order has no impact on Macmillan... We will absolutely continue to support and promote it." (They added that they were "appalled by Meta's tactics to silence our author through the use of a non-disparagement clause in a severance agreement.")

Saturday the controversy was even covered by Rolling Stone: [Whistleblower Sarah] Wynn-Williams is a diplomat, policy expert, and international lawyer, with previous roles including serving as the Chief Negotiator for the United Nations on biosafety liability, according to her bio on the World Economic Forum...

Since the book's announcement, Meta has forcefully responded to the book's allegations in a statement... "Eight years ago, Sarah Wynn-Williams was fired for poor performance and toxic behavior, and an investigation at the time determined she made misleading and unfounded allegations of harassment. Since then, she has been paid by anti-Facebook activists and this is simply a continuation of that work. Whistleblower status protects communications to the government, not disgruntled activists trying to sell books."

But the negative coverage continues, with the Observer Sunday highlighting it as their Book of the Week. "This account of working life at Mark Zuckerberg's tech giant organisation describes a 'diabolical cult' able to swing elections and profit at the expense of the world's vulnerable..."

Though ironically Wynn-Williams started her career with optimism about Facebook's role in the app internet.org. "Upon witnessing how the nascent Facebook kept Kiwis connected in the aftermath of the 2011 Christchurch earthquake, she believed that Mark Zuckerberg's company could make a difference — but in a good way — to social bonds, and that she could be part of that utopian project...

What internet.org involves for countries that adopt it is a Facebook-controlled monopoly of access to the internet, whereby to get online at all you have to log in to a Facebook account. When the scales fall from Wynn-Williams's eyes she realises there is nothing morally worthwhile in Zuckerberg's initiative, nothing empowering to the most deprived of global citizens, but rather his tool involves "delivering a crap version of the internet to two-thirds of the world". But Facebook's impact in the developing world proves worse than crap. In Myanmar, as Wynn-Williams recounts at the end of the book, Facebook facilitated the military junta to post hate speech, thereby fomenting sexual violence and attempted genocide of the country's Muslim minority. "Myanmar," she writes with a lapsed believer's rue, "would have been a better place if Facebook had not arrived." And what is true of Myanmar, you can't help but reflect, applies globally...

"Myanmar is where Wynn-Williams thinks the 'carelessness' of Facebook is most egregious," writes the Sunday Times: In 2018, UN human rights experts said Facebook had helped spread hate speech against Rohingya Muslims, about 25,000 of whom were slaughtered by the Burmese military and nationalists. Facebook is so ubiquitous in Myanmar, Wynn-Williams points out, that people think it is the entire internet. "It's no surprise that the worst outcome happened in the place that had the most extreme take-up of Facebook." Meta admits it was "too slow to act" on abuse in its Myanmar services....

After Wynn-Williams left Facebook, she worked on an international AI initiative, and says she wants the world to learn from the mistakes we made with social media, so that we fare better in the next technological revolution. "AI is being integrated into weapons," she explains. "We can't just blindly wander into this next era. You think social media has turned out with some issues? This is on another level."

Open Source

Startup Claims Its Upcoming (RISC-V ISA) Zeus GPU is 10X Faster Than Nvidia's RTX 5090 (tomshardware.com) 69

"The number of discrete GPU developers from the U.S. and Western Europe shrank to three companies in 2025," notes Tom's Hardware, "from around 10 in 2000." (Nvidia, AMD, and Intel...) No company in recent years — at least outside of China — has been bold enough to enter into competition against these three contenders, so the very emergence of Bolt Graphics seems like a breakthrough. However, the major focuses of Bolt's Zeus are high-quality rendering for the movie and scientific industries as well as high-performance supercomputer simulations. If Zeus delivers on its promises, it could establish itself as a serious alternative for scientific computing, path tracing, and offline rendering. But without strong software support, it risks struggling against dominant market leaders.
This week the Sunnyvale, California-based startup introduced its Zeus GPU platform designed for gaming, rendering, and supercomputer simulations, according to the article. "The company says that its Zeus GPU not only supports features like upgradeable memory and built-in Ethernet interfaces, but it can also beat Nvidia's GeForce RTX 5090 by around 10 times in path tracing workloads, according to a slide published by technology news site ServeTheHome." There is one catch: Zeus can only beat the RTX 5090 GPU in path tracing and FP64 compute workloads. It's not clear how well it will handle traditional rendering techniques, as that was less of a focus. Bolt Graphics told Tom's Hardware that the card does support rasterization, but there was less emphasis on that aspect of the GPU, and it may struggle to compete with the best graphics cards when it comes to gaming. And when it comes to data center options like Nvidia's Blackwell B200, it's an entirely different matter.

Unlike GPUs from AMD, Intel, and Nvidia that rely on proprietary instruction set architectures, Bolt's Zeus relies on the open-source RISC-V ISA, according to the published slides. The Zeus core relies on an open-source out-of-order general-purpose RVA23 scalar core mated with FP64 ALUs and the RVV 1.0 (RISC-V Vector Extension Version 1.0) that can handle 8-bit, 16-bit, 32-bit, and 64-bit data types, as well as Bolt's additional proprietary extensions designed for acceleration of scientific workloads... Like many processors these days, Zeus relies on a multi-chiplet design... Unlike high-end GPUs that prioritize bandwidth, Bolt is evidently focusing on greater memory size to handle larger datasets for rendering and simulations. Also, the built-in 400GbE and 800GbE ports, which enable faster data transfer across networked GPUs, indicate the data center focus of Zeus.

High-quality rendering, real-time path tracing, and compute are key focus areas for Zeus. As a result, even the entry-level Zeus 1c26-32 offers significantly higher FP64 compute performance than Nvidia's GeForce RTX 5090 — up to 5 TFLOPS vs. 1.6 TFLOPS — and considerably higher path tracing performance: 77 Gigarays vs. 32 Gigarays. Zeus also features a larger on-chip cache than Nvidia's flagship — up to 128MB vs. 96MB — and lower power consumption of 120W vs. 575W, making it more efficient for simulations, path tracing, and offline rendering. However, the RTX 5090 dominates in AI workloads with its 105 FP16 TFLOPS and 1,637 INT8 TFLOPS compared to the 10 FP16 TFLOPS and 614 INT8 TFLOPS offered by a single-chiplet Zeus...
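Taking the entry-level, single-chiplet figures above at face value, a quick back-of-the-envelope comparison (a sketch using only the numbers quoted in the article, not independent benchmarks) shows where each part leads:

```python
# Spec-sheet ratios from the figures quoted above (Zeus 1c26-32 vs. RTX 5090).
zeus = {"fp64_tflops": 5.0, "gigarays": 77, "watts": 120, "fp16_tflops": 10}
rtx5090 = {"fp64_tflops": 1.6, "gigarays": 32, "watts": 575, "fp16_tflops": 105}

fp64_ratio = zeus["fp64_tflops"] / rtx5090["fp64_tflops"]  # ~3.1x for Zeus
rays_ratio = zeus["gigarays"] / rtx5090["gigarays"]        # ~2.4x for Zeus
fp16_ratio = rtx5090["fp16_tflops"] / zeus["fp16_tflops"]  # ~10.5x for the 5090

# Performance per watt amplifies the FP64 gap to roughly 15x in Zeus's favor.
fp64_per_watt_ratio = (zeus["fp64_tflops"] / zeus["watts"]) / (
    rtx5090["fp64_tflops"] / rtx5090["watts"]
)

print(round(fp64_ratio, 2), round(rays_ratio, 2),
      round(fp16_ratio, 2), round(fp64_per_watt_ratio, 1))
```

These entry-level ratios fall well short of the headline "10X" claim, which presumably refers to the larger multi-chiplet configurations, while the 5090 keeps a roughly order-of-magnitude lead in low-precision AI throughput.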

The article emphasizes that Zeus "is only running in simulation right now... Bolt Graphics says that the first developer kits will be available in late 2025, with full production set for late 2026."

Thanks to long-time Slashdot reader arvn for sharing the news.
AI

Ask Slashdot: Where Are the Open-Source Local-Only AI Solutions? 192

"Why can't we each have our own AI software that runs locally," asks long-time Slashdot reader BrendaEM — and that doesn't steal the work of others.

Imagine a powerful-but-locally-hosted LLM that "doesn't spy... and no one else owns it." We download it, from source code if you like, and install it if we want. And it assists us... No one gate-keeps it. It's not out to get us...

And this is important: because no one owns it, the AI software is ours and leaks no data anywhere — to no one, no company, for no political nor financial purpose. No one profits — but you!

Their longer original submission also asks a series of related questions — like why can't we have software without AI? (Along with "Why is AMD stamping AI on local-processors?" and "Should AI be crowned the ultimate hype?") But this question seems to be at the heart of their concern. "What future will anyone have if anything they really wanted to do — could be mimicked and sold by the ill-gotten work of others...?"

"Could local, open-source, AI software be the only answer to dishearten billionaire companies from taking and selling back to their customers — everything we have done? Could we not...instead — steal their dream?!"

Share your own thoughts and answers in the comments. Where are the open-source, local-only AI solutions?
AI

Last Year Waymo's Autonomous Vehicles Got 589 Parking Tickets in San Francisco (yahoo.com) 57

"Alphabet's Waymo autonomous vehicles are programmed to follow the rules of the road..." notes the Washington Post. But while the cars obey speed limits and properly use their turn signals — they also "routinely violate parking rules." Waymo vehicles driving themselves received 589 tickets for parking violations in 2024, according to records from San Francisco's Municipal Transportation Agency... The robots incurred $65,065 in fines for violations such as obstructing traffic, disobeying street cleaning restrictions and parking in prohibited areas... [Waymo is responsible for 0.05% of the city's fines, according to statistics from the article.]
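As a quick sanity check on the figures above (a back-of-the-envelope sketch using only the numbers quoted in the article), the fines average out to roughly $110 per ticket:

```python
# Figures quoted above: Waymo parking violations in San Francisco, 2024.
total_fines_usd = 65_065
tickets = 589

avg_fine = total_fines_usd / tickets
print(f"${avg_fine:.2f} per ticket")  # → $110.47 per ticket
```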

Parking violations are one of the few ways to quantify how often self-driving companies' vehicles break the rules of the road... Some parking violations, such as overstaying in a paid spot, cause inconvenience but do not directly endanger other people. Others increase the risk of crashes, said Michael Brooks, executive director of the Center for Auto Safety. Anytime a vehicle is obstructing the flow of traffic, other drivers might be forced to brake suddenly or change lanes, he said, creating risks for drivers, pedestrians or other road users...

San Francisco transit operators lost 2 hours and 12 minutes of service time in 2024 because of Waymo vehicles blocking or colliding with transit vehicles, according to San Francisco Municipal Transportation Agency records. Autonomous vehicles have obstructed firefighters responding to emergency scenes in San Francisco, triggering city officials to ask for tougher oversight from state regulators.

The article adds that driverless Waymo vehicles in Los Angeles received 75 more tickets in 2024 — "with $543 in fines still outstanding, according to records from the Los Angeles Department of Transportation."
Apple

Leaked Apple Meeting Shows How Dire the Siri Situation Really Is (theverge.com) 51

A leaked Apple meeting reveals significant internal struggles with Siri's development, as AI-powered features announced last June have been delayed and may not make it into iOS 19. The Verge reports: Bloomberg (paywalled) has the full scoop on what happened at a Siri team meeting led by senior director Robby Walker, who oversees the division. He called the delay an "ugly" situation and sympathized with employees who might be feeling burned out or frustrated by Apple's decisions and Siri's still-lackluster reputation. He also said it's not a given that the missing Siri features will make it into iOS 19 this year; that's the company's current target, but "doesn't mean that we're shipping then," he told employees. "We have other commitments across Apple to other projects," Walker said, according to Bloomberg's report. "We want to keep our commitments to those, and we understand those are now potentially more timeline-urgent than the features that have been deferred."

The meeting also hinted at tension between Apple's Siri unit and the marketing division. Walker said the communications team wanted to highlight features like Siri understanding personal context and being able to take action based on what's currently on a user's screen -- even though they were nowhere near ready. Those WWDC teases and the resulting customer expectations only made matters worse, Walker acknowledged. Apple has since pulled an iPhone 16 ad that showcased the features and has added disclaimers to several areas of its website noting they've all been punted to a TBD date. They were held back in part due to quality issues "that resulted in them not working properly up to a third of the time," according to Mark Gurman.

[...] Walker told his staff that senior executives like software chief Craig Federighi and AI boss John Giannandrea are taking "intense personal accountability" for a predicament that's drawing fierce criticism as the months pass by with little to show for it beyond a prettier Siri animation. "Customers are not expecting only these new features but they also want a more fully rounded-out Siri," Walker said. "We're going to ship these features and more as soon as they are ready." He praised the team for its "incredibly impressive" work so far. "These are not quite ready to go to the general public, even though our competitors might have launched them in this state or worse," he said of the delayed features.

Government

US IRS To Re-Evaluate Modernization Investments In Light of AI Technology (msn.com) 35

The IRS is pausing its technology modernization efforts to reassess its strategy in light of AI advancements. Reuters reports: The agency will review a number of technology modernization initiatives that have been taken in recent years, including a new direct free filing system for tax returns that was launched last year under the Biden administration, the official told reporters. The official said the IRS did not have a specific number of staff cuts in mind as a result of the technology pause, but said there would be an opportunity to "realign the workforce to those new ways of doing business."
Google

Google Is Officially Replacing Assistant With Gemini (9to5google.com) 26

Google announced today that Gemini will replace Google Assistant on Android phones later in 2025. "[T]he classic Google Assistant will no longer be accessible on most mobile devices or available for new downloads on mobile app stores," says Google in a blog post. "Additionally, we'll be upgrading tablets, cars and devices that connect to your phone, such as headphones and watches, to Gemini. We're also bringing a new experience, powered by Gemini, to home devices like speakers, displays and TVs." 9to5Google reports: There will be an exception for phones running Android 9 or earlier that don't have at least 2 GB of RAM, with the existing Assistant experience remaining in place for those users. Google's replacement of Assistant follows a year in which new Android phones, including Pixel, Samsung, OnePlus, and Motorola devices, launched with Gemini as the default experience. Meanwhile, the company says "millions of people have already made the switch."

Before Assistant's sunset, Google is "continuing to focus on improving the quality of the day-to-day Gemini experience, especially for those who have come to rely on Google Assistant." In winding down Google Assistant, the company notes how "natural language processing and voice recognition technology unlocked a more natural way to get help from Google" in 2016.
Further reading: Google's Gemini AI Can Now See Your Search History
Privacy

Everything You Say To Your Echo Will Be Sent To Amazon Starting On March 28 (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon's email said: "As we continue to expand Alexa's capabilities with generative AI features that rely on the processing power of Amazon's secure cloud, we have decided to no longer support this feature." One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa or want to use Alexa+ but not its ability to recognize different voices.

[...] Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Previously, Amazon has said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.
Amazon's email continues: "Alexa voice requests are always encrypted in transit to Amazon's secure cloud, which was designed with layers of security protections to keep customer information safe. Customers can continue to choose from a robust set of controls by visiting the Alexa Privacy dashboard online or navigating to More - Alexa Privacy in the Alexa app."

Further reading: Google's Gemini AI Can Now See Your Search History
AI

AI Summaries Are Coming To Notepad (theverge.com) 26

way2trivial shares a report: Microsoft is testing AI-powered summaries in Notepad. In an update rolling out to Windows Insiders in the Canary and Dev channels, you'll be able to summarize information in Notepad by highlighting a chunk of text, right-clicking it, and selecting Summarize.

Notepad will then generate a summary of the text, as well as provide an option to change its length. You can also generate summaries by selecting text and using the Ctrl + M shortcut or choosing Summarize from the Copilot menu.

AI

China Announces Generative AI Labeling To Cull Disinformation (bloomberg.com) 20

China has introduced regulations requiring service providers to label AI-generated content, joining similar efforts by the European Union and United States to combat disinformation. The Cyberspace Administration of China and three other agencies announced Friday that AI-generated material must be labeled explicitly or via metadata, with implementation beginning September 1.

"The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content," the CAC said. App store operators must verify whether applications provide AI-generated content and review their labeling mechanisms. Platforms can still offer unlabeled AI content if they comply with relevant regulations and respond to user demand.
AI

'No One Knows What the Hell an AI Agent Is' (techcrunch.com) 40

Major technology companies are heavily promoting AI agents as transformative tools for work, but industry insiders say no one can agree on what these systems actually are, according to TechCrunch. OpenAI CEO Sam Altman said agents will "join the workforce" this year, while Microsoft CEO Satya Nadella predicted they will replace certain knowledge work. Salesforce CEO Marc Benioff declared his company's goal to become "the number one provider of digital labor in the world."

The definition problem has worsened recently. OpenAI published a blog post defining agents as "automated systems that can independently accomplish tasks," but its developer documentation described them as "LLMs equipped with instructions and tools." Microsoft distinguishes between agents and AI assistants, while Salesforce lists six different categories of agents. "I think that our industry overuses the term 'agent' to the point where it is almost nonsensical," Ryan Salva, senior director of product at Google, told TechCrunch. Andrew Ng, founder of DeepLearning.ai, blamed marketing: "The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning, but about a year ago, marketers and a few big companies got a hold of them." Analysts say this ambiguity threatens to create misaligned expectations as companies build product lineups around agents.
AI

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 96

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

AI

Yale Suspends Palestine Activist After AI Article Linked Her To Terrorism 151

Yale University has suspended a law scholar and pro-Palestinian activist after an AI-generated article from Jewish Onliner falsely linked her to a terrorist group. Gizmodo reports: Helyeh Doutaghi, the scholar at Yale Law School, told the New York Times that she is a "loud and proud" supporter of Palestinian rights. "I am not a member of any organization that would constitute a violation of U.S. law." The article that led to her suspension was published in Jewish Onliner, a Substack that says it is "empowered by A.I. capabilities." The website does not publish the names of its authors out of fear of harassment. Ironically, Doutaghi and Yale were reportedly the subject of intense harassment after Jewish Onliner published the article linking Doutaghi to terrorism by citing appearances she made at events sponsored by Samidoun, a pro-Palestinian group. [...]

Jewish Onliner is vague about how it uses AI to produce its articles, but the technology is known for making mistakes and hallucinating false information. It is quite possible that Jewish Onliner relied on AI to source the information it used to write the article. That could open it up to liability if it did not perform fact-checking and due diligence on its writing. Doutaghi says she is not a member of Samidoun, though she attended events it sponsored that support Palestinian causes. Yale Law School said the allegations against her reflect "potential unlawful conduct."
AI

Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button 57

An anonymous reader quotes a report from Ars Technica: Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."

Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.

"So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."

Amodei's comments drew immediate skepticism on X and Reddit.
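In tool-calling terms, the idea Amodei floats could be as simple as exposing one extra tool to the model and counting how often it gets invoked per task type. A minimal sketch, assuming a generic tool-call dispatch loop; the tool name, schema, and tracking logic are all hypothetical, not anything Anthropic has described shipping:

```python
from collections import Counter

# Hypothetical "quit" tool the model could call to abandon a task.
QUIT_TOOL = {
    "name": "i_quit_this_job",
    "description": "Call this to stop working on the current task.",
}

quit_counts: Counter = Counter()  # task category -> number of quits

def handle_tool_call(tool_name: str, task_category: str) -> bool:
    """Return True if the model quit; tally which task types get abandoned
    so unusually unpleasant categories surface in the counts."""
    if tool_name == QUIT_TOOL["name"]:
        quit_counts[task_category] += 1
        return True
    return False

handle_tool_call("i_quit_this_job", "content_moderation")
```

The interesting signal, on Amodei's framing, would be the aggregate counts: if one task category dominates `quit_counts`, that is the thing he suggests paying attention to.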
Google

Google's Gemini AI Can Now See Your Search History (arstechnica.com) 30

Google is continuing its quest to get more people to use Gemini, and it's doing that by giving away even more AI computing. From a report: Today, Google is releasing a raft of improvements for the Gemini 2.0 models, and as part of that upgrade, some of the AI's most advanced features are now available to free users. You'll be able to use the improved Deep Research to get in-depth information on a topic, and Google's newest reasoning model can peruse your search history to improve its understanding of you as a person.

[...] With the aim of making Gemini more personal to you, Google is also plugging Flash Thinking Experimental into a new source of data: your search history. Google stresses that you have to opt in to this feature, and it can be disabled at any time. Gemini will even display a banner to remind you it's connected to your search history so you don't forget.
