Wikipedia

AI Translations Are Adding 'Hallucinations' To Wikipedia Articles (404media.co) 13

An anonymous reader quotes a report from 404 Media: Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations introduced "hallucinations," or errors, into the resulting articles. The new restrictions show how Wikipedia editors continue fighting to keep the flood of generative AI across the internet from diminishing the reliability of the world's largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how such errors are remedied by Wikipedia's open governance model. The issue centers on a program run by the Open Knowledge Association (OKA), a nonprofit found to be "mostly relying on cheap labor from contractors in the Global South" to translate English Wikipedia articles into other languages. Some translators began using tools like Google Gemini and ChatGPT to speed up the process, but editors reviewing the work found numerous hallucinations, including factual errors, missing citations, and references to unrelated sources.

"Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule," reports 404 Media.
The Courts

Trump's TikTok Deal Benefited Firms That 'Personally Enriched' Him, Lawsuit Says (nbcnews.com) 42

An anti-corruption group has filed a lawsuit (PDF) against Donald Trump and Attorney General Pam Bondi over the deal that transferred TikTok's U.S. operations to a group of investors tied to the administration. The suit claims the arrangement violates a 2024 law requiring ByteDance to divest and alleges the deal financially benefited Trump allies while leaving the platform's algorithm under Chinese ownership. NBC News reports: The suit, filed by the Public Integrity Project, a law firm that seeks to raise the "reputational cost of corruption in America," argues the deal violates a law intended to prevent the spread of Chinese government propaganda and has enriched Trump's allies. That law, signed by then-President Joe Biden in 2024, said that TikTok couldn't be distributed in the United States unless the Chinese company ByteDance found an American-based corporate home by the day before Donald Trump returned to office. The law was upheld by the Supreme Court.

"The law was clear, but it was never enforced," says the lawsuit, filed Thursday in the U.S. Court of Appeals for the District of Columbia Circuit. "Shortly after the deadline to divest passed, President Trump issued an executive order purportedly granting an extension for TikTok to find a domestic owner and directed his Attorney General not to enforce the law." The plaintiffs in the suit are two software engineers from California: One is a shareholder in Alphabet Inc., YouTube's parent company; the other is a shareholder in Meta Platforms, Inc., which is Instagram's parent company. Both say they suffered financially due to the non-enforcement of the law.
"The original motivation for this law was to prevent the Chinese government from pushing propaganda onto American audiences," said Brendan Ballou, CEO of the Public Integrity Project and a former Justice Department prosecutor. "The deal that the president approved is the absolute worst of all possible worlds, because right now ByteDance continues to own the algorithm, which means that it can censor the content that it doesn't like, but at the same time Oracle controls the data and it can censor the information that it doesn't like. Really it's a situation that's going to be terrible for users, and terrible for free speech on the platform."
Earth

Microplastics and Nanoplastics In Urban Air Originate Mainly From Tire Abrasion, Research Reveals 12

Dustin Destree shares a report from Phys.org: Although plastic particles in the air are increasingly coming into focus, knowledge about their distribution and effects is still limited. Chemical analyses from Leipzig now provide details from Germany for the first time: Around 4% of the particulate matter consists of plastic. Around two-thirds of this comes from tire abrasion. Extrapolated, this means that people in a city like Leipzig inhale approximately 2.1 micrograms of plastic per day through the air, which increases the risk of death from cardiovascular disease by 9% and from lung cancer by 13%. These findings underscore the need to take global action against plastic pollution and to examine air quality and health at the regional level, write researchers from the Leibniz Institute for Tropospheric Research (TROPOS) and Carl von Ossietzky University of Oldenburg in the journal Communications Earth & Environment. "With around two-thirds of microplastics coming from tire abrasion, this shows that action is needed and that the fine dust problem cannot be solved by switching to electric mobility alone. To protect health, it would be important to also take tire abrasion into account when regulating air quality and to set limits for microplastics in the air," urges Prof. Hartmut Herrmann from TROPOS, who led the study.

The study has been published in the journal Communications Earth & Environment.
Iphone

A Possible US Government iPhone-Hacking Toolkit Is Now In the Hands of Foreign Spies, Criminals (wired.com) 38

Security researchers say a highly sophisticated iPhone exploitation toolkit dubbed "Coruna," which possibly originated from a U.S. government contractor, has spread from suspected Russian espionage operations to crypto-stealing criminal campaigns. Apple has patched the exploited vulnerabilities in newer iOS versions, but tens of thousands of devices may have already been compromised. An anonymous reader quotes an excerpt from Wired's report: Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as "Triangulation" that was discovered targeting Russian cybersecurity firm Kaspersky in 2023, which the Russian government claimed was the work of the NSA. (The US government didn't respond to Russia's claim.)

Coruna's code also appears to have been originally written by English-speaking coders, notes iVerify's cofounder Rocky Cole. "It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government," Cole tells WIRED. "This is the first example we've seen of very likely US government tools -- based on what the code is telling us -- spinning out of control and being used by both our adversaries and cybercriminal groups." Regardless of Coruna's origin, Google warns that a highly valuable and rare hacking toolkit appears to have traveled through a series of unlikely hands, and now exists in the wild where it could still be adopted -- or adapted -- by any hacker group seeking to target iPhone users.
"How this proliferation occurred is unclear, but suggests an active market for 'second hand' zero-day exploits," Google's report reads. "Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be re-used and modified with newly identified vulnerabilities."
Privacy

Meta's AI Display Glasses Reportedly Share Intimate Videos With Human Moderators (engadget.com) 38

An anonymous reader quotes a report from Engadget: Users of Meta's AI smart glasses in Europe may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information.

With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can understand it and build its training models.

This data can end up in places like Nairobi, Kenya, often moderated by underpaid workers. Such actions are subject to Europe's GDPR rules that require transparency about how personal data is processed, according to a data protection lawyer cited in the report. However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user to not share sensitive information.

Businesses

Accenture Acquires Ookla, Downdetector As Part of $1.2 Billion Deal (theregister.com) 15

Accenture is acquiring Downdetector parent company Ookla from Ziff Davis in a $1.2 billion deal to bolster its network analytics and visibility tools for telecoms, hyperscalers, and enterprises. "The deal, which will transfer all of Ziff Davis's Connectivity division to Accenture, includes Ookla's Speedtest, Ekahau, and RootMetrics," notes The Register. From the report: "Modern networks have evolved from simple infrastructure into business-critical platforms," said Accenture CEO Julie Sweet in a canned statement. "Without the ability to measure performance, organizations cannot optimize experience, revenue, or security." Ookla is meant to let them do just that.

Data captured at the network and device layer are used to enhance fraud prevention in banking, smart-home monitoring, and traffic optimization in retail, Accenture said. Ookla's platform, which lets users test their own connectivity speed, captures more than 1,000 attributes per test, and provides the foundation for those analytics, Accenture said.

United States

Iowa County Rolls Out Extensive Zoning Rules For Data Centers (insideclimatenews.org) 38

Linn County, Iowa has adopted what may be one of the nation's strictest local zoning ordinances for data centers, requiring detailed water studies, formal water-use agreements, 1,000-foot residential setbacks, noise and light limits, and infrastructure compensation. "But seated beneath a van-sized American flag hanging from the rafters of the drafty Palo Community Center gymnasium, residents asked for even stronger protections," reports Inside Climate News. "One by one, they approached the microphone at the front of the gym to voice concerns about water use, electricity rates, light pollution, the impacts of low-frequency noise on livestock, and the county's ability to enforce the terms of the ordinance. Some, including Dorothy Landt of Palo, called for a complete moratorium on new data center development."

Landt asked: "Why has Linn County, Iowa, become a dumping ground for soon-to-be obsolete technology that spoils our landscape and robs us of our resources? While I admire the efforts of the Board of Supervisors to propose a data center ordinance, I would prefer to see all future data centers banned from Linn County." From the report: The county is already home to two major data center projects, operated by Google and QTS. Both are located in Cedar Rapids, Iowa's second-largest city, and are therefore subject to its laws. The new ordinance would apply only to unincorporated areas of the county, which make up more than two-thirds of its geographic footprint. [...] In drafting the ordinance, [Charlie Nichols, director of planning and development for Linn County] and his staff drew on the experiences of communities nationwide, meeting with local government officials in regions that have seen massive booms in data center development, including several counties in northern Virginia, the "data center capital of the world."

As data center development balloons, many communities that initially zoned the operations as warehouses or standard commercial users are abandoning that practice, Nichols noted. The extreme energy and water demands of data centers simply cannot be accounted for by existing zoning frameworks, he said. "These are generational uses with generational infrastructure impacts, and treating them as a normal warehouse or normal commercial user is just not working." [...] The Linn County, Iowa, ordinance goes one step further than tightening existing zoning rules. Instead, it creates a new, exclusive-use zoning district for data centers, granting county officials the power to set specific application requirements and development standards for projects. No other counties in the state have introduced similar zoning requirements, said Nichols. In fact, few jurisdictions nationwide have. [...]

From its first reading to final adoption, the ordinance has expanded to include language setting light pollution standards, requiring a waste management plan, including the Iowa DNR in the water-use agreement to address potential well interference issues and requiring an applicant-led public meeting before any zoning commission meetings. "I am very confident that no ordinance for data centers in Iowa is asking for more information or asking for more requirements to be met than our ordinance right now," said Nichols at the final reading. The Cedar Rapids Metro Economic Alliance has said that it strongly supports current and future data center development in the area. The new ordinance is not an effective moratorium, Nichols said. He said he "strongly believes" that a data center can be built within the adopted framework.

Canada

British Columbia To End Time Changes, Adopt Year-Round Daylight Time (www.cbc.ca) 168

An anonymous reader quotes a report from CBC.ca: The B.C. government says this Sunday will be the last time British Columbians have to change their clocks. The province will be permanently adopting daylight time and the March 8 "spring forward" will be the last time change, Premier David Eby announced Monday. "We are done waiting. British Columbia is going to change our clocks just one more time -- and then never again," Eby said. Residents will have eight months to prepare for Nov. 1, 2026, when the clocks would have been turned back one hour, but will now remain the same. B.C.'s new time zone will be called "Pacific Time," according to the province. Further reading: Permanent Standard Time Could Cut Strokes, Obesity Among Americans
AI

Editor At 184-Year-Old Ohio Newspaper Pushes To Let AI Draft News Articles (washingtonpost.com) 46

An anonymous reader quotes a report from the Washington Post: The Plain Dealer, Cleveland's largest newspaper, has begun to feature a new byline. On recent articles about an ice carving festival, a medical research discovery and a roaming pack of chicken-slaying dogs, a reporter's name is paired with the words "Advance Local Express Desk." It means: This article was drafted by artificial intelligence. "This article was produced with assistance from AI tools and reviewed by Cleveland.com staff," reads a note at the bottom of each robot-penned piece, differentiating it from those still written primarily by journalists. The disclosure has done little to stem the backlash that caromed across the news industry after the paper's editor, Chris Quinn, published a Feb. 14 column lamenting that a fresh-out-of-college job applicant withdrew from a reporting fellowship when they found out the position included no writing -- just filing notes to an AI writing tool.

"Artificial intelligence is not bad for newsrooms. It's the future of them," Quinn wrote, adding that "by removing writing from reporters' workloads, we've effectively freed up an extra workday for them each week." [...] Quinn, for his part, says his paper's use of AI to find, draft and edit stories is a success story that others must emulate if they want to survive. "It's a tool," he said in a phone interview last week. "If AI can do part of our job, then why not let it -- and have people do the part it can't do?" He added that the paper's embrace of technology -- including using AI to write stories summarizing its reporters' podcasts and its readers' letters to the editor -- is already boosting its bottom line, helping it retain staff at a time when other newspapers are shrinking or even shutting down. Just 130 miles east of Cleveland, the 240-year-old Pittsburgh Post-Gazette said in January that it will close its doors this spring.

Quinn, who has led the Plain Dealer's newsroom since 2013, said it has shrunk from some 400 employees in the late 1990s to just 71 today. Over the past three years, Quinn has implemented a suite of AI tools with various purposes: transcribing local government meetings, scraping municipal websites for story leads, cleaning up typos in story drafts, suggesting headlines and helping reporters draft follow-ups to articles they've already written. He said he is particularly pleased with an AI tool that turns podcasts by the paper's reporters into stories for the website, which he said generated more than 10 million page views last year. He has documented those efforts in letters to readers and sought their feedback. But the paper's latest experiment -- using AI to turn reporters' notes into full story drafts -- has aroused indignation online and anxiety within the paper's ranks.

Open Source

Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification' (forbrukerradet.no) 69

The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man who is called upon to make things shitty for people.

"It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
  • Stronger rights for consumers to control, adapt, repair, and alter their products and services,
  • Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible,
  • Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups, competitors or otherwise steer the market to their advantage,
  • Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols,
  • Reduce public sector dependence on big tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights,
  • Deterrent and consistent enforcement of other laws, including consumer and data protection law.

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power by providing enforcement resources and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."

They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech.

Thanks to Slashdot reader DeanonymizedCoward for sharing the news.


Earth

Chronic Ocean Heating Fuels 'Staggering' Loss of Marine Life, Study Finds (theguardian.com) 30

Slashdot reader JustAnotherOldGuy shared this report from the Guardian: Chronic ocean heating is fuelling a "staggering and deeply concerning" loss of marine life, a study has found, with fish levels falling by 7.2% from as little as 0.1C of warming per decade. Researchers examined the year-to-year change of 33,000 populations in the northern hemisphere between 1993 and 2021, and isolated the effect of the decadal rate of seabed warming from short shifts such as marine heatwaves. They found the drop in biomass from chronic heating to be as high as 19.8% in a single year.

"To put it simply, the faster the ocean floor warms, the faster we lose fish," said Shahar Chaikin, a marine ecologist at the National Museum of Natural Sciences in Spain and the study's lead author. "A 7.2% decline for every tenth of a degree per decade might sound small," he added. "But compounded over time, across entire ocean basins, it represents a staggering and deeply concerning loss of marine life."
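Chaikin's point about compounding can be made concrete with a little arithmetic. The sketch below is an editorial illustration only, not a calculation from the study: it takes the reported 7.2% decline per decade (at 0.1C of seabed warming per decade) and compounds it over a hypothetical run of decades at that same warming rate.

```python
# Editorial sketch: compounding the reported 7.2% per-decade biomass
# decline (at 0.1 C/decade of seabed warming) over several decades.
# The warming scenario is hypothetical; only the 7.2% rate comes from
# the study as reported.

def remaining_biomass(decline_per_decade: float, decades: int) -> float:
    """Fraction of the initial fish biomass left after compounding."""
    return (1 - decline_per_decade) ** decades

DECLINE = 0.072  # 7.2% per decade

for decades in (1, 3, 5):
    left = remaining_biomass(DECLINE, decades)
    print(f"after {decades} decade(s): {left:.1%} of biomass remains")
```

Run at this rate, roughly a fifth of the biomass is gone after three decades and nearly a third after five, which is why a seemingly small per-decade figure adds up to the "staggering" loss the authors describe.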

The Military

America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It (engadget.com) 64

Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.
Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.")

Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela — reportedly making Anthropic the first AI developer known to be used in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through Anthropic's contract with Palantir.
Earth

North America's Bird Populations Are Shrinking Faster. Blame Climate Change and Agriculture (apnews.com) 21

"Billions fewer birds are flying through North American skies than decades ago," reports the Associated Press, "and their population is shrinking ever faster, mostly due to a combination of intensive agriculture and warming temperatures, a new study found." Nearly half of the 261 species studied showed big enough losses in numbers to be statistically significant and more than half of those declining are seeing their losses accelerate since 1987, according to Thursday's journal Science... The only consolation is that the birds that are shrinking in numbers the fastest are species — such as the European starling, American crow, grackle and house sparrow — with large enough populations that they aren't yet at risk of going extinct, said study lead author Francois Leroy, an Ohio State ecologist...

When it came to population declines — not the acceleration — the scientists noticed bigger losses further south. When they did a deeper analysis they statistically connected those losses to warmer temperatures from human-caused climate change. "In regions where temperatures increase the most, we are seeing strongest declines in populations," [said study co-author Marta Jarzyna, an ecologist at Ohio State University]. "On the other hand, the acceleration of those declines, that's mostly driven by agricultural practices." The scientists found statistical correlations between speeded-up decline rates and high fertilizer use, high pesticide use and amount of cropland, Leroy said. He said they couldn't say any of those caused the acceleration of losses, but it indicates agriculture in general is a factor. "The stronger the agriculture, the faster we will lose birds," said Leroy...

McGill University wildlife biologist David Bird, who wasn't part of the study, said it was done well and that its conclusions made sense. With a growing human population, agriculture practices are intensified, more bird habitats are being converted to cropland, modern machinery often grinds up nests and eggs, and single-crop plantings offer fewer possibilities for birds to find food and nests, said Bird, the editor of Birds of Canada. "The biggest impact of agricultural intensity though is our war on insects. Numerous recent studies have shown that insect populations in many places throughout the world, including the U.S., have crashed by well over 40 percent," Bird said in an email. "Many of the birds in this new study showing population declines depend heavily on insects for food."

A 2019 study of the same bird species by Cornell University conservation scientist Kenneth Rosenberg also found that North America had 3 billion fewer birds than in 1970, the article points out.
Open Source

Collabora Clashes With LibreOffice Over Move To Revive LibreOffice Online (neowin.net) 29

Slashdot reader darwinmac writes: The Document Foundation (TDF), the organization behind LibreOffice, has decided to bring back its LibreOffice Online project, which has been inactive since 2022. Collabora, a company that was a major contributor to the original LibreOffice Online, is not pleased with this development. After the original project went dormant, Collabora forked the code and created its own product, Collabora Online.

Collabora's Michael Meeks, who also sits on the TDF board, reacted to the TDF's decision by saying that a fully supported, free online version already exists in the form of Collabora Online, and that resurrecting a dead repository makes little sense when an active, open community around the online suite already exists.

For now, The Document Foundation plans to reopen the old repository for new contributions. The organization has issued a warning that the code is not ready for live deployment and users should wait until the development team confirms it is stable.

The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal ? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

Slashdot Top Deals