AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com)

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he had never done before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told senators, and her son was diagnosed as being at risk of suicide and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe the courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Earth

Extreme Heat Spurs New Laws Aimed at Protecting Workers Worldwide (nytimes.com)

Governments worldwide are implementing heat protection laws as 2.4 billion workers face extreme temperature exposure and 19,000 die annually from heat-related workplace injuries, according to a World Health Organization and World Meteorological Organization report.

Japan imposed $3,400 fines for employers failing to provide cooling measures when wet-bulb temperatures reach 28C. Singapore mandated hourly temperature sensors at large outdoor sites and requires 15-minute breaks every hour at 33C wet-bulb readings. Southern European nations ordered afternoon work stoppages this summer when temperatures exceeded 115F across Greece, Italy and Spain.

The United States lacks federal heat standards; only California, Colorado, Maryland, Minnesota, Nevada, Oregon and Washington have state-level protections. Boston passed requirements for heat illness prevention plans on city projects. Enforcement remains inconsistent -- Singapore inspectors found nearly one-third of 70 sites violated the 2023 law. Texas and Florida prohibit local governments from mandating rest and water breaks.
Earth

Darkest Nights Are Getting Lighter (ieee.org)

Light pollution now doubles every eight years globally as LED adoption accelerates artificial brightness worldwide. A recent study measured 10% annual growth in light pollution from 2011 to 2022. Northern Chile's Atacama Desert remains one of the few Bortle Scale 1 locations -- the darkest rating for astronomical observation -- though La Serena's population has nearly doubled in 25 years. The region hosts major observatories including the Vera C. Rubin Observatory at Cerro Pachon.

Satellite constellations pose additional challenges: the number of operating satellites has grown from a few hundred decades ago to roughly 12,000 today, and astronomers predict 100,000 or more within a decade. Chile also faces pressure from proposed industrial projects, including the 7,400-acre INNA green-hydrogen facility near key astronomical sites, despite national laws limiting artificial light from mining operations -- an industry that generates over half the country's exports.
Earth

Corals Won't Survive a Warmer Planet, a New Study Finds (nytimes.com)

If global temperatures continue rising, virtually all the corals in the Atlantic Ocean will stop growing and could succumb to erosion by the end of the century, a new study finds. From a report: The analysis of over 400 existing coral reefs across the Atlantic Ocean estimates that more than 70 percent of the region's reefs will begin dying by 2040 even under optimistic climate warming scenarios. And if the planet exceeds 2 degrees Celsius of warming above preindustrial temperatures by the end of the century, 99 percent of corals in the region would meet this fate. Today, the planet has warmed about 1.3 degrees Celsius over preindustrial temperatures.

The implications are grave. Corals act as the fundamental building blocks of reefs, providing habitat for thousands of species of fish and other marine life. They are also bulwarks that break up waves and help protect shorelines from rising sea levels. A quarter of all ocean life depends on coral reefs and over a billion people worldwide benefit from them, according to the National Oceanic and Atmospheric Administration.

United States

Anthropic Denies Federal Agencies Use of Claude for Surveillance Tasks (semafor.com)

Anthropic has declined requests from federal law enforcement contractors to use its Claude AI models for surveillance activities, deepening tensions with the Trump administration, Semafor reported Wednesday, citing two senior officials. The company's usage policies prohibit domestic surveillance, limiting how agencies including the FBI, Secret Service, and Immigration and Customs Enforcement can deploy its technology. While Anthropic maintains a $1 contract with federal agencies through AWS GovCloud and works with the Department of Defense on non-weapons applications, administration officials said the restrictions amount to making moral judgments about law enforcement operations.
Earth

Gas Stove Makers Quietly Delete Air Pollution Warnings as They Fight Mandatory Health Labels (grist.org)

The home appliance industry would like you to believe that gas-burning stoves are not a risk to your health -- and several companies that make the devices are scrambling to erase their prior acknowledgements that they are. From a report: That claim is at the heart of a lawsuit the Association of Home Appliance Manufacturers has filed against the state of Colorado to stop it from requiring natural gas stoves, which burn methane, to carry health labels not unlike those on every pack of cigarettes. "Understand the air quality implications of having an indoor gas stove," the warning would read.

The law was to take effect August 5 but is now on hold, and state officials did not respond to a request for comment. In its federal lawsuit, the Association -- whose board includes representatives of LG Electronics, BSH Home Appliance Corp. (which makes Bosch appliances), Whirlpool, and Samsung Electronics -- asserts that the labeling requirement is "unconstitutional compelled speech" that violates the First Amendment. It calls the legislation a climate law disguised as a health law and, most strikingly, it claims there is "no association between gas stoves and adverse health outcomes."
