Privacy

Telegram Allows Private Chat Reports After Founder's Arrest (techcrunch.com) 48

An anonymous reader shares a report: Telegram has quietly updated its policy to allow users to report private chats to its moderators following the arrest of founder Pavel Durov in France over "crimes committed by third parties" on the platform. [...] The Dubai-headquartered company has additionally edited its FAQ page, removing two sentences that previously emphasized its privacy stance on private chats. The earlier version had stated: "All Telegram chats and group chats are private amongst their participants. We do not process any requests related to them."

EU

US, UK, EU Sign 'Legally Binding' AI Treaty 51

On Thursday, the United States, United Kingdom and European Union signed the first "legally binding" international AI treaty, the Council of Europe human rights organization said. Called the AI Convention, the treaty promotes responsible innovation and addresses the risks AI may pose. Reuters reports: The AI Convention mainly focuses on the protection of the human rights of people affected by AI systems and is separate from the EU AI Act, which entered into force last month. The EU's AI Act entails comprehensive regulations on the development, deployment, and use of AI systems within the EU internal market. The Council of Europe, founded in 1949, is an international organization distinct from the EU with a mandate to safeguard human rights; 46 countries are members, including all 27 EU member states. An ad hoc committee started examining the feasibility of an AI framework convention in 2019, and a Committee on Artificial Intelligence formed in 2022 drafted and negotiated the text. The signatories can choose to adopt or maintain legislative, administrative or other measures to give effect to the provisions.

Francesca Fanucci, a legal expert at ECNL (European Center for Not-for-Profit Law Stichting) who contributed to the treaty's drafting process alongside other civil society groups, told Reuters the agreement had been "watered down" into a broad set of principles. "The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability," she said. Fanucci highlighted exemptions on AI systems used for national security purposes, and limited scrutiny of private companies versus the public sector, as flaws. "This double standard is disappointing," she added.

AT&T

AT&T Sues Broadcom For Breaching VMware Support Extension Contract (theregister.com) 76

AT&T has filed a lawsuit against Broadcom, alleging that Broadcom is refusing to honor an extended support agreement for VMware software unless AT&T purchases additional subscriptions it doesn't need. AT&T warns that the consequences could include massive outages for its customer support operations and critical federal services, including the Office of the President. The Register reports: A complaint [PDF] filed last week in the Supreme Court of New York State explains that AT&T holds perpetual licenses for VMware software and paid for support services under a contract that ends on September 8. The complaint also alleges that AT&T has an option to extend that support deal for two years -- provided it activates the option before the end of the current deal. AT&T's filing claims it exercised that option, but that Broadcom "is refusing to honor" the contract. Broadcom has apparently told AT&T it will continue to provide support if the comms giant "agrees to purchase scores of subscription services and software." AT&T counters that it "does not want or need" those subscriptions, because they:

- Would impose significant additional contractual and technological obligations on AT&T;
- Would require AT&T to invest potentially millions to develop its network to accommodate the new software;
- May violate certain rights of first refusal that AT&T has granted to third parties;
- Would cost AT&T tens of millions more than the price of the support services alone.

[...] The complaint also suggests Broadcom's refusal to extend support creates enormous risk for US national security -- some of the ~8,600 servers that host AT&T's ~75,000 VMs "are dedicated to various national security and public safety agencies within the federal government as well as the Office of the President." Other VMs are relied upon by emergency responders, and still more "deliver services to millions of AT&T customers worldwide" according to the suit. Without support from Broadcom, AT&T claims it fears "widespread network outages that could cripple the operations of millions of AT&T customers worldwide" because it may not be able to fix VMware's software.

The Courts

Snap Sued Over 'Sextortion' of Kids By Predators (cnbc.com) 41

New Mexico Attorney General Raul Torrez has filed a lawsuit against Snap, accusing Snapchat of fostering and promoting illicit sexual material involving children, facilitating sextortion, and enabling trafficking of children, drugs, and guns. CNBC reports: The suit alleges that Snap "repeatedly made statements to the public regarding the safety and design of its platforms that it knew were untrue," or that were contradicted by the company's own internal findings. "Snap was specifically aware, but failed to warn children and parents, of 'rampant' and 'massive' sextortion on its platform -- a problem so grave that it drives children facing merciless and relentless blackmail demands or disclosure of intimate images to their families and friends to suicide," the suit says.

New Mexico's Department of Justice, which Torrez leads, in recent months conducted an investigation that found that there was a "vast network of dark web sites dedicated to sharing stolen, non-consensual sexual images from Snap" and that there were more than 10,000 records related to Snap and child sexual abuse material "in the last year alone," the department said. The suit alleges violations of New Mexico's unfair trade practices law.

Privacy

Leaked Disney Data Reveals Financial and Strategy Secrets (msn.com) 48

An anonymous reader shares a report: Passport numbers for a group of Disney cruise line workers. Disney+ streaming revenue. Sales of Genie+ theme park passes. The trove of data from Disney that was leaked online by hackers earlier this summer includes a range of financial and strategy information that sheds light on the entertainment giant's operations, according to files viewed by The Wall Street Journal. It also includes personally identifiable information of some staff and customers.

The leaked files include granular details about revenue generated by such products as Disney+ and ESPN+; park pricing offers the company has modeled; and what appear to be login credentials for some of Disney's cloud infrastructure. (The Journal didn't attempt to access any Disney systems.) "We decline to comment on unverified information The Wall Street Journal has purportedly obtained as a result of a bad actor's illegal activity," a Disney spokesman said. Disney told investors in an August regulatory filing that it is investigating the unauthorized release of "over a terabyte of data" from one of its communications systems. It said the incident hadn't had a material impact on its operations or financial performance and doesn't expect that it will.

Data that a hacking entity calling itself Nullbulge released online spans more than 44 million messages from Disney's Slack workplace communications tool, upward of 18,800 spreadsheets and at least 13,000 PDFs, the Journal found. The scope of the material taken appears to be limited to public and private channels within Disney's Slack that one employee had access to. No private messages between executives appear to be included. Slack is only one online forum in which Disney employees communicate at work.

Crime

Fake CV Lands Top 'Engineer' In Jail For 15 Years (bbc.com) 90

Daniel Mthimkhulu, former chief "engineer" at South Africa's Passenger Rail Agency (Prasa), was sentenced to 15 years in prison for claiming false engineering degrees and a doctorate. His fraudulent credentials allowed him to rise rapidly within Prasa, contributing to significant financial losses and corruption within the agency. The BBC reports: Once hailed for his successful career, Daniel Mthimkhulu was head of engineering at the Passenger Rail Agency of South Africa (Prasa) for five years -- earning an annual salary of about [$156,000]. On his CV, the 49-year-old claimed to have had several mechanical engineering qualifications, including a degree from South Africa's respected Witwatersrand University as well as a doctorate from a German university. However, the court in Johannesburg heard that he had only completed his high-school education.

Mthimkhulu was arrested in July 2015 shortly after his web of lies began to unravel. He had started working at Prasa 15 years earlier, shooting up the ranks to become chief engineer thanks to his fake qualifications. The court also heard how he had forged a job offer letter from a German company, which encouraged Prasa to increase his salary so the agency would not lose him. He was also at the forefront of a 600m rand deal to buy dozens of new trains from Spain, but they could not be used in South Africa as they were too tall. [...] In an interview from 2019 with local broadcaster eNCA, Mthimkhulu admitted that he did not have a PhD. "I failed to correct the perception that I have it. I just became comfortable with the title. I did not foresee any damages as a result of this," he said.

Businesses

Nvidia Hit With DOJ Subpoena In Escalating Antitrust Probe (reuters.com) 13

According to Bloomberg (paywalled), Nvidia has received a subpoena from the U.S. Department of Justice as the regulator seeks evidence that the AI computing company violated antitrust laws. "The antitrust watchdog had previously delivered questionnaires to companies, and is now sending legally binding requests," notes Reuters. "Officials are concerned that the chipmaker is making it harder to switch to other suppliers and penalizes buyers that do not exclusively use its artificial intelligence chips."

The development follows a push by progressive groups last month, who criticized Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. In July, French antitrust regulators announced plans to charge the company for alleged anti-competitive practices.

Developing...

The Courts

Clearview AI Fined $33.7 Million Over 'Illegal Database' of Faces (apnews.com) 40

An anonymous reader quotes a report from the Associated Press: The Dutch data protection watchdog on Tuesday issued facial recognition startup Clearview AI with a fine of $33.7 million over its creation of what the agency called an "illegal database" of billions of photos of faces. The Netherlands' Data Protection Agency, or DPA, also warned Dutch companies that using Clearview's services is banned. The data agency said that New York-based Clearview "has not objected to this decision and is therefore unable to appeal against the fine."

But in a statement emailed to The Associated Press, Clearview's chief legal officer, Jack Mulcaire, said that the decision is "unlawful, devoid of due process and is unenforceable." The Dutch agency said that building the database and insufficiently informing people whose images appear in the database amounted to serious breaches of the European Union's General Data Protection Regulation, or GDPR. "Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world," DPA chairman Aleid Wolfsen said in a statement. "If there is a photo of you on the Internet -- and doesn't that apply to all of us? -- then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China," he said. DPA said that if Clearview doesn't halt the breaches of the regulation, it faces noncompliance penalties of up to $5.6 million on top of the fine.
Mulcaire said Clearview doesn't fall under EU data protection regulations. "Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR," he said.

The Courts

Shrinkwrap 'Contract' Found At Costco On... Collagen Peptides (mastodon.social) 74

Slashdot covered shrinkwrap licenses on software back in 2000 and 2002. But now ewhac (Slashdot reader #5,844) writes: The user Wraithe on the Mastodon network is reporting that a bottle of Vital Proteins(TM) collagen peptides purchased at Costco came with a shrinkwrap contract. Collagen peptides are often used as an anti-aging nutritional supplement. The top of the Vital Proteins bottle has a pull-to-open seal. Printed on the seal is the following: "Read This: By opening and using this product, you agree to be bound by our Terms and Conditions, fully set forth at vitalproteins.com/tc, which includes a mandatory arbitration agreement. If you do not agree to be bound, please return this product immediately."

So-called "shrinkwrap contracts" have been the subject of controversy and derision for decades, since their first widespread appearance in the 1970s, for attempting to alter the terms of sale after the fact, imposing unethical and onerous restrictions on the purchaser, and absolving the vendor of all liability. Most such contracts appear on items involving copyrighted works (computer software, or any item containing computer software). The alleged "validity" of such contracts supposedly proceeds from the (alleged) need that the item requires a copyright license from the vendor to use (because the right to use/read/listen/view/execute is somehow not concomitant with purchase), and that the shrinkwrap contract furnishes such a license.

The application of such a contract to a good where copyright has no scope, however, is something new. The alleged contract itself governs consumers' use of "the VitalProteins.com website and any other applications, content, products, and services (collectively, the "Service")...," contains the usual we're-not-responsible-for-anything indemnification paragraph, and unilaterally removes your right to seek redress in a court of law, imposing binding arbitration for any disputes that may arise between the consumer and the company. Indeed, the arbitration clause is the first numbered section in the alleged contract.

The same contract has been spotted by numerous others — including someone who posted about it on Reddit two years ago. ("When I opened it, encountered a vacuum seal with the following 'READ THIS: by opening and using this product, you agree to...'") But the same verbiage still appears in online listings today for the product from Albertsons, Walgreens, and CVS.

Shrinkwrap contracts. They're not just for software any more...

United States

Investigation Finds 'Little Oversight' Over Crucial Supply Chain for US Election Software (politico.com) 94

Politico reports U.S. states have no uniform way of policing the use of overseas subcontractors in election technology, "let alone to understand which individual software components make up a piece of code."

For example, to replace New Hampshire's old voter registration database, state election officials "turned to one of the best — and only — choices on the market," Politico writes: "a small, Connecticut-based IT firm that was just getting into election software." But last fall, as the new company, WSD Digital, raced to complete the project, New Hampshire officials made an unsettling discovery: The firm had offshored part of the work. That meant unknown coders outside the U.S. had access to the software that would determine which New Hampshirites would be welcome at the polls this November.

The revelation prompted the state to take a precaution that is rare among election officials: It hired a forensic firm to scour the technology for signs that hackers had hidden malware deep inside the coding supply chain. The probe unearthed some unwelcome surprises: software misconfigured to connect to servers in Russia ["probably by accident," they write later] and the use of open-source code — which is freely available online — overseen by a Russian computer engineer convicted of manslaughter, according to a person familiar with the examination and granted anonymity because they were not authorized to speak about it... New Hampshire officials say the scan revealed another issue: A programmer had hard-coded the Ukrainian national anthem into the database, in an apparent gesture of solidarity with Kyiv.
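
Politico doesn't detail the forensic firm's methods, but one check such a review typically automates is scanning a source tree for hard-coded remote endpoints and flagging any host that isn't on an expected list. A rough sketch of that idea, with a hypothetical allowlist (this is illustrative, not the firm's actual tooling):

```python
import os
import re

# Capture the hostname from any http(s) URL embedded in source or config.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

# Hypothetical allowlist: hosts the project is expected to contact.
ALLOWED_HOSTS = {"api.example.gov", "updates.example.com"}

def flag_unexpected_hosts(root: str) -> dict[str, set[str]]:
    """Walk the tree under `root`, extract hostnames from hard-coded URLs,
    and return, per file, any hostnames not on the allowlist."""
    findings: dict[str, set[str]] = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    hosts = set(URL_RE.findall(f.read()))
            except OSError:
                continue  # unreadable file; skip it
            unexpected = hosts - ALLOWED_HOSTS
            if unexpected:
                findings[path] = unexpected
    return findings
```

A scan like this would surface a misconfigured endpoint pointing at an unfamiliar server, though it says nothing about code that builds its destinations at runtime; that is why the real review also examined dependencies and their maintainers.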

None of the findings amounted to evidence of wrongdoing, the officials said, and the company resolved the issues before the new database came into use ahead of the presidential vote this spring. This was "a disaster averted," said the person familiar with the probe, citing the risk that hackers could have exploited the first two issues to surreptitiously edit the state's voter rolls, or use them and the presence of the Ukrainian national anthem to stoke election conspiracies. [Though WSD only maintains one other state's voter registration database — Vermont] the supply-chain scare in New Hampshire — which has not been reported before — underscores a broader vulnerability in the U.S. election system, POLITICO found during a six-month-long investigation: There is little oversight of the supply chain that produces crucial election software, leaving financially strapped state and county offices to do the best they can with scant resources and expertise.

The technology vendors who build software used on Election Day face razor-thin profit margins in a market that is unforgiving commercially and toxic politically. That provides little room for needed investments in security, POLITICO found. It also leaves states with minimal leverage over underperforming vendors, who provide them with everything from software to check in Americans at their polling stations to voting machines and election night reporting systems. Many states lack a uniform or rigorous system to verify what goes into software used on Election Day and whether it is secure.

The article also points out that many state and federal election officials "insist there has been significant progress" since 2016, with more regular state-federal communication. "The Cybersecurity and Infrastructure Security Agency, now the lead federal agency on election security, didn't even exist back then.

"Perhaps most importantly, more than 95% of U.S. voters now vote by hand or on machines that leave some type of paper trail, which officials can audit after Election Day."

Crime

Was the Arrest of Telegram's CEO Inevitable? (platformer.news) 174

Casey Newton, former senior editor at the Verge, weighs in on Platformer about the arrest of Telegram CEO Pavel Durov.

"Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to." Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users who share CSAM in private channels as much as they want to. "There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene in any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them...."

Telegram can look at the contents of private messages, making it vulnerable to law enforcement requests for that data. Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. [...] To this day, we have disclosed 0 bytes of user data to third parties, including governments.
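
Telegram doesn't publish the cryptography behind this arrangement, but the general idea, splitting a key into parts so that no single holder (or jurisdiction) can decrypt anything alone, can be sketched with a simple XOR-based n-of-n secret split. Function names are hypothetical; this is an illustration of the concept, not Telegram's implementation:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; all n are required to reconstruct it.
    Any n-1 shares are indistinguishable from random bytes."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR the key with every random share; the residue is the final share.
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)    # a 256-bit data-encryption key
shares = split_key(key, 3)       # e.g. one share per legal entity
assert recombine(shares) == key  # only all three together recover the key
```

Because recovering the key requires every share, a court order served in one jurisdiction yields only random-looking bytes; compelling disclosure means obtaining coordinated orders in each jurisdiction holding a share, which is the obstacle course the FAQ describes.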

As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM.... The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior. "Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. It's ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely."

The article asks whether France's action "will embolden countries around the world to prosecute platform CEOs criminally for failing to turn over user data." On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law.

Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and end-to-end encryption, need not be.

Crime

Woman Mailed Herself an Apple AirTag To Help Catch Mail Thieves (cnn.com) 103

Several items were stolen from a woman's P.O. box. So she mailed herself a package containing an Apple AirTag, according to the Santa Barbara County Sheriff's office: Her mail was again stolen on Monday morning, including the package with the AirTag that she was able to track.

It is important to note that the victim did not attempt to contact the suspects on her own... The Sheriff's Office would like to commend the victim for her proactive solution, while highlighting that she also exercised appropriate caution by contacting law enforcement to safely and successfully apprehend the suspects.

CNN reports on what the authorities found: The suspected thieves were located in nearby Santa Maria, California, with the victim's mail — including the package containing the AirTag — and other items authorities believe were stolen from more than a dozen victims, according to the sheriff's office. Virginia Franchessca Lara, 27, and Donald Ashton Terry, 37, were arrested in connection with the crime, authorities said.

Lara was booked on felonies including possession of checks with intent to commit fraud, fictitious checks, identity theft, credit card theft and conspiracy, and remains held on $50,000 bail as of Thursday, jail records show. Terry faces felony charges including burglary, possession of checks with intent to commit fraud, credit card theft, identity theft and conspiracy, and was held on $460,000 bail, according to jail records...

Authorities said they're working on contacting other victims of theft in this case.

Thanks to long-time Slashdot reader schwit1 for sharing the news.

Power

US Government Opens Up 31 Million Acres of Federal Lands For Solar (electrek.co) 103

An anonymous reader quotes a report from Electrek: The Biden administration has finalized a plan to expand solar on 31 million acres of federal lands in 11 western states. The updated Western Solar Plan is a roadmap for the Bureau of Land Management's (BLM) governance of solar energy proposals and projects on public lands. It bumps up the acreage from the 22 million acres recommended in January, and adds five additional states -- Idaho, Montana, Oregon, Washington, and Wyoming -- to the six states -- Arizona, California, Colorado, Nevada, New Mexico, and Utah -- analyzed in the original plan.

It would make the public lands available for potential solar development, putting solar farms closer to transmission lines or on previously disturbed lands and avoiding protected lands, sensitive cultural resources, and important wildlife habitats. [...] BLM surpassed its goal of permitting more than 25 gigawatts (GW) of clean energy projects on public lands earlier in 2024. It's permitted 29 GW of projects on public lands -- enough to power over 12 million homes. The Biden administration set the goal to achieve 100% clean electricity on the US grid by 2035.

The Courts

City of Columbus Sues Man After He Discloses Severity of Ransomware Attack (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials. The order, issued by a judge in Ohio's Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city's data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group's dark web site, which is accessible to anyone with a TOR browser.

Columbus Mayor Andrew Ginther said on August 13 that a "breakthrough" in the city's forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them "unusable" to the thieves. Ginther went on to say the data's lack of integrity was likely the reason the ransomware group had been unable to auction off the data. Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross (PDF) for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him "interacting" with them and required special expertise and tools. The suit went on to challenge Ross alerting reporters to the information, which it claimed would not be easily obtained by others. "Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so," city attorneys wrote. "The dark web-posted data is not readily available for public consumption. Defendant is making it so." The same day, a Franklin County judge granted the city's motion for a temporary restraining order (PDF) against Ross. It bars the researcher "from accessing, and/or downloading, and/or disseminating" any city files that were posted to the dark web. The motion was made and granted "ex parte," meaning in secret before Ross was informed of it or had an opportunity to present his case.

Security

Malware Infiltrates Pidgin Messenger's Official Plugin Repository (bleepingcomputer.com) 10

The Pidgin messaging app removed the ScreenShareOTR plugin from its third-party plugin list after it was found to be used to install keyloggers, information stealers, and malware targeting corporate networks. BleepingComputer reports: The plugin was promoted as a screen-sharing tool for the secure Off-The-Record (OTR) protocol and was available for both Windows and Linux versions of Pidgin. According to ESET, the malicious plugin was configured to infect unsuspecting users with DarkGate, a powerful malware that threat actors have used to breach networks since QBot's dismantling by the authorities. [...] Those who installed it are recommended to remove it immediately and perform a full system scan with an antivirus tool, as DarkGate may be lurking on their system.

After publishing our story, Pidgin's maintainer and lead developer, Gary Kramlich, notified us on Mastodon to say that they do not keep track of how many times a plugin is installed. To prevent similar incidents from happening in the future, Pidgin announced that, from now on, it will only accept third-party plugins that have an OSI Approved Open Source License, allowing scrutiny into their code and internal functionality.
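
Open-source licensing makes plugin code reviewable, but users can also apply a more basic supply-chain check before installing anything: comparing a downloaded file against a digest the author publishes over a separate trusted channel. A minimal sketch of that check (the file name and digest in the usage comment are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_plugin(path: str, expected_digest: str) -> bool:
    """True only if the file matches the digest the author published."""
    return sha256_of(path) == expected_digest

# Hypothetical usage: refuse to install on a mismatch.
# if not verify_plugin("screenshare-otr.so", "3b5d5c37..."):
#     raise SystemExit("digest mismatch: do not install this plugin")
```

A matching digest only proves the file is the one the author published; it doesn't help if the author's own distribution point serves malware, which is why reviewable source code and an approved-license policy like Pidgin's matter as well.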

The Courts

$400 Million Algorithmic System Illegally Denied Thousands of Medicaid Benefits (gizmodo.com) 64

An anonymous reader quotes a report from Gizmodo: Thousands of Tennesseans were illegally denied Medicaid and other benefits due to programming and data errors in an algorithmic system the state uses to determine eligibility for low-income residents and people with disabilities, a U.S. District Court judge ruled this week. The TennCare Connect system -- built by Deloitte and other contractors for more than $400 million -- is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn't load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision (PDF) from Middle District of Tennessee Judge Waverly Crenshaw Jr.

"When an enrollee is entitled to state-administered Medicaid, it should not require luck, perseverance, and zealous lawyering for him or her to receive that healthcare coverage," Crenshaw wrote in his opinion. The decision was a result of a class action lawsuit filed in 2020 on behalf of 35 adults and children who were denied benefits. [...] ]Crenshaw found that TennCare Connect did not consider whether applicants were eligible for all available programs before it terminated their coverage. Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked (PDF) the Federal Trade Commission to investigate Deloitte's practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company's faulty systems.

Encryption

Feds Bust Alaska Man With 10,000+ CSAM Images Despite His Many Encrypted Apps (arstechnica.com) 209

A recent indictment (PDF) of an Alaska man stands out due to the sophisticated use of multiple encrypted communication tools, privacy-focused apps, and dark web technology. "I've never seen anyone who, when arrested, had three Samsung Galaxy phones filled with 'tens of thousands of videos and images' depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called 'Calculator Photo Vault,'" writes Ars Technica's Nate Anderson. "Nor have I seen anyone arrested for CSAM having used all of the following: [Potato Chat, Enigma, nandbox, Telegram, TOR, Mega NZ, and web-based generative AI tools/chatbots]." An anonymous reader shares the report: According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own -- and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later "zoomed in on and enhanced those images using AI-powered technology." Secondly, he took this imagery he had created and then "turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see." In other words, he created fake AI CSAM -- but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have "created his own public Telegram group to store his CSAM." He also joined "multiple CSAM-related Enigma groups" and frequented dark websites with taglines like "The Only Child Porn Site you need!" Despite all the precautions, Herrera's home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera "was arrested this morning with another smartphone -- the same make and model as one of his previously seized devices."

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera "tried to access a link containing apparent CSAM." Presumably, this "apparent" CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it. In the end, given that fatal click, none of the "I'll hide it behind an encrypted app that looks like a calculator!" technical sophistication accomplished much. Forensic reviews of Herrera's three phones now form the primary basis for the charges against him, and Herrera himself allegedly "admitted to seeing CSAM online for the past year and a half" in an interview with the feds.

Government

California Passes Bill Requiring Easier Data Sharing Opt Outs (therecord.media) 22

Most of the attention today has been focused on California's controversial "kill switch" AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week which requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state's Senate passed the landmark legislation after the Assembly approved it late Wednesday. The Senate then added amendments to the bill, which now goes back to the Assembly for final sign-off before it is sent to the governor's desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a "formality." California, long a bellwether for privacy regulation, now sets an example for other states, which could offer the same protections and in doing so dramatically disrupt the online advertising ecosystem, according to Schwartz.

"If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers' data," Schwartz said in an interview with Recorded Future News. "You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that's going to have a big impact." As it stands, many Californians don't know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has "artificially suppressed" the existing regulation's intended effects. "It shouldn't be that hard to send the universal opt out signal," Schwartz added. "This will require [browsers and mobile operating systems] to make that setting easy to use and find."

AI

California Legislature Passes Controversial 'Kill Switch' AI Safety Bill (arstechnica.com) 56

An anonymous reader quotes a report from Ars Technica: A controversial bill aimed at enforcing safety standards for large artificial intelligence models has now passed the California State Assembly by a 45-11 vote. Following a 32-1 state Senate vote in May, SB-1047 now faces just one more procedural state Senate vote before heading to Governor Gavin Newsom's desk. As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deepfakes or misinformation. [...]

If the Senate confirms the Assembly version as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If he vetoes it, the legislature could override with a two-thirds vote in each chamber (a strong possibility given the overwhelming votes in favor of the bill). At a UC Berkeley Symposium in May, Newsom said he worried that "if we over-regulate, if we overindulge, if we chase a shiny object, we could put ourselves in a perilous position." At the same time, Newsom said those over-regulation worries were balanced against concerns he was hearing from leaders in the AI industry. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," he said at the symposium. "When they're rushing to educate people, and they're basically saying, 'We don't know, really, what we've done, but you've got to do something about it,' that's an interesting environment."
Supporters of the AI safety bill include state senator Scott Wiener and AI experts including Geoffrey Hinton and Yoshua Bengio. Bengio supports the bill as a necessary step for consumer protection and insists that AI should not be self-regulated by corporations, akin to other industries like pharmaceuticals and aerospace.

Stanford professor Fei-Fei Li opposes the bill, arguing that it could have harmful effects on the AI ecosystem by discouraging open-source collaboration and limiting academic research due to the liability placed on developers of modified models. A group of business leaders also sent an open letter Wednesday urging Newsom to veto the bill, calling it "fundamentally flawed."

The Courts

Appeals Court Questions TikTok's Section 230 Shield for Algorithm (reuters.com) 92

A U.S. appeals court has revived a lawsuit against TikTok over a child's death, potentially limiting tech companies' legal shield under Section 230. The 3rd U.S. Circuit Court of Appeals ruled that the law does not protect TikTok from claims that its algorithm recommended a deadly "blackout challenge" to a 10-year-old girl.

Judge Patty Shwartz wrote that Section 230 only immunizes third-party content, not recommendations made by TikTok's own algorithm. The decision marks a departure from previous rulings, citing a recent Supreme Court opinion that platform algorithms reflect "editorial judgments." This interpretation could significantly impact how courts apply Section 230 to social media companies' content curation practices.
