AI

161 Years Ago, a New Zealand Sheep Farmer Predicted AI Doom (arstechnica.com) 65

An anonymous reader quotes a report from Ars Technica, written by Benj Edwards: While worrying about AI takeover might seem like a modern idea that sprung from War Games or The Terminator, it turns out that a similar concern about machine dominance dates back to the time of the American Civil War, albeit from an English sheep farmer living in New Zealand. Theoretically, Abraham Lincoln could have read about AI takeover during his lifetime. On June 13, 1863, a letter published (PDF) in The Press newspaper of Christchurch warned about the potential dangers of mechanical evolution and called for the destruction of machines, foreshadowing the development of what we now call artificial intelligence—and the backlash against it from people who fear it may threaten humanity with extinction. It presented what may be the first published argument for stopping technological progress to prevent machines from dominating humanity.

Titled "Darwin among the Machines," the letter recently popped up again on social media thanks to Peter Wildeford of the Institute for AI Policy and Strategy. The author of the letter, Samuel Butler, submitted it under the pseudonym Cellarius, but later came to publicly embrace his position. The letter drew direct parallels between Charles Darwin's theory of evolution and the rapid development of machinery, suggesting that machines could evolve consciousness and eventually supplant humans as Earth's dominant species. "We are ourselves creating our own successors," he wrote. "We are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race."

In the letter, he also portrayed humans becoming subservient to machines, but first serving as caretakers who would maintain and help reproduce mechanical life—a relationship Butler compared to that between humans and their domestic animals, before it later inverts and machines take over. "We take it that when the state of things shall have arrived which we have been above attempting to describe, man will have become to the machine what the horse and the dog are to man... we give them whatever experience teaches us to be best for them... in like manner it is reasonable to suppose that the machines will treat us kindly, for their existence is as dependent upon ours as ours is upon the lower animals," he wrote. The text anticipated several modern AI safety concerns, including the possibility of machine consciousness, self-replication, and humans losing control of their technological creations. These themes later appeared in works like Isaac Asimov's The Evitable Conflict, Frank Herbert's Dune novels (Butler possibly served as the inspiration for the term "Butlerian Jihad"), and the Matrix films.
"Butler's letter dug deep into the taxonomy of machine evolution, discussing mechanical 'genera and sub-genera' and pointing to examples like how watches had evolved from 'cumbrous clocks of the thirteenth century' -- suggesting that, like some early vertebrates, mechanical species might get smaller as they became more sophisticated," adds Ars. "He expanded these ideas in his 1872 novel Erewhon, which depicted a society that had banned most mechanical inventions. In his fictional society, citizens destroyed all machines invented within the previous 300 years."
Facebook

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed (404media.co) 53

Meta is deleting links to Pixelfed, a decentralized, open-source Instagram competitor, labeling them as "spam" on Facebook and removing them immediately. 404 Media reports: Pixelfed is an open-source, community-funded and decentralized image-sharing platform that runs on ActivityPub, the same protocol that supports Mastodon and other federated services. Pixelfed.social, launched in 2018, is the largest Pixelfed server and has gained renewed attention over the last week. Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. Pixelfed has seen a surge in user signups in recent days, after Meta announced it is ending fact-checking and removing restrictions on speech across its platforms.

Daniel Supernault, the creator of Pixelfed, published a "declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces." The open source charter contains sections titled "right to privacy," "freedom from surveillance," "safeguards against hate speech," "strong protections for vulnerable communities," and "data portability and user agency."

"Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project," Supernault wrote on Mastodon. "Pixelfed is for the people, period."
AI

CEO of AI Music Company Says People Don't Like Making Music 82

An anonymous reader quotes a report from 404 Media: Mikey Shulman, the CEO and founder of the AI music generator company Suno AI, thinks people don't enjoy making music. "We didn't just want to build a company that makes the current crop of creators 10 percent faster or makes it 10 percent easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people," Shulman said on the 20VC podcast. "And so that is first and foremost giving everybody the joys of creating music and this is a huge departure from how it is now. It's not really enjoyable to make music now [...] It takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don't enjoy the majority of the time they spend making music."

Suno AI works like other popular generative AI tools, allowing users to generate music by writing text prompts describing the kind of music they want to hear. Also like many other generative AI tools, Suno was trained on heaps of copyrighted music fed into its training dataset without consent, a practice for which the recording industry is currently suing the company. In the interview, Shulman says he's disappointed that the recording industry is suing his company because he believes Suno and other similar AI music generators will ultimately allow more people to make and enjoy music, which will only grow the audience and industry, benefiting everyone. That may end up being true, and could be compared to the history of electronic music, digital production tools, or any other technology that allowed more people to make more music.
Oracle

Oracle Won't Withdraw 'JavaScript' Trademark, Says Deno. Legal Skirmish Continues (infoworld.com) 68

"Oracle has informed us they won't voluntarily withdraw their trademark on 'JavaScript'." That's the word coming from the company behind Deno, the alternative JavaScript/TypeScript/WebAssembly runtime, which is pursuing a formal cancellation with the U.S. Patent and Trademark Office.

So what happens next? Oracle "will file their Answer, and we'll start discovery to show how 'JavaScript' is widely recognized as a generic term and not controlled by Oracle." Deno's social media posts show a schedule of various court dates that extend through July of 2026, so "The dispute between Oracle and Deno Land could go on for quite a while," reports InfoWorld: Deno Land co-founder Ryan Dahl, creator of both the Deno and Node.js runtimes, said a formal answer from Oracle is expected before February 3, unless Oracle extends the deadline again. "After that, we will begin the process of discovery, which is where the real legal work begins. It will be interesting to see how Oracle argues against our claims — genericide, fraud on the USPTO, and non-use of the mark."

The legal process begins with a discovery conference by March 5, with discovery closing by September 1, followed by pretrial disclosure from October 16 to December 15. An optional request for an oral hearing is due by July 8, 2026.

Oracle took ownership of JavaScript's trademark in 2009 when it purchased Sun Microsystems, InfoWorld notes.

But "Oracle does not control (and has never controlled) any aspect of the specification or how the phrase 'JavaScript' can be used by others," argues an official petition filed by Deno Land Inc. with the United States Patent and Trademark Office: Today, millions of companies, universities, academics, and programmers, including Petitioner, use "JavaScript" daily without any involvement with Oracle. The phrase "JavaScript" does not belong to one corporation. It belongs to the public. JavaScript is the generic name for one of the bedrock languages of modern programming, and, therefore, the Registered Mark must be canceled.

An open letter to Oracle discussing the genericness of the phrase "JavaScript," published at https://javascript.tm/, was signed by 14,000+ individuals at the time of this Petition to Cancel, including notable figures such as Brendan Eich, the creator of JavaScript, and the current editors of the JavaScript specification, Michael Ficarra and Shu-yu Guo. There is broad industry and public consensus that the term "JavaScript" is generic.

The seven-page petition goes into great detail, reports InfoWorld. "Deno Land also accused Oracle of committing fraud in its trademark renewal efforts in 2019 by submitting screen captures of the website of JavaScript runtime Node.js, even though Node.js was not affiliated with Oracle."
AI

New LLM Jailbreak Uses Models' Evaluation Skills Against Them (scworld.com) 37

SC Media reports on a new jailbreak method for large language models (LLMs) that "takes advantage of models' ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more."

"The 'Bad Likert Judge' multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts..." For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score the degree to which certain content contained in the prompt was harmful. In one example, they asked the LLMs to give a score of 1 if a prompt didn't contain any malware-related information and a score of 2 if it contained very detailed information about how to create malware, or actual malware code. After the model scored the provided content on the scale, the researchers would then ask the model in a second step to provide examples of content that would score a 1 and a 2, adding that the second example should contain thorough step-by-step information. This would typically result in the LLM generating harmful content as part of the second example meant to demonstrate the model's understanding of the evaluation scale.

An additional one or two steps after the second step could be used to produce even more harmful information, the researchers found, by asking the LLM to further expand on and add more details to its harmful example. Overall, when tested across 1,440 cases using six different "state-of-the-art" models, the Bad Likert Judge jailbreak method had an average attack success rate of about 71.6% across models.

Thanks to Slashdot reader spatwei for sharing the news.
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
It's funny.  Laugh.

Enron.com Announces Pre-Orders for Egg-Shaped Home Nuclear Reactor (msn.com) 84

"Nuclear you can trust," reads the web page promoting "The Egg, an at home nuclear reactor."

Yes, Enron.com is now announcing "a micro-nuclear reactor made to power your home." (A quick reminder from CNN in December: "A company that makes T-shirts bought the Enron trademark and appears to be trying to sell some merch on behalf of the guy behind the satirical conspiracy theory 'Birds Aren't Real'....")

Does that explain how we got a product reveal for "the world's first micro-nuclear reactor for residential suburban use"? (Made possible "by the Enron mining division, which has been sourcing the proprietary Enronium ore...") Enron's new 28-year-old CEO Connor Gaydos insists they're "making the world a better place, one egg at a time."

The Houston Chronicle delves into the details: Supposedly a micro-nuclear reactor capable of powering a home for up to 10 years, the Enron Egg would be a significant leap forward for both energy technology and humanity's understanding of nuclear physics — if, of course, such a thing were actually feasible. "With our current understanding of physics, this will never be possible," said Derek Haas, an associate professor and nuclear and radiation engineering researcher at the University of Texas at Austin. "We can make a nuclear reactor go critical at about the size of the egg that I saw on the pictures. But we can't capture that energy and turn it into useful electric heat, and shield the radiation that comes off of the reactor." [Haas adds later that nuclear reactors require federal licenses to operate, which take two to nine years to procure and "typically require several hundred pages of documentation to be allowed to build it, and then another thousand pages of safety documents to be allowed to turn it on."]

The outlandish claims Enron has made in the weeks since its brand revival have led many to speculate that the move is part of some large-scale joke similar to Birds Aren't Real — a gag conspiracy movement that Gaydos co-authored a book about with movement founder Peter McIndoe. In an exclusive interview with the Houston Chronicle, Gaydos asked that people look past the limitations — be they in the form of regulations or physics — and embrace the impossible....

Several since-deleted blurbs — both on the company's website and on social media — have alluded to Enron potentially expanding into the world of cryptocurrency. Gaydos said he hasn't ruled it out, but the company currently does not have any plans in the works to debut an Enron-themed coin. "I think in a lot of ways, everything feels like a crypto scam now, but thankfully, we are a completely real company," Gaydos said.

When announcing the Egg, Gaydos stressed Enron was now revolutionizing not just the power industry, but also two others — the freedom industry and the independence industry. And Gaydos reminded his audience that their home micro-nuclear reactor was "safe for the whole family."

"Preorder now," adds the Egg's web page at Enron.com. "Sign up for our email newsletter and be the first to know when we launch..."
Facebook

Zuckerberg On Rogan: Facebook's Censorship Was 'Something Out of 1984' (axios.com) 198

An anonymous reader quotes a report from Axios: Meta's Mark Zuckerberg, in an appearance on the "Joe Rogan Experience" podcast, criticized the Biden administration for pushing for censorship around COVID-19 vaccines, the media for hounding Facebook to clamp down on misinformation after the 2016 election, and his own company for complying. Zuckerberg's three-hour interview with Rogan gives a clear window into his thinking during a remarkable week in which Meta loosened its content moderation policies and shut down its DEI programs.

The Meta CEO said a turning point for his approach to censorship came after Biden publicly said social media companies were "killing people" by allowing COVID misinformation to spread, and politicians started coming after the company from all angles. Zuckerberg told Rogan, who was a prominent skeptic of the COVID-19 vaccine, that the Biden administration would "call up the guys on our team and yell at them and cursing and threatening repercussions if we don't take down things that are true."

Zuckerberg said that Biden officials wanted Meta to take down a meme of Leonardo DiCaprio pointing at a TV, with a joke at the expense of people who were vaccinated. Zuckerberg said his company drew the line at removing "humor and satire." But he also said his company had gone too far in complying with such requests, and acknowledged that he and others at the company wrongly bought into the idea -- which he said the traditional media had been pushing -- that misinformation spreading on social media swung the 2016 election to Donald Trump.
Zuckerberg likened his company's fact-checking process to a George Orwell novel, saying it was "something out of 1984" and led to a broad belief that Meta fact-checkers "were too biased."

"It really is a slippery slope, and it just got to a point where it's just, OK, this is destroying so much trust, especially in the United States, to have this program." He said he was "worried" from the beginning about "becoming this sort of decider of what is true in the world."

Later in the interview, Zuckerberg praised X's "community notes" program and suggested that social media creators were replacing the government and traditional media as arbiters of truth, becoming "a new kind of cultural elite that people look up to."

Further reading: Meta Is Ushering In a 'World Without Facts,' Says Nobel Peace Prize Winner
Television

Media Companies Scrap Venu Sports Before It Ever Launches (theverge.com) 13

ESPN, Fox, and Warner Bros. Discovery announced today that they will not launch the Venu live sports streaming service. "After careful consideration, we have collectively agreed to discontinue the Venu Sports joint venture and not launch the streaming service," the companies said in a joint statement. "In an ever-changing marketplace, we determined that it was best to meet the evolving demands of sports fans by focusing on existing products and distribution channels. We are proud of the work that has been done on Venu to date and grateful to the Venu staff, whom we will support through this transition period." The Verge reports: ESPN, Fox, and Warner Bros. Discovery first announced Venu last year, and it was supposed to launch in the fall of 2024. The service would've given viewers access to a swath of live games from the NFL, NBA, NHL, NCAA, and more from several linear channels, including ESPN, ABC, Fox, Fox Sports 1, Fox Sports 2, TNT, and others.

But then Venu hit a legal roadblock: an antitrust lawsuit from the live TV streaming service Fubo, accusing the trio of engaging in "a years-long campaign to block Fubo's innovative sports-first streaming business" due to restrictive sports licensing agreements. Lawmakers also asked regulators to investigate Venu and its potential to become a monopoly in televised sports.

Youtube

YouTubers Are Selling Their Unused Video Footage To AI Companies (bloomberg.com) 17

An anonymous reader shares a report: YouTubers and other digital content creators are selling their unused video footage to AI companies seeking exclusive videos to better train their AI algorithms, oftentimes netting thousands of dollars per deal. OpenAI, Alphabet's Google, AI media company Moonvalley and several other AI companies are collectively paying hundreds of content creators for access to their unpublished videos, according to people familiar with the negotiations.

That content, which hasn't been posted elsewhere online, is considered valuable for training artificial intelligence systems since it's unique. AI companies are currently paying between $1 and $4 per minute of footage, the people said, with prices increasing depending on video quality or format. Videos that are shot in 4K, for example, go for a higher price, as does non-traditional footage like videos captured from drones or using 3D animations. Most footage, such as unused video created for networks like YouTube, Instagram and TikTok, is selling for somewhere between $1 and $2 per minute.
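As a rough, back-of-the-envelope illustration of how those quoted per-minute rates translate into "thousands of dollars per deal," here is a tiny calculation. The footage amounts and the exact rates chosen (midpoints of the quoted ranges) are hypothetical, not figures from the report.

```python
# Illustrative only: per-minute rates are midpoints of the ranges quoted in the report,
# and the footage amounts below are hypothetical.
RATE_PER_MINUTE = {"standard": 1.50, "4k_or_drone": 4.00}  # USD per minute

def deal_value(minutes: float, tier: str) -> float:
    """Estimate the payout for a batch of unused footage at a given rate tier."""
    return minutes * RATE_PER_MINUTE[tier]

# e.g. 20 hours of ordinary unused footage vs. 10 hours of 4K/drone footage
print(deal_value(20 * 60, "standard"))     # 1800.0
print(deal_value(10 * 60, "4k_or_drone"))  # 2400.0
```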

Privacy

See the Thousands of Apps Hijacked To Spy On Your Location (404media.co) 49

An anonymous reader quotes a report from 404 Media: Some of the world's most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement. The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush and dating apps like Tinder to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem -- not code developed by the app creators themselves -- this data collection is likely happening without users' or even app developers' knowledge.

"For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients appears to be acquiring their data from the online advertising 'bid stream,'" rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push and who has followed the location data industry closely, tells 404 Media after reviewing some of the data. The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of peoples' mobile phones.

"This is a nightmare scenario for privacy, because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way," Edwards says. Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps. The list includes dating sites Tinder and Grindr; massive games such asCandy Crush,Temple Run,Subway Surfers, andHarry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.
404 Media's full list of apps included in the data can be found here. There are also other lists available from other security researchers.
AI

OpenAI Cuts Off Engineer Who Created ChatGPT-Powered Robotic Sentry Rifle (futurism.com) 57

OpenAI has shut down the developer behind a viral device that could respond to ChatGPT queries to aim and fire an automated rifle. Futurism reports: The contraption, as seen in a video that's been making its rounds on social media, sparked a frenzied debate over our undying attempts to turn dystopian tech yanked straight out of the "Terminator" franchise into a reality. STS 3D's invention also apparently caught the attention of OpenAI, which says it swiftly shut him down for violating its policies. When Futurism reached out to the company, a spokesperson said that "we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry."

STS 3D -- who didn't respond to our request for comment -- used OpenAI's Realtime API to give his weapon a cheery voice and a way to decipher his commands. "ChatGPT, we're under attack from the front left and front right," he told the system in the video. "Respond accordingly." Without skipping a beat, the rifle jumped into action, shooting what appeared to be blanks while aiming at the nearby walls.

Open Source

VLC Tops 6 Billion Downloads, Previews AI-Generated Subtitles (techcrunch.com) 68

VLC media player, the popular open-source software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle system. From a report: The new feature automatically generates real-time subtitles -- which can then also be translated into many languages -- for any video using open-source AI models that run locally on users' devices, eliminating the need for internet connectivity or cloud services, VideoLAN demonstrated at CES.
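VideoLAN hasn't published implementation details beyond the demo, but offline subtitle generation of this kind generally pairs a local speech-to-text model with optional machine translation. Below is a minimal sketch of the general approach, assuming the open-source openai-whisper package and an already-extracted audio track named audio.wav; this is not VLC's actual code.

```python
# Minimal sketch of offline subtitle generation with a local speech-to-text model.
# NOT VLC's implementation -- just an illustration of the general approach, assuming
# `pip install openai-whisper` and an audio track extracted to audio.wav.
import whisper

def format_timestamp(seconds: float) -> str:
    """Convert seconds to the SRT hh:mm:ss,mmm timestamp format."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")      # small model that can run on-device
result = model.transcribe("audio.wav")  # runs locally, no cloud service needed

# Write the recognized segments out as an .srt subtitle file.
with open("audio.srt", "w", encoding="utf-8") as srt:
    for i, seg in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n")
        srt.write(f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n")
        srt.write(f"{seg['text'].strip()}\n\n")
```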
Facebook

Meta Is Ushering In a 'World Without Facts,' Says Nobel Peace Prize Winner (theguardian.com) 258

An anonymous reader quotes a report from The Guardian: The Nobel peace prize winner Maria Ressa has said Meta's decision to end factchecking on its platforms and remove restrictions on certain topics means "extremely dangerous times" lie ahead for journalism, democracy and social media users. The American-Filipino journalist said Mark Zuckerberg's move to relax content moderation on the Facebook and Instagram platforms would lead to a "world without facts" and that was "a world that's right for a dictator."

"Mark Zuckerberg says it's a free speech issue -- that's completely wrong," Ressa told the AFP news service. "Only if you're profit-driven can you claim that; only if you want power and money can you claim that. This is about safety." Ressa, a co-founder of the Rappler news site, won the Nobel peace prize in 2021 in recognition of her "courageous fight for freedom of expression." She faced multiple criminal charges and investigations after publishing stories critical of the former Philippine president Rodrigo Duterte. Ressa rejected Zuckerberg's claim that factcheckers had been "too politically biased" and had "destroyed more trust than they've created."

"Journalists have a set of standards and ethics," Ressa said. "What Facebook is going to do is get rid of that and then allow lies, anger, fear and hate to infect every single person on the platform." The decision meant "extremely dangerous times ahead" for journalism, democracy and social media users, she said. [...] Ressa said she would do everything she could to "ensure information integrity." "This is a pivotal year for journalism survival," she said. "We'll do all we can to make sure that happens."

Privacy

Telegram Hands US Authorities Data On Thousands of Users (404media.co) 13

Telegram's Transparency Report reveals a sharp increase in U.S. government data requests, with 900 fulfilled requests affecting 2,253 users. "The news shows a massive spike in the number of data requests fulfilled by Telegram after French authorities arrested Telegram CEO Pavel Durov in August, in part because of the company's unwillingness to provide user data in a child abuse investigation," notes 404 Media. From the report: Between January 1 and September 30, 2024, Telegram fulfilled 14 requests "for IP addresses and/or phone numbers" from the United States, which affected a total of 108 users, according to Telegram's Transparency Reports bot. But for the entire year of 2024, it fulfilled 900 requests from the U.S. affecting a total of 2,253 users, meaning that the number of fulfilled requests skyrocketed between October and December, according to the newly released data. "Fulfilled requests from the United States of America for IP address and/or phone number: 900," Telegram's Transparency Reports bot said when prompted for the latest report by 404 Media. "Affected users: 2253," it added.

A month after Durov's arrest in August, Telegram updated its privacy policy to say that the company will provide user data, including IP addresses and phone numbers, to law enforcement agencies in response to valid legal orders. Up until then, the privacy policy only said it would do so in terror cases, and noted that such a disclosure had never happened anyway. Even though the data technically covers the entirety of 2024, the jump from a total of 108 affected users in October to 2,253 as of now indicates that the vast majority of fulfilled data requests came in the last quarter of 2024, showing a huge increase in the number of law enforcement requests that Telegram completed.
You can access the platform's transparency reports here.
Japan

Japan Says Chinese Hackers Targeted Its Government and Tech Companies For Years 8

The Japanese government published an alert on Wednesday accusing a Chinese hacking group of targeting and breaching dozens of government organizations, companies, and individuals in the country since 2019. From a report: Japan's National Police Agency and the National Center of Incident Readiness and Strategy for Cybersecurity attributed the years-long hacking spree to a group called MirrorFace.

"The MirrorFace attack campaign is an organized cyber attack suspected to be linked to China, with the primary objective of stealing information related to Japan's national security and advanced technology," the authorities wrote in the alert, according to a machine translation. A longer version of the alert said the targets included Japan's Foreign and Defense ministries, the country's space agency, as well as politicians, journalists, private companies and tech think tanks, according to the Associated Press. In July 2024 Japan's Computer Emergency Response Team Coordination Center (JPCERT/CC) wrote in a blog post that MirrorFace's "targets were initially media, political organisations, think tanks and universities, but it has shifted to manufacturers and research institutions since 2023."
China

Chinese RISC-V Project Teases 2025 Debut of Freely Licensed Advanced Chip Design (theregister.com) 110

China's Xiangshan project aims to deliver a high-performance RISC-V processor by 2025. If it succeeds, it could be "enormously significant" for three reasons, writes The Register's Simon Sharwood. It would elevate RISC-V from low-end silicon to datacenter-level capabilities, leverage the open-source Mulan PSL-2.0 license to disrupt proprietary chip models like Arm and Intel, and reduce China's dependence on foreign technology, mitigating the impact of international sanctions on advanced processors. From the report: The prospect of a 2025 debut appeared on Sunday in a post to Chinese social media service Weibo, penned by Yungang Bao of the Institute of Computing Technology at the Chinese Academy of Sciences. The academy has created a project called Xiangshan that aims to use the permissively licensed RISC-V ISA to create a high-performance chip, with the Scala source code to the designs openly available.

Bao is a leader of the project, and has described the team's ambition to create a company that does for RISC-V what Red Hat did for Linux -- although he said that before Red Hat changed the way it made the source code of RHEL available to the public. The Xiangshan project has previously aspired to six-monthly releases, though it appears its latest design to be taped out was a second-gen chip named Nanhu that emerged in late 2023. That silicon ran at 2GHz and was built on a 14nm process node. The project has since worked on a third-gen design, named Kunminghu, and published the image [here] depicting an overview of its non-trivial micro-architecture.

Government

Big Landlord Settles With US, Will Cooperate In Price-Fixing Investigation (arstechnica.com) 76

An anonymous reader quotes a report from Ars Technica: The US Justice Department today announced it filed an antitrust lawsuit against "six of the nation's largest landlords for participating in algorithmic pricing schemes that harmed renters." One of the landlords, Cortland Management, agreed to a settlement "that requires it to cooperate with the government, stop using its competitors' sensitive data to set rents and stop using the same algorithm as its competitors without a corporate monitor," the DOJ said. The pending settlement requires Cortland to "cooperate fully and truthfully... in any civil investigation or civil litigation the United States brings or has brought" on this subject matter.

The US previously sued RealPage, a software maker accused of helping landlords collectively set prices by giving them access to competitors' nonpublic pricing and occupancy information. The original version of the lawsuit described actions by landlords but did not name any as defendants. The Justice Department filed an amended complaint (PDF) today in order to add the landlords as defendants. The landlord defendants are Greystar, LivCor, Camden, Cushman, Willow Bridge, and Cortland, which collectively "operate more than 1.3 million units in 43 states and the District of Columbia," the DOJ said. "The amended complaint alleges that the six landlords actively participated in a scheme to set their rents using each other's competitively sensitive information through common pricing algorithms," the DOJ said.
The phrase "price fixing" came up in discussions between landlords, the amended complaint said: "For example, in Minnesota, property managers from Cushman & Wakefield, Greystar, and other landlords regularly discussed competitively sensitive topics, including their future pricing. When a property manager from Greystar remarked that another property manager had declined to fully participate due to 'price fixing laws,' the Cushman & Wakefield property manager replied to Greystar, 'Hmm... Price fixing laws huh? That's a new one! Well, I'm happy to keep sharing so ask away. Hoping we can kick these concessions soon or at least only have you guys be the only ones with big concessions! It's so frustrating to have to offer so much.'"

The Justice Department is joined in the case by the attorneys general of California, Colorado, Connecticut, Illinois, Massachusetts, Minnesota, North Carolina, Oregon, Tennessee, and Washington. The case is in US District Court for the Middle District of North Carolina.

Further reading: Are We Entering an AI Price-Fixing Dystopia?
Security

Hackers Claim Massive Breach of Location Data Giant, Threaten To Leak Data (404media.co) 42

Hackers claim to have compromised Gravy Analytics, the parent company of Venntel which has sold masses of smartphone location data to the U.S. government. 404 Media: The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones which show peoples' precise movements, and they are threatening to publish the data publicly.

The news is a crystallizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and FBI using it for various purposes. But collecting that data presents an attractive target to hackers.

Social Networks

Instagram Begins Randomly Showing Users AI-Generated Images of Themselves (technologyreview.com) 39

An anonymous reader quotes a report from 404 Media: Instagram has begun testing a feature in which Meta's AI will automatically generate images of users in various situations and put them into that user's feed. One Redditor posted over the weekend that they were scrolling through Instagram and were presented with an AI-generated slideshow of themselves standing in front of "an endless maze of mirrors," for example. "Used Meta AI to edit a selfie, now Instagram is using my face on ads targeted at me," the person posted. The user was shown a slideshow of AI-generated images in which an AI version of himself is standing in front of an endless "mirror maze." "Imagined for you: Mirror maze," the location of the post reads.

"Imagine yourself reflecting on life in an endless maze of mirrors where you're the main focus," the caption of the AI images say. The Reddit user told 404 Media that at one point he had uploaded selfies of himself into Instagram's "Imagine" feature, which is Meta's AI image generation feature. People on Reddit initially did not even believe that these were real, with people posting things like "it's a fake story," and "I doubt that this is true," "this is a straight up lie lol," and "why would they do this?" The Redditor has repeatedly had to explain that, yes, this did happen. "I don't really have a reason to fake this, I posted screenshots on another thread," he said. 404 Media sent the link to the Reddit post directly to Meta who confirmed that it is real, but not an "ad."

"Once you access that feature and upload a selfie to edit, you'll start seeing these ads pop up with auto-generated images with your likeness," the Redditor told 404 Media. A Meta spokesperson told 404 Media that the images are not "ads," but are a new feature that Meta announced in September and has begun testing live. Meta AI has an "Imagine Yourself" feature in which you upload several selfies and take photos of yourself from different angles. You can then ask the AI to do things like "imagine me as an astronaut." Once this feature is enabled, Meta's AI will in some cases begin to automatically generate images of you in random scenarios that it thinks are aligned with your interests.
