×
Privacy

Proton Acquires Standard Notes (zdnet.com) 10

Privacy startup Proton already offers an email app, a VPN tool, cloud storage, a password manager, and a calendar app. In April 2022, Proton acquired SimpleLogin, an open-source product that generates email aliases to protect inboxes from spam and phishing. Today, Proton acquired Standard Notes, advancing its already strong commitment to the open-source community. From a report: Standard Notes is an open-source note-taking app, available on both mobile and desktop platforms, with a user base of over 300,000. [...] Proton founder and CEO Andy Yen makes a point of stating that Standard Notes will remain open-source, will continue to undergo independent audits, will continue to develop new features and updates, and that prices for the app/service will not change. Standard Notes has three tiers: Free, which includes 100MB of storage, offline access, and unlimited device sync; Productivity for $90 per year, which includes features like markdown, spreadsheets with advanced formulas, Daily Notebooks, and two-factor authentication; and Professional for $120 per year, which includes 100GB of cloud storage, sharing for up to five accounts, no file limit size, and more.
AI

In America, A Complex Patchwork of State AI Regulations Has Already Arrived (cio.com) 13

While the European Parliament passed a wide-ranging "AI Act" in March, "Leaders from Microsoft, Google, and OpenAI have all called for AI regulations in the U.S.," writes CIO magazine. Even the Chamber of Commerce, "often opposed to business regulation, has called on Congress to protect human rights and national security as AI use expands," according to the article, while the White House has released a blueprint for an AI bill of rights.

But even though the U.S. Congress hasn't passed AI legislation — 16 different U.S. states have, "and state legislatures have already introduced more than 400 AI bills across the U.S. this year, six times the number introduced in 2023." Many of the bills are targeted both at the developers of AI technologies and the organizations putting AI tools to use, says Goli Mahdavi, a lawyer with global law firm BCLP, which has established an AI working group. And with populous states such as California, New York, Texas, and Florida either passing or considering AI legislation, companies doing business across the US won't be able to avoid the regulations. Enterprises developing and using AI should be ready to answer questions about how their AI tools work, even when deploying automated tools as simple as spam filtering, Mahdavi says. "Those questions will come from consumers, and they will come from regulators," she adds. "There's obviously going to be heightened scrutiny here across the board."
There's sector-specific bills, and bills that demand transparency (of both development and output), according to the article. "The third category of AI bills covers broad AI bills, often focused on transparency, preventing bias, requiring impact assessment, providing for consumer opt-outs, and other issues."

One example the article notes is Senate Bill 1047, introduced in the California State Legislature in February, "would require safety testing of AI products before they're released, and would require AI developers to prevent others from creating derivative models of their products that are used to cause critical harms."

Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills, tells CIO that many of the bills promote best practices in privacy and data security, but said the fragmented regulatory environment "underscores the call for national standards or laws to provide a coherent framework for AI usage."

Thanks to Slashdot reader snydeq for sharing the article.
IOS

Recent 'MFA Bombing' Attacks Targeting Apple Users (krebsonsecurity.com) 15

An anonymous reader quotes a report from KrebsOnSecurity: Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple's password reset feature. In this scenario, a target's Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds "Allow" or "Don't Allow" to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user's account is under attack and that Apple support needs to "verify" a one-time code. [...]

What sanely designed authentication system would send dozens of requests for a password change in the span of a few moments, when the first requests haven't even been acted on by the user? Could this be the result of a bug in Apple's systems? Kishan Bagaria is a hobbyist security researcher and engineer who founded the website texts.com (now owned by Automattic), and he's convinced Apple has a problem on its end. In August 2019, Bagaria reported to Apple a bug that allowed an exploit he dubbed "AirDoS" because it could be used to let an attacker infinitely spam all nearby iOS devices with a system-level prompt to share a file via AirDrop -- a file-sharing capability built into Apple products.

Apple fixed that bug nearly four months later in December 2019, thanking Bagaria in the associated security bulletin. Bagaria said Apple's fix was to add stricter rate limiting on AirDrop requests, and he suspects that someone has figured out a way to bypass Apple's rate limit on how many of these password reset requests can be sent in a given timeframe. "I think this could be a legit Apple rate limit bug that should be reported," Bagaria said.

AI

OpenAI's Chatbot Store is Filling Up With Spam (techcrunch.com) 11

An anonymous reader shares a report: When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI's generative AI models, onstage at the company's first-ever developer conference in November, he described them as a way to "accomplish all sorts of tasks" -- from programming to learning about esoteric scientific subjects to getting workout pointers. "Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you," Altman said. "You can build a GPT ... for almost anything." He wasn't kidding about the anything part.

TechCrunch found that the GPT Store, OpenAI's official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI's moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.

Google

Google is Starting To Squash More Spam and AI in Search Results (theverge.com) 49

Google announced updates to its search ranking systems aimed at promoting high-quality content and demoting manipulative or low-effort material, including content generated by AI solely to summarize other sources. The company also stated it is improving its ability to detect and combat tactics used to deceive its ranking algorithms.
AI

Researchers Create AI Worms That Can Spread From One System to Another (arstechnica.com) 46

Long-time Slashdot reader Greymane shared this article from Wired: [I]n a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms — which can spread from one system to another, potentially stealing data or deploying malware in the process. "It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before," says Ben Nassi, a Cornell Tech researcher behind the research. Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages — breaking some security protections in ChatGPT and Gemini in the process...in test environments [and not against a publicly available email assistant]...

To create the generative AI worm, the researchers turned to a so-called "adversarial self-replicating prompt." This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies... To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system — by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which "poisons" the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it "jailbreaks the GenAI service" and ultimately steals data from the emails, Nassi says. "The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client," Nassi says. In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. "By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent," Nassi says.

In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. "It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential," Nassi says.

The researchers reported their findings to Google and OpenAI, according to the article, with OpenAI confirming "They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered." OpenAI says they're now working to make their systems "more resilient."

Google declined to comment on the research.
Google

Google is Blocking RCS on Rooted Android Devices (theverge.com) 105

Google is cracking down on rooted Android devices, blocking multiple people from using the RCS message feature in Google Messages. From a report: Users with rooted phones -- a process that unlocks privileged access to the Android operating system, like jailbreaking iPhones -- have made several reports on the Google Messages support page, Reddit, and XDA's web forum over the last few months, finding they're suddenly unable to send or receive RCS messages. One example from Reddit user u/joefuf shows that RCS messages would simply vanish after hitting the send button. Several reports also mention that Google Messages gave no indication that RCS chat was no longer working, and was still showing as connected and working in Google Messages. In a statement sent to the Verge where we asked if Google is blocking rooted devices from using RCS, Google communications manager Ivy Hunt said the company is "ensuring that message-issuing/receiving devices are following the operating measures defined by the RCS standard" in a bid to prevent spam and abuse on Google Messages. In other words, yes, Google is blocking RCS on rooted devices.
Security

GitHub Besieged By Millions of Malicious Repositories In Ongoing Attack (arstechnica.com) 50

An anonymous reader quotes a report from Ars Technica: GitHub is struggling to contain an ongoing attack that's flooding the site with millions of code repositories. These repositories contain obfuscated malware that steals passwords and cryptocurrency from developer devices, researchers said. The malicious repositories are clones of legitimate ones, making them hard to distinguish to the casual eye. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original one. The result is millions of forks with names identical to the original one that add a payload that's wrapped under seven layers of obfuscation. To make matters worse, some people, unaware of the malice of these imitators, are forking the forks, which adds to the flood.

"Most of the forked repos are quickly removed by GitHub, which identifies the automation," Matan Giladi and Gil David, researchers at security firm Apiiro, wrote Wednesday. "However, the automation detection seems to miss many repos, and the ones that were uploaded manually survive. Because the whole attack chain seems to be mostly automated on a large scale, the 1% that survive still amount to thousands of malicious repos." Given the constant churn of new repos being uploaded and GitHub's removal, it's hard to estimate precisely how many of each there are. The researchers said the number of repos uploaded or forked before GitHub removes them is likely in the millions. They said the attack "impacts more than 100,000 GitHub repositories."
GitHub issued the following statement: "GitHub hosts over 100M developers building across over 420M repositories, and is committed to providing a safe and secure platform for developers. We have teams dedicated to detecting, analyzing, and removing content and accounts that violate our Acceptable Use Policies. We employ manual reviews and at-scale detections that use machine learning and constantly evolve and adapt to adversarial tactics. We also encourage customers and community members to report abuse and spam."
Social Networks

Supreme Court Hears Landmark Cases That Could Upend What We See on Social Media (cnn.com) 282

The US Supreme Court is hearing oral arguments Monday in two cases that could dramatically reshape social media, weighing whether states such as Texas and Florida should have the power to control what posts platforms can remove from their services. From a report: The high-stakes battle gives the nation's highest court an enormous say in how millions of Americans get their news and information, as well as whether sites such as Facebook, Instagram, YouTube and TikTok should be able to make their own decisions about how to moderate spam, hate speech and election misinformation. At issue are laws passed by the two states that prohibit online platforms from removing or demoting user content that expresses viewpoints -- legislation both states say is necessary to prevent censorship of conservative users.

More than a dozen Republican attorneys general have argued to the court that social media should be treated like traditional utilities such as the landline telephone network. The tech industry, meanwhile, argues that social media companies have First Amendment rights to make editorial decisions about what to show. That makes them more akin to newspapers or cable companies, opponents of the states say. The case could lead to a significant rethinking of First Amendment principles, according to legal experts. A ruling in favor of the states could weaken or reverse decades of precedent against "compelled speech," which protects private individuals from government speech mandates, and have far-reaching consequences beyond social media. A defeat for social media companies seems unlikely, but it would instantly transform their business models, according to Blair Levin, an industry analyst at the market research firm New Street Research.

AI

Can Robots.txt Files Really Stop AI Crawlers? (theverge.com) 97

In the high-stakes world of AI, "The fundamental agreement behind robots.txt [files], and the web as a whole — which for so long amounted to 'everybody just be cool' — may not be able to keep up..." argues the Verge: For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing. "What we found pretty quickly with the AI companies," says Medium CEO Tony Stubblebin, "is not only was it not an exchange of value, we're getting nothing in return. Literally zero." When Stubblebine announced last fall that Medium would be blocking AI crawlers, he wrote that "AI companies have leached value from writers in order to spam Internet readers."

Over the last year, a large chunk of the media industry has echoed Stubblebine's sentiment. "We do not believe the current 'scraping' of BBC data without our permission in order to train Gen AI models is in the public interest," BBC director of nations Rhodri Talfan Davies wrote last fall, announcing that the BBC would also be blocking OpenAI's crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.

It's not just publishers, either. Amazon, Facebook, Pinterest, WikiHow, WebMD, and many other platforms explicitly block GPTBot from accessing some or all of their websites.

On most of these robots.txt pages, OpenAI's GPTBot is the only crawler explicitly and completely disallowed. But there are plenty of other AI-specific bots beginning to crawl the web, like Anthropic's anthropic-ai and Google's new Google-Extended. According to a study from last fall by Originality.AI, 306 of the top 1,000 sites on the web blocked GPTBot, but only 85 blocked Google-Extended and 28 blocked anthropic-ai. There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft's Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves — many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic.

For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff.

In addition, the article points out, a robots.txt file "is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved.

"Disallowing a bot on your robots.txt page is like putting up a 'No Girls Allowed' sign on your treehouse — it sends a message, but it's not going to stand up in court."
Spam

The Unsettling Scourge of Obituary Spam (theverge.com) 39

Many websites are using AI tools to generate fake obituaries about average people for profit. These articles lack substantiating details but are optimized for SEO, frequently outranking legitimate obituaries, The Verge reports. The fake obituaries, as one can imagine, are causing distress for grieving families and friends. In response, Google told The Verge that it aims to surface high-quality information but struggles with "data voids." The company terminated some YouTube channels sharing fake notices but declined to say if the flagged websites violate policies.
Social Networks

Is AI Hastening the Demise of Quora? (slate.com) 57

Quora "used to be a thriving community that worked to answer our most specific questions," writes Slate. "But users are fleeing," while the site hosts "a never-ending avalanche of meaningless, repetitive sludge, filled with bizarre, nonsensical, straight-up hateful, and A.I.-generated entries..."

The site has faced moderation issues, spam, trolls, and bots re-posting questions from Reddit (plus competition for ad revenue from sites like Facebook and Google which forced cuts in Quora's support and moderation teams). But automating its moderation "did not improve the situation...

"Now Quora is even offering A.I.-generated images to accompany users' answers, even though the spawned illustrations make little sense." To top it all off, after Quora began using A.I. to "generate machine answers on a number of selected question pages," the site made clear the possibility that human-crafted answers could be used for training A.I. This meant that the detailed writing Quorans provided mostly for free would be ingested into a custom large language model. Updated terms of service and privacy policies went into effect at the site last summer. As angel investor and Quoran David S. Rose paraphrased them: "You grant all other Quora users the unlimited right to reuse and adapt your answers," "You grant Quora the right to use your answers to train an LLM unless you specifically opt out," and "You completely give up your right to be any part of any class action suit brought against Quora," among others. (Quora's Help Center claims that "as of now, we do not use answers, posts, or comments added to Quora to train LLMs used for generating content on Quora. However, this may change in the future." The site offers an opt-out setting, although it admits that "opting out does not cover everything.")

This raised the issue of consent and ownership, as Quorans had to decide whether to consent to the new terms or take their work and flee. High-profile users, like fantasy author Mercedes R. Lackey, are removing their work from their profiles and writing notes explaining why. "The A.I. thing, the terms of service issue, has been a massive drain of top talent on Quora, just based on how many people have said, Downloaded my stuff and I'm out of there," Lackey told me. It's not that all Quorans want to leave, but it's hard for them to choose to remain on a website where they now have to constantly fight off errors, spam, trolls, and even account impersonators....

The tragedy of Quora is not just that it crushed the flourishing communities it once built up. It's that it took all of that goodwill, community, expertise, and curiosity and assumed that it could automate a system that equated it, apparently without much thought to how pale the comparison is. [Nelson McKeeby, an author who joined Quora in 2013] has a grim prediction for the future: "Eventually Quora will be robot questions, robot answers, and nothing else." I wonder how the site will answer the question of why Quora died, if anyone even bothers to ask.

The article notes that Andreessen Horowitz gave Quora "a much-needed $75 million investment — but only for the sake of developing its on-site generative-text chatbot, Poe."
Security

How a Data Breach of 1M Cancer Center Patients Led to Extorting Emails (seattletimes.com) 37

The Seattle Times reports: Concerns have grown in recent weeks about data privacy and the ongoing impacts of a recent Fred Hutchinson Cancer Center cyberattack that leaked personal information of about 1 million patients last November. Since the breach, which hit the South Lake Union cancer research center's clinical network and has led to a host of email threats from hackers and lawsuits against Fred Hutch, menacing messages from perpetrators have escalated.

Some patients have started to receive "swatting" threats, in addition to spam emails warning people that unless they pay a fee, their names, Social Security and phone numbers, medical history, lab results and insurance history will be sold to data brokers and on black markets. Steve Bernd, a spokesperson for FBI Seattle, said last week there's been no indication of any criminal swatting events... Other patients have been inundated with spam emails since the breach...

According to The New York Times, large data breaches like this are becoming more common. In the first 10 months of 2023, more than 88 million individuals had their medical data exposed, according to the Department of Health and Human Services. Meanwhile, the number of reported ransomware incidents, when a specific malware blocks a victim's personal data until a ransom is paid, has decreased in recent years — from 516 in 2021 to 423 in 2023, according to Bernd of FBI Seattle. In Washington, the number dropped from 84 to 54 in the past three years, according to FBI data.

Fred Hutchinson Cancer Center believes their breach was perpetrated outside the U.S. by exploiting the "Citrix Bleed" vulnerability (which federal cybersecurity officials warn can allow the bypassing of passwords and mutifactor authentication measures).

The article adds that in late November, the Department of Health and Human Services' Health Sector Cybersecurity Coordination Center "urged hospitals and other organizations that used Citrix to take immediate action to patch network systems in order to protect against potentially significant ransomware threats."
Google

AI-Generated Content Can Sometimes Slip Into Your Google News Feed (engadget.com) 37

Google News is sometimes boosting sites that rip-off other outlets by using AI to rapidly churn out content, 404 Media claims: From the report: Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News. The presence of AI-generated content on Google News signals two things: first, the black box nature of Google News, with entry into Google News' rankings in the first place an opaque, but apparently gameable, system. Second, is how Google may not be ready for moderating its News service in the age of consumer-access AI, where essentially anyone is able to churn out a mass of content with little to no regard for its quality or originality.
UPDATE: Engadget argues that "to find such stories required heavily manipulating the search results in Google News," noting that in the cited case, 404 Media's search parameters "are essentially set so that the original stories don't appear."

Engadget got this rebuke from Google. "Claiming that these sites were featured prominently in Google News is not accurate - the sites in question only appeared for artificially narrow queries, including queries that explicitly filtered out the date of an original article.

"We take the quality of our results extremely seriously and have clear policies against content created for the primary purpose of ranking well on News and we remove sites that violate it."

Engadget then wrote, "We apologize for overstating the issue and are including a slightly modified version of the original story that has been corrected for accuracy, and we've updated the headline to make it more accurate."
Desktops (Apple)

Beeper Users Say Apple Is Now Blocking Their Macs From Using iMessage Entirely (techcrunch.com) 175

An anonymous reader quotes a report from TechCrunch: The Apple-versus-Beeper saga is not over yet it seems, even though the iMessage-on-Android Beeper Mini was removed from the Play Store last week. Now, Apple customers who used Beeper's apps are reporting that they've been banned from using iMessage on their Macs -- a move Apple may have taken to disable Beeper's apps from working properly, but ultimately penalizes its own customers for daring to try a non-Apple solution for accessing iMessage. The latest follows a contentious game of cat-and-mouse between Apple and Beeper, which Apple ultimately won. [...]

According to users' recounting of their tech support experiences with Apple, the support reps are telling them their computer has been flagged for spam, or for sending too many messages — even though that's not the case, some argued. This has led many Beeper users to believe this is how Apple is flagging them for removal from the iMessage network. One Beeper customer advised others facing this problem to ask Apple if their Mac was in a "throttled status" or if their Apple ID was blocked for spam to get to the root of the issue. Admitting up front that third-party software was to blame would sometimes result in the support rep being able to lift the ban, some noted.

The news of the Mac bans was earlier reported by Apple news site AppleInsider and Times of India, and is being debated on Y Combinator forum site Hacker News. On the latter, some express their belief that the retaliation against Apple's own users is justified as they had violated Apple's terms, while others said that iMessage interoperability should be managed through regulation, not rogue apps. Far fewer argued that Apple is exerting its power in an anticompetitive fashion here.

Google

Google Search Really Has Gotten Worse, Researchers Find (404media.co) 58

An anonymous reader quotes a report from 404 Media: Google search really has been taken over by low-quality SEO spam, according to a new, year-long study by German researchers (PDF). The researchers, from Leipzig University, Bauhaus-University Weimar, and the Center for Scalable Data Analytics and Artificial Intelligence, set out to answer the question "Is Google Getting Worse?" by studying search results for 7,392 product-review terms across Google, Bing, and DuckDuckGo over the course of a year. They found that, overall, "higher-ranked pages are on average more optimized, more monetized with affiliate marketing, and they show signs of lower text quality ... we find that only a small portion of product reviews on the web uses affiliate marketing, but the majority of all search results do."

They also found that spam sites are in a constant war with Google over the rankings, and that spam sites will regularly find ways to game the system, rise to the top of Google's rankings, and then will be knocked down. "SEO is a constant battle and we see repeated patterns of review spam entering and leaving the results as search engines and SEO engineers take turns adjusting their parameters," they wrote. They note that Google, Bing, and DuckDuckGo are regularly tweaking their algorithms and taking down content that is outright spam, but that, overall, this leads only to "a temporary positive effect."

"Search engines seem to lose the cat-and-mouse game that is SEO spam," they write. Notably, Google, Bing, and DuckDuckGo all have the same problems, and in many cases, Google performed better than Bing and DuckDuckGo by the researchers' measures. The researchers warn that this rankings war is likely to get much worse with the advent of AI-generated spam, and that it genuinely threatens the future utility of search engines: "the line between benign content and spam in the form of content and link farms becomes increasingly blurry -- a situation that will surely worsen in the wake of generative AI. We conclude that dynamic adversarial spam in the form of low-quality, mass-produced commercial content deserves more attention."

Security

Google Password Resets Not Enough To Stop These Info-Stealing Malware Strains (theregister.com) 13

Security researchers say info-stealing malware can still access victims' compromised Google accounts even after passwords have been changed. From a report: A zero-day exploit of Google account security was first teased by a cybercriminal known as "PRISMA" in October 2023, boasting that the technique could be used to log back into a victim's account even after the password is changed. It can also be used to generate new session tokens to regain access to victims' emails, cloud storage, and more as necessary. Since then, developers of infostealer malware -- primarily targeting Windows, it seems -- have steadily implemented the exploit in their code. The total number of known malware families that abuse the vulnerability stands at six, including Lumma and Rhadamanthys, while Eternity Stealer is also working on an update to release in the near future.

Eggheads at CloudSEK say they found the root of the exploit to be in the undocumented Google OAuth endpoint "MultiLogin." The exploit revolves around stealing victims' session tokens. That is to say, malware first infects a person's PC -- typically via a malicious spam or a dodgy download, etc -- and then scours the machine for, among other things, web browser session cookies that can be used to log into accounts.

Social Networks

The Rise and Fall of Usenet (zdnet.com) 130

An anonymous reader quotes a report from ZDNet: Long before Facebook existed, or even before the Internet, there was Usenet. Usenet was the first social network. Now, with Google Groups abandoning Usenet, this oldest of all social networks is doomed to disappear. Some might say it's well past time. As Google declared, "Over the last several years, legitimate activity in text-based Usenet groups has declined significantly because users have moved to more modern technologies and formats such as social media and web-based forums. Much of the content being disseminated via Usenet today is binary (non-text) file sharing, which Google Groups does not support, as well as spam." True, these days, Usenet's content is almost entirely spam, but in its day, Usenet was everything that Twitter and Reddit would become and more.

In 1979, Duke University computer science graduate students Tom Truscott and Jim Ellis conceived of a network of shared messages under various topics. These messages, also known as articles or posts, were submitted to topic categories, which became known as newsgroups. Within those groups, messages were bound together in threads and sub-threads. [...] In 1980, Truscott and Ellis, using the Unix to Unix Copy Protocol (UUCP), hooked up with the University of North Carolina to form the first Usenet nodes. From there, it would rapidly spread over the pre-Internet ARPANet and other early networks. These messages would be stored and retrieved from news servers. These would "peer" to each other so that messages to a newsgroup would be shared from server to server and to user to user so that within hours, your messages would reach the entire networked world. Usenet would evolve its own network protocol, Network News Transfer Protocol (NNTP), to speed the transfer of these messages. Today, the social network Mastodon uses a similar approach with the ActivityPub protocol, while other social networks, such as Threads, are exploring using ActivityPub to connect with Mastodon and the other social networks that support ActivityPub. As the saying goes, everything old is new again.

[...] Usenet was never an organized social network. Each server owner could -- and did -- set its own rules. Mind you, there was some organization to begin with. The first 'mainstream' Usenet groups, comp, misc, news, rec, soc, and sci hierarchies, were widely accepted and disseminated until 1987. Then, faced with a flood of new groups, a new naming plan emerged in what was called the Great Renaming. This led to a lot of disputes and the creation of the talk hierarchy. This and the first six became known as the Big Seven. Then the alt groups emerged as a free speech protest. Afterward, fewer Usenet sites made it possible to access all the newsgroups. Instead, maintainers and users would have to decide which one they'd support. Over the years, Usenet began to decline as discussions were replaced both by spam and flame wars. Group discussions were also overwhelmed by flame wars.
"If, going forward, you want to keep an eye on Usenet -- things could change, miracles can happen -- you'll need to get an account from a Usenet provider," writes ZDNet's Steven Vaughan-Nichols. "I favor Eternal September, which offers free access to the discussion Usenet groups; NewsHosting, $9.99 a month with access to all the Usenet groups; EasyNews, $9.98 a month with fast downloads, and a good search engine; and Eweka, 9.50 Euros a month and EU only servers."

"You'll also need a Usenet client. One popular free one is Mozilla's Thunderbird E-Mail client, which doubles as a Usenet client. EasyNews also offers a client as part of its service. If you're all about downloading files, check out SABnzbd."
Iphone

Apple Blocks 'Beeper Mini', Citing Security Concerns. But Beeper Keeps Trying (engadget.com) 90

A 16-year-old high school student reverse engineered Apple's messaging protocol, leading to the launch of an interoperable Android app called "Beeper Mini".

But on Friday the Verge reported that "less than a week after its launch, the app started experiencing technical issues when users were suddenly unable to send and receive blue bubble messages." Reached for comment, Beeper CEO Eric Migicovsky did not deny that Apple has successfully blocked Beeper Mini. "If it's Apple, then I think the biggest question is... if Apple truly cares about the privacy and security of their own iPhone users, why would they stop a service that enables their own users to now send encrypted messages to Android users, rather than using unsecure SMS...? Beeper Mini is here today and works great. Why force iPhone users back to sending unencrypted SMS when they chat with friends on Android?"
Apple says they're unable to verify that end-to-end encryption is maintained when messages are sent through unauthorized channels, according to a statement quoted by TechCrunch: "At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users."
Beeper responded on X: We stand behind what we've built. Beeper Mini is keeps your messages private, and boosts security compared to unencrypted SMS. For anyone who claims otherwise, we'd be happy to give our entire source code to mutually agreed upon third party to evaluate the security of our app.
Ars Technica adds: On Saturday, Migicovsky notified Beeper Cloud (desktop) users that iMessage was working again for them, after a long night of fixes. "Work continues on Beeper Mini," Migicovsky wrote shortly after noon Eastern time.
Engadget notes: The Beeper Mini team has apparently been working around the clock to resolve the outage affecting the new "iMessage on Android" app, and says a fix is "very close." And once the fix rolls out, users' seven-day free trials will be reset so they can start over fresh.
Meanwhile, at around 9 p.m. EST, Beeper CEO Eric Migicovsky posted on X that "For 3 blissful days this week, iPhone and Android users enjoyed high quality encrypted chats. We're working hard to return to that state."
Security

Gmail's AI-Powered Spam Detection Is Its Biggest Security Upgrade in Years (arstechnica.com) 45

The latest post on the Google Security blog details a new upgrade to Gmail's spam filters that Google is calling "one of the largest defense upgrades in recent years." ArsTechnica: The upgrade comes in the form of a new text classification system called RETVec (Resilient & Efficient Text Vectorizer). Google says this can help understand "adversarial text manipulations" -- these are emails full of special characters, emojis, typos, and other junk characters that previously were legible by humans but not easily understandable by machines. Previously, spam emails full of special characters made it through Gmail's defenses easily.

[...] The reason emails like this have been so difficult to classify is that, while any spam filter could probably swat down an email that says "Congratulations! A balance of $1000 is available for your jackpot account," that's not what this email actually says. A big portion of the letters here are "homoglyphs" -- by diving into the endless depths of the Unicode standard, you can find obscure characters that look like they're part of the normal Latin alphabet but actually aren't.

Slashdot Top Deals