Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com) 110

Last week America's financial aid program announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% and 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be traced back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants for Free Application for Federal Student Aid (FAFSA) forms...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address, had hundreds of similar emails with a single-digit difference, or had phone numbers and email addresses that were created moments before the application was submitted.
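The screening patterns Simpkins describes (shared mailing addresses, emails differing by a single character) amount to simple heuristics. A minimal sketch of that kind of check, with invented field names, thresholds, and sample data:

```python
from collections import Counter

def near_duplicate(a: str, b: str) -> bool:
    """True if two emails differ by exactly one character
    (same length, single substitution) -- e.g. jdoe1@ vs jdoe2@."""
    if a == b or len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

def flag_suspicious(applications):
    """Flag applications that share a mailing address with many
    others or use near-duplicate email addresses."""
    flagged = set()
    addr_counts = Counter(app["address"] for app in applications)
    for i, app in enumerate(applications):
        if addr_counts[app["address"]] > 2:  # many applications, one address
            flagged.add(i)
        for j, other in enumerate(applications):
            if j != i and near_duplicate(app["email"], other["email"]):
                flagged.update({i, j})
    return sorted(flagged)

apps = [
    {"email": "jdoe1@mail.com", "address": "12 Elm St"},
    {"email": "jdoe2@mail.com", "address": "98 Oak Ave"},
    {"email": "real.student@mail.com", "address": "5 Pine Rd"},
]
print(flag_suspicious(apps))  # [0, 1] -- the near-duplicate emails
```

Real screening tools also check signals a toy like this cannot, such as how recently an email account or phone number was created.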

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
The Almighty Buck

Walmart and Amazon Are Exploring Issuing Their Own Stablecoins (msn.com) 51

Walmart and Amazon are exploring the possibility of issuing their own stablecoins in the United States, WSJ reported Friday, potentially shifting billions of dollars in transaction volume away from traditional banks and card networks. The retail giants, along with Expedia Group and several airlines, have recently discussed launching corporate stablecoins that would allow them to circumvent the existing payments infrastructure dominated by Visa and Mastercard.

The companies' final decisions hinge on passage of the GENIUS Act, legislation currently moving through Congress that would establish a regulatory framework for stablecoins. These digital currencies maintain a one-to-one exchange ratio with dollars and are backed by cash or Treasury reserves, offering merchants faster payment settlement and significantly lower processing fees than traditional card transactions, which can take days to clear.
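A fully reserved stablecoin of the kind described can be modeled as a trivial ledger in which minting and redemption keep reserves and circulating supply in lockstep. This is an illustrative toy, not how any issuer's actual system works:

```python
class ToyStablecoin:
    """Toy model of a dollar-pegged stablecoin: every token in
    circulation is backed 1:1 by a reserve dollar. Real issuers
    hold cash and Treasuries under rules like those the bill
    would establish."""

    def __init__(self):
        self.reserve_usd = 0.0   # cash/Treasury reserves
        self.supply = 0.0        # tokens in circulation

    def mint(self, usd: float) -> float:
        """Deposit dollars, receive the same number of tokens."""
        self.reserve_usd += usd
        self.supply += usd
        return usd

    def redeem(self, tokens: float) -> float:
        """Burn tokens, withdraw the same number of dollars."""
        assert tokens <= self.supply
        self.supply -= tokens
        self.reserve_usd -= tokens
        return tokens

    def fully_backed(self) -> bool:
        return self.reserve_usd >= self.supply

coin = ToyStablecoin()
coin.mint(1_000_000.0)
coin.redeem(250_000.0)
print(coin.supply, coin.fully_backed())  # 750000.0 True
```

The appeal to merchants is that a transfer of such tokens settles by updating this ledger directly, rather than routing through card networks that charge interchange fees.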
Microsoft

'We're Done With Teams': German State Hits Uninstall on Microsoft (france24.com) 100

An anonymous reader shares a report: In less than three months' time, almost no civil servant, police officer or judge in Schleswig-Holstein will be using any of Microsoft's ubiquitous programs at work. Instead, the northern state will turn to open-source software to "take back control" over data storage and ensure "digital sovereignty," its digitalisation minister, Dirk Schroedter, told AFP. "We're done with Teams!" he said, referring to Microsoft's messaging and collaboration tool and speaking on a video call -- via an open-source German program, of course.

The radical switch-over affects half of Schleswig-Holstein's 60,000 public servants, with 30,000 or so teachers due to follow suit in coming years. The state's shift towards open-source software began last year. The current first phase involves ending the use of Word and Excel software, which are being replaced by LibreOffice, while Open-Xchange is taking the place of Outlook for emails and calendars.

Privacy

Researchers Confirm Two Journalists Were Hacked With Paragon Spyware (techcrunch.com) 28

An anonymous reader quotes a report from TechCrunch: Two European journalists were hacked using government spyware made by Israeli surveillance tech provider Paragon, new research has confirmed. On Thursday, digital rights group The Citizen Lab published a report detailing the results of a forensic investigation into the iPhones of Italian journalist Ciro Pellegrino and an unnamed "prominent" European journalist. The researchers said both journalists were hacked by the same Paragon customer, based on evidence found on the two journalists' devices.

Until now, there was no evidence that Pellegrino, who works for online news website Fanpage, had been either targeted or hacked with Paragon spyware. When he was alerted by Apple at the end of April, the notification referred to a mercenary spyware attack, but did not specifically mention Paragon, nor whether his phone had been infected with the spyware. The confirmation of the first-ever known Paragon infections further deepens an ongoing spyware scandal that, for now, appears to be mostly focused on the use of spyware by the Italian government, but could expand to include other countries in Europe.

These new revelations come months after WhatsApp first notified around 90 of its users in over two dozen countries in Europe and beyond, including journalists, that they had been targeted with Paragon spyware, known as Graphite. Among those targeted were several Italians, including Pellegrino's colleague and Fanpage director Francesco Cancellato, as well as nonprofit workers who help rescue migrants at sea. Last week, Italy's parliamentary committee known as COPASIR, which oversees the country's intelligence agencies' activities, published a report (PDF) that said it found no evidence that Cancellato was spied on. The report, which confirmed that Italy's internal and external intelligence agencies AISI and AISE were Paragon customers, made no mention of Pellegrino. The Citizen Lab's new report puts into question COPASIR's conclusions.

Microsoft

Denmark Is Dumping Microsoft Office and Windows For LibreOffice and Linux (zdnet.com) 277

An anonymous reader quotes a report from ZDNet: Denmark's Minister of Digitalization, Caroline Stage, has announced that the Danish government will start moving away from Microsoft Office to LibreOffice. Why? It's not because open-source is better, although I would argue that it is, but because Denmark wants to claim "digital sovereignty." In the States, you probably haven't heard that phrase, but in the European Union, digital sovereignty is a big deal and getting bigger.

A combination of security, economic, political, and societal imperatives is driving the EU's digital sovereignty moves. EU leaders are seeking to reduce Europe's dependence on foreign technology providers, primarily those from the United States, and to assert greater control over its digital infrastructure, data, and technological future. Why? Because they're concerned about who controls European data, who sets the rules, and who can potentially cut off access to essential services in times of geopolitical tension.
"Money issues have also played a decisive role," writes ZDNet's Steven Vaughan-Nichols. "Copenhagen's Microsoft software bill has soared from 313 million kroner in 2018 to 538 million kroner (about $53 million) in 2023, a 72% increase in just five years."

David Heinemeier Hansson (DHH), a Dane, inventor of Ruby on Rails, and co-owner of the software developer company 37Signals, has said: "Denmark is one of the most highly digitalized countries in the world. It's also one of the most Microsoft-dependent. In fact, Microsoft is by far and away the single biggest dependency, so it makes perfect sense to start the quest for digital sovereignty there."
The Internet

An Experimental New Dating Site Matches Singles Based on Their Browser Histories (wired.com) 72

A dating site launched last week by Belgian artist Dries Depoorter matches potential partners based on their internet browsing histories rather than curated profiles or photos. Browser Dating requires users to download a Chrome or Firefox extension that exports and uploads their recent search data, creating matches based on shared online behaviors and interests rather than traditional dating app metrics.

Fewer than 1,000 users have signed up since the platform's launch, paying a one-time fee of $10.30 for unlimited matches or using a free tier limited to five connections. Depoorter, known for digital art projects exploring surveillance and technology, says the concept emerged from a 2016 workshop where participants shared a year of search history data. The platform processes browsing data locally using Google's Firebase tools.
Robotics

Scientists Built a Badminton-Playing Robot With AI-Powered Skills (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said.

Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and to work around them.
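Each training unit hinges on predicting where the shuttlecock will come down. A physics-only sketch of such a prediction (this is not the team's learned model, and the drag constant is invented for illustration) might integrate gravity and air drag forward in time:

```python
import math

def simulate_shuttle(p0, v0, dt=0.002, k=0.5, g=9.81):
    """Crude shuttlecock flight model: gravity plus quadratic air
    drag. The drag constant k is a made-up illustrative value;
    real shuttlecocks decelerate far more sharply than a ball.
    Returns the (x, y) point where the flight ends at floor level."""
    x, y = p0
    vx, vy = v0
    while y > 0 or vy > 0:
        speed = math.hypot(vx, vy)
        # drag opposes velocity, proportional to speed squared
        ax = -k * speed * vx
        ay = -g - k * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y

x_land, y_land = simulate_shuttle(p0=(0.0, 1.5), v0=(10.0, 6.0))
print(round(x_land, 2))  # distance down-court where it lands
```

The reinforcement-learning approach described above effectively replaces this kind of hand-built model: the policy learns the mapping from camera observations to interception points directly in simulation.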

The idea behind training the control algorithms was to develop visuo-motor skills similar to human badminton players. The robot was supposed to move around the court, anticipating where the shuttlecock might go next and position its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors.

Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came with a trick where it stood on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.
The findings have been published in the journal Science Robotics.

You can watch a video of the four-legged robot playing badminton on YouTube.
Space

Major Telescope Hosts World's Largest Digital Camera (nature.com) 25

The Vera C. Rubin Observatory in Chile will begin full operations in the coming months with the world's largest digital camera, capturing 3,200-megapixel images that would require several hundred HD television screens to display at full resolution. The $810 million facility will map the entire southern sky every three to four nights, observing each location approximately 800 times over its planned decade of operations.

The telescope's unusual design allows it to photograph an area equivalent to 45 full moons in each shot and swing between different sky locations every 40 seconds. Its digital camera, roughly the size of a small car, will generate eight million alerts per night when it detects astronomical objects that move or change brightness, according to Tony Tyson, the University of California, Davis astronomer who conceived the project in the 1990s. Astrophysicist Federica Bianco, who received a preview of the telescope's first full-color image, described her reaction simply: "There are so many stars!" The team plans to unveil that inaugural image on June 23.
Earth

Tech Giants' Indirect Operational Emissions Rose 50% Since 2020 (reuters.com) 40

An anonymous reader quotes a report from Reuters: Indirect carbon emissions from the operations of four of the leading AI-focused tech companies rose on average to 150% of their 2020 levels by 2023 (a 50% increase), due to the demands of power-hungry data centers, a United Nations report (PDF) said on Thursday. The use of artificial intelligence by Amazon, Microsoft, Alphabet and Meta drove up their global indirect emissions because of the vast amounts of energy required to power data centers, the report by the International Telecommunication Union (ITU), the U.N. agency for digital technologies, said.

Indirect emissions include those generated by purchased electricity, steam, heating and cooling consumed by a company. Amazon's operational carbon emissions grew the most, rising to 182% of their 2020 level by 2023, followed by Microsoft at 155%, Meta at 145% and Alphabet at 138%, according to the report. The ITU tracked the greenhouse gas emissions of 200 leading digital companies between 2020 and 2023. [...] As investment in AI increases, carbon emissions from the top-emitting AI systems are predicted to reach up to 102.6 million tons of carbon dioxide equivalent per year, the report stated.

The data centres that are needed for AI development could also put pressure on existing energy infrastructure. "The rapid growth of artificial intelligence is driving a sharp rise in global electricity demand, with electricity use by data centers increasing four times faster than the overall rise in electricity consumption," the report found. It also highlighted that although a growing number of digital companies had set emissions targets, those ambitions had not yet fully translated into actual reductions of emissions.
UPDATE: The headline has been revised to clarify that four leading AI-focused tech companies saw their operational emissions rise to 150% of their 2020 levels by 2023 -- a 50% increase, not a 150% one.
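The correction turns on the difference between rising *to* some percentage of a baseline and rising *by* that percentage. With illustrative numbers:

```python
def growth(baseline, current):
    """Distinguish 'rose to X% of baseline' from 'rose by X%'."""
    ratio = current / baseline          # 1.5 means "to 150%"
    increase = ratio - 1                # 0.5 means "by 50%"
    return ratio * 100, increase * 100

to_pct, by_pct = growth(baseline=100, current=150)
print(to_pct, by_pct)  # 150.0 50.0 -- "to 150%" is the same as "by 50%"
```

Reading "rose by 150%" where the data meant "rose to 150%" triples the apparent increase, which is exactly the error the revised headline fixes.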
Google

News Sites Are Getting Crushed by Google's New AI Tools (wsj.com) 134

"It is true, Google AI is stomping on the entire internet," writes Slashdot reader TheWho79, sharing a report from the Wall Street Journal. "From HuffPost to the Atlantic, publishers prepare to pivot or shut the doors. ... Even highly regarded old school bullet-proof publications like Washington Post are getting hit hard." From the report: Traffic from organic search to HuffPost's desktop and mobile websites fell by just over half in the past three years, and by nearly that much at the Washington Post, according to digital market data firm Similarweb. Business Insider cut about 21% of its staff last month, a move CEO Barbara Peng said was aimed at helping the publication "endure extreme traffic drops outside of our control." Organic search traffic to its websites declined by 55% between April 2022 and April 2025, according to data from Similarweb.

At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [...] "Google is shifting from being a search engine to an answer engine," Thompson said in an interview with The Wall Street Journal. "We have to develop new strategies."

The rapid development of click-free answers in search "is a serious threat to journalism that should not be underestimated," said William Lewis, the Washington Post's publisher and chief executive. Lewis is former CEO of the Journal's publisher, Dow Jones. The Washington Post is "moving with urgency" to connect with previously overlooked audiences, pursue new revenue sources and prepare for a "post-search era," he said.

At the New York Times, the share of traffic coming from organic search to the paper's desktop and mobile websites slid to 36.5% in April 2025 from almost 44% three years earlier, according to Similarweb. The Wall Street Journal's traffic from organic search was up in April compared with three years prior, Similarweb data show, though as a share of overall traffic it declined to 24% from 29%.
Further reading: Google's AI Mode Is 'the Definition of Theft,' Publishers Say
Security

Trump Quietly Throws Out Biden's Cyber Policies (axios.com) 109

An anonymous reader quotes a report from Axios: President Trump quietly took a red pen to much of the Biden administration's cyber legacy in a little-noticed move late Friday. Under an executive order signed just before the weekend, Trump is tossing out some of the major touchstones of Biden's cyber policy legacy -- while keeping a few others. The order preserves efforts around post-quantum cryptography, advanced encryption standards, and border gateway protocol security, along with the Cyber Trust Mark program -- an Energy Star-type labeling initiative for consumer smart devices. But hallmark programs tied to software bills of materials, zero-trust implementation, and space contractor cybersecurity requirements have been either rescinded or left in limbo. The new executive order amends both the Biden cyber executive order signed in January and an Obama administration order.

Each of the following Biden-era programs is now out the door or significantly rolled back:
- A broad requirement for federal software vendors to provide a software bill of materials - essentially an ingredient list of code components - is gone.
- Biden-era efforts to encourage federal agencies to accept digital identity documents and help states develop mobile driver's licenses were revoked.
- Several AI cybersecurity research mandates, including those focused on AI-generated code security and AI-driven patch management pilots, have been scrapped or deprioritized.
- The requirement that software contractors formally attest they followed secure development practices - and submit those attestations to a federal repository - has been cut. Instead, the National Institute of Standards and Technology will now coordinate a new industry consortium to review software security guidelines.
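The first rescinded item concerns software bills of materials. A minimal sketch of what such an "ingredient list" looks like, with simplified field names and invented components (real SBOMs use standardized formats such as CycloneDX or SPDX):

```python
# An SBOM is, at heart, a machine-readable ingredient list:
# every third-party component a piece of software ships with.
sbom = {
    "application": "payroll-service",   # hypothetical application
    "version": "2.3.1",
    "components": [
        {"name": "openssl", "version": "3.0.13", "supplier": "OpenSSL Project"},
        {"name": "log4j-core", "version": "2.17.2", "supplier": "Apache"},
        {"name": "requests", "version": "2.31.0", "supplier": "PSF"},
    ],
}

def find_component(sbom, name):
    """Answer the question an SBOM exists to answer:
    'do we ship component X, and which version?'"""
    return [c for c in sbom["components"] if c["name"] == name]

print(find_component(sbom, "log4j-core"))
```

When a vulnerability like Log4Shell surfaces, an agency holding SBOMs from its vendors can run exactly this kind of lookup across its software inventory instead of auditing each product by hand.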

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
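The "statistically informed guesses" the article describes can be illustrated with a toy next-token table. The probabilities below are invented; a real LLM derives its distribution from billions of learned parameters and conditions on far longer contexts:

```python
import random

# Toy illustration: the model assigns probabilities to candidate
# next tokens and emits a weighted random guess.
next_token_probs = {
    ("the", "cat"): {"sat": 0.55, "ran": 0.25, "is": 0.20},
    ("cat", "sat"): {"on": 0.80, "quietly": 0.20},
}

def sample_next(context, probs, rng):
    """Pick the next token by weighted random choice."""
    tokens, weights = zip(*probs[context].items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so reruns behave the same
print(sample_next(("the", "cat"), next_token_probs, rng))
```

Nothing in this procedure consults a model of the world; it only consults frequencies, which is the article's point about mistaking fluent output for a mind.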
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

Government

Russian Spies Are Analyzing Data From China's WeChat App (nytimes.com) 17

An anonymous reader shared this report from The New York Times: Russian counterintelligence agents are analyzing data from the popular Chinese messaging and social media app WeChat to monitor people who might be in contact with Chinese spies, according to a Russian intelligence document obtained by The New York Times. The disclosure highlights the rising level of concern about Chinese influence in Russia as the two countries deepen their relationship. As Russia has become isolated from the West over its war in Ukraine, it has become increasingly reliant on Chinese money, companies and technology. But it has also faced what the document describes as increased Chinese espionage efforts.

The document indicates that the Russian domestic security agency, known as the F.S.B., pulls purloined data into an analytical tool known as "Skopishche" (a Russian word for a mob of people). Information from WeChat is among the data being analyzed, according to the document... One Western intelligence agency told The Times that the information in the document was consistent with what it knew about "Russian penetration of Chinese communications...." By design, [WeChat] does not use end-to-end encryption to protect user data. That is because the Chinese government exercises strict control over the app and relies on its weak security to monitor and censor speech. Foreign intelligence agencies can exploit that weakness, too...

WeChat was briefly banned in Russia in 2017, but access was restored after Tencent took steps to comply with laws requiring foreign digital platforms above a certain size to register as "organizers of information dissemination." The Times confirmed that WeChat is currently licensed by the government to operate in Russia. That license would require Tencent to store user data on Russian servers and to provide access to security agencies upon request.

Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices.

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Australia

Apple Warns Australia Against Joining EU In Mandating iPhone App Sideloading (neowin.net) 84

Apple has urged Australia not to follow the European Union in mandating iPhone app sideloading, warning that such policies pose serious privacy and security risks. "This communication comes as the Australian federal government considers new rules that could force Apple to open up its iOS ecosystem, much like what happened in Europe with recent legislation," notes Neowin. Apple claims that allowing alternative app stores has led to increased exposure to malware, scams, and harmful content. From the report: Apple, in its response to this Australian paper (PDF), stated that Australia should not use the EU's Digital Markets Act "as a blueprint". The company's core argument is that the changes mandated by the EU's DMA, which came into full effect in March 2024, introduce serious security and privacy risks for users. Apple claims that allowing sideloading and alternative app stores effectively opens the door for malware, fraud, scams, and other harmful content. The tech company also highlighted specific concerns from its European experience, alleging that its compliance there has led to users being able to install pornography apps and apps that facilitate copyright infringement, things its curated App Store aims to prevent. Apple maintains that its current review process is vital for user protection, and that its often criticized 30% commission applies mainly to the highest earning apps, with most developers paying a lower 15% rate or nothing.

Crime

Cambridge Mapping Project Solves a Medieval Murder (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: In 2019, we told you about a new interactive digital "murder map" of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners' rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases. It's easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman's hired assassins armed with daggers in the streets of Cheapside near St. Paul's Cathedral. And that's just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. "We are looking at a murder commissioned by a leading figure of the English aristocracy," said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. "It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive." Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners' rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury's verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.
The full historical context, analytical depth, and social commentary can be read in the paper.

Interestingly, Eisner "extended their spatial analysis to include homicides committed in York and London in the 14th century with similar conclusions," writes Ars' Jennifer Ouellette. Most murders occurred in public places, usually on weekends, with knives and swords as the primary weapons. Oxford had a significantly elevated violence rate compared to London and York, "suggestive of high levels of social disorganization and impunity."

London, meanwhile, showed distinct clusters of homicides, "which reflect differences in economic and social functions," the authors wrote. "In all three cities, some homicides were committed in spaces of high visibility and symbolic significance."

United Kingdom

UK 'Exploring Plan For Digital ID Cards' (independent.co.uk) 88

Mirnotoriety shares a report from the Independent: Downing Street is exploring a proposal to introduce digital ID cards for every adult in Britain in a move to tackle the UK's illegal migration crisis, according to reports. The new "BritCard" would be used to check on an individual's right to live and work in Britain, with senior No 10 figures examining the proposal, The Times has reported.

The card, stored on a smartphone, would reportedly be linked to government records and could check entitlements to benefits and monitor welfare fraud. [...] It would cost up to 400 million pounds to build the system and around 10 million pounds a year to administer as a free-to-use phone app.

Botnet

FBI: BadBox 2.0 Android Malware Infects Millions of Consumer Devices (bleepingcomputer.com) 8

An anonymous reader quotes a report from BleepingComputer: The FBI is warning that the BADBOX 2.0 malware campaign has infected over 1 million home Internet-connected devices, converting consumer electronics into residential proxies that are used for malicious activity. The BADBOX botnet is commonly found on Chinese Android-based smart TVs, streaming boxes, projectors, tablets, and other Internet of Things (IoT) devices. "The BADBOX 2.0 botnet consists of millions of infected devices and maintains numerous backdoors to proxy services that cyber criminal actors exploit by either selling or providing free access to compromised home networks to be used for various criminal activity," warns the FBI.

These devices come preloaded with the BADBOX 2.0 malware botnet or become infected after installing firmware updates and through malicious Android applications that sneak onto Google Play and third-party app stores. "Cyber criminals gain unauthorized access to home networks by either configuring the product with malicious software prior to the user's purchase or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process," explains the FBI. "Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity."

Once infected, the devices connect to the attacker's command and control (C2) servers, where they receive commands to execute on the compromised devices, such as [routing malicious traffic through residential IPs to obscure cybercriminal activity, performing background ad fraud to generate revenue, and launching credential-stuffing attacks using stolen login data]. Over the years, the malware botnet continued expanding until 2024, when Germany's cybersecurity agency disrupted the botnet in the country by sinkholing the communication between infected devices and the attacker's infrastructure, effectively rendering the malware useless. However, that did not stop the threat actors, with researchers saying they found the malware installed on 192,000 devices a week later. Even more concerning, the malware was found on more mainstream brands, like Yandex TVs and Hisense smartphones. Unfortunately, despite the previous disruption, the botnet continued to grow, with HUMAN's Satori Threat Intelligence stating that over 1 million consumer devices had become infected by March 2025. This new larger botnet is now being called BADBOX 2.0 to indicate a new tracking of the malware campaign.
"This scheme impacted more than 1 million consumer devices. Devices connected to the BADBOX 2.0 operation included lower-price-point, 'off brand,' uncertified tablets, connected TV (CTV) boxes, digital projectors, and more," explains HUMAN.

"The infected devices are Android Open Source Project devices, not Android TV OS devices or Play Protect certified Android devices. All of these devices are manufactured in mainland China and shipped globally; indeed, HUMAN observed BADBOX 2.0-associated traffic from 222 countries and territories worldwide."

Apple

Apple Faces Billions in Losses as EU Comma Interpretation Ends External Purchase Fees (substack.com) 100

Apple will lose the ability to collect commissions on external iOS purchases in Europe starting June 23, following a European Commission ruling that hinges on the grammatical interpretation of a single comma in the Digital Markets Act. The dispute centers on Article 5.4, which requires gatekeepers to allow business users "free of charge, to communicate and promote offers, including under different conditions [...], and to conclude contracts with those end users."

Apple contends that "free of charge" applies only to communication and promotion activities, not contract conclusion, allowing the company to maintain its commission structure on external transactions. The European Commission interprets the comma before "and to conclude contracts" as creating an enumeration where the free-of-charge requirement applies to all listed activities, including purchases made outside Apple's payment system.

Under the new ruling, Apple can collect commissions only on the first external transaction between users and developers, with all subsequent purchases and auto-renewed subscriptions exempt from fees. The company faces daily penalties of up to $53.5 million for non-compliance and has already been fined $570 million. Apple's internal forecasts estimate potential annual losses of "hundreds of millions or even billions of dollars" in the US alone, though Europe demands stricter changes than those projections assumed.

Slashdot Top Deals