Privacy

TikTok Is Now Collecting Even More Data About Its Users (wired.com) 41

An anonymous reader quotes a report from Wired: When TikTok users in the U.S. opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the U.S., TikTok was compelled by the U.S. government to transition from Chinese control to a new, American-majority corporate entity. Called TikTok USDS Joint Venture LLC, the new entity is made up of a group of investors that includes the software company Oracle. It's easy to tap "agree" and keep on scrolling through videos on TikTok, so users might not fully understand the extent of changes they are agreeing to with this pop-up.

Now that it's under U.S.-based ownership, TikTok potentially collects more detailed information about its users, including precise location data. Here are the three biggest changes to TikTok's privacy policy that users should know about. TikTok's change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of U.S. users. Now, if you give TikTok permission to use your phone's location services, then the app may collect granular information about your exact whereabouts. Similar precise location data is also tracked by other social media apps, like Instagram and X.

[...] Rather than an adjustment, TikTok's policy on AI interactions adds a new topic to the privacy policy document. Now, users' interactions with any of TikTok's AI tools explicitly fall under data that the service may collect and store. This includes any prompts as well as the AI-generated outputs. The metadata attached to your interactions with AI tools may also be automatically logged. [...] This change to TikTok's privacy policy may not be as immediately noticeable to users, but it will likely have an impact on the types of ads you see outside of TikTok. So, rather than just using your collected data to target you while using the app, TikTok may now further leverage that info to serve you more relevant ads wherever you go online. As part of this advertising change, TikTok also now explicitly mentions publishers as one kind of partner the platform works with to get new data.

Printer

FBI's Washington Post Investigation Shows How Your Printer Can Snitch On You (theintercept.com) 99

alternative_right quotes a report from The Intercept: Federal prosecutors on January 9 charged Aurelio Luis Perez-Lugones, an IT specialist for an unnamed government contractor, with "the offense of unlawful retention of national defense information," according to an FBI affidavit (PDF). The case attracted national attention after federal agents investigating Perez-Lugones searched the home of a Washington Post reporter. But overlooked so far in the media coverage is the fact that a surprising surveillance tool pointed investigators toward Perez-Lugones: an office printer with a photographic memory. News of the investigation broke when the Washington Post reported that investigators seized the work laptop, personal laptop, phone, and smartwatch of journalist Hannah Natanson, who has covered the Trump administration's impact on the federal government and recently wrote about developing more than 1,000 government sources. A Justice Department official told the Post that Perez-Lugones had been messaging Natanson to discuss classified information. The affidavit does not allege that Perez-Lugones disseminated national defense information, only that he unlawfully retained it.

The affidavit provides insight into how Perez-Lugones allegedly attempted to exfiltrate information from a Sensitive Compartmented Information Facility, or SCIF, and the unexpected way his employer took notice. According to the FBI, Perez-Lugones printed a classified intelligence report, albeit in a roundabout fashion. It's standard for workplace printers to log certain information, such as the names of files they print and the users who printed them. In an apparent attempt to avoid detection, Perez-Lugones, according to the affidavit, took screenshots of classified materials, cropped the screenshots, and pasted them into a Microsoft Word document. Because the printout consisted of screenshots instead of text, there would be no record of a classified report printed from the specific workstation. (Depending on the employer's chosen data loss prevention software, access logs might show that a specific user had opened the file, and perhaps even whether they took screenshots.)
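
The kind of print-log audit described above can be sketched as a short script. This is a hypothetical illustration, not any real print server's log schema: it assumes a CSV log with user and filename columns, and flags jobs whose file names are the generic defaults that word processors assign to unsaved documents, since those names reveal nothing about what was actually printed.

```python
import csv
import io

# Default names that editors assign to never-saved documents; a print job
# carrying one of these tells an auditor nothing about its contents.
GENERIC_NAMES = ("Microsoft Word - Document", "Untitled", "Document")

def flag_generic_jobs(log_text):
    """Return (user, filename) pairs for print jobs with generic file names."""
    flagged = []
    for row in csv.DictReader(io.StringIO(log_text)):
        name = row["filename"]
        if any(name.startswith(prefix) for prefix in GENERIC_NAMES):
            flagged.append((row["user"], name))
    return flagged

log = """user,filename,pages
alice,Q3_budget.xlsx,4
bob,Microsoft Word - Document1,2
"""

print(flag_generic_jobs(log))  # [('bob', 'Microsoft Word - Document1')]
```

As the affidavit shows, this kind of metadata screen is easily defeated; that is why the employer's ability to retrieve the printed contents themselves, not just the log entries, proved decisive.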

Perez-Lugones allegedly gave the file an innocuous name, "Microsoft Word - Document1," that might not stand out if printer logs were later audited. In this case, however, the affidavit reveals that Perez-Lugones's employer could see not only the typical metadata stored by printers, such as file names, file sizes, and time of printing, but also the actual contents of the printed materials -- in this case, prosecutors say, the screenshots themselves. As the affidavit points out, "Perez-Lugones' employer can retrieve records of print activity on classified systems, including copies of printed documents." [...] Aside from attempting to surreptitiously print a document, Perez-Lugones, investigators say, was also seen allegedly opening a classified document and taking notes, looking "back and forth between the screen corresponding [to] the classified system and the notepad, all the while writing on the notepad." The affidavit doesn't state how this observation was made, but it strongly suggests a video surveillance system was also in play.

The Almighty Buck

'America Is Slow-Walking Into a Polymarket Disaster' (theatlantic.com) 55

In an opinion piece for The Atlantic, senior editor Saahil Desai argues that media outlets are increasingly treating prediction markets like Polymarket and Kalshi as legitimate signals of reality. The risk, as Desai warns, is a future where news coverage amplifies manipulable betting odds and turns politics, geopolitics, and even tragedy into speculative gambling theater. Here's an excerpt from the report: [...] The problem is that prediction markets are ushering in a world in which news becomes as much about gambling as about the event itself. This kind of thing has already happened to sports, where the language of "parlays" and "covering the spread" has infiltrated every inch of commentary. ESPN partners with DraftKings to bring its odds to SportsCenter and Monday Night Football; CBS Sports has a betting vertical; FanDuel runs its own streaming network. But the stakes of Greenland's future are more consequential than the NFL playoffs.

The more that prediction markets are treated like news, especially heading into another election, the more every dip and swing in the odds may end up wildly misleading people about what might happen, or influencing what happens in the real world. Yet it's unclear whether these sites are meaningful predictors of anything. After the Golden Globes, Polymarket CEO Shayne Coplan excitedly posted that his site had correctly predicted 26 of 28 winners, which seems impressive -- but Hollywood awards shows are generally predictable. One recent study found that Polymarket's forecasts in the weeks before the 2024 election were not much better than chance.

These markets are also manipulable. In 2012, one bettor on the now-defunct prediction market Intrade placed a series of huge wagers on Mitt Romney in the two weeks preceding the election, generating a betting line indicative of a tight race. The bettor did not seem motivated by financial gain, according to two researchers who examined the trades. "More plausibly, this trader could have been attempting to manipulate beliefs about the odds of victory in an attempt to boost fundraising, campaign morale, and turnout," they wrote. The trader lost at least $4 million but might have shaped media coverage of the race for less than the price of a prime-time ad, they concluded. [...]

The irony of prediction markets is that they are supposed to be a more trustworthy way of gleaning the future than internet clickbait and half-baked punditry, but they risk shredding whatever shared trust we still have left. The suspiciously well-timed bets that one Polymarket user placed right before the capture of Nicolas Maduro may have been just a stroke of phenomenal luck that netted a roughly $400,000 payout. Or maybe someone with inside information was looking for easy money. [...] As Tarek Mansour, Kalshi's CEO, has said, his long-term goal is to "financialize everything and create a tradable asset out of any difference in opinion." (Kalshi means "everything" in Arabic.) What could go wrong? As one viral post on X recently put it, "Got a buddy who is praying for world war 3 so he can win $390 on Polymarket." It's a joke. I think.

Communications

HAM Radio Operators In Belarus Arrested, Face the Death Penalty (404media.co) 75

An anonymous reader quotes a report from 404 Media: The Belarusian government is threatening three HAM radio operators with the death penalty, has detained at least seven people, and has accused them of "intercepting state secrets," according to Belarusian state media, independent media outside of Belarus, and the Belarusian human rights organization Viasna. The arrests are an extreme attack on what is most often a wholesome hobby that has a history of being vilified by authoritarian governments, in part because the technology is quite censorship resistant.

The detentions were announced last week on Belarusian state TV, which claimed the men were part of a network of more than 50 people participating in the amateur radio hobby and have been accused of both "espionage" and "treason." Authorities there said they seized more than 500 pieces of radio equipment. The men were accused on state TV of using radio to spy on the movement of government planes, though no actual evidence of this has been produced. State TV claimed they were associated with the Belarusian Federation of Radioamateurs and Radiosportsmen (BFRR), a long-running amateur radio club and nonprofit that holds amateur radio competitions, meetups, trainings, and forums.

Siarhei Besarab, a Belarusian HAM radio operator, posted a plea for support from others in the r/amateurradio subreddit. "I am writing this because my local community is being systematically liquidated in what I can only describe as a targeted intellectual genocide," Besarab wrote. "I beg you to amplify this signal and help us spread this information. Please show this to any journalist you know, send it to human rights organizations, and share it with your local radio associations."

AI

Comic-Con Bans AI Art After Artist Pushback (404media.co) 45

San Diego Comic-Con changed an AI-art-friendly policy following an artist-led backlash last week. From a report: It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money. Every year, tens of thousands of people descend on San Diego for Comic-Con, the world's premier comic book convention that over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con has allowed some forms of AI-generated art at the convention's art show.

According to archived rules for the show, artists could display AI-generated material so long as it wasn't for sale, was marked as AI-produced, and credited the original artist whose style was used. "Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to 'Done in the style of,' that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability," Comic-Con's art show rules said until recently.

The Courts

Snap Settles Social Media Addiction Lawsuit Ahead of Landmark Trial (bbc.com) 28

Snap has settled a social media addiction lawsuit just days before trial, while Meta, TikTok, and Alphabet remain defendants and are headed to court. "Terms of the deal were not announced as it was revealed by lawyers at a California Superior Court hearing, after which Snap told the BBC the parties were 'pleased to have been able to resolve this matter in an amicable manner.'" From the report: The plaintiff, a 19-year-old woman identified by the initials K.G.M., alleged that the algorithmic design of the platforms left her addicted and affected her mental health. In the absence of a settlement with the other parties, the trial is scheduled to go forward against the remaining three defendants, with jury selection due to begin on January 27. Meta boss Mark Zuckerberg is expected to testify, and until Tuesday's settlement, Snap CEO Evan Spiegel was also set to take the stand.

Snap is still a defendant in other social media addiction cases that have been consolidated in the court. The closely watched cases could challenge a legal theory that social media companies have used to shield themselves. They have long argued that Section 230 of the Communications Decency Act of 1996 protects them from liability for what third parties post on their platforms. But plaintiffs argue that the platforms are designed in a way that leaves users addicted through choices that affect their algorithms and notifications. The social media companies have said the plaintiffs' evidence falls short of proving that they are responsible for alleged harms such as depression and eating disorders.

United Kingdom

UK Mulls Australia-Like Social Media Ban For Users Under 16 (engadget.com) 25

The UK government has launched a public consultation on whether to ban social media use for children under 16, drawing inspiration from Australia's recently enacted age-based restrictions. "It would also explore how to enforce that limit, how to limit tech companies from being able to access children's data and how to limit 'infinite scrolling,' as well as access to addictive online tools," reports Engadget. "In addition to seeking feedback from parents and young people themselves, the country's ministers are going to visit Australia to see the effects of the country's social media ban for kids, according to Financial Times."

News

Crypto News Outlet Cointelegraph Loses 80% of Traffic After Google Penalty For Parasitic Blackhat SEO Deal (substack.com) 24

Cointelegraph, once one of the most-visited cryptocurrency news sites, has seen its monthly traffic plummet from roughly 8 million visits to 1.4 million -- an 80% drop in three months -- after Google issued a manual penalty in October 2025 for the outlet's partnership with a blackhat SEO firm that used Cointelegraph's domain authority to promote affiliate links to offshore casinos and betting platforms.

The CEO, who had no prior media experience, proceeded despite warnings from Google earlier in 2025 and repeated objections from the outlet's three most senior editorial staff members throughout the year. The penalty removed Cointelegraph from Google News, Discover and search results entirely; a search for "Cointelegraph" now returns CoinDesk as the top result. Jon Rice, the former editor-in-chief, resigned on December 31st and described the situation as an "existential threat to business."

Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech 53

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids toward healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability because they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.

United States

The Rise and Fall of the American Monoculture (wsj.com) 66

The American monoculture -- the era when three television networks, seven movie studios, and a handful of record labels determined virtually everything the country watched and heard -- is collapsing under the weight of algorithmic recommendation engines and infinite streaming options. An estimated 200 million tickets were sold for "Gone With the Wind" in 1939 when the U.S. population was 130 million; more than 100 million people watched the M*A*S*H finale in 1983.

Only three American productions grossed more than $1 billion in 2025, down from nine in 2019. "That broad experience has become a more difficult thing for us studio people to manufacture," said Donna Langley, chairman of NBCUniversal Entertainment. "The audience wants a much better value for their money."

YouTube became the most popular video platform on televisions not by having the hottest shows but by having something for everyone. The internet broke Hollywood's hold on distribution; anyone can now stream to the same devices Disney and Netflix use.

Space

Could We Provide Better Cellphone Service With Fewer, Bigger Satellites? (reuters.com) 37

European satellite operator Eutelsat "plans to launch 440 Airbus-built LEO satellites in the coming years to replenish and expand its constellation," Reuters reported Friday. And last week America's Federal Communications Commission approved SpaceX's request to deploy another 7,500 Starlink satellites, while Starlink "projects it will eventually have a constellation of 34,000 satellites," writes Fast Company, and Amazon's Project Leo "plans to launch more than 3,200 satellites."

Meanwhile "Beijing and some Chinese companies are planning two separate mega-constellations, Guowang and G60 Starlink, totaling nearly 26,000 satellites," and this week the Chinese government "applied for launch permits for 200,000 satellites."

But a small Texas-based company called AST SpaceMobile "believes it can provide better service with fewer than 100 gigantic satellites in space." AST SpaceMobile has developed a direct-to-cell technology that utilizes large satellites called BlueBirds. These machines use thousands of antennas to deliver broadband coverage directly to standard mobile phones, says the company's president, Scott Wisniewski. "This approach is remarkably efficient: We can achieve global coverage with approximately 90 satellites, not thousands or even tens of thousands required by other systems," Wisniewski writes in an email...

The key is its satellites' size and sophistication. AST's first generation of commercial satellite, the BlueBird 1-5, unfolds into a massive 693-square-foot array in space. Today, the company has five operational BlueBird 1-5 satellites in orbit, but its ambitions are much bigger. On December 24, 2025, AST launched the first of its next-generation satellites from India — called Block 2 — and this one broke records. The BlueBird 6 has a surface area of almost 2,400 square feet, making it the largest single satellite in low Earth orbit. The company plans to launch up to 60 more by the end of 2026. "This large surface area is essential for gathering faint signals from standard, unmodified mobile phones on the ground," Wisniewski explains. It is essentially a single, extremely powerful and sensitive cell tower in the sky, capable of serving a huge geographical area...

To be clear, AST SpaceMobile's approach is not without its own controversies. The sheer size of the company's satellites makes them incredibly bright in the night sky, a significant source of frustration for ground-based astronomers. Astronomer Jonathan McDowell confirms that when it launched in 2022, AST's prototype satellite, BlueWalker 3, became "one of the top 10 brightest objects in the night sky for a while."

"It's a serious issue, and we are working directly with the astronomy community to mitigate our impact," Wisniewski says. The company is exploring solutions like anti-reflective coatings and operational adjustments to minimize the time its satellites are at maximum brightness...

AST SpaceMobile has already proven its technology works, the article points out, with six working satellites now transmitting at typical 5G speeds directly to regular phones.

Privacy

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet (youtube.com) 50

A couple months ago, YouTuber Benn Jordan "found vulnerabilities in some of Flock's license plate reader cameras," reports 404 Media's Jason Koebler. "He reached out to me to tell me he had learned that some of Flock's Condor cameras were left live-streaming to the open internet."

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. ("On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet... Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.") Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days worth of video archive, and change settings, see log files, and run diagnostics. Unlike many of Flock's cameras, which are designed to capture license plates as people drive by, Flock's Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people's faces... The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon "GainSec" Gaines, who recently found numerous vulnerabilities in several other models of Flock's automated license plate reader (ALPR) cameras.

Jordan appeared this week as a guest on Koebler's own YouTube channel, and released a video of his own about the experience, titled "We Hacked Flock Safety Cameras in under 30 Seconds." (Thanks to Slashdot reader beadon for sharing the link.) Together, Jordan and 404 Media also created another video three weeks ago titled "The Flock Camera Leak is Like Netflix for Stalkers," which includes footage he says was "completely accessible at the time Flock Safety was telling cities that the devices are secure after they're deployed."

The video decries cities "too lazy to conduct their own security audit or research the efficacy versus risk," but also calls weak security "an industry-wide problem." Jordan explains in the video how he "very easily found the administration interfaces for dozens of Flock safety cameras..." — but also what happened next: None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see.... Making any modification to the cameras is illegal, so I didn't do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system...

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, or GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don't view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I've been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety's response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety's security policies. So, I formally and publicly offered to personally fund security research into Flock Safety's deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn't get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock's official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

"Might as well. It's my tax dollars that paid for it."

" 'Flock is committed to continuously improving security...'"

Australia

Nearly 5 Million Accounts Removed Under Australia's New Social Media Ban (nytimes.com) 72

An anonymous reader quotes a report from the New York Times: Nearly five million social media accounts belonging to Australian teenagers have been deactivated or removed, a month after a landmark law barring those younger than 16 from using the services took effect, the government said on Thursday. The announcement was the first reported metric reflecting the rollout of the law, which is being closely watched by several other countries weighing whether the regulation can be a blueprint for protecting children from the harms of social media, or a cautionary tale highlighting the challenges of such attempts.

The law required 10 social media platforms, including Instagram, Facebook, Snapchat and Reddit, to prevent users under 16 from accessing their services. Under the law, which came into force in December, failure by the companies to take "reasonable steps" to remove underage users could lead to fines of up to 49.5 million Australian dollars, about $33 million. [...] The number of removed accounts offered only a limited picture of the ban's impact. Many teenagers have said in the weeks since the law took effect that they were able to get around the ban by lying about their age, or that they could easily bypass verification systems.

The regulator tasked with enforcing and tracking the law, the eSafety Commissioner, did not release a detailed breakdown beyond announcing that the companies had "removed access" to about 4.7 million accounts belonging to children under 16. Meta, the parent company of Instagram and Facebook, said this week that it had removed almost 550,000 accounts of users younger than 16 before the ban came into effect.
"Change doesn't happen overnight," said Prime Minister Anthony Albanese. "But these early signs show it's important we've acted to make this change."
Social Networks

Study Finds Weak Evidence Linking Social Media Use to Teen Mental Health Problems (theguardian.com) 40

An anonymous reader quotes a report from the Guardian: Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study. [...] Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties. Participants were asked how much time on a normal weekday in term time they spent on TikTok, Instagram, Snapchat and other social media, or gaming. They were also asked questions about their feelings, mood and wider mental health.

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers' symptoms of anxiety or depression over the following year. Increases in girls' and boys' social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had a zero negative effect on pupils' mental health. "We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems -- the story is far more complex than that," said the lead author Dr Qiqi Cheng.

The research, published in the Journal of Public Health, also examined whether how pupils use social media makes a difference, with participants asked how much time they spent chatting with others, posting stories, pictures and videos, browsing feeds and profiles, or scrolling through photos and stories. The scientists found that neither actively chatting on social media nor passively scrolling feeds appeared to drive mental health difficulties. The authors stressed that the findings did not mean online experiences were harmless. Hurtful messages, online pressures and extreme content could have detrimental effects on wellbeing, but focusing on screen time alone was not helpful, they said.

Businesses

'White-Collar Workers Shouldn't Dismiss a Blue-Collar Career Change' (msn.com) 145

White-collar workers stuck in a cycle of layoffs and stagnant wages might want to look past the traditional tech, finance and media job postings to an unexpected source of opportunity: the blue-collar sector, which faces a labor shortage and is seeing rapid transformation through private-equity investment. These jobs are generally less vulnerable to AI, and the earning trajectory can be steep, the WSJ writes.

At Crash Champions, a car-repair chain that has grown from 13 locations in 2019 to about 650 shops across 38 states, service advisers start at roughly $60,000 after a six-month apprenticeship and can double that within 18 months, according to CEO Matt Ebert. Directors overseeing multiple locations earn more than $200,000. Power Home Remodeling, a PE-backed construction company, says tech sales professionals earning $85,000 to $100,000 could make lateral moves after a 10-week training program.

The share of workers in their early 20s employed in blue-collar roles rose from 16.3% in 2019 to 18.4% in 2024, according to ADP -- five times the increase among 35- to 39-year-olds.

Social Networks

Digg Launches Its New Reddit Rival To the Public (techcrunch.com) 44

Digg is officially back under the ownership of its original founder, Kevin Rose, along with Reddit co-founder Alexis Ohanian. "Similar to Reddit, the new Digg offers a website and mobile app where you can browse feeds featuring posts from across a selection of its communities and join other communities that align with your interests," reports TechCrunch. "There, you can post, comment, and upvote (or 'digg') the site's content." From the report: [T]he rise of AI has presented an opportunity to rebuild Digg, Rose and Ohanian believe, leading them to acquire Digg last March through a leveraged buyout by True Ventures, Ohanian's firm Seven Seven Six, Rose and Ohanian themselves, and the venture firm S32. The company has not disclosed its funding. They're betting that AI can help to address some of the messiness and toxicity of today's social media landscape. At the same time, social platforms will need a new set of tools to ensure they're not taken over by AI bots posing as people.

"We obviously don't want to force everyone down some kind of crazy KYC process," said Rose in an interview with TechCrunch, referring to the 'know your customer' verification process used by financial institutions to confirm someone's identity. Instead of simply offering verification checkmarks to designate trust, Digg will try out new technologies, like using zero-knowledge proofs (cryptographic methods that verify information without revealing the underlying data) to verify the people using its platform. It could also do other things, like require that people who join a product-focused community verify they actually own or use the product being discussed there.
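The idea behind the zero-knowledge proofs Rose mentions is that a user can convince the platform of a claim (for example, "I hold a valid credential") without revealing the secret behind it. Purely as an illustration — this is not Digg's actual design, and the group parameters here are toy-sized — a minimal interactive Schnorr proof of knowledge in Python shows the shape of such a protocol: the prover demonstrates knowledge of a secret x behind a public value y = g^x mod p without ever transmitting x.

```python
import secrets

# Toy group parameters: p = 2q + 1 with p, q prime; g = 2^2 generates the
# order-q subgroup of Z_p*. Real deployments use standardized large groups
# and a non-interactive variant (Fiat-Shamir), not numbers this small.
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # order q in Z_p*

# Prover's secret and the public value derived from it.
x = secrets.randbelow(q - 1) + 1   # secret; never sent to the verifier
y = pow(g, x, p)                   # public: y = g^x mod p

# --- One round of the interactive Schnorr protocol ---
r = secrets.randbelow(q)           # prover: random nonce
t = pow(g, r, p)                   # prover -> verifier: commitment
c = secrets.randbelow(q)           # verifier -> prover: random challenge
s = (r + c * x) % q                # prover -> verifier: response (masks x)

# Verifier accepts iff g^s == t * y^c (mod p). The transcript (t, c, s)
# reveals nothing about x beyond the fact that the prover knows it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c mod p, so only someone who knows x can answer a random challenge correctly.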

As an example, a community for Oura ring owners could verify that everyone who posts has proven they own one of the smart rings. Plus, Rose suggests Digg could use signals acquired from mobile devices to help verify members -- for instance, the app could identify when Digg users attended a meetup in the same location. "I don't think there's going to be any one silver bullet here," said Rose. "It's just going to be us saying ... here's a platter of things that you can add together to create trust."

Communications

Widespread Verizon Outage Prompts Emergency Alerts in Washington, New York City (nbcnews.com) 16

Verizon said on Wednesday that its wireless service was suffering an outage impacting cellular data and voice services. From a report: The nation's largest wireless carrier said that its "engineers are engaged and are working to identify and solve the issue quickly." Verizon's statement came after a swath of social media comments directed at Verizon, with users saying that their mobile devices were showing no bars of service or "SOS," indicating a lack of connection.

Verizon, which has more than 146 million customers, appears to have started experiencing service issues around 12:00 p.m. ET, according to comments on social media site X. Users also reported problems with Verizon competitor T-Mobile, but T-Mobile said it was not having any service issues. "T-Mobile's network is keeping our customers connected, and we've confirmed that our network is operating optimally," a spokesperson told NBC News. "However, due to Verizon's reported outage, our customers may not be able to reach someone with Verizon service at this time."

Microsoft

UK Police Blame Microsoft Copilot for Intelligence Mistake (theverge.com) 60

The chief constable of one of Britain's largest police forces has admitted that Microsoft's Copilot AI assistant made a mistake in a football (soccer) intelligence report. From a report: The report, which led to Israeli football fans being banned from a match last year, included a nonexistent match between West Ham and Maccabi Tel Aviv.

Copilot hallucinated the game and West Midlands Police included the error in its intelligence report without fact checking it. "On Friday afternoon I became aware that the erroneous result concerning the West Ham v Maccabi Tel Aviv match arose as result of a use of Microsoft Co Pilot [sic]," says Craig Guildford, chief constable of West Midlands Police, in a letter to the Home Affairs Committee earlier this week. Guildford previously denied in December that the West Midlands Police had used AI to prepare the report, blaming "social media scraping" for the error.

Science

Doubt Cast On Discovery of Microplastics Throughout Human Body (theguardian.com) 50

An anonymous reader quotes a report from the Guardian: High-profile studies reporting the presence of microplastics throughout the human body have been thrown into doubt by scientists who say the discoveries are probably the result of contamination and false positives. One chemist called the concerns "a bombshell." Studies claiming to have revealed micro and nanoplastics in the brain, testes, placentas, arteries and elsewhere were reported by media across the world, including the Guardian.

There is no doubt that plastic pollution of the natural world is ubiquitous, and present in the food and drink we consume and the air we breathe. But the health damage potentially caused by microplastics and the chemicals they contain is unclear, and research in this area has exploded in recent years. However, micro- and nanoplastic particles are tiny and at the limit of today's analytical techniques, especially in human tissue. There is no suggestion of malpractice, but researchers told the Guardian of their concern that the race to publish, in some cases by groups with limited analytical expertise, has led to rushed results, with routine scientific checks sometimes being overlooked.

The Guardian has identified seven studies that have been challenged by researchers publishing criticism in the respective journals, while a recent analysis listed 18 studies that it said had not considered that some human tissue can produce measurements easily confused with the signal given by common plastics. There is an increasing international focus on the need to control plastic pollution but faulty evidence on the level of microplastics in humans could lead to misguided regulations and policies, which is dangerous, researchers say. It could also help lobbyists for the plastics industry to dismiss real concerns by claiming they are unfounded. While researchers say analytical techniques are improving rapidly, the doubts over recent high-profile studies also raise the questions of what is really known today and how concerned people should be about microplastics in their bodies.

Government

Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com) 63

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
