Power

As AI Booms, Land Near Nuclear Power Plants Becomes Hot Real Estate 77

Tobias Mann reports via The Register: The land surrounding a nuclear power plant might not sound like prime real estate, but as more bit barns seek to trim costs, it's poised to become a rather hot commodity. All datacenters are energy-hungry but with more watt-greedy AI workloads on the horizon, nuclear power has fresh appeal, especially for hyperscalers. Such a shift in power also does wonders for greenwashing narratives around net-zero operations. While not technically renewable, nuclear power does have the benefit of being carbon-free, not to mention historically reliable -- with a few notable exceptions of course. All of these are purported benefits cited by startup NE Edge, which has been fighting for more than a year to be able to build a pair of AI datacenters adjacent to the 2GW Millstone nuclear power plant in Waterford, Connecticut.

According to the Hartford Courant, NE Edge has secured $1.6 billion to construct the switching station and bit barns, which will span 1.2 million square feet in total. NE Edge will reportedly spend an equivalent sum on between 25,000 and 35,000 servers. Considering the price of GPU systems from Nvidia, AMD, and Intel, we suspect that those figures probably refer to the number of GPUs. We've asked NE Edge for more information. NE Edge has faced local challenges getting the project approved because residents are concerned the project would end up increasing the cost of electricity. The facilities will reportedly consume as much as 13 percent of the plant's output. The project's president Thomas Quinn attempted to quell concerns, arguing that by connecting directly to the plant, NE Edge will be able to negotiate prices that make building such a power-hungry facility viable in Connecticut. NE Edge has also committed to paying a 12.08 percent premium to the town on top of what it pays Dominion for power, along with other payments said to total more than $1 billion over the next 30 years. But after the town council initially denied the sale of land to NE Edge back in January over a lack of information regarding the datacenter project, it reportedly has yet to tell the company what information it is after.
Transportation

Air Industry Trends Safer, But 'Flukish' Second Crash Led Boeing to Mishandled Media Storm, WSJ Argues (msn.com) 78

There's actually "a global trend toward increased air safety," notes a Wall Street Journal columnist.

And even in the case of the two fatal Boeing crashes five years ago, he stresses that they "were two different crashes," with the second happening only "after Boeing and the FAA issued emergency directives instructing pilots how to compensate for Boeing's poorly designed flight control software."

"The story should have ended after the first crash except the second set of pilots behaved in unexpected, unpredictable ways, flying a flyable Ethiopian Airlines jet into the ground." Boeing is guilty of designing a fallible system and placing an undue burden on pilots. The evidence strongly suggests, however, that the Ethiopian crew was never required to master the simple remedy despite the global furor occasioned by the first crash. To boot, they committed an additional error by overspeeding the aircraft in defiance of aural, visual and stick-shaker warnings against doing so. It got almost no coverage, but on the same day the Ethiopian government issued its final findings on the accident in late 2022, the U.S. National Transportation Safety Board, in what it called an "unusual step," issued its own "comment" rebuking the Ethiopian report for "inaccurate" statements, for ignoring the crew's role, for ignoring how readily the accident should have been avoided.
So the Wall Street Journal columnist challenges whether profit incentives played any role in Boeing's troubles: In reality, the global industry was reorganized largely along competitive profit-and-loss lines after the 1970s, and yet this coincided with enormous increases in safety, notwithstanding the sausage factory elements occasionally on display (witness the little-reported parking of hundreds of Airbus planes over a faulty new engine).

The point here isn't blame but to note that 100,000 repetitions likely wouldn't reproduce the flukish second MAX crash and everything that followed from it. Rather than surfacing Boeing's deeply hidden problems, it seems the second crash gave birth to them. The subsequent 20-month grounding and production shutdown, combined with Covid, cost Boeing thousands of skilled workers. The pressure of its duopoly competition with Airbus plus customers clamoring for their backordered planes made management unwisely desperate to restart production. January's nonfatal door-plug blowout of an Alaska Airlines 737 appears to have been a one-off when Boeing workers failed to reinstall the plug properly after removing it to fix faulty fuselage rivets. Not a one-off, apparently, are faulty rivets as Boeing has strained to hire new staff and resume production of half-finished planes.

Boeing will sort out its troubles eventually by applying the oldest of manufacturing insights: Training, repetition, standardization and careful documentation are the way to error-free complex manufacturing.

As he sees it, "The second MAX crash caught Boeing up in a disorienting global media and political storm that it didn't know how to handle and, indeed, has handled fairly badly."
Desktops (Apple)

Apple Criticized For Changing the macOS version of cURL (daniel.haxx.se) 75

"On December 28 2023, bugreport 12604 was filed in the curl issue tracker," writes cURL lead developer Daniel Stenberg: The title stated of the problem in this case was quite clear: flag -cacert behavior isn't consistent between macOS and Linux , and it was filed by Yuedong Wu.

The friendly reporter showed how the curl version bundled with macOS behaves differently than curl binaries built entirely from open source. Even when running the same curl version on the same macOS machine.

The curl command line option --cacert provides a way for the user to say to curl that this is the exact set of CA certificates to trust when doing the following transfer. If the TLS server cannot provide a certificate that can be verified with that set of certificates, it should fail and return an error. This particular behavior and functionality in curl has been established for many years (this option was added to curl in December 2000) and of course is provided to allow users to know that it communicates with a known and trusted server. A pretty fundamental part of what TLS does really.

When this command line option is used with curl on macOS, the version shipped by Apple, it seems to fall back and check the system CA store in case the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented and frankly comes as a complete surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server!

This is a security problem because now suddenly certificate checks pass that should not pass.
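To make that concrete, here is a minimal Python sketch (not curl itself) of the behavior --cacert is supposed to guarantee: trust only the CA bundle the caller supplies and fail otherwise. The file name pinned-ca.pem and the test URL are placeholders for illustration, not anything taken from the bug report.

```python
import ssl
import urllib.request

# Build a TLS context that trusts ONLY the supplied CA bundle; when cafile
# is given, the system trust store is not loaded as a fallback.
# "pinned-ca.pem" is a hypothetical, deliberately trimmed CA file.
ctx = ssl.create_default_context(cafile="pinned-ca.pem")

try:
    urllib.request.urlopen("https://example.com/", context=ctx)
    print("server verified against the pinned CA set")
except ssl.SSLCertVerificationError as err:
    # This is the behavior the reporter expected from curl --cacert:
    # if the pinned set can't verify the chain, the connection fails,
    # with no silent second check against the system CA store.
    print(f"verification failed, as it should: {err}")
```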

"We don't consider this something that needs to be addressed in our platforms," Apple Product Security responded. Stenberg's blog post responds, "I disagree."

Long-time Slashdot reader lee1 shares their reaction: I started to sour on MacOS about 20 years ago when I discovered that they had, without notice, substituted their own, nonstandard version of the Readline library for the one that the rest of the Unix-like world was using. This broke gnuplot and a lot of other free software...

Apple is still breaking things, this time with serious security and privacy implications.

Mozilla

Mozilla Drops Onerep After CEO Admits To Running People-Search Networks (krebsonsecurity.com) 9

An anonymous reader quotes a report from KrebsOnSecurity: The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep's CEO to admit that he has founded dozens of people-search networks over the years. Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or passwords are leaked in data breaches. On March 14, KrebsOnSecurity published a story showing that Onerep's Belarusian CEO and founder Dimitri Shelest has launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.

But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 -- around the same time he launched Onerep. Shelest maintained that Nuwber has "zero cross-over or information-sharing with Onerep," and said any other old domains that may be found and associated with his name are no longer being operated by him. "I get it," Shelest wrote. "My affiliation with a people search business may look odd from the outside. In truth, if I hadn't taken that initial path with a deep dive into how people search sites work, Onerep wouldn't have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I'm aiming to do better in the future." The full statement is available here (PDF).

In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product. "Though customer data was never at risk, the outside financial interests and activities of Onerep's CEO do not align with our values," Mozilla wrote. "We're working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first." KrebsOnSecurity also reported that Shelest's email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.

Shelest denied ever being associated with Spamit. "Between 2010 and 2014, we put up some web pages and optimize them -- a widely used SEO practice -- and then ran AdSense banners on them," Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). "As we progressed and learned more, we saw that a lot of the inquiries coming in were for people." Shelest also acknowledged that Onerep pays to run ads "on a handful of data broker sites in very specific circumstances." "Our ad is served once someone has manually completed an opt-out form on their own," Shelest wrote. "The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep."

Technology

Vernor Vinge, Father of the Tech Singularity, Has Died At Age 79 (arstechnica.com) 67

"Vernor Vinge, who three times won the Hugo for best novel, has died," writes Slashdot reader Felix Baum. Ars Technica reports: On Wednesday, author David Brin announced that Vernor Vinge, sci-fi author, former professor, and father of the technological singularity concept, died from Parkinson's disease at age 79 on March 20, 2024, in La Jolla, California. The announcement came in a Facebook tribute where Brin wrote about Vinge's deep love for science and writing. "A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters, and the implications of science," wrote Brin in his post.

As a sci-fi author, Vinge won Hugo Awards for his novels A Fire Upon the Deep (1993), A Deepness in the Sky (2000), and Rainbows End (2007). He also won Hugos for novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004). As Mike Glyer's File 770 blog notes, Vinge's novella True Names (1981) is frequently cited as the first presentation of an in-depth look at the concept of "cyberspace." Vinge first coined the term "singularity" as related to technology in 1983, borrowed from the concept of a singularity in spacetime in physics.

When discussing the creation of intelligences far greater than our own in a 1983 op-ed in OMNI magazine, Vinge wrote, "When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding." In 1993, he expanded on the idea in an essay titled The Coming Technological Singularity: How to Survive in the Post-Human Era.

Government

EPA Bans Chrysotile Asbestos (apnews.com) 98

An anonymous reader quotes a report from the Associated Press: The Environmental Protection Agency on Monday announced a comprehensive ban on asbestos, a carcinogen that kills tens of thousands of Americans every year but is still used in some chlorine bleach, brake pads and other products. The final rule marks a major expansion of EPA regulation under a landmark 2016 law that overhauled regulations governing tens of thousands of toxic chemicals in everyday products, from household cleaners to clothing and furniture. The new rule would ban chrysotile asbestos, the only ongoing use of asbestos in the United States. The substance is found in products such as brake linings and gaskets and is used to manufacture chlorine bleach and sodium hydroxide, also known as caustic soda, including some that is used for water purification. [...]

The 2016 law authorized new rules for tens of thousands of toxic chemicals found in everyday products, including substances such as asbestos and trichloroethylene that for decades have been known to cause cancer yet were largely unregulated under federal law. Known as the Frank Lautenberg Chemical Safety Act, the law was intended to clear up a hodgepodge of state rules governing chemicals and update the Toxic Substances Control Act, a 1976 law that had remained unchanged for 40 years. The EPA banned asbestos in 1989, but the rule was largely overturned by a 1991 Court of Appeals decision that weakened the EPA's authority under TSCA to address risks to human health from asbestos or other existing chemicals. The 2016 law required the EPA to evaluate chemicals and put in place protections against unreasonable risks. Asbestos, which was once common in home insulation and other products, is banned in more than 50 countries, and its use in the U.S. has been declining for decades. The only form of asbestos known to be currently imported, processed or distributed for use in the U.S. is chrysotile asbestos, which is imported primarily from Brazil and Russia. It is used by the chlor-alkali industry, which produces bleach, caustic soda and other products. Most consumer products that historically contained chrysotile asbestos have been discontinued. While chlorine is a commonly used disinfectant in water treatment, there are only eight chlor-alkali plants in the U.S. that still use asbestos diaphragms to produce chlorine and sodium hydroxide. The plants are mostly located in Louisiana and Texas.

The use of asbestos diaphragms has been declining and now accounts for less than one-third of the chlor-alkali production in the U.S., the EPA said. The EPA rule will ban imports of asbestos for chlor-alkali as soon as the rule is published but will phase in prohibitions on chlor-alkali use over five or more years to provide what the agency called "a reasonable transition period." A ban on most other uses of asbestos will take effect in two years. A ban on asbestos in oilfield brake blocks, aftermarket automotive brakes and linings and other gaskets will take effect in six months. The EPA rule allows asbestos-containing sheet gaskets to be used until 2037 at the U.S. Department of Energy's Savannah River Site in South Carolina to ensure that safe disposal of nuclear materials can continue on schedule. Separately, the EPA is also evaluating so-called legacy uses of asbestos in older buildings, including schools and industrial sites, to determine possible public health risks. A final risk evaluation is expected by the end of the year.

Databases

Database-Based Operating System 'DBOS' Does Things Linux Can't (nextplatform.com) 104

Databricks CTO Matei Zaharia "said that Databricks had to keep track of scheduling a million things," remembers adjunct MIT professor Michael Stonebraker. "He said that this can't be done with traditional operating system scheduling, and so this was done out of a Postgres database. And then he started to whine that Postgres was too slow, and I told him we can do better than that...."

This resulted in DBOS — short for "database operating system" — which they teamed up to build with teams at Stanford and MIT, according to The Next Platform: They founded a company to commercialize the idea in April 2023 and secured $8.5 million in initial seed funding to start building the real DBOS. Engine Ventures and Construct Capital led the funding, along with Sinewave and GutBrain Ventures...

"The state that the operating system has to keep track of — memory, files, messages, and so on — is approximately linear to the resources you have got," says Stonebraker. "So without me saying another word, keeping track of operating system state is a database problem not addressed by current operating system schedulers. Moreover, OLTP [Online Transaction Processing] database performance has gone up dramatically, and that is why we thought instead of running the database system in user space on top of the operating system, why don't we invert our thinking 180 degrees and run the operating system on top of the database, with all of the operating services are coded in SQL...?"

For now, DBOS can give the same kind of performance as a full-blown Linux operating system, and thanks to the distributed database underpinnings of its kernel, it can do things that a Linux kernel just cannot do... One is provide reliable execution, which means that if a program running atop DBOS is ever interrupted, it starts where it left off and does not have to redo its work from some arbitrary earlier point and does not crash and have to start from the beginning. And because every little bit of the state of the operating system — and therefore the applications that run atop it — is preserved, you can go backwards in time in the system and restart the operating system if it experiences some sort of anomaly, such as a bad piece of application software running or a hack attack. You can use this "time travel" feature, as Stonebraker calls it, to reproduce what are called heisenbugs — ones that are very hard to reproduce precisely because there is no shared state in the distributed Linux and Kubernetes environment and that are increasingly prevalent in a world of microservices.
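The "reliable execution" idea can be sketched in a few lines: durably record each completed step in a transactional database so a restarted process resumes where it stopped. The sketch below is only an illustration of that concept using SQLite, not DBOS's actual API; the table and step names are made up.

```python
import sqlite3

# Durable checkpoint table: one row per completed workflow step.
db = sqlite3.connect("workflow_state.db")
db.execute("CREATE TABLE IF NOT EXISTS steps (name TEXT PRIMARY KEY, done INTEGER)")

def run_step(name, fn):
    # Skip steps whose completion was already recorded in the database,
    # so a restarted run picks up where the previous one stopped.
    row = db.execute("SELECT done FROM steps WHERE name = ?", (name,)).fetchone()
    if row and row[0]:
        return
    fn()
    # Commit the checkpoint atomically; a crash before this point simply
    # means the step runs again on the next start.
    with db:
        db.execute("INSERT OR REPLACE INTO steps (name, done) VALUES (?, 1)", (name,))

run_step("fetch", lambda: print("fetching input"))
run_step("transform", lambda: print("transforming"))
run_step("publish", lambda: print("publishing result"))
```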

The other benefit of the DBOS is that it presents a smaller attack surface for hackers, which boosts security, and that you can analyze the metrics of the operating system in place since they are already in a NoSQL database that can be queried rather than aggregating a bunch of log files from up and down the software stack to try to figure out what is going on...

There is also a custom tier for DBOS, which we presume costs money, that can use other databases and datastores for user application data, stores more than three days of log data, can have multiple users per account, that adds email and Slack support with DBOS techies, and that is available on other clouds as well as AWS.

The operating system kernel/scheduler "is itself largely a database," with services written in TypeScript, according to the article. The first iteration used the FoundationDB distributed key-value store (open sourced by Apple in 2018) for its scheduling core — "a blazingly fast NoSQL database... Stonebraker says there is no reason to believe that DBOS can't scale across 1 million cores or more and support Java, Python, and other application languages as they are needed by customers..."

And the article speculates they could take things even further. "There is no reason why DBOS cannot complete the circle and not only have a database as an operating system kernel, but also have a relational database as the file system for applications."
AI

Why Are So Many AI Chatbots 'Dumb as Rocks'? (msn.com) 73

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..."

"The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby.

In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations.

I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard.

But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.)

But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...?

"When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes.

"I just can't with all these AI junk bots that demand a lot of us and give so little in return."
Cellphones

Social Psychologist Urges 'End the Phone-Based Childhood Now' (msn.com) 203

"The environment in which kids grow up today is hostile to human development," argues Jonathan Haidt, a social psychologist and business school ethics professor, saying that since the early 2010s, "something went suddenly and horribly wrong for adolescents."

The Atlantic recently published an excerpt from his book The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness: By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data... I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online — particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity — all were affected...

There's an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world. My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now...

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting "safe spaces" and trigger warnings. They were highly sensitive to "microaggressions" and sometimes claimed that words were "violence." These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

The article argues educational scores also began dropping around 2012, while citing estimates that America's average teenager spends seven to nine hours a day on screen-based activities. "Everything else in an adolescent's day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed... The main reason why the phone-based childhood is so harmful is because it pushes aside everything else." (For example, there's "the collapse of time spent interacting with other people face-to-face.")

The article warns of fragmented attention, disrupted learning, social withdrawal, and "the decay of wisdom and the loss of meaning." ("This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life.") Its proposed solution?
  • No smartphones before high school
  • No social media before 16
  • Phone-free schools
  • More independence, free play, and responsibility in the real world

"We didn't know what we were doing in the early 2010s. Now we do. It's time to end the phone-based childhood."

Thanks to long-time Slashdot readers schwit1 and sinij for sharing the article.


Space

Conflicting Values For Hubble Constant Not Due To Measurement Error, Study Finds (arstechnica.com) 64

Jennifer Ouellette reports via Ars Technica: Astronomers have made new measurements of the Hubble Constant, a measure of how quickly the Universe is expanding, by combining data from the Hubble Space Telescope and the James Webb Space Telescope. Their results confirmed the accuracy of Hubble's earlier measurement of the constant's value, according to their recent paper published in The Astrophysical Journal Letters, with implications for a long-standing discrepancy in values obtained by different observational methods known as the "Hubble tension."

There was a time when scientists believed the Universe was static, but that changed with Albert Einstein's general theory of relativity. In 1922, Alexander Friedmann published a set of equations showing that the Universe might actually be expanding, with Georges Lemaitre later making an independent derivation to arrive at that same conclusion. Edwin Hubble confirmed this expansion with observational data in 1929. Prior to this, Einstein had been trying to modify general relativity by adding a cosmological constant in order to get a static universe from his theory; after Hubble's discovery, legend has it, he referred to that effort as his biggest blunder.
The article notes how scientists have employed different methods to calculate the Hubble Constant, including observing nearby celestial objects, analyzing gravitational waves from cosmic events, and examining the Cosmic Microwave Background (CMB). However, these approaches yield differing values, highlighting the challenge in pinning down the constant precisely. A recent effort involved making additional observations of Cepheid variable stars, correlating them with the Hubble data. The results further confirmed the accuracy of the Hubble data.
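For readers who want the numbers behind the "tension": the Hubble constant relates a galaxy's recession velocity to its distance, and the two approximate values below are the commonly cited distance-ladder and CMB figures from the literature, not results quoted from this paper.

```latex
% Hubble-Lemaitre law and the two headline H0 values behind the "Hubble tension"
% (approximate literature figures, not numbers from the paper discussed above)
\[
  v = H_0 \, d, \qquad
  H_0^{\mathrm{distance\ ladder}} \approx 73~\mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad
  H_0^{\mathrm{CMB}} \approx 67~\mathrm{km\,s^{-1}\,Mpc^{-1}}
\]
```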

"We've now spanned the whole range of what Hubble observed, and we can rule out a measurement error as the cause of the Hubble Tension with very high confidence," said co-author and team leader Adam Riess, a physicist at Johns Hopkins University. "Combining Webb and Hubble gives us the best of both worlds. We find that the Hubble measurements remain reliable as we climb farther along the cosmic distance ladder. With measurement errors negated, what remains is the real and exciting possibility that we have misunderstood the Universe."
Space

US Intelligence Officer Explains Roswell, UFO Sightings (cnn.com) 43

CNN's national security analyst interviewed a U.S. intelligence officer who worked on the newly-released Defense report debunking UFO sightings — physicist Sean Kirkpatrick. He tells CNN "about two to five percent" of UFO reports are "truly anomalous."

But CNN adds that "he thinks explanations for that small percentage will most likely be found right here on Earth..." This is how Kirkpatrick and his team explain the Roswell incident, which plays a prominent role in UFO lore. That's because, in 1947, a U.S. military news release stated that a flying saucer had crashed near Roswell Army Air Field in New Mexico. A day later, the Army retracted the story and said the crashed object was a weather balloon. Newspapers ran the initial saucer headline, followed up with the official debunking, and interest in the case largely died down. Until 1980, that is, when a pair of UFO researchers published a book alleging that alien bodies had been recovered from the Roswell wreckage and that the U.S. government had covered up the evidence.

Kirkpatrick says his office dug deep into the Roswell incident and found that in the late 1940s and early 1950s, there were a lot of things happening near the Roswell Airfield. There was a spy program called Project Mogul, which launched long strings of oddly shaped metallic balloons. They were designed to monitor Soviet nuclear tests and were highly secret. At the same time, the U.S. military was conducting tests with other high-altitude balloons that carried human test dummies rigged with sensors and zipped into body-sized bags for protection against the elements. And there was at least one military plane crash nearby with 11 fatalities.

Echoing earlier government investigations, Kirkpatrick and his team concluded that the crashed Mogul balloons, the recovery operations to retrieve downed test dummies and glimpses of the charred aftermath of that real plane crash likely combined into a single false narrative about a crashed alien spacecraft...

Since 2020, the Pentagon has standardized, de-stigmatized and increased the volume of reporting on UFOs by the U.S. military. Kirkpatrick says that's the reason the closely covered and widely-mocked Chinese spy balloon was spotted in the first place last year. The incident shows that the U.S. government's policy of taking UFOs seriously is actually working.

The pattern keeps repeating. "Kirkpatrick says, his investigation found that most UFO sightings are of advanced technology that the U.S. government needs to keep secret, of aircraft that rival nations are using to spy on the U.S. or of benign civilian drones and balloons." ("What's more likely?" asked Kirkpatrick. "The fact that there is a state-of-the-art technology that's being commercialized down in Florida that you didn't know about, or we have extraterrestrials?")

But the greatest irony may be that "stories about these secret programs spread inside the Pentagon, got embellished and received the occasional boost from service members who'd heard rumors about or caught glimpses of seemingly sci-fi technology or aircraft. And Kirkpatrick says his investigators ultimately traced this game of top-secret telephone back to fewer than a dozen people... [F]or decades, UFO true believers have been telling us there's a U.S. government conspiracy to hide evidence of aliens. But — if you believe Kirkpatrick — the more mundane truth is that these stories are being pumped up by a group of UFO true believers in and around government."
Open Source

OpenTTD (Unofficial Remake of 'Transport Tycoon Deluxe' Game) Turns 20 (openttd.org) 17

In 1995 Scottish video game designer Chris Sawyer created the business simulator game Transport Tycoon Deluxe — and within four years, Wikipedia notes, work began on the first version of an open source remake that's still being actively developed. "According to a study of the 61,154 open-source projects on SourceForge in the period between 1999 and 2005, OpenTTD ranked as the 8th most active open-source project to receive patches and contributions. In 2004, development moved to their own server."

Long-time Slashdot reader orudge says he's been involved for almost 25 years. "Exactly 21 years ago, I received an ICQ message (look it up, kids) out of the blue from a guy named Ludvig Strigeus (nicknamed Ludde)." "Hello, you probably don't know me, but I've been working on a project to clone Transport Tycoon Deluxe for a while," he said, more or less... Ludde made more progress with the project [written in C] over the coming year, and it looks like we even attempted some multiplayer games (not too reliable, especially over my dial-up connection at the time). Eventually, when he was happy with what he had created, he agreed to allow me to release the game as open source. Coincidentally, this happened exactly a year after I'd first spoken to him, on the 6th March 2004...

Things really got going after this, and a community started to form with enthusiastic developers fixing bugs, adding in new features, and smoothing off the rough edges. Ludde was, I think, a bit taken aback by how popular it proved, and even rejoined the development effort for a while. A read through the old changelogs reveals just how many features were added over a very short period of time. Quick wins like higher vehicle limits came in very quickly, and support for TTDPatch's NewGRF format started to be functional just four months later. Large maps, improved multiplayer, better pathfinders, improved TTDPatch compatibility, and of course, ports to a great many different operating systems, such as Mac OS X, BeOS, MorphOS and OS/2. It was a very exciting time to be a TTD fan!

Within six years, ambitious projects to create free replacements for the original TTD graphics, sounds and music sets were complete, and OpenTTD finally had its 1.0 release. And while we may not have the same frantic addition of new features we had in 2004, there have still been massive improvements to the code, with plenty of exciting new features over the years, with major releases every year since 2008. The move to GitHub in 2018 and the release of OpenTTD on Steam in 2021 have also re-energised development efforts, with thousands of people now enjoying playing the game regularly. And development shows no signs of slowing down, with the upcoming OpenTTD 14.0 release including over 40 new features!

"Personally, I would like to say thank you to everyone who has supported OpenTTD development over the past two decades..." they write, adding "Finally, of course, I'd like to thank you, the players! None of us would be here if people weren't still playing the game.

"Seeing how the first twenty years have gone, I can't wait to see what the next twenty years have in store. :)"
Security

Linux Variants of Bifrost Trojan Evade Detection via Typosquatting (darkreading.com) 19

"A 20-year-old Trojan resurfaced recently," reports Dark Reading, "with new variants that target Linux and impersonate a trusted hosted domain to evade detection." Researchers from Palo Alto Networks spotted a new Linux variant of the Bifrost (aka Bifrose) malware that uses a deceptive practice known as typosquatting to mimic a legitimate VMware domain, which allows the malware to fly under the radar. Bifrost is a remote access Trojan (RAT) that's been active since 2004 and gathers sensitive information, such as hostname and IP address, from a compromised system.

There has been a worrying spike in Bifrost Linux variants during the past few months: Palo Alto Networks has detected more than 100 instances of Bifrost samples, which "raises concerns among security experts and organizations," researchers Anmol Murya and Siddharth Sharma wrote in the company's newly published findings.

Moreover, there is evidence that cyberattackers aim to expand Bifrost's attack surface even further, using a malicious IP address associated with a Linux variant hosting an ARM version of Bifrost as well, they said... "As ARM-based devices become more common, cybercriminals will likely change their tactics to include ARM-based malware, making their attacks stronger and able to reach more targets."

Anime

Akira Toriyama, Creator of Dragon Ball Manga Series, Dies Aged 68 (theguardian.com) 40

Longtime Slashdot reader AmiMoJo shares a report from The Guardian: Akira Toriyama, the influential Japanese manga artist who created the Dragon Ball series, has died at the age of 68. He died on March 1 from an acute subdural haematoma. The news was confirmed by Bird Studio, the manga company that Toriyama founded in 1983.

"It's our deep regret that he still had several works in the middle of creation with great enthusiasm," the studio wrote in a statement. "Also, he would have many more things to achieve." The studio remembered his "unique world of creation". "He has left many manga titles and works of art to this world," the statement read. "Thanks to the support of so many people around the world, he has been able to continue his creative activities for over 45 years." [...]

Based on an earlier work titled Dragon Boy, Dragon Ball was serialized in 519 chapters in Weekly Shonen Jump from 1984 to 1995 and birthed a blockbuster franchise including an English-language comic book series, five distinct television adaptations -- with Dragon Ball Z the most familiar to western audiences -- and spin-offs, over 20 different films and a vast array of video games. The series -- a kung fu take on the shonen (or young adult) manga genre -- drew from Chinese and Hong Kong action films as well as Japanese folklore. It introduced audiences to the now-instantly familiar Son Goku -- a young martial arts trainee searching for seven magical orbs that will summon a mystical dragon -- as well as his ragtag gang of allies and enemies.
You can learn more about Toriyama via the Dragon Ball Wiki.

The Associated Press, Washington Post, and New York Times, among others, have all reported on his passing.
Social Networks

Threads' API Is Coming in June (techcrunch.com) 17

In 2005 Gabe Rivera was a compiler software engineer at Intel — before starting the tech-news aggregator Techmeme. And last year his Threads profile added the words "This is a little self-serving, but I want all social networks to be as open as possible."

On Friday, Threads engineer Jesse Chen posted that it was Rivera's post asking for an API when Threads launched that "convinced us to go for it." And Techmeme just made its first post using the API, according to Chen. The Verge reports: Threads plans to release its API by the end of June after testing it with a limited set of partners, including Hootsuite, Sprinklr, Sprout Social, Social News Desk, and Techmeme. The API will let developers build third-party apps for Threads and allow sites to publish directly to the platform.
More from TechCrunch: Engineer Jesse Chen posted that the company has been building the API for the past few months. The API currently allows users to authenticate, publish threads and fetch the content they post through these tools. "Over the past few months, we've been building the Threads API to enable creators, developers, and brands to manage their Threads presence at scale and easily share fresh, new ideas with their communities from their favorite third-party applications," he said...

The engineer added that Threads is looking to add more capabilities to APIs for moderation and insights gathering.
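As a purely illustrative sketch of the publish-and-fetch flow the article describes: the base URL, paths, and parameter names below are placeholders invented for the example, not Meta's documented Threads API, and the token would come from whatever auth flow the real API specifies.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder, not the real endpoint
ACCESS_TOKEN = "YOUR_TOKEN"              # obtained via the API's auth flow

def publish_post(text: str) -> dict:
    # Create a new post under the authenticated account.
    data = urllib.parse.urlencode({"text": text, "access_token": ACCESS_TOKEN}).encode()
    with urllib.request.urlopen(f"{API_BASE}/me/posts", data=data) as resp:
        return json.load(resp)

def fetch_posts() -> dict:
    # Read back content the account has published through the API.
    query = urllib.parse.urlencode({"access_token": ACCESS_TOKEN})
    with urllib.request.urlopen(f"{API_BASE}/me/posts?{query}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(publish_post("Hello from a third-party client"))
    print(fetch_posts())
```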

Space

The Desert Planet In 'Dune' Is Plausible, According To Science (sciencenews.org) 51

The desert planet Arrakis in Frank Herbert's science fiction novel Dune is plausible, says Alexander Farnsworth, a climate modeler at the University of Bristol in England. According to Science News, the world would be a harsh place for humans to live, though they probably wouldn't have to worry about getting eaten by extraterrestrial helminths. From the report: For their Arrakis climate simulation, which you can explore at the website Climate Archive, Farnsworth and colleagues started with the well-known physics that drive weather and climate on Earth. Using our planet as a starting point makes sense, Farnsworth says, partly because Herbert drew inspiration for Arrakis from "some sort of semi-science of looking at dune systems on the Earth itself." The team then added nuggets of information about the planet from details in Herbert's novels and in the Dune Encyclopedia. According to that intel, the fictional planet's atmosphere is similar to Earth's with a couple of notable differences. Arrakis has less carbon dioxide in the atmosphere than Earth -- about 350 parts per million on the desert planet compared with 417 parts per million on Earth. But Dune has far more ozone in its lower atmosphere: 0.5 percent of the gases in the atmosphere compared to Earth's 0.000001 percent.

All that extra ozone is crucial for understanding the planet. Ozone is a powerful greenhouse gas, about 65 times as potent at warming the atmosphere as carbon dioxide is, when measured over a 20-year period. "Arrakis would certainly have a much warmer atmosphere, even though it has less CO2 than Earth today," Farnsworth says. In addition to warming the planet, so much ozone in the lower atmosphere could be bad news. "For humans, that would be incredibly toxic, I think, almost fatal if you were to live under such conditions," Farnsworth says. People on Arrakis would probably have to rely on technology to scrub ozone from the air. Of course, ozone in the upper atmosphere could help shield Arrakis from harmful radiation from its star, Canopus. (Canopus is a real star also known as Alpha Carinae. It's visible in the Southern Hemisphere and is the second brightest star in the sky. Unfortunately for Dune fans, it isn't known to have planets.) If Arrakis were real, it would be located about as far from Canopus as Pluto is from the sun, Farnsworth says. But Canopus is a large white star calculated to be about 7,200 degrees Celsius. "That's significantly hotter than the sun," which runs about 2,000 degrees cooler, Farnsworth says. But "there's a lot of supposition and assumptions they made in here, and whether those are accurate numbers or not, I can't say."
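As a rough back-of-envelope using only the figures quoted above (and ignoring saturation, lifetime, and overlap effects, so this is an illustration rather than a number from the study), 0.5 percent ozone dwarfs the planet's CO2 in warming terms:

```latex
% Back-of-envelope CO2-equivalent of Arrakis' lower-atmosphere ozone,
% using the 65x potency figure quoted above (illustrative only)
\[
  0.5\% = 5000~\mathrm{ppm~O_3}, \qquad
  5000~\mathrm{ppm} \times 65 \approx 3.3 \times 10^{5}~\mathrm{ppm~CO_2\text{-}eq}
  \;\gg\; 350~\mathrm{ppm~CO_2}
\]
```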

The climate simulation revealed that Arrakis probably wouldn't be exactly as Herbert described it. For instance, in one throwaway line, the author described polar ice caps receding in the summer heat. But Farnsworth and colleagues say it would be far too hot at the poles, about 70° C during the summer, for ice caps to exist at all. Plus, there would be too little precipitation to replenish the ice in the winter. High clouds and other processes would warm the atmosphere at the poles and keep it warmer than lower latitudes, especially in the summertime. Although Herbert's novels have people living in the midlatitudes and close to the poles, the extreme summer heat and bone-chilling -40C to -75C temperatures in the winters would make those regions nearly unlivable without technology, Farnsworth says. Temperatures in Arrakis' tropical latitudes would be relatively more pleasant at 45C in the warmest months and about 15C in colder months. On Earth, high humidity in the tropics makes it far warmer than at the poles. But on Arrakis, "most of the atmospheric moisture was essentially removed from the tropics," making even the scorching summers more tolerable. The poles are where clouds and the paltry amount of moisture gather and heat the atmosphere. But the tropics on Arrakis pose their own challenges. Hurricane force winds would regularly sandblast inhabitants and build dunes up to 250 meters tall, the researchers calculate. It doesn't mean people couldn't live on Arrakis, just that they'd need technology and lots of off-world support to bring in food and water, Farnsworth says. "I'd say it's a very livable world, just a very inhospitable world."

Wikipedia

Rogue Editors Started a Competing Wikipedia That's Only About Roads (gizmodo.com) 57

An anonymous reader quotes a report from Gizmodo: For 20 years, a loosely organized group of Wikipedia editors toiled away curating a collection of 15,000 articles on a single subject: the roads and highways of the United States. Despite minor disagreements, the US Roads Project mostly worked in harmony, but recently, a long-simmering debate over the website's rules drove this community to the brink. Efforts at compromise fell apart. There was a schism, and in the fall of 2023, the editors packed up their articles and moved over to a website dedicated to roads and roads alone. It's called AARoads, a promised land where the editors hope, at last, that they can find peace. "Roads are a background piece. People drive on them every day, but they don't give them much attention," said editor Michael Gronseth, who goes by Imzadi1979 on Wikipedia, where he dedicated his work to Michigan highways, specifically. But a road has so much to offer if you look beyond the asphalt. It's the nexus of history, geography, travel, and government, a seemingly perfect subject for the hyper-fixations of Wikipedia. "But there was a shift about a year ago," Gronseth said. "More editors started telling us that what we're doing isn't important enough, and we should go work on more significant topics." [...]

The Roads Project had a number of adversaries, but the chief rival is a group known as the New Page Patrol, or the NPP for short. The NPP has a singular mission. When a new page goes up on Wikipedia, it gets reviewed by the NPP. The Patrol has special editing privileges and if a new article doesn't meet the website's standards, the NPP takes it down. "There's a faction of people who feel that basically anything is valid to be published on Wikipedia. They say, 'Hey, just throw it out there! Anything goes.' That's not where I come down," said Bil Zeleny, a former member of the NPP who goes by onel5969 on Wikipedia, a reference to the unusual spelling of his first name. At his peak, Zeleny said he was reviewing upwards of 100,000 articles a year, and he rejected a lot of articles about roads during his time. After years of frustration, Zeleny felt he was seeing too many new road articles that weren't following the rules -- entire articles that cited nothing other than Google Maps, he said. Enough was enough. Zeleny decided it was time to bring the subject to the council.

Zeleny brought up the problem on the NPP discussion forum, sparking months of heated debate. Eventually, the issue became so serious that some editors proposed an official policy change on the use of maps as a source. Rule changes require a process called "Request for Comment," where everyone is invited to share their thoughts on the issue. Over the course of a month, Wikipedia users had written more than 56,000 words on the subject. For reference, that's about twice as long as Ernest Hemingway's novel The Old Man and the Sea. In the end, the roads project was successful. The vote was decisive, and Wikipedia updated its "No Original Research" policy to clarify that it's ok to cite maps and other visual sources. But this, ultimately, was a victory with no winners. "Some of us felt attacked," Gronseth said. On the US Roads Project's Discord channel, a different debate was brewing. The website didn't feel safe anymore. What would happen at the next request for comment? The community decided it was time to fork. "We don't want our articles deleted. It didn't feel like we had a choice," he said.

The Wikipedia platform is designed for interoperability. If you want to start your own Wiki, you can split off and take your Wikipedia work with you, a process known as "forking." [...] Over the course of several months, the US Roads Project did the same. Leaving Wikipedia was painful, but the fight that drove the roads editors away was just as difficult for people on the other side. Some editors embroiled in the roads fights deleted their accounts, though none of these ex-Wikipedians responded to Gizmodo's requests for comment. Bil Zeleny was among the casualties. After almost six years of hard work on the New Page Patrol, he reached the breaking point. The controversy had pushed him too far, and Zeleny resigned from the NPP. [...] AARoads actually predates Wikipedia, tracing its origins all the way back to the prehistoric internet days of the year 2000, complete with articles, maps, forums, and a collection of over 10,000 photos of highway signs and markers. When the US Roads Project needed a new home, AARoads was happy to oblige. It's a beautiful resource. It even has backlinks to relevant non-roads articles on the regular Wikipedia. But for some, it isn't home.
"There are members who disagree with me, but my ultimate goal is to fork back," said Gronseth. "We made our articles license-compatible, so they can be exported back to Wikipedia someday if that becomes an option. I don't want to stay separate. I want to be part of the Wikipedia community. But we don't know where things will land, and for now, we've struck out on our own."
AI

AI-Generated Articles Prompt Wikipedia To Downgrade CNET's Reliability Rating (arstechnica.com) 54

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness. "The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022," adds Ars Technica. Futurism first reported the news. From the report: Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication. "CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard. "So far the experiment is not going down well, as it shouldn't. I haven't found any yet, but any of these articles that make it into a Wikipedia article need to be removed." After other editors agreed in the discussion, they began the process of downgrading CNET's reliability rating.

As of this writing, Wikipedia's Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a "generally reliable" source; (2) between October 2020 and present, when Wikipedia notes that the site was acquired by Red Ventures in October 2020, "leading to a deterioration in editorial standards" and saying there is no consensus about reliability; and (3) between November 2022 and January 2023, when Wikipedia considers CNET "generally unreliable" because the site began using an AI tool "to rapidly generate articles riddled with factual inaccuracies and affiliate links."

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company's publications. This lack of transparency was a key factor in the decision to downgrade CNET's reliability rating.
A CNET spokesperson said in a statement: "CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."
Open Source

'Paying People To Work on Open Source is Good Actually' 40

Jacob Kaplan-Moss, one of the lead developers of Django, writes in a long post that he says has come from a place of frustration: [...] Instead, every time a maintainer finds a way to get paid, people show up to criticize and complain. Non-OSI licenses "don't count" as open source. Someone employed by Microsoft is "beholden to corporate interests" and not to be trusted. Patreon is "asking for handouts." Raising money through GitHub sponsors is "supporting Microsoft's rent-seeking." VC funding means we're being set up for a "rug pull" or "enshitification." Open Core is "bait and switch."

None of this is hypothetical; each of these examples are actual things I've seen said about maintainers who take money for their work. One maintainer even told me he got criticized for selling t-shirts! Look. There are absolutely problems with every tactic we have to support maintainers. It's true that VC investment comes with strings attached that often lead to problems down the line. It sucks that Patreon or GitHub (and Stripe) take a cut of sponsor money. The additional restrictions imposed by PolyForm or the BSL really do go against the Freedom 0 ideal. I myself am often frustrated by discovering that some key feature I want out of an open core tool is only available to paid licensees.

But you can criticize these systems while still supporting and celebrating the maintainers! Yell at A16Z all you like, I don't care. (Neither do they.) But yelling at a maintainer because they took money from a VC is directing that anger in the wrong direction. The structural and societal problems that make all these different funding models problematic aren't the fault of the people trying to make a living doing open source. It's like yelling at someone for shopping at Dollar General when it's the only store they have access to. Dollar General's predatory business model absolutely sucks, as do the governmental policies that lead to food deserts, but none of that is on the shoulders of the person who needs milk and doesn't have alternatives.
United States

Wildfires Threaten Nuclear Weapons Plant In Texas (independent.co.uk) 68

An anonymous reader quotes a report from The Independent: Wildfires sweeping across Texas briefly forced the evacuation of America's main nuclear weapons facility as strong winds, dry grass and unseasonably warm temperatures fed the blaze. Pantex Plant, the main facility that assembles and disassembles America's nuclear arsenal, shut down its operations on Tuesday night as the Windy Deuce fire roared towards the Potter County location. Pantex re-opened and resumed operations as normal on Wednesday morning. Pantex is about 17 miles (27.36 kilometers) northeast of Amarillo and some 320 miles (515 kilometers) northwest of Dallas. Since 1975 it has been the US's main assembly and disassembly site for its atomic bombs. It assembled the last new bomb in 1991. "We have evacuated our personnel, non-essential personnel from the site, just in an abundance of caution," said Laef Pendergraft, a spokesperson for National Nuclear Security Administration's Production Office at Pantex. "But we do have a well-equipped fire department that has trained for these scenarios, that is on-site and watching and ready should any kind of real emergency arise on the plant site."
