Power

How a Screwdriver Slip Caused a Fatal 1946 Atomic Accident (bbc.com) 67

Long-time Slashdot reader theodp writes: A specially illustrated BBC story created by artist/writer Ben Platts-Mills tells the remarkable story of how a dangerous radioactive apparatus in the Manhattan Project killed a scientist in 1946.

"Less than a year after the Trinity atomic bomb test," Platts-Mills writes, "a careless slip with a screwdriver cost Louis Slotin his life. In 1946, Slotin, a nuclear physicist, was poised to leave his job at Los Alamos National Laboratories (formerly the Manhattan Project). When his successor came to visit his lab, he decided to demonstrate a potentially dangerous apparatus, called the "critical assembly". During the demo, he used his screwdriver to support a beryllium hemisphere over a plutonium core. It slipped, and the hemisphere dropped over the core, triggering a burst of radiation. He died nine days later."

In an interesting follow-up story, Platts-Mills explains how he pieced together what happened inside the room where 'The Blue Flash' occurred (it has been observed that many criticality accidents emit a blue flash of light).

15 years later there were more fatalities at a nuclear power plant after the Atomic Energy Commission opened the National Reactor Testing Station in a desert west of Idaho Falls, according to Wikipedia: The event occurred at an experimental U.S. Army plant known as the Argonne Low-Power Reactor, which the Army called the Stationary Low-Power Reactor Number One (SL-1)... Three trained military men had been working inside the reactor room when a mistake was made while reattaching a control rod to its motor assembly. With the central control rod nearly fully extended, the nuclear reactor, rated at 3 MW, rapidly increased power to 20 GW. This rapidly boiled the water inside the core.

As the steam expanded, a pressure wave of water forcefully struck the top of the reactor vessel, upon which two of the men stood. The explosion was so severe that the reactor vessel was propelled nine feet into the air, striking the ceiling before settling back into its original position. One man was impaled by a shield plug that pinned him to the ceiling, where he died instantly. The other men died from their injuries within hours. The three men were buried in lead coffins, and that entire section of the site was buried.

"The core meltdown caused no damage to the area, although some radioactive nuclear fission products were released into the atmosphere."

This week Idaho Falls became one of the sites re-purposed for possible utility-scale clean energy projects as part of America's "Cleanup to Clean Energy" initiative.
GNU is Not Unix

Libreboot Creator Says After Coding a Fork for 'GNU Boot Project', FSF Sent a Cease-and-Desist Letter Over Its Name (libreboot.org) 105

Libreboot is a distribution of coreboot "aimed at replacing the proprietary BIOS firmware contained by most computers," according to Wikipedia. It was briefly part of the GNU project, until maintainer Leah Rowe and the GNU project agreed to part ways in 2017.

But here in 2023, the GNU project has created a fork of Libreboot named GNU Boot... The GNU Boot fork "currently does not have a website and does not have any releases of its own," points out Libreboot's Leah Rowe, adding "My intent is to help them, and they are free — encouraged — to re-use my work... " But things have gotten messy, writes Rowe: They forked Libreboot, due to disagreement with Libreboot's Binary Blob Reduction Policy. This is a pragmatic policy, enacted in November 2022, to increase the number of coreboot users by increasing the amount of hardware supported in Libreboot... I wish GNU Boot all the best success. Truly. Although I think their project is entirely misguided (for reasons explained by modern Libreboot policy), I do think there is value in it. It provides continuity for those who wish to use something resembling the old Libreboot project...

When GNU Boot first launched, as a failed hostile fork of Libreboot under the same name, I observed: their code repository was based on Libreboot from late 2022, and their website based on Libreboot in late 2021. Their same-named Libreboot site was announced during LibrePlanet 2023... [N]ow they are calling themselves GNU Boot, and it is indeed GNU, but it still has the same problem as of today: still based on very old Libreboot, and they don't even have a website. According to [the FSF's Savannah software repository], GNU Boot was created on 11 June 2023. Yet no real development, in over a month since then...

I've decided that I want to help them... I decided recently that I'd simply make a release for them, exactly to their specifications (GNU Free System Distribution Guidelines), talking favourably about FSF/GNU, and so on. I'm in a position to do it (thus scratching the itch), so why not? I did this release for them — it's designated non-GeNUine Boot 20230717, and I encourage them to re-use this in their project, to get off the ground. This completely leapfrogs their current development; it's months ahead. Months. It's 8 months ahead, since their current revision is based upon Libreboot from around ~October 2022...

The GNU Boot people actually sent me a cease and desist email, citing trademark infringement. Amazing...

I complied with their polite request and have renamed the project to non-GeNUine Boot. The release archive was re-compiled, under this new brand name and the website was re-written accordingly. Personally, I like the new name better.

Cloud

Building a Better Server? Oxide Computer Ships Its First Rack (thenewstack.io) 29

Oxide Computer Company spent four years working toward "The power of the cloud in your data center... bringing hyperscaler agility to the mainstream enterprise." And on June 30, Oxide finally shipped its very first server rack.

Long-time Slashdot reader destinyland shares this report: It's the culmination of years of work — to fulfill a long-standing dream. In December of 2019, Oxide co-founder Jess Frazelle had written a blog post remembering conversations over the past year with people who'd been running their own workloads on-premises... "Hyperscalers like Facebook, Google, and Microsoft have what I like to call 'infrastructure privilege' since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else!"

Frazelle had seen a chance to make an impact with "better integration between the hardware and software stacks, better power distribution, and better density. It's even better for the environment due to the energy consumption wins."

Oxide CTO Bryan Cantrill sees real problems in the proprietary firmware that sits between hardware and system software — so Oxide's server eliminates the BIOS and UEFI altogether, and replaces the hardware-managing baseboard management controller (or BMC) with "a proper service processor." They even wrote their own custom, all-Rust operating system (named Hubris). On the Software Engineering Daily podcast, Cantrill says "These things boot like a rocket."

And it's all open source. "Everything we do is out there for people to see and understand..." Cantrill added. On the Changelog podcast Cantrill assessed its significance. "I don't necessarily view it as a revolution in its own right, so much as it is bringing the open source revolution to firmware."

Oxide's early funders include 92-year-old Pierre Lamond (who hired Andy Grove at Fairchild Semiconductor) — and customers who supported their vision. On Software Engineering Daily's podcast Cantrill points out that "If you're going to use a lot of compute, you actually don't want to rent it — you want to own it."
Power

US Pulls Authorization for Lithium Exploration Project in Southern Nevada, Citing Wildlife (apnews.com) 145

Tuesday North America's largest lithium mining operation cleared its last legal hurdle in federal appeals court, giving a green light to the mining of 6,000 acres in an 18,000-acre project site near Nevada's northern border.

But meanwhile, in Southern Nevada... Federal land managers have formally withdrawn their authorization of a Canadian mining company's lithium exploration project bordering a national wildlife refuge in southern Nevada after conservationists sought a court order to block it.

The Center for Biological Diversity and the Amargosa Conservancy said in a lawsuit filed July 7 that the project on the edge of the Ash Meadows National Wildlife Refuge outside Las Vegas posed an illegal risk to a dozen fish, snail and plant species currently protected under the Endangered Species Act. They filed an additional motion this week in federal court seeking a temporary injunction prohibiting Rover Metals from initiating the drilling of 30 bore sites in search of the highly sought-after metal used to manufacture batteries for electric vehicles.

But before a judge in Las Vegas could rule on the request, the Bureau of Land Management notified Rover Metals on Wednesday that its earlier acceptance of the company's notice of its intent to proceed "was in error... The agency has concluded that proposed operations are likely to result in disturbance to localized groundwaters that supply the connected surface waters associated with Threatened and Endangered species in local springs," said Angelita Bulletts, district manager of the bureau's southern Nevada district...

Conservationists said the reversal provides at least a temporary reprieve for the lush oasis in the Mojave Desert that is home to 25 species of fish, plants, insects and snails that are found nowhere else on Earth — one of the highest concentrations of endemic species in North America at one of the hottest, driest places on the planet.

The article ends with this quote from a director at the Center for Biological Diversity and the Amargosa Conservancy. "We need lithium for our renewable energy transition, but this episode sends a message loud and clear that some places are just too special to drill."
AI

Sixth 'Hutter Prize' Awarded for Achieving New Data Compression Milestone (hutter1.net) 64

Since 2006, Slashdot has been covering a contest CmdrTaco once summarized as "Compress Wikipedia and Win." It rewards progress on compressing a 1-billion-character excerpt of Wikipedia — approximately the amount that a human can read in a lifetime.

And today a new record was announced. The 1 billion characters have now been compressed to just 114,156,155 bytes — about 114 megabytes, or just 11.41% of the original size — by Saurabh Kumar, a New York-based quantitative developer for a high-frequency/algorithmic trading and financial services fund. The amount of each "Hutter Prize for Lossless Compression of Human Knowledge" increases based on how much compression is achieved (so if you compress the file x% better you receive x% of the prize). Kumar's compression was 1.04% smaller than the previous record, so they'll receive €5187.
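That payout rule is easy to check. Below is a minimal sketch of the arithmetic in Python; the EUR 500,000 total prize fund comes from the contest rules at hutter1.net, and the previous-record size is back-derived from the 1.04% figure above rather than taken from an official source.

# Sketch of the Hutter Prize payout rule: the award scales linearly with
# the relative size reduction over the previous record.
PRIZE_FUND_EUR = 500_000         # total fund per the contest rules (hutter1.net)
new_size = 114_156_155           # Kumar's winning archive, in bytes
improvement = 0.0104             # 1.04% smaller than the previous record

prev_size = new_size / (1 - improvement)             # back-derived, ~115.4 MB
award = PRIZE_FUND_EUR * (1 - new_size / prev_size)

print(f"implied previous record: {prev_size:,.0f} bytes")
print(f"award: EUR {award:,.0f}")  # ~EUR 5,200; the official EUR 5187 uses exact sizes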

But "The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI," said Marcus Hutter (now a senior researcher at Google DeepMind) in a 2020 interview with Lex Fridman.

17 years after their original post announcing the competition, Baldrson (Slashdot reader #78,598) returns to explain the contest's significance to AI research, starting with a quote from mathematician Gregory Chaitin — that "Compression is comprehension."

But they emphasize that the contest also has one specific hardware constraint rooted in theories of AI optimization: The Hutter Prize is geared toward research in that it restricts computation resources to the most general purpose hardware that is widely available. Why? As described by the seminal paper "The Hardware Lottery" by Sara Hooker, AI research is biased toward algorithms optimized for existing hardware infrastructure. While this hardware bias is justified for engineering (applying existing scientific understanding to the "utility function" of making money), it can, to quote Sara Hooker, "delay research progress by casting successful ideas as failures."

The complaint that this is "mere" optimization ignores the fact that this was done on general purpose computation hardware, and is therefore in line with the spirit of Sara Hooker's admonition to researchers in "The Hardware Lottery". By showing how to optimize within the constraint of general purpose computation, Saurabh's contribution may help point the way toward future directions in hardware architecture.

Movies

Code.org Embraces Barbie 9 Years After Helping Take Her Down (tynker.com) 75

Long-time Slashdot reader theodp writes: The number one movie in North America is Warner Bros. Discovery's Barbie, which Deadline reports has teamed up with Oppenheimer to fuel a mind-blowing $300M+ box office weekend. ["Oppenheimer Shatters Expectations with $80 Million Debut," read the headline at Variety.]

Now it seems everybody is trying to tap into Barbie buzz, including Microsoft's Xbox [which added Barbie and Ken's cars to Forza Horizon 5] and even Microsoft-backed education nonprofit Code.org. ("Are your students excited about Barbie The Movie? Have them try an HourOfCode [programming game] with Barbie herself!").

The idea is to inspire young students to become coders. But as Code.org shares Instagram images of a software developer Barbie, Slashdot reader theodp remembers when, nine years ago, Code.org's CEO "took to Twitter to blast Barbie and urge for her replacement." They'd joined a viral 2014 protest against Computer Engineer Barbie that arose in response to the publication of Barbie F***s It Up Again, a scathing and widely reported-on blog post that prompted Mattel to immediately pull the book Barbie: I Can Be a Computer Engineer from Amazon. The backlash may have contributed to Barbie losing her crown as the most popular girls' toy to Disney's Frozen princesses Elsa and Anna in the ensuing 2014 holiday season, and a few months later the Mattel exec who had to apologize for Computer Engineer Barbie was called to the White House for a sit-down. (Barbie got a brainy makeover soon thereafter)...

The following year, Disney-owned Lucasfilm and Code.org teamed up on Star Wars: Building a Galaxy with Code, a signature tutorial for the 2015 Hour of Code. Returning to a Disney princess theme in 2016, Disney and Code.org revealed a new Hour of Code tutorial featuring characters from the animated film Moana just a day ahead of its theatrical release. It was later noted that Moana's screenwriters included Pamela Ribon, who penned the 2014 Barbie-blasting blog post that ended Barbie's short reign as the Hour of Code role model of choice for girls.

Interestingly, Ribon seems to bear no Barbie grudges either, tweeting on the day of the Barbie movie release, "I was like holy s*** can't wait to see it."

To be fair, the movie's trailer promises "If you hate Barbie, this movie is for you," in a deconstruction where Barbie is played by D.C. movies' "Harley Quinn" actress Margot Robbie (Suicide Squad, Birds of Prey), whose other roles include Tonya Harding and the home-wrecking second wife in The Wolf of Wall Street.
Red Hat Software

RHEL Response Discussed by SFC Conference's Panel - Including a New Enterprise Linux Standard (sfconservancy.org) 66

Last weekend in Portland, Oregon, the Software Freedom Conservancy hosted a new conference called the Free and Open Source Software Yearly.

And long-time free software activist Bradley M. Kuhn (currently a policy fellow/hacker-in-residence for the Software Freedom Conservancy) hosted a lively panel discussion on "the recent change" to public source code releases for Red Hat Enterprise Linux, which shed light on what may happen next. The panel also included:
  • benny Vasquez, the Chair of the AlmaLinux OS Foundation
  • Jeremy Allison, Samba co-founder and software engineer at CIQ (focused on Rocky Linux). Allison is also Slashdot reader #8,157, posting as Jeremy Allison - Sam.
  • James (Jim) Wright, Oracle's chief architect for Open Source policy/strategy/compliance/alliances

"Red Hat themselves did not reply to our repeated requests to join us on this panel... SUSE was also invited but let us know they were unable to send someone on short notice to Portland for the panel."

One interesting audience question for the panel came from Karsten Wade, a one-time Red Hat senior community architect who left the company in April after 21 years, and who said he was "responsible for bringing the CentOS team onboard to Red Hat." Wade argued that CentOS "was always doing a clean rebuild from source RPMS of their own..." So "isn't all of this thunder doing Red Hat's job for them, of trying to get everyone to say, 'This thing is not the equivalent to RHEL.'"

In response Jeremy Allison made a good point. "None of us here are the arbiters of whether it's good enough of a rebuild of Red Hat Linux. The customers are the arbiters." But this led to an audience member asking a very forward-looking question: what are the chances the community could adopt a new (and open) enterprise Linux standard that distributions could follow? AlmaLinux's Vasquez replied, "Chances are real high... I think everyone sees that as the obvious answer. I think that's the obvious next step. I'll leave it at that." And Oracle's Wright added "to the extent that the market asks us to standardize? We're all responsive."

When asked if they'd consider adding features not found in RHEL ("such as high-security gates through reproducible builds") AlmaLinux's Vasquez said "100% -- yeah. One of the things that we're kind of excited about is the opportunities that this opens for us. We had decided we were just going to focus on this north star of 1:1 Red Hat no matter what -- and with that limitation being removed, we have all kinds of options." And CIQ's Alison said "We're working on FIPS certification for an earlier version of Rocky, that Red Hat, I don't believe, FIPS certified. And we're planning to release that."

AlmaLinux's Vasquez emphasized later that "We're just going to build Enterprise Linux. Red Hat has done a great job of establishing a fantastic target for all of us, but they don't own the rights to enterprise Linux. We can make this happen, without forcing an uncomfortable conversation with Red Hat. We can get around this."

And Allison later applied a "Star Wars" quote to Red Hat's predicament. "The more things you try and grab, the more things slip through your fingers." That is, "The more somebody tries to exert control over a codebase, the more the pushback will occur from people who collaborate in that codebase." AlmaLinux's Vasquez also said they're already "in conversations" with independent software vendors about the "flow of support" into non-Red Hat distributions -- though that's always been the case. "Finding ways to reduce the barrier for those independent software vendors to add official support for us is, like, maybe more cumbersome now, but it's the same problem that we've had..."

Early in the discussion Oracle's Jim Wright pointed out that even Red Hat's own web site defines open source code as "designed to be publicly accessible — anyone can see, modify, and distribute the code as they see fit." ("Until now," Wright added pointedly...) There was some mild teasing of Oracle during the 50-minute discussion -- someone asked at one point if they'd re-license their proprietary implementation of ZFS under the GPL. But at the end of the panel, Oracle's Jim Wright still reminded the audience that "If you want to work on open source Linux, we are hiring."

Read Slashdot's transcript of highlights from the discussion.


Earth

Eating Less Meat 'Like Taking 8 Million Cars Off the Road' (bbc.com) 373

"Having big U.K. meat-eaters cut some of it out of their diet would be like taking 8 million cars off the road," reports the BBC: That's just one of the findings of new research that scientists say gives the most reliable calculation yet of how what we eat impacts our planet.

The Oxford University study is the first to pinpoint the difference high- and low-meat diets have on greenhouse gas emissions, researchers say... [Oxford University] professor Peter Scarborough, who is part of the Livestock Environment And People project, surveyed 55,000 people, dividing them into big meat-eaters (more than 100g of meat a day, which equates to a big burger), low meat-eaters (a daily intake of 50g or less, approximately a couple of chipolata sausages), fish-eaters, vegetarians and vegans... The research shows that a big meat-eater's diet produces an average of 10.24 kg of planet-warming greenhouse gasses each day. A low meat-eater produces almost half that at 5.37 kg per day. [Fish diet: 4.74 kg. "Vegetarian" diet: 4.16 kg] And for vegan diets — it's halved again to 2.47 kg a day.
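Those per-day figures allow a rough sanity check of the 8-million-cars comparison. A minimal sketch, with one labeled assumption: a typical UK passenger car emits roughly 1.7 tonnes of CO2-equivalent per year (our figure, not the article's).

# Back-of-envelope check using only the per-day diet figures quoted above.
BIG_MEAT_KG_PER_DAY = 10.24      # big meat-eater, kg CO2e per day
LOW_MEAT_KG_PER_DAY = 5.37       # low meat-eater, kg CO2e per day
CAR_TONNES_PER_YEAR = 1.7        # assumed annual emissions of a typical UK car

saving_per_person = (BIG_MEAT_KG_PER_DAY - LOW_MEAT_KG_PER_DAY) * 365 / 1000
switchers_needed = 8_000_000 * CAR_TONNES_PER_YEAR / saving_per_person

print(f"saving per switcher: {saving_per_person:.2f} t CO2e/year")          # ~1.78
print(f"switchers equal to 8M cars: {switchers_needed / 1e6:.1f} million")  # ~7.7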

The analysis is the first to look at the detailed impact of diets on other environmental measures all together. These are land use, water use, water pollution and loss of species, usually caused by loss of habitat because of expansion of farming. In all cases high meat-eaters had a significantly higher adverse impact than other groups...

A separate study also published in Nature Food in 2021 concluded that food production was responsible for a third of all global greenhouse gas emissions. And an independent review for the Department for the Environment Food and Rural Affairs (Defra) called for a 30% reduction in meat consumption by 2032 in order to meet the UK's net zero target.

"The meat industry said the analysis overstated the impact of eating meat."

Thanks to long-time Slashdot reader beforewisdom for sharing the article.
Movies

Hollywood Movie Aside, Just How Good a Physicist Was Oppenheimer? (science.org) 91

sciencehabit shares a report from Science: This week, the much anticipated movie Oppenheimer hits theaters, giving famed filmmaker Christopher Nolan's take on the theoretical physicist who during World War II led the Manhattan Project to develop the first atomic bomb. J. Robert Oppenheimer, who died in 1967, is known as a charismatic leader, eloquent public intellectual, and Red Scare victim who in 1954 lost his security clearance in part because of his earlier associations with suspected Communists. To learn about Oppenheimer the scientist, Science spoke with David C. Cassidy, a physicist and historian emeritus at Hofstra University. Cassidy has authored or edited 10 books, including J. Robert Oppenheimer and the American Century. How did Oppenheimer compare to Einstein? Did he actually make any substantive contributions to THE Bomb? And why did he eventually lose his security clearance?
Moon

Scientists Have Found a Hot Spot on the Moon's Far Side (universetoday.com) 46

Wikipedia notes that "Today, the Moon has no active volcanoes even though a significant amount of magma may persist under the lunar surface."

But this week the New York Times reports that "The rocks beneath an ancient volcano on the moon's far side remain surprisingly warm, scientists have revealed using data from orbiting Chinese spacecraft." The findings, which appeared last week in the journal Nature, help explain what happened long ago beneath an odd part of the moon. The study also highlights the scientific potential of data gathered by China's space program, and how researchers in the United States have to circumvent obstacles to use that data...

The Chinese orbiters both had microwave instruments, common on many Earth-orbiting weather satellites but rare on interplanetary spacecraft. The data from Chang'e-1 and Chang'e-2 thus provided a different view of the moon, measuring the flow of heat up to 15 feet below the surface — and proved ideal for investigating the oddity... At Compton-Belkovich, the heat flow was as high as 180 milliwatts per square meter, or about 20 times the average for the highlands of the moon's far side. That measure corresponds to a temperature of minus 10 degrees Fahrenheit about six feet below the surface, or about 90 degrees warmer than elsewhere. "This one stuck out, as it was just glowing hot compared to anywhere else on the moon," said Matthew Siegler, a scientist at the Planetary Science Institute, headquartered in Tucson, Ariz., who led the research...
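A quick unpacking of those figures, using only the numbers quoted in the article (a sketch, with Fahrenheit converted to Celsius for convenience):

# "20 times the average" implies a far-side highlands baseline of ~9 mW/m^2.
hotspot_flux_mw = 180.0
highlands_avg_mw = hotspot_flux_mw / 20

def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

print(f"implied highlands average: {highlands_avg_mw:.0f} mW/m^2")   # 9
print(f"hotspot at ~6 ft depth: {f_to_c(-10):.0f} C")                # ~ -23 C
print(f"typical surroundings:   {f_to_c(-10 - 90):.0f} C")           # ~ -73 C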

"Now we need the geologists to figure out how you can produce that kind of feature on the moon without water, without plate tectonics," Dr. Siegler said.

Universe Today believes this could help scientists better understand the moon's past. "What makes this finding unique is the source of the hotspot isn't active volcanism, such as molten lava, but from radioactive elements within the now-solidified rock that was once molten lava billions of years ago."

Thanks to Slashdot reader rolodexter for sharing the news.
Mars

Rover Sampling Finds Organic Molecules In Water-Altered Rocks (arstechnica.com) 8

The Perseverance rover's Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals (SHERLOC) instrument, designed to analyze organic chemicals on Mars, has provided valuable insights into the presence and distribution of potential organic materials on the surface of Mars. The findings have been published in the journal Nature. An anonymous reader shares a report from Ars Technica: SHERLOC comes with a deep-UV laser to excite molecules into fluorescing, and the wavelengths they fluoresce at can tell us something about the molecules present. It's also got the hardware to do Raman spectroscopy simultaneously. Collectively, these two capabilities indicate what kinds of molecules are present, though they can't typically identify specific chemicals. And, critically, SHERLOC provides spatial information, telling us where sample-specific signals come from. This allows the instrument to determine which chemicals are located in the same spot in a rock and thus were likely formed or deposited together.
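To make that co-location idea concrete, here is a toy sketch (not the actual SHERLOC pipeline; the maps and threshold are invented stand-ins): treat one scan as two per-pixel intensity maps, one for a fluorescence band suggesting organics and one for a Raman band of a mineral such as sulfate, then flag the pixels where both signals exceed a detection threshold.

# Toy co-location analysis: random arrays stand in for real per-pixel maps.
import numpy as np

rng = np.random.default_rng(0)
organic_map = rng.random((64, 64))   # stand-in for a fluorescence band map
mineral_map = rng.random((64, 64))   # stand-in for a Raman sulfate band map

THRESHOLD = 0.8                      # illustrative detection threshold
co_located = (organic_map > THRESHOLD) & (mineral_map > THRESHOLD)

print(f"pixels where both signals appear: {co_located.sum()}")
print(f"fraction of the scan co-located:  {co_located.mean():.1%}")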

SHERLOC can sample rocks simply by being held near them. The new results are based on a set of samples from two rock formations found on the floor of the Jezero crater. In some cases, the imaging was done by pointing it directly at a rock; in others, the rock surface, and any dust and contaminants it contained, was abraded away by Perseverance before the imaging was done. SHERLOC identified a variety of signatures of potential organic material in these samples. There were a few cases where it was technically possible that the signatures were produced by a very specific chemical that lacked carbon (primarily cerium salts). But, given the choice between a huge range of organic molecules or a very specific salt, the researchers favor organic materials as the source. One thing that was clear was that the level of organic material present changed over time. The deeper, older layer called Seitah only had a tenth of the material found in the Maaz rocks that formed above them. The reason for this difference isn't clear, but it indicates that either the production or deposition of organic material on Mars has changed over time.

Between the different samples and the ability to resolve different regions of the samples, the researchers were able to identify distinct signals that each occurred in many samples. While it wasn't possible to identify the specific molecule responsible, they were able to say a fair bit about them. One signal came from samples that contained a ringed organic compound, along with sulfates. The most common signal came from a two-ringed organic molecule, and was associated with various salts: phosphate, sulfate, silicates, and potentially a perchlorate. Another likely contained a benzene ring associated with iron oxides. A different ringed compound was found in two of the samples. Overall, the researchers conclude that these differences are significant. The fact that distinct organic chemicals are consistently associated with different salts suggests that there were either several distinct ways of synthesizing the organics or that they were deposited and preserved under distinct conditions. Many of the salts seen here are also associated with either water-based deposition or water-driven chemical alteration of the rock -- again, consistent with the processes involved changing over time. Collectively, the researchers say this argues against the organic chemicals simply having been delivered to Mars on a meteorite.

Programming

Why Are There So Many Programming Languages? (acm.org) 160

Long-time Slashdot reader theodp writes: Recalling a past Computer History Museum look at the evolution of programming languages, Doug Meil ponders the age-old question of Why Are There So Many Programming Languages? in a new Communications of the ACM blog post.

"It's worth noting and admiring the audacity of PL/I (1964)," Meil writes, "which was aiming to be that 'one good programming language.' The name says it all: Programming Language 1. There should be no need for 2, 3, or 4. [Meil expands on this thought in Lessons from PL/I: A Most Ambitious Programming Language.] Though PL/I's plans of becoming the Highlander of computer programming didn't play out like the designers intended, they were still pulling on a key thread in software: why so many languages? That question was already being asked as far back as the early 1960's."

One of PL/I's biggest fans was Digital Research Inc. (DRI) founder Gary Kildall, who crafted the PL/I-inspired PL/M (Programming Language for Microcomputers) in 1973 for Intel. But IBM priced PL/I higher than the languages it sought to replace, contributing to PL/I's failure to gain traction. (Along the lines of how IBM's deal with Microsoft gave rise to a price disparity that was the undoing of Kildall's CP/M OS. Bundled with every PC in a 'non-royalty' deal, PC DOS was priced at $40 while CP/M was offered 'a la carte' at $240.) As a comp.lang.pl1 poster explained in 2006, "The truth of the matter is that Gresham's Law: 'Bad money drives out good' or Ruskin's principle: 'The hoi polloi always prefer an inferior, cheap product over a superior, more expensive one' are what govern here."

Games

Mid-1990s Sega Document Leak Shows How It Lost the Second Console War To Sony (arstechnica.com) 35

An anonymous reader shares a report: Most of the changes on the Sega Retro wiki every day are tiny things, like single-line tweaks to game details or image swaps. Early Monday morning, the site got something else: A 47MB, 272-page PDF full of confidential emails, notes, and other documents from inside a company with a rich history, a strong new competitor, and deep questions about what to do next.

The document offers glimpses, windows, and sometimes pure numbers that explain how Sega went from a company that broke Nintendo's near-monopoly in the early 1990s to giving up on consoles entirely after the Dreamcast. Enthusiasts and historians can see the costs, margins, and sales of every Sega system sold in America by 1997 in detailed business plan spreadsheets. Sega's Wikipedia page will likely be overhauled with the information contained in inter-departmental emails, like the one where CEO Tom Kalinske assures staff (and perhaps himself) that "we are killing Sony" in Japan in March 1996.

"Wish I could get our staff, sales people, retailers, analysts, media, etc. to see and understand what's happening in Japan. They would then understand why we will win here in the US eventually," Kalinske wrote. By September 1996, this would not be the case, and Kalinske would tender his resignation. Not all of the compilation is quite so direct or relevant. There are E3 floor plans, nitpicks about marketing campaigns, and the occasional incongruity. There is a Post-It note stuck to the front of the "Brand Strategy" folder -- "Screw Technology, what is bootleg 96/97" -- that I will be thinking about for days.

Microsoft

Microsoft's Light-Based, Transistor-less Computer Solves Complex Optimization Problems at the Speed of Light (techspot.com) 65

"Picture a world where computing is not limited by the binary confines of zeros and ones, but instead, is free to explore the vast possibilities of continuous value data." That's Microsoft's research blog, describing its newly-developed Analog Iterative Machine, an analog optical computer designed for solving difficult optimization problems.

"For a multidisciplinary group of researchers at the Microsoft Research Lab in Cambridge, U.K., the mission was to build a new kind of computer that would transcend the limitations of the binary systems," says a Microsoft blog post.

Neowin describes it as a computer "that uses photons and electrons, rather than transistors, to process data." Light "passes through several layers, making impressions on each part of what's known as a 'modular array'," writes PC Gamer. "It's this process of projecting light through the array that replaces the function of a standard transistor."

Microsoft says it can "solve practical problems at the speed of light." And "it's already shown potential for surpassing state-of-the-art digital (silicon-based) technology," adds TechSpot, "or even the most powerful quantum computers being designed right now." The AIM machine is built using commodity opto-electronic technologies that are low-cost and scalable, Microsoft says, and is based on an "asynchronous data flow architecture" which doesn't require data exchange between storage units and "compute locations."

AIM isn't designed for general purpose computing tasks, though. The analog optical computer is useful to solve difficult "optimization problems" like the well-known travelling salesman riddle, Microsoft says, which are at the heart of many math-intensive industries including finance, logistics, transportation, energy, healthcare, and manufacturing. When it comes to crunching all the possible combinations of an exponentially growing problem, traditional, digital computers struggle to provide a solution in a "timely, energy-efficient and cost-effective manner."
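To see why such problems swamp digital machines, consider a brute-force travelling-salesman sketch (illustrative only; AIM itself works nothing like this). The number of distinct tours grows factorially with the number of cities, so exhaustive search stops being feasible almost immediately.

# Brute-force TSP over 8 random cities: already 5,040 tours to check,
# and each added city multiplies the count again.
import itertools, math, random

random.seed(1)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    """Total length of a closed tour visiting the cities in 'order'."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % n]])
               for i in range(n))

# Fixing city 0 as the start removes equivalent rotations of each tour.
best = min(itertools.permutations(range(1, n)),
           key=lambda rest: tour_length((0,) + rest))

print(f"{math.factorial(n - 1):,} tours checked")
print(f"best tour length: {tour_length((0,) + best):.3f}")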

AIM was conceived to address two simultaneous trends, Microsoft explains, which are sidestepping the unraveling of Moore's Law and overcoming the limitations of specialized machines designed for solving optimization problems... AIM works at the speed of light, and it seemingly provides a 100x increase in performance compared to the most advanced digital approaches available today. For now, AIM is still a research project with limited access for potential customers. The machine, however, is already being tested by UK financial company Barclays, which is using it to track transactions of money into stock purchases.

Microsoft says it's now releasing its "AIM simulator as a service, allowing selected users to get first-hand experience. The initial users are the team's collaborators at Princeton University and at Cambridge University."
Open Source

Linux Foundation's Yocto Project Expands LTS to 4 Years (linuxfoundation.org) 4

Wikipedia defines the Yocto Project as "a Linux Foundation collaborative open source project whose goal is to produce tools and processes that enable the creation of Linux distributions for embedded and IoT software that are independent of the underlying architecture of the embedded hardware."

This week the Linux Foundation shared an update on the 12-year-old Yocto Project: In an effort to support the community, The Yocto Project announced the first Long Term Support (LTS) release in October 2020. Today, we are delighted to announce that we are expanding the LTS release and extending the lifecycle from 2 to 4 years as standard.

The continued growth of the Yocto Project coincides with the welcomed addition of Exein as a Platinum Member, joining AMD/Xilinx, Arm, AWS, BMW Group, Cisco, Comcast, Intel, Meta and WindRiver. As a Member, Exein brings its embedded security expertise across billions of devices to the core of the Yocto Project...

"The Yocto Project has been at the forefront of OS technologies for over a decade," said Andrew Wafaa, Yocto Project Chairperson. "The adaptability and variety of the tooling provided are clearly making a difference to the community. We are delighted to welcome Exein as a member as their knowledge and experience in providing secure Yocto Project based builds to customers will enable us to adapt to the modern landscape being set by the US Digital Strategy and the EU Cyber Resilience Act."

"We're extremely excited to become a Platinum Partner of the Yocto Project," said Gianni Cuozzo, founder and CEO of Exein. "The Yocto Project is the most important project in the embedded Linux space, powering billions of devices every year. We take great pride in contributing our extensive knowledge and expertise in embedded security to foster a future that is both enhanced and secure for Yocto-powered devices. We are dedicated to supporting the growth of the Yocto Project as a whole, aiming to improve its support for modern languages like Rust, and assist developers and OEMs in aligning with the goals outlined in the EU Cyber Resilience Act."

Science

The Man Who Broke Bowling (gq.com) 60

theodp writes: In The Man Who Broke Bowling, GQ's Eric Wills profiles professional bowler Jason Belmonte, whose two-handed bowling technique made him both an outcast and one of bowling's greatest, changing the sport forever. Unlike the rest of us, a 7-year-old Belmonte was unconvinced by the taunts used to prompt kids into switching from bowling two-handed to one-handed ("It was, Come on, you're a big boy now. It's time to bowl properly," Belmonte recalls). As a result, Belmonte was able to develop a 600-rpm throw when most pro bowlers averaged 350-400, imparting a spin that "sends the pins into concussion protocol." Wills writes:

"When he first alighted on the professional bowling scene, Belmonte resembled an alien species: one that bowled with two hands. And not some granny shot, to be clear, but a kickass power move in which he uses two fingers (and no thumb) on his right hand, palms the front of the ball with his left, and then, on his approach, which is marked by a distinctive shuffle step, rocks the ball back before launching it with a liquid, athletic whip, his delivery producing an eye-popping hook, his ball striking the pins like a mini mortar explosion. Not everyone welcomed his arrival. He's been called a cheat, told to go back to his native Australia; a PBA Hall of Famer once called the two-hander a 'cancer to an already diseased sport.'

If you're interested in more on the technical aspects of bowling -- Belmonte's installed a tracking system in his parents' bowling center back in Australia that generates reams of data he can sift through to find areas for improvement -- Wikipedia goes into some of the physics of bowling balls.

AI

'AI is Killing the Old Web' 108

Rapid changes, fueled by AI, are impacting large pockets of the internet, argues a new column. An excerpt: In recent months, the signs and portents have been accumulating with increasing speed. Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There's the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an "AI editor" expects "output of 200 to 250 articles per week." ChatGPT is being used to generate whole spam sites. Etsy is flooded with "AI-generated junk."

Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don't. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and "AI is tearing Wikipedia apart." The old web is dying, and the new web struggles to be born. The web is always dying, of course; it's been dying for years, killed by apps that divert traffic from websites or algorithms that reward supposedly shortening attention spans. But in 2023, it's dying again -- and, as the litany above suggests, there's a new catalyst at play: AI.
Social Networks

Russian Coup Aided by Telegram, VPNs as Government Blocks Google News (nytimes.com) 140

Yevgeny V. Prigozhin heads the Russia-backed paramilitary Wagner Group — and was also "a close confidant of Russian president Vladimir Putin until he launched an alleged coup," according to Wikipedia.

The New York Times notes Prigozhin's remarkable ability to bypass government censorship: Despite years of creeping Kremlin control over the internet, the mercenary tycoon Yevgeny V. Prigozhin continued to comment live on Saturday through videos, audio recordings and statements posted on the messaging app Telegram.

His remarkable continued access to a public platform amid a crisis demonstrated both the limits of official restrictions and the rise of Telegram as a powerful mode of communication since the start of the war in Ukraine in February 2022. The app, along with the proliferation of virtual private networks, has effectively loosened the information controls that the Russian authorities had tightened for years.

Russian internet service providers began blocking access to Google News shortly after the authorities accused Mr. Prigozhin of organizing an armed uprising on Friday. But while unconfirmed reports surfaced of Telegram outages in some Russian cities, people within Russia continued to post on the app.

CNN just reported that Prigozhin's paramilitary group "has claimed control of several military facilities and has dispatched some of his troops towards Moscow... Russian security forces in body armor and equipped with automatic weapons have taken up a position near a highway linking Moscow with southern Russia, according to photos published by the Russian business newspaper Vedomosti Saturday."

UPDATE: CNN now reports Prigozhin "says he is turning his forces around from a march toward Moscow shortly after the Belarusian government claimed President Alexander Lukashenko had reached a deal with Prigozhin to halt the march."
China

Declassified US Intelligence: Still No Evidence for Covid 'Lab Leak' Theory (reuters.com) 167

Reuters reports: U.S. intelligence agencies found no direct evidence that the COVID-19 pandemic stemmed from an incident at China's Wuhan Institute of Virology, a report declassified on Friday said.
America's Director of National Intelligence was responding to March legislation requiring declassification (within 90 days) of any information on possible links between the Wuhan Institute of Virology (or "WIV") and the origin of the COVID-19 pandemic. One key finding in the just-released report?

"We continue to have no indication that the Wuhan Institute of Virology's pre-pandemic research holdings included SARS-CoV-2 or a close progenitor, nor any direct evidence that a specific research-related incident occurred involving WIV personnel before the pandemic that could have caused the COVID pandemic." The information available to the U.S. Intelligence Community "indicates that the WIV first possessed SARS-CoV-2 in late December 2019, when WIV researchers isolated and identified the virus from samples from patients diagnosed with pneumonia of unknown causes."

And in addition, "All Intelligence Community agencies assess that SARS-CoV-2 was not developed as a biological weapon."

Beyond that, the report also emphasizes that "Almost all Intelligence Community agencies assess that SARS-CoV-2 was not genetically engineered," adding "Most agencies assess that SARS-CoV-2 was not laboratory-adapted; some are unable to make a determination." The National Intelligence Council and four other Intelligence Community agencies assess that the initial human infection with SARS-CoV-2 most likely was caused by natural exposure to an infected animal that carried SARS-CoV-2 or a close progenitor, a virus that probably would be more than 99 percent similar to SARS-CoV-2...

The Central Intelligence Agency and another agency remain unable to determine the precise origin of the COVID-19 pandemic, as both hypotheses rely on significant assumptions or face challenges with conflicting reporting.

The only two outliers appear to be the Department of Energy, which gives "low confidence" support to the lab-leak theory, and the FBI (whose Trump-appointed director "said he couldn't share many details of the agency's assessment because they were classified.")

Addressing rumors online, the report notes that the lab has performed public health-related research with the army, such as work on vaccines and therapeutics. This included working "with several viruses, including coronaviruses, but no known viruses that could plausibly be a progenitor of SARS-CoV-2."

And while several researchers were ill in the fall of 2019, their symptoms "were consistent with but not diagnostic of COVID-19... [T]he researchers' symptoms could have been caused by a number of diseases and some of the symptoms were not consistent with COVID-19... [T]hey experienced a range of symptoms consistent with colds or allergies with accompanying symptoms typically not associated with COVID-19, and some of them were confirmed to have been sick with other illnesses unrelated to COVID-19." And there's no indication any of them were ever hospitalized for COVID-19 symptoms.
AI

Stack Overflow Moderators Stop Work in Protest of Lax AI-Generated Content Guidelines (gizmodo.com) 41

Moderators of Stack Overflow have announced a strike in protest of the company's ban on moderating AI-generated content, claiming that this policy allows incorrect information and plagiarism to proliferate on the platform. Gizmodo reports: Last week in a post -- which has been downvoted at least 283 times -- Stack Overflow announced its new moderation policy that will only remove AI-generated content in specific instances, claiming that over-moderation of posts made with artificial intelligence was turning away human contributors. The company also said in its post that a strict standard of evidence needed to be used moving forward in order to manage AI content, and that that standard of evidence hasn't applied to most suspensions issued by moderators thus far. This directive was also communicated to the platform's moderation team privately before being posted publicly. The moderators of the website are claiming that this directive will allow AI content, which can frequently be incorrect, to run rampant on the forum while expressing discontent with Stack Overflow for not communicating this new policy more effectively.

"Stack Overflow, Inc. has decreed a near-total prohibition on moderating AI-generated content in the wake of a flood of such content being posted to and subsequently removed from the Stack Exchange network, tacitly allowing the proliferation of incorrect information ("hallucinations") and unfettered plagiarism on the Stack Exchange network. This poses a major threat to the integrity and trustworthiness of the platform and its content," the mods write in their letter to Stack Overflow. "Stack Overflow, Inc. has decreed a near-total prohibition on moderating AI-generated content in the wake of a flood of such content being posted to and subsequently removed from the Stack Exchange network, tacitly allowing the proliferation of incorrect information ("hallucinations") and unfettered plagiarism on the Stack Exchange network. This poses a major threat to the integrity and trustworthiness of the platform and its content," the mods write in their letter to Stack Overflow.

Stack Overflow moderators, like those at Wikipedia, are volunteers tasked with maintaining the integrity of the platform. The moderators say that they tried to express their concerns with the company's new policy through proper channels, but their anxieties fell on deaf ears. The mods plan to strike indefinitely, and will cease all actions including closing posts, deleting posts, flagging answers, and other tasks that help with website upkeep until the AI policy has been retracted.
