NASA

Fastest Object Ever Made By Humans Continues Circling the Sun, 500x Faster Than Sound (sciencealert.com) 61

An anonymous reader shared this report from ScienceAlert: NASA's Parker Solar Probe, tasked with taking a close-up look at the Sun's outer corona, has just equalled the record for the fastest-moving human-made object ever. The previous record holder? The Parker Solar Probe, again. The probe was recorded traveling at 635,266 kilometers (394,736 miles) per hour on June 29, the second time it's reached that speed since it launched in 2018. We're talking around 500 times faster than the speed of sound here. It's on course to get even faster too, with a top speed of around 692,000 kph (430,000 mph) expected when it makes its closest approach to the Sun in 2025.
It's the probe's 20th approach to the Sun, according to the article, with the probe using Venus "to create a sort of gravity-powered slingshot." (NASA has created a nice interactive 3D model of the probe...)

Besides collecting particle samples in 2021, "the probe is eventually going to get nice and close to the swirling mass of ultra-hot plasma surrounding the Sun, and take a wealth of different measurements to help improve our scientific understanding of it."
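
For scale, here's a quick back-of-the-envelope check of that "500 times faster than sound" figure (a sketch in Python; the ~1,235 km/h sea-level speed of sound is our assumed reference, since the article doesn't specify one):

```python
# Rough check of the "500x the speed of sound" claim.
# Assumption: speed of sound at sea level (~1,235 km/h) as the
# reference; the article doesn't say which value it uses.
PROBE_SPEED_KPH = 635_266
SPEED_OF_SOUND_KPH = 1_235

print(f"Roughly Mach {PROBE_SPEED_KPH / SPEED_OF_SOUND_KPH:.0f}")
# -> Roughly Mach 514, i.e. "around 500 times faster than sound"
```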
Programming

Rust Leaps Forward on Language Popularity Index (infoworld.com) 59

An anonymous reader shared this report from InfoWorld: Rust has leaped to its highest position ever in the monthly Tiobe index of language popularity, scaling to the 13th spot this month, with placement in the top 10 anticipated in an upcoming edition. Previously, Rust had never gone higher than 17th place in the Tiobe Programming Index. Tiobe CEO Paul Jansen attributed Rust's ascent in the just-released July index to a February 2024 U.S. White House report recommending Rust over C/C++ for safety reasons. He also credited the growing community and ecosystem support for the language. "Rust is finally moving up."
The article adds that these rankings are based on "the number of skilled engineers worldwide, courses, and third-party vendors pertaining to languages, examining websites such as Google, Amazon, Wikipedia, and more than 20 others to determine the monthly numbers."
  1. Python
  2. C++
  3. C
  4. Java
  5. C#
  6. JavaScript
  7. Go
  8. Visual Basic
  9. Fortran
  10. SQL

Interestingly, Rust has also just moved into the top ten on the rival Pypl Popularity of Programming Language index (which, according to the article, "assesses how often languages are searched on in Google").

  1. Python
  2. Java
  3. JavaScript
  4. C#
  5. C/C++
  6. R
  7. PHP
  8. TypeScript
  9. Swift
  10. Rust

Power

Three Mile Island Considers Nuclear Restart (reuters.com) 94

An anonymous reader quotes a report from Reuters: Constellation Energy is in talks with the Pennsylvania governor's office and state lawmakers to help fund a possible restart of part of its Three Mile Island power facility, the site of a nuclear meltdown in the 1970s, three sources familiar with the discussions said on Tuesday. The conversations, which two sources described as "beyond preliminary," signal that Constellation is advancing plans to revive part of the southern Pennsylvania nuclear generation site, which operated from 1974 to 2019. The nuclear unit Constellation is considering restarting is separate from the one that melted down.

The sources said that a shut Michigan nuclear plant, which was recently awarded a $1.5 billion conditional loan to restart from the administration of U.S. President Joe Biden, could serve as a private-public sector blueprint for Three Mile Island. The sources asked not to be named due to the sensitivity of the discussions. "Though we have determined it would be technically feasible to restart the unit, we have not made any decision on a restart as there are many economic, commercial, operational and regulatory considerations remaining," Constellation spokesperson Dave Snyder said in an email. Snyder did not comment on the specifics of discussions about reopening the Pennsylvania site.

Last month, Constellation told Reuters that it had cleared an engineering study of Three Mile Island, though it was unknown if the Baltimore, Maryland-based energy company would move forward with plans to reopen the site. Constellation also said that given the current premium placed on nuclear energy, acquiring other sites was generally off the table and the company would instead look to expand its existing fleet. The Three Mile Island unit that could be restarted is different from the site's unit 2, which experienced a partial meltdown in 1979 in the most famous commercial nuclear accident in U.S. history.
The report notes that "no U.S. nuclear power plant has been reopened after shutting." A restart would not only be costly; it would also likely be challenged over safety and environmental concerns.
Open Source

Developer Successfully Boots Up Linux on Google Drive (ersei.net) 42

It's FOSS writes: When it comes to Linux, we get to see some really cool, and sometimes quirky, projects (read: Hannah Montana Linux) that try to show off what's possible, and that's not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.
Their ultimate approach relied on FUSE (which allows running filesystem code in userspace). The developer's blog post explains that when Linux boots, "the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem... it's very helpful! We can mount a FUSE filesystem in that step and boot normally.... Thankfully, Dracut makes it easy enough to build a custom initramfs... I decide to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works."
After first testing in an Amazon S3 container, they built an EFI image — then spent days trying to enable networking... And the adventure continues. ("Would it be possible to manually switch the root without a specialized system call? What if I just chroot?") After they'd made a few more tweaks, "I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought't've stopped me, right? Nobody stopped me, so I kept going..." I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server... This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.

Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don't know what to do next with this. Maybe I should install Nix?
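
To make the FUSE step concrete: below is a minimal read-only FUSE filesystem sketched in Python with the third-party fusepy library. It serves a single in-memory file rather than talking to Google Drive — the developer's real filesystem, which proxies reads to the Drive API, is considerably more involved — but it shows the shape of the userspace code the kernel hands filesystem calls to:

```python
#!/usr/bin/env python3
"""Minimal read-only FUSE filesystem: serves one in-memory file.

Illustrative sketch only (uses the third-party fusepy library);
the blog's real filesystem proxies reads to Google Drive instead
of a dict.
"""
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

FILES = {"/hello.txt": b"Hello from userspace!\n"}

class TinyFS(Operations):
    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path in FILES:
            return {"st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1, "st_size": len(FILES[path])}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", ".."] + [name.lstrip("/") for name in FILES]

    def read(self, path, size, offset, fh):
        return FILES[path][offset:offset + size]

if __name__ == "__main__":
    # Usage: python tinyfs.py /mnt/point
    FUSE(TinyFS(), sys.argv[1], foreground=True, ro=True)
```

Mount it with "python tinyfs.py /mnt/point" and the kernel routes open/read calls under that mountpoint into this Python process; the blog's trick is to perform the equivalent mount from inside the Dracut-built initramfs before switching to the real root.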

Open Source

FreeBSD Contributor Mocks Gloomy Predictions for the Open Source Movement (acm.org) 94

In Communications of the ACM, long-time FreeBSD contributor Poul-Henning Kamp mocks the idea that the free and open-source software movement has "come apart" and "will end in tears and regret." Economists and others focused on money — like my bank — have had a lot of trouble figuring out the free and open source software (FOSS) phenomenon, and eventually they seem to have reached the conclusion that it just makes no sense. So, they go with the flow. Recently, very serious people in the FOSS movement have started to write long and thoughtful opinion pieces about how it has all come apart and will end in tears and regret. Allow me to disagree...
What follows is a humorous history of how the Open Source movement bested a series of ill-conceived marketing failures starting after the "utterly bad" 1980s when IBM had an "unimaginably huge monopoly" — and an era of vendor lock-in from companies trying to be the next IBM: Out of that utter market failure came Minix, (Net/Free/Open)BSD, and Linux, at a median year of approximately 1991. I can absolutely guarantee that if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs — no strings attached — nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby. Bill Gates would also have had a lot less of our money...
The essay moves on to when "that dot-com thing happened, fueled by the availability of FOSS operating systems, which did a much better job than any operating system you could buy — not just for the price, but in absolute terms of performance on any given piece of hardware. Thus, out of utter market failure, the FOSS movement was born."

And ultimately, the essay ends with our present day, and the phenomenon of companies that "make a business out of FOSS or derivatives thereof..." The "F" in FOSS was never silent. In retrospect, it seems clear that open source was not so much the goal itself as a means to an end, which is freedom: freedom to fix broken things, freedom from people who thought they could clutch the source code tightly and wield our ignorance of it as a weapon to force us all to pay for and run Windows Vista. But the FOSS movement has won what it wanted, and no matter how much oldsters dream about their glorious days as young revolutionaries, it is not coming back; the frustrations and anger of IT in 2024 are entirely different from those of 1991.

One very big difference is that more people have realized that source code is a liability rather than an asset. For some, that realization came creeping along the path from young teenage FOSS activists in the late 1990s to CIOs of BigCorp today. For most of us, I expect, it was the increasingly crushing workload of maintaining legacy code bases...

Microsoft

Christie's Likens Microsoft's Work On MS-DOS To Einstein's Work In Physics 110

Longtime Slashdot reader theodp writes: "If Einstein paved the way for a new era in physics," explains auction house Christie's in a promotion piece for its upcoming offering of 150+ "objects of scientific and historical importance" from the Paul G. Allen Collection (including items from the shuttered Living Computers Museum), "Mr. Allen and his collaborators ushered in a new era of computing. Starting with MS-DOS in 1981, Microsoft then went on to revolutionize personal computing with the launch of Windows in 1985."

Christie's auction and characterization of MS-DOS as an Allen and Microsoft innovation comes 30 years after the death of Gary Kildall, whose unpublished memoir, the Seattle Times reported in Kildall's July 1994 obituary, called DOS "plain and simple theft" of Kildall's CP/M OS. PC Magazine's The Rise of DOS: How Microsoft Got the IBM PC OS Contract notes that Paul Allen himself traced the genesis of MS-DOS back to a phone call Allen made to Seattle Computer Products owner Rod Brock in which Microsoft licensed Tim Paterson's CP/M-inspired QDOS (Quick and Dirty Operating System) for $10,000 plus a royalty of $15,000 for every company that licensed the software. A shrewd buy-low-sell-high business deal, yes, but hardly an Einstein-caliber breakthrough idea.
Education

High School AP CS A Exam Takers Struggled Again With Java Array Question 159

theodp writes: "As with last year," tweeted College Board's AP Program Chief Trevor Packer, "the most challenging free-response question on this year's AP Computer Science A exam was Q4 on 2D Array." While it takes six pages of the AP CS A exam document [PDF] to ask question 4 (of 4), what it asks of students essentially boils down to using Java to move from the current location in a 2-D grid to the location either immediately below or immediately to the right, based on which neighbor contains the lesser value, adding the value at each visited location to a total (suggested Java solution, alternative Excel VBA solution). Much like the rules of the children's game Pop-O-Matic Trouble, moves are subject to the constraint that you cannot move down or to the right if doing so would take you beyond the grid dimensions.
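
In outline, the exam task is a greedy grid walk. Here's a minimal sketch of that core logic (in Python rather than the exam's Java, and with made-up names — the real free-response question supplies its own class scaffolding):

```python
def sum_path(grid):
    """Greedy walk from the top-left of a 2-D grid: repeatedly step to
    whichever in-bounds neighbor (below or to the right) holds the
    lesser value, accumulating the values visited along the way.

    Illustrative sketch of the question's core logic, not the official
    AP solution; the name and signature are made up for this example.
    """
    row, col = 0, 0
    total = grid[0][0]
    rows, cols = len(grid), len(grid[0])
    while row < rows - 1 or col < cols - 1:
        can_down = row < rows - 1
        can_right = col < cols - 1
        # Move down only if it's legal and strictly better than right.
        if can_down and (not can_right or grid[row + 1][col] < grid[row][col + 1]):
            row += 1
        else:
            col += 1
        total += grid[row][col]
    return total

print(sum_path([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]]))  # path 1 -> 2 -> 3 -> 6 -> 9, prints 21
```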

Ironically, many of the AP CS A students who struggled with the grid coding problem were likely exposed by their schools from kindergarten on to more than a decade's worth of annual Hour of Code tutorials that focused on the concepts of using code to move about in 2-D grids. The move-up-down-left-right tutorials promoted by schools came from tech-backed nonprofit Code.org and its tech giant partners and have been taught over the years by the likes of Bill Gates, Mark Zuckerberg, and President Obama, as well as characters from Star Wars, Disney Princess movies, and Microsoft Minecraft.

The news of American high school students struggling again with fairly straightforward coding problems after a year-long course of instruction comes as tech companies and tech-tied nonprofits lobby state lawmakers to pass bills making CS a high school graduation requirement in the US. It also comes as a new report from King's College urges lawmakers and educators to address a stark decline in the number of UK students studying computing at secondary school — a decline it blames on the 2013 replacement of more approachable ICT (Information and Communications Technology) courses with more rigorous computer science courses (a switch pushed by Google and Microsoft), which it notes students have perceived as too difficult and avoided taking.
Sci-Fi

William Gibson's 'Neuromancer' to Become a Series on Apple TV+ 149

It's been adapted into a graphic novel, a videogame, a radio play, and an opera, according to Wikipedia — which also describes years of trying to adapt Neuromancer into a movie. "The landmark 1984 cyberpunk novel has been on Hollywood's wishlist for decades," writes Gizmodo, "with multiple filmmakers attempting to bring it to the big screen." (Back in 2010, Slashdot's CmdrTaco even posted an update with the headline "Neuromancer Movie In Your Future?" — and a 2011 story promised the movie deal was "moving forward....")

But now Deadline reports it's becoming a 10-episode series on Apple TV+ (co-produced by Apple Studios) starring Callum Turner and Brianna Middleton: Created for television by Graham Roland and JD Dillard, Neuromancer follows a damaged, top-rung super-hacker named Case (Turner) who is thrust into a web of digital espionage and high stakes crime with his partner Molly (Middleton), a razor-girl assassin with mirrored eyes, aiming to pull a heist on a corporate dynasty with untold secrets.
More from Gizmodo: "We're incredibly excited to be bringing this iconic property to Apple TV+," Roland and Dillard said in a statement. "Since we became friends nearly 10 years ago, we've looked for something to team up on, so this collaboration marks a dream come true. Neuromancer has inspired so much of the science fiction that's come after it and we're looking forward to bringing television audiences into Gibson's definitive 'cyberpunk' world."
The novel launched Gibson's "Sprawl" trilogy of novels (building on the dystopia in his 1982 short story "Burning Chrome"), also resurrecting the "Molly Millions" character from Johnny Mnemonic — an even earlier short story from 1981...
Science

Get Ready For Nuclear Clocks (arxiv.org) 50

Long-time Slashdot reader jrronimo says JILA physicist Jun Ye's group "has made a breakthrough towards the next stage of precision timekeeping."

From their paper recently published to arXiv: Optical atomic clocks use electronic energy levels to precisely keep track of time. A clock based on nuclear energy levels promises a next-generation platform for precision metrology and fundamental physics studies.... These results mark the start of nuclear-based solid-state optical clocks and demonstrate the first comparison of nuclear and atomic clocks for fundamental physics studies. This work represents a confluence of precision metrology, ultrafast strong field physics, nuclear physics, and fundamental physics.
The Media

Citing 'Crisis' in Local Reporting, Associated Press Creates Sister Organization to Seek Grants (apnews.com) 25

Founded in 1846, the not-for-profit Associated Press distributes its news stories to other news outlets. But are free online sites putting those outlets at risk?

This week the Associated Press wrote that a "crisis" in local and state news reporting "shows little signs of abating" — and that it's now setting up "a sister organization that will seek to raise money" for those outlets. The organization, which will have a board of directors independent of the AP, will solicit philanthropic spending to boost this news coverage, both within the AP and through outside organizations, the news outlet said Tuesday. "We feel we have to lean in at this point, not pull back," said Daisy Veerasingham, the AP's president and CEO. "But the supporting mechanism — the local newspaper market that used to support this — can't afford to do that anymore." Veerasingham said she's been encouraged by preliminary talks with some funders who have expressed concern about the state of local journalism...

The local news industry has collapsed over the past two decades, with the number of journalists working in newspapers dropping from 75,000 to 31,000 in 2022, according to Northwestern University. More than half of the nation's counties have no local news outlets or only one.

The AP's CEO offered this succinct summary of their goal. "We want to add new products and services to help the industry."
Bitcoin

Linux Foundation Announces Intent to Form LF Decentralized Trust (linuxfoundation.org) 9

This week the Linux Foundation announced a new organization for decentralized systems and technologies, with an aim of "fostering innovation and collaboration" in both their development and deployment.

It will build on existing Linux Foundation blockchain and digital identity projects, according to the announcement, while supporting "a rapidly growing decentralized technology landscape." To foster this broader ecosystem, LF Decentralized Trust will encompass the growing portfolio of Hyperledger projects and host new open source software, communities, standards, and specifications that are critical to the macro shift toward decentralized systems of distributed trust....

LF Decentralized Trust's expanded project and member ecosystem will be essential both to emerging tokenized asset classes and networks and to modernizing the core infrastructure for finance, trade, government, healthcare, and more. LF Decentralized Trust will serve as a neutral home for the open development of a broad range of ledger, identity, security, interoperability, scale, implementation, and related technologies... LF Decentralized Trust will also include new directed funding models that will drive strategic investments by members into individual projects and project resources.

"With LF Decentralized Trust, we're expanding our commitment to open source innovation by embracing a wider array of decentralized technologies," said Jim Zemlin, Executive Director of the Linux Foundation. "This new, elevated foundation will enable the community to build a more robust ecosystem that drives forward transparency, security, and efficiency in global infrastructure."

"After eight years of advancing the development of blockchain, decentralized identity and related technologies via the Hyperledger community, the time has come to broaden our effort and impact," said Daniela Barbosa, General Manager, Blockchain and Identity, the Linux Foundation. "Ledgers and ledger technologies are but one component of the decentralized systems that will underpin a digital-first global economy. LF Decentralized Trust is where we will gather and grow an expanded community and portfolio of technologies to deliver the transparency, reliability, security and efficiency needed to successfully upgrade critical systems around the world."

The announcement includes quotes of support from numerous companies including Oracle, Siemens, Visa, Accenture, Citi, and Hitachi. Some highlights:
  • "The formation of the LF Decentralized Trust reflects the growing demand for open source resources that are critical to the management and functionality of decentralized systems." — CEO of Digital Asset
  • "The adoption of decentralized infrastructure is at an inflection point, reflecting the increasing demand from both enterprises and consumers for more secure and transparent digital transactions. As the industry leader for onchain data, blockchain abstraction, and interoperability, we're excited to see the formation of the LF Decentralized Trust and to expand our collaboration with leading financial institutions on advancing tokenized assets and the onchain economy at large." — CMO at Chainlink Labs.
  • "As a founding member of the Hyperledger Foundation, and given our unique position in the financial markets, we recognize the vast potential for open-source innovation and decentralized technologies when it comes to reducing risk, increasing resiliency and improving security. The expansion of Hyperledger Foundation into LF Decentralized Trust represents an exciting opportunity to continue expanding these groundbreaking technologies." — a managing director at DTCC

NASA

Mars Rover's SHERLOC Instrument Back Online (nasa.gov) 14

Longtime Slashdot reader thephydes writes: NASA's Jet Propulsion Laboratory (JPL) has announced that the SHERLOC (Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals) instrument on the Perseverance rover has been brought back online. "Six months of running diagnostics, testing, imagery and data analysis, troubleshooting, and retesting couldn't come with a better conclusion," said SHERLOC principal investigator Kevin Hand of JPL.

JPL writes in a press release: "Mounted on the rover's robotic arm, SHERLOC uses cameras, spectrometers, and a laser to search for organics and minerals that have been altered by watery environments and may be signs of past microbial life." In addition to its black-and-white context camera, SHERLOC is assisted by WATSON, a color camera for taking close-up images of rock grains and surface textures.
The instrument stopped working this past January when it encountered an issue where the "movable lens cover designed to protect the instrument's spectrometer and one of its cameras from dust became frozen in a position that prevented SHERLOC from collecting data," says JPL.

"Analysis by the SHERLOC team pointed to the malfunction of a small motor responsible for moving the protective lens cover as well as adjusting focus for the spectrometer and the Autofocus and Context Imager (ACI) camera. By testing potential solutions on a duplicate SHERLOC instrument at JPL, the team began a long, meticulous evaluation process to see if, and how, the lens cover could be moved into the open position."
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
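
The paper's central move is constraining weights to the ternary values {-1, 0, +1}, so each multiply-accumulate collapses into an addition, a subtraction, or a skip. A toy sketch of that idea (illustration only — nothing like the paper's fused GPU and FPGA kernels):

```python
# Sketch: with ternary weights {-1, 0, +1}, a "matrix multiply"
# degenerates into additions and subtractions -- no multiplications.
# Toy illustration of the idea behind MatMul-free layers, not the
# paper's actual optimized implementation.

def ternary_matvec(weights, x):
    """weights: rows of values in {-1, 0, +1}; x: input vector."""
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # add instead of multiply
            elif w == -1:
                acc -= xi      # subtract instead of multiply
            # w == 0: skip the element entirely
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 0]]
print(ternary_matvec(W, [0.5, 2.0, -1.0]))  # [1.5, 1.5]
```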

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPS, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
Piracy

South Korean ISP 'Infected' 600,000 Torrenting Subscribers With Malware (torrentfreak.com) 21

An anonymous reader quotes a report from TorrentFreak: Last week, an in-depth investigative report from JTBC revealed that Korean Internet provider KT, formerly known as Korea Telecom, distributed malware onto subscribers' computers to interfere with and block torrent traffic. File-sharing continues to be very popular in South Korea, but operates differently than in most other countries. "Webhard" services, short for Web Hard Drive, are particularly popular. These are paid BitTorrent-assisted services, which also offer dedicated web seeds, to ensure that files remain available.

Webhard services rely on the BitTorrent-enabled 'Grid System', which became so popular in Korea that ISPs started to notice it. Since these torrent transfers use a lot of bandwidth, which is very costly in the country, providers would rather not have this file-sharing activity on their networks. KT, one of South Korea's largest ISPs with over 16 million subscribers, was previously caught meddling with the Grid System. In 2020, their throttling activities resulted in a court case, where the ISP cited 'network management' costs as the prime reason to interfere. The Court eventually sided with KT, ending the case in its favor, but that wasn't the end of the matter. An investigation launched by the police at the time remains ongoing. New reports now show that the raid on KT's datacenter found that dozens of devices were used in the 'throttling process' and they were doing more than just limiting bandwidth.

When Webhard users started reporting problems four years ago, they didn't simply complain about slow downloads. In fact, the main concern was that several Grid-based Webhard services went offline or reported seemingly unexplainable errors. Since all complaining users were KT subscribers, fingers were pointed in that direction. According to an investigation by Korean news outlet JTBC, the Internet provider actively installed malware on computers of Webhard services. This activity was widespread and affected an estimated 600,000 KT subscribers. The Gyeonggi Southern Police Agency, which carried out the raid and investigation, believes this was an organized hacking attempt. A dedicated KT team allegedly planted malware to eavesdrop on subscribers and interfere with their private file transfers. [...] Why KT allegedly distributed the malware and what it precisely intended to do is unclear. The police believe there were internal KT discussions about network-related costs, suggesting that financial reasons played a role.

Netscape

Slashdot Asks: What Do You Remember About the Web in 1994? (fastcompany.com) 171

"The Short Happy Reign of the CD-ROM" was just one article in a Fast Company series called 1994 Week. As the week rolled along they also re-visited Yahoo, Netscape, and how the U.S. Congress "forced the videogame industry to grow up."

But another article argues that it's in web pages from 1994 that "you can start to see in those weird, formative years some surprising signs of what the web would be, and what it could be." It's hard to say precisely when the tipping point was. Many point to September '93, when AOL users first flooded Usenet. But the web entered a new phase the following year. According to an MIT study, at the start of 1994, there were just 623 web servers. By year's end, it was estimated there were at least 10,000, hosting new sites including Yahoo!, the White House, the Library of Congress, Snopes, the BBC, sex.com, and something called The Amazing FishCam. The number of servers globally was doubling every two months. No one had seen growth quite like that before. According to a press release announcing the start of the World Wide Web Consortium that October, this network of pages "was widely considered to be the fastest-growing network phenomenon of all time."

As the year began, Web pages were by and large personal and intimate, made by research institutions, communities, or individuals, not companies or brands. Many pages embodied the spirit, or extended the presence, of newsgroups on Usenet, or "User's Net." (Snopes and the Internet Movie Database, which landed on the Web in 1993, began as crowd-sourced projects on Usenet.) But a number of big companies, including Microsoft, Sun, Apple, IBM, and Wells Fargo, established their first modest Web outposts in 1994, a hint of the shopping malls and content farms and slop factories and strip mines to come. 1994 also marked the start of banner ads and online transactions (a CD, pizzas), and the birth of spam and phishing...

[B]ack in '94, the salesmen and oilmen and land-grabbers and developers had barely arrived. In the calm before the storm, the Web was still weird, unruly, unpredictable, and fascinating to look at and get lost in. People around the world weren't just writing and illustrating these pages, they were coding and designing them. For the most part, the design was non-design. With a few eye-popping exceptions, formatting and layout choices were simple, haphazard, personal, and — in contrast to most of today's web — irrepressibly charming. There were no table layouts yet; cascading style sheets, though first proposed in October 1994 by Norwegian programmer Håkon Wium Lie, wouldn't arrive until December 1996... The highways and megalopolises would come later, courtesy of some of the world's biggest corporations and increasingly peopled by bots, but in 1994 the internet was still intimate, made by and for individuals... Soon, many people would add "under construction" signs to their Web pages, like a friendly request to pardon our dust. It was a reminder that someone was working on it — another indication of the craft and care that was going into this never-ending quilt of knowledge.

The article includes screenshots of Netscape in action from browser-emulating site OldWeb.Today (albeit without using a 14.4 kbps modem). "Look in and think about how and why this web grew the way it did, and what could have been. Or try to imagine what life was like when the web wasn't worldwide yet, and no one knew what it really was."

Slashdot reader tedlistens calls it "a trip down memory lane," offering "some telling glimpses of the future, and some lessons for it too." The article revisits 1994 sites like Global Network Navigator, Time-Warner's Pathfinder, and Wired's online site HotWired as well as 30-year-old versions of the home pages for Wells Fargo and Microsoft.

What did they miss? Share your own memories in the comments.

What do you remember about the web in 1994?
Space

Tuesday SpaceX Launches a NOAA Satellite to Improve Weather Forecasts for Earth and Space (space.com) 20

Tuesday a SpaceX Falcon Heavy rocket will launch a special satellite — a state-of-the-art weather-watcher from America's National Oceanic and Atmospheric Administration.

It will complete a series of four GOES-R satellite launches that began in 2016. Space.com drills down into how these satellites have changed weather forecasts: More than seven years later, with three of the four satellites in the series orbiting the Earth, scientists and researchers say they are pleased with the results and how the advanced technology has been a game changer. "I think it has really lived up to its hype in thunderstorm forecasting. Meteorologists can see the convection evolve in near real-time and this gives them enhanced insight on storm development and severity, making for better warnings," John Cintineo, a researcher from NOAA's National Severe Storms Laboratory, told Space.com in an email.

"Not only does the GOES-R series provide observations where radar coverage is lacking, but it often provides a robust signal before radar, such as when a storm is strengthening or weakening. I'm sure there have been many other improvements in forecasts and environmental monitoring over the last decade, but this is where I have most clearly seen improvement," Cintineo said. In addition to helping predict severe thunderstorms, each satellite has collected images and data on heavy rain events that could trigger flooding, detected low clouds and fog as it forms, and has made significant improvements to forecasts and services used during hurricane season. "GOES provides our hurricane forecasters with faster, more accurate and detailed data that is critical for estimating a storm's intensity, including cloud top cooling, convective structures, specific features of a hurricane's eye, upper-level wind speeds, and lightning activity," Ken Graham, director of NOAA's National Weather Service told Space.com in an email.

Instruments such as the Advanced Baseline Imager have three times the spectral channels, four times the image quality, and five times the imaging speed of the previous GOES satellites. The Geostationary Lightning Mapper, the first of its kind in orbit on the GOES-R series, allows scientists to view lightning 24/7 — both strikes that make contact with the ground and those that travel from cloud to cloud. "GOES-U and the GOES-R series of satellites provides scientists and forecasters weather surveillance of the entire western hemisphere, at unprecedented spatial and temporal scales," Cintineo said. "Data from these satellites are helping researchers develop new tools and methods to address problems such as lightning prediction, sea-spray identification (sea-spray is dangerous for mariners), severe weather warnings, and accurate cloud motion estimation. The instruments from GOES-R also help improve forecasts from global and regional numerical weather models, through improved data assimilation."

The final satellite, launching Tuesday, includes a new sensor — the Compact Coronagraph — "that will monitor weather outside of Earth's atmosphere, keeping an eye on what space weather events are happening that could impact our planet," according to the article.

"It will be the first near real time operational coronagraph that we have access to," Rob Steenburgh, a space scientist at NOAA's Space Weather Prediction Center, told Space.com on the phone. "That's a huge leap for us because up until now, we've always depended on a research coronagraph instrument on a spacecraft that was launched quite a long time ago."
Unix

X Window System Turns 40 52

Ancient Slashdot reader ewhac writes: On June 19, 1984, Robert Scheifler announced on MIT's Project Athena mailing list a new graphical windowing system he'd put together. Having cribbed a fair bit of code from the existing windowing toolkit called W, Scheifler named his new system X, thus giving birth to the X Window System. Scheifler prophetically wrote at the time, "The code seems fairly solid at this point, although there are still some deficiencies to be fixed up."

The 1980s and 1990s saw tremendous activity in the development of graphical displays and user interfaces, and X was right in the middle of it all, alongside Apple, Sun, Xerox, Apollo, Silicon Graphics, NeXT, and many others. Despite the fierce, well-funded competition, and heated arguments about how many buttons a mouse should have, X managed to survive, due in large part to its Open Source licensing and its flexible design, allowing it to continue to work well even as graphical hardware rapidly advanced. As such, it was ported to dozens of platforms over the years (including a port to the Amiga computer by Dale Luck in the late 1980s). Forty years later, despite its warts, inconsistencies, age, and Wayland promising for the last ten years to be coming Real Soon Now, X remains the windowing system for UNIX-like platforms.
Privacy

Proton Seeks To Secure Its Privacy-Focused Future With a Nonprofit Model (arstechnica.com) 19

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn't want you to think about it in the way you think about other notable privacy and web foundations. From a report: "We believe that if we want to bring about large-scale change, Proton can't be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto "foundations")," Proton CEO Andy Yen wrote in a blog post announcing the transition. "Instead, Proton must have a profitable and healthy business at its core."

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will "make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits." Among other members of the Foundation's board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act "in accordance with the purpose for which they were established." While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton's founding mission, Yen wrote.

AI

AI Researcher Warns Data Science Could Face a Reproducibility Crisis (beabytes.com) 56

Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- Ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.


What can you do?

- Hold your data scientists accountable using Science.
- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection (see the sketch after this list).
- Data scientists should be encouraged to think outside of the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about the algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to standards similar to those of other software engineering specialties, with code review, code documentation, and architectural designs.
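
As an illustration of the kind of minimal EDA baseline the author calls for, here is a short sketch using pandas; the file name and "target" column are hypothetical stand-ins for a real project's data:

```python
"""Minimal Exploratory Data Analysis baseline, as the post recommends.

A sketch only: 'data.csv' and its 'target' column are hypothetical
stand-ins for a real project's dataset.
"""
import pandas as pd

df = pd.read_csv("data.csv")

# Shape, dtypes, and summary statistics: catch unit/scale surprises early.
print(df.shape)
print(df.dtypes)
print(df.describe(include="all"))

# Missing values per column: informs imputation vs. dropping decisions.
print(df.isna().sum().sort_values(ascending=False))

# Class balance of the (hypothetical) target: accuracy alone misleads
# on imbalanced data, which is part of the post's point about metrics.
print(df["target"].value_counts(normalize=True))

# Pairwise correlations among numeric features: flags redundancy and
# leakage candidates before feature engineering and model selection.
print(df.corr(numeric_only=True))
```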

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
Linux

'Blue Screen of Death' Comes To Linux (phoronix.com) 109

In 2016, Phoronix recalled how the early days of Linux kernel mode-setting (KMS) had brought hopes for improved error messages. One long-awaited feature was error messages for "Direct Rendering Manager" (or DRM) drivers — something analogous to the "Blue Screen of Death" Windows gives for critical errors.

Now Linux 6.10 is introducing a new DRM panic handler infrastructure enabling messages when a panic occurs, Phoronix reports today. "This is especially important for those building a kernel without VT/FBCON support, where viewing the kernel panic message otherwise isn't easily available." With Linux 6.10 the initial DRM Panic code has landed as well as wiring up the DRM/KMS driver support for the SimpleDRM, MGAG200, IMX, and AST drivers. There is work underway on extending DRM Panic support to other drivers that we'll likely see over the coming kernel cycles for more widespread support... On Linux 6.10+ with platforms having the DRM Panic driver support, this "Blue Screen of Death" functionality can be tested via a route such as echo c > /proc/sysrq-trigger.
The article links to a picture shared on Mastodon by Red Hat engineer Javier Martinez Canillas of the error message being generated on a BeaglePlay single board computer.

Phoronix also points out that some operating systems have even considered QR codes for kernel error messages...
