Programming

Creator of JSON Unveils New Programming Language 'Misty' (crockford.com) 157

He specified the JSON notation, and developed tools like JSLint and the minifier JSMin. His Wikipedia entry says he was also a senior JavaScript architect at PayPal — but he's probably better known for writing O'Reilly's book JavaScript: The Good Parts.

But Doug Crockford has a new challenge. O'Reilly's monthly tech newsletter says Crockford "has created a new programming language called Misty. It is designed to be used both by students and professional programmers."

The language's official site calls it "a dynamic, general-purpose, transitional, actor language. It has a gentle syntax that is intended to benefit students, as well as advanced features such as capability security and lambdas with lexical scoping..." The language is quite strict in its use of spaces and indentation. In most programming languages, code spacing and formatting are underspecified, which leads to many incompatible conventions of style, some promoting bug formation, and all promoting time-wasting arguments, incompatibilities, and hurt feelings. Misty instead allows only one convention which is strictly enforced. This liberates programmers to focus their attention on more important matters.

Indentation is in increments of 4 spaces. The McKeeman Form is extended by three special rules to make this possible:


indentation
The spaces required by the current nesting.

increase_indentation
Append four spaces to the indentation.

decrease_indentation
Remove four spaces from the indentation.


The indentation is the number of spaces required at the beginning of a line as determined by its nesting level.


indent
increase_indentation linebreak

outdent
decrease_indentation linebreak


The linebreak rule allows the insertion of a comment, ends the line, and checks the indentation of the next line. Multiple comments and blank lines may appear wherever a line can end.
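The indentation discipline described above is simple enough to check mechanically. Here is a minimal sketch of such a checker — illustrative code, not Misty's actual grammar machinery, and the error messages are invented:

```python
INDENT_UNIT = 4  # Misty indents in increments of exactly 4 spaces

def check_indentation(lines):
    """Reject any line whose leading spaces don't correspond to a legal
    nesting level: indentation must be a multiple of 4, and a line may
    indent at most one level deeper than the line before it."""
    depth = 0
    for lineno, line in enumerate(lines, start=1):
        if not line.strip():
            continue  # blank lines may appear wherever a line can end
        spaces = len(line) - len(line.lstrip(" "))
        if spaces % INDENT_UNIT != 0:
            raise SyntaxError(f"line {lineno}: indentation is not a multiple of {INDENT_UNIT}")
        level = spaces // INDENT_UNIT
        if level > depth + 1:
            raise SyntaxError(f"line {lineno}: indented more than one level at once")
        depth = level  # an outdent may drop several levels in a row
    return True
```

Since the grammar's increase_indentation and decrease_indentation rules only ever move by one 4-space step, any other leading whitespace is a syntax error rather than a style complaint.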

AI

Which AI Model Provides the 'Best' Answers? (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: For those looking for a more rigorous way of comparing various models, the folks over at the Large Model Systems Organization (LMSys) have set up Chatbot Arena, a platform for generating Elo-style rankings for LLMs based on a crowdsourced blind-testing website. Chatbot Arena users can enter any prompt they can think of into the site's form to see side-by-side responses from two randomly selected models. The identity of each model is initially hidden, and results are voided if the model reveals its identity in the response itself. The user then gets to pick which model provided what they judge to be the "better" result, with additional options for a "tie" or "both are bad." Only after providing a pairwise ranking does the user get to see which models they were judging, though a separate "side-by-side" section of the site lets users pick two specific models to compare (without the ability to contribute a vote on the result).

Since its public launch back in May, LMSys says it has gathered over 130,000 blind pairwise ratings across 45 different models (as of early December). Those numbers seem poised to increase quickly after a recent positive review from OpenAI's Andrej Karpathy that has already led to what LMSys describes as "a super stress test" for its servers. Chatbot Arena's thousands of pairwise ratings are crunched through a Bradley-Terry model, which uses random sampling to generate an Elo-style rating estimating which model is most likely to win in direct competition against any other. Interested parties can also dig into the raw data of tens of thousands of human prompt/response ratings for themselves or examine more detailed statistics, such as direct pairwise win rates between models and confidence interval ranges for those Elo estimates.
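The Bradley-Terry fitting step described above can be sketched in a few lines. This is a generic implementation of the model (the standard minorization-maximization update for the maximum-likelihood fit), not LMSys's actual code, and the Elo-style mapping is likewise illustrative:

```python
import math

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths to a pairwise win-count matrix,
    where wins[i][j] = number of times model i beat model j, using
    the classic minorization-maximization update."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(total_wins / denom if denom else p[i])
        s = sum(new_p)
        p = [x * n / s for x in new_p]  # renormalize each pass
    return p

def to_elo_scale(p, base=1000.0, spread=400.0):
    # Map strengths onto an Elo-like scale: a 400-point gap ~ 10:1 odds.
    return [base + spread * math.log10(x) for x in p]
```

Under the model, the probability that model i beats model j is p[i] / (p[i] + p[j]), so the fitted strengths directly estimate who is "most likely to win in direct competition against any other."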

Chatbot Arena's latest public leaderboard update shows a few proprietary models easily beating out a wide range of open-source alternatives. OpenAI's ChatGPT-4 Turbo leads the pack by a wide margin, with only an older GPT-4 model ("0314," which was discontinued in June) coming anywhere close on the ratings scale. But even months-old, defunct versions of GPT-3.5 Turbo outrank the highest-rated open-source models available in Chatbot Arena's testbed. Anthropic's proprietary Claude models also feature highly in Chatbot Arena's top rankings. Oddly enough, though, the site's blind human testing tends to rank the older Claude-1 slightly higher than the subsequent releases of Claude-2.0 and Claude-2.1. Among the tested non-proprietary models, the Llama-based Tulu 2 and 01.ai's Yi get rankings that are comparable to some older GPT-3.5 implementations. Past that, there's a slow but steady decline until you get to models like Dolly and StableLM at the bottom of the pack (amid older versions of many models that have more recent, higher-ranking updates on Chatbot Arena's charts).

Programming

40 years of Turbo Pascal: Memories of the Coding Dinosaur that Revolutionized IDEs (theregister.com) 113

TechSpot remembers that Turbo Pascal "stands out as one of the first instances of an integrated development environment (IDE), providing a text-based interface through which developers could write their code, compile it, and finally link it with runtime libraries." The early IDE, written in Assembly, eschewed the use of floppies, instead building the code directly in RAM for an unprecedented performance boost.

The language demonstrated superior speed, greater convenience, and a more affordable price compared to its competition. Philippe Kahn, Borland's CEO who initially conceptualized turning the new language into an all-in-one product, decided to sell the software via mail orders for just $49.95, establishing a market presence for the then-newly founded company.

It was called "Turbo" because its use of RAM made it considerably faster, adds the Register: Anders Hejlsberg, who would later go on to join Microsoft as part of the C# project, is widely credited as creator of the language, with Borland boss Philippe Kahn identifying the need for the all-in-one tool...

Version 1 had limitations. Source code files, for example, were limited to 64 KB. It would only produce .COM executable files for DOS and CP/M — although other architectures and operating systems were supported. It would also run from a single floppy disk, saving users from endless swapping in a world where single drives were the norm and a hard disk seemed impossibly exotic — and expensive... However, it was with version 4, in 1987, that Turbo Pascal changed dramatically. For one, support for CP/M and CP/M-86 was dropped, and the compiler would generate .EXE executables under DOS, lifting the .COM restrictions...

For this writer, 1989's version 5.5 was peak Turbo Pascal. Object-oriented programming features turned up, including classes and inheritance, and a step-by-step debugger. Version 6 and 7 brought in inline assembly and support for the creation of Windows executables and DLLs respectively, but version 7 also marked the end of the line as far as Borland was concerned. Turbo Pascal for Windows would turn up, but was eventually superseded by Delphi.

However, the steamroller of tools such as Visual Basic 3 ensured that Borland never had the same success in Windows that it enjoyed under DOS. As for Turbo Pascal, several versions were eventually released by Borland as freeware including version 1 for DOS, 5.5, and 7.

I once took a computer programming course taught entirely in Pascal. (Functions, subroutines, and procedures...)

Any Slashdot readers have their own memories to share about Pascal?
Digital

Retro Computing Enthusiast Tries Restoring a 1986 DEC PDP-11 Minicomputer (youtube.com) 52

More than half a century ago, Digital Equipment Corporation released the first of their 16-bit PDP-11 minicomputers, continuing the PDP-11 line until 1997.

This week long-time Slashdot reader Shayde writes: I've been working on a 1986 PDP-11 that I basically got as a "barn find" from an estate sale a year ago. The project has absolutely had its ups and downs, as the knowledge base for these machines is aging quickly. I'm hoping to restore my own expertise with this build, but it's been challenging finding parts, technical details, and just plain information.

I leaned pretty heavily on the folks at the Vintage Computing Federation, as well as connections I've made in the industry — and made some great progress... Check it out if you're keen on retrocomputing and old minicomputers and DEC gear.

The entire saga is chronicled in three videos titled "Barn Find PDP 11/73 — Will it boot" — part 1, part 2, and this week's latest video. "What started as a curiosity has turned into an almost 10-month-long project," it concludes, creeping up hopefully on the possibility of an awe-struck glimpse at the PDP-11's boot sequence (over two minutes long).

"So cool," responded Jeremiah Cornelius (Slashdot reader #137) in a comment on the submitted Slashdot story. "I have huge affection for these beasts. I cut my teeth in High School on a DEC PDP11/70 and AT&T SysV, and a little RSTS/E in 1979-82. We switched systems by loading different cakelid platters into the washing-machine drives, and toggling the magenta keys.

"I've thought about the Blinkenlights 7/10 scale emulator, that uses an RPi, but I envy you and hope you have fun."
Science

Light Can Be Reflected Not Only In Space But Also In Time (scientificamerican.com) 51

Anna Demming reports via Scientific American: [A]lthough so far there's no way to unscramble an egg, in certain carefully controlled scenarios within relatively simple systems, researchers have managed to turn back time. The trick is to create a certain kind of reflection. First, imagine a regular spatial reflection, like one you see in a silver-backed glass mirror. Here reflection occurs because for a ray of light, silver is a very different transmission medium than air; the sudden change in optical properties causes the light to bounce back, like a Ping-Pong ball hitting a wall. Now imagine that instead of changing at particular points in space, the optical properties all along the ray's path change sharply at a specific moment in time. Rather than recoiling in space, the light would recoil in time, precisely retracing its tracks, like the Ping-Pong ball returning to the player who last hit it. This is a "time reflection." Time reflections have fascinated theorists for decades but have proved devilishly tricky to pull off in practice because rapidly and sufficiently changing a material's optical properties is no small task. Now, however, researchers at the City University of New York have demonstrated a breakthrough: the creation of light-based time reflections. To do so, physicist Andrea Alu and his colleagues devised a "metamaterial" with adjustable optical properties that they could tweak within fractions of a nanosecond to halve or double how quickly light passes through. Metamaterials have properties determined by their structures; many are composed of arrays of microscopic rods or rings that can be tuned to interact with and manipulate light in ways that no natural material can. Bringing their power to bear on time reflections, Alu says, revealed some surprises. "Now we are realizing that [time reflections] can be much richer than we thought because of the way that we implement them," he adds. [...]

The device Alu and his collaborators developed is essentially a waveguide that channels microwave-frequency light. A densely spaced array of switches along the waveguide connects it to capacitor circuits, which can dynamically add or remove material for the light to encounter. This can radically shift the waveguide's effective properties, such as how easily it allows light to pass through. "We are not changing the material; we are adding or subtracting material," Alu says. "That is why the process can be so fast." Time reflections come with a range of counterintuitive effects that have been theoretically predicted but never demonstrated with light. For instance, what is at the beginning of the original signal will be at the end of the reflected signal -- a situation akin to looking at yourself in a mirror and seeing the back of your head. In addition, whereas a standard reflection alters how light traverses space, a time reflection alters light's temporal components -- that is, its frequencies. As a result, in a time-reflected view, the back of your head is also a different color. Alu and his colleagues observed both of these effects in the team's device. Together they hold promise for fueling further advances in signal processing and communications -- two domains that are vital for the function of, say, your smartphone, which relies on effects such as shifting frequencies.

Just a few months after developing the device, Alu and his colleagues observed more surprising behavior when they tried creating a time reflection in that waveguide while shooting two beams of light at each other inside it. Normally colliding beams of light behave as waves, producing interference patterns where their overlapping peaks and troughs add up or cancel out like ripples on water (in "constructive" or "destructive" interference, respectively). But light can, in fact, act as a pointlike projectile, a photon, as well as a wavelike oscillating field -- that is, it has "wave-particle duality." Generally a particular scenario will distinctly elicit just one behavior or the other, however. For instance, colliding beams of light don't bounce off each other like billiard balls! But according to Alu and his team's experiments, when a time reflection occurs, it seems that they do. The researchers achieved this curious effect by controlling whether the colliding waves were interfering constructively or destructively -- whether they were adding or subtracting from each other -- when the time reflection occurred. By controlling the specific instant when the time reflection took place, the scientists demonstrated that the two waves bounce off each other with the same wave amplitudes that they started with, like colliding billiard balls. Alternatively they could end up with less energy, like recoiling spongy balls, or even gain energy, as would be the case for balls at either end of a stretched spring. "We can make these interactions energy-conserving, energy-supplying or energy-suppressing," Alu says, highlighting how time reflections could provide a new control knob for applications that involve energy conversion and pulse shaping, in which the shape of a wave is changed to optimize a pulse's signal.

AI

ChatGPT Tops Wikipedia's Most-Viewed Articles of 2023 List (thehill.com) 12

According to the Wikimedia Foundation, the page on "ChatGPT" was the most-viewed English article on Wikipedia in 2023, attracting nearly 50 million page views. The Hill reports: Wikimedia Foundation said English Wikipedia pages attracted more than 84 billion total page views in 2023, and ChatGPT topped its annual top 25 chart with a total of 49.5 million page views. The chatbot, created by Sam Altman's OpenAI, soared in popularity this year, as much of the public got its first chance to use artificial intelligence hands-on. The AI system debuted just more than a year ago, Nov. 30, 2022, and surpassed 100 million users, the nonprofit said.

Following ChatGPT, "Deaths in 2023" was the second most-popular page with 42.7 million views; "2023 Cricket World Cup" came in third place with 38.2 million views; "Indian Premier League" placed fourth with 32 million views; and "Oppenheimer (film)" rounded out the top five with 28.3 million views. The rest of the list includes articles on sports, film/television, celebrities and some current events.
The full list of the top 25 most popular English Wikipedia articles in 2023 is available here.
Bug

Cicadas Are So Loud, Fiber Optic Cables Can 'Hear' Them (wired.com) 22

An anonymous reader quotes a report from Wired: One of the world's most peculiar test beds stretches above Princeton, New Jersey. It's a fiber optic cable strung between three utility poles that then runs underground before feeding into an "interrogator." This device fires a laser through the cable and analyzes the light that bounces back. It can pick up tiny perturbations in that light caused by seismic activity or even loud sounds, like from a passing ambulance. It's a newfangled technique known as distributed acoustic sensing, or DAS. Because DAS can track seismicity, other scientists are increasingly using it to monitor earthquakes and volcanic activity. (A buried system is so sensitive, in fact, that it can detect people walking and driving above.) But the scientists in Princeton just stumbled upon a rather noisier use of the technology.

In the spring of 2021, Sarper Ozharar -- a physicist at NEC Laboratories, which operates the Princeton test bed -- noticed a strange signal in the DAS data. "We realized there were some weird things happening," says Ozharar. "Something that shouldn't be there. There was a distinct frequency buzzing everywhere." The team suspected the "something" wasn't a rumbling volcano -- not in New Jersey -- but the cacophony of the giant swarm of cicadas that had just emerged from underground, a population known as Brood X. A colleague suggested reaching out to Jessica Ware, an entomologist and cicada expert at the American Museum of Natural History, to confirm it. "I had been observing the cicadas and had gone around Princeton because we were collecting them for biological samples," says Ware. "So when Sarper and the team showed that you could actually hear the volume of the cicadas, and it kind of matched their patterns, I was really excited."
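Spotting a "distinct frequency buzzing everywhere" in sensor data is, at its core, a spectral-peak search. A sketch with synthetic data stands in for the idea — the sample rate and buzz frequency below are placeholders, not values from the study:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest nonzero frequency in a 1-D signal --
    the kind of spectral-peak check that would flag a persistent
    buzz in strain readings from a fiber optic cable."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # ignore the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic stand-in for one second of DAS data: a 1.3 kHz tone
# buried in noise (both numbers are illustrative, not measurements).
rate = 8000
t = np.arange(0, 1.0, 1.0 / rate)
rng = np.random.default_rng(0)
buzz = np.sin(2 * np.pi * 1300 * t) + 0.3 * rng.standard_normal(t.size)
print(dominant_frequency(buzz, rate))  # ≈ 1300.0
```

A persistent peak like this, absent from the usual seismic background, is roughly what "something that shouldn't be there" looks like numerically.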

Add insects to the quickly growing list of things DAS can spy on. Thanks to some specialized anatomy, cicadas are the loudest insects on the planet, but all sorts of other six-legged species make a lot of noise, like crickets and grasshoppers. With fiber optic cables, entomologists might have stumbled upon a powerful new way to cheaply and constantly listen in on species -- from afar. "Part of the challenge that we face in a time when there's insect decline is that we still need to collect data about what population sizes are, and what insects are where," says Ware. "Once we are able to familiarize ourselves with what's possible with this type of remote sensing, I think we can be really creative."

AI

1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: In a preprint research paper titled "Does GPT-4 Pass the Turing Test?", two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions -- and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT. Even with limitations and caveats, which we'll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

In the recent study, listed on arXiv at the end of October, UC San Diego researchers Cameron Jones (a PhD student in Cognitive Science) and Benjamin Bergen (a professor in the university's Department of Cognitive Science) set up a website called turingtest.live, where they hosted a two-player implementation of the Turing test over the Internet with the goal of seeing how well GPT-4, when prompted different ways, could convince people it was human. Through the site, human interrogators interacted with various "AI witnesses" representing either other humans or AI models that included the aforementioned GPT-4, GPT-3.5, and ELIZA, a rules-based conversational program from the 1960s. "The two participants in human matches were randomly assigned to the interrogator and witness roles," write the researchers. "Witnesses were instructed to convince the interrogator that they were human. Players matched with AI models were always interrogators."

The experiment involved 652 participants who completed a total of 1,810 sessions, of which 1,405 games were analyzed after excluding certain scenarios like repeated AI games (leading to the expectation of AI model interactions when other humans weren't online) or personal acquaintance between participants and witnesses, who were sometimes sitting in the same room. Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent. GPT-3.5, depending on the prompt, scored a 14 percent success rate, below ELIZA. GPT-4 achieved a success rate of 41 percent, second only to actual humans.
"Ultimately, the study's authors concluded that GPT-4 does not meet the success criteria of the Turing test, reaching neither a 50 percent success rate (greater than a 50/50 chance) nor surpassing the success rate of human participants," reports Ars. "The researchers speculate that with the right prompt design, GPT-4 or similar models might eventually pass the Turing test. However, the challenge lies in crafting a prompt that mimics the subtlety of human conversation styles. And like GPT-3.5, GPT-4 has also been conditioned not to present itself as human."

"It seems very likely that much more effective prompts exist, and therefore that our results underestimate GPT-4's potential performance at the Turing Test," the authors write.
Science

Nikola Tesla's Historic Wardenclyffe Lab Site At Risk After Devastating Fire (arstechnica.com) 22

Jennifer Ouellette reports via Ars Technica: Back in 2012, a crowdfunding effort on Indiegogo successfully raised the funds necessary to purchase the Wardenclyffe Tower site on Long Island, New York, where Serbian inventor Nikola Tesla once tried to build an ambitious wireless transmission station. The goal was to raise additional funds to build a $20 million Tesla Science Center there, with a museum, an educational center, and a technological innovation program. The nonprofit group behind the project finally broke ground this April after years of basic restoration work -- only to experience a devastating setback last week, two days before Thanksgiving, when a fire broke out.

Over 100 firefighters from 17 local departments responded and battled the flames throughout the night, as residual embers led to two additional outbreaks. One firefighter sustained bruised ribs after falling off a ladder, but there were no other injuries or fatalities. Once the blaze was extinguished, the TSC group called in their engineers to assess the damage and make recommendations for repairs. While an investigation is ongoing as to the cause of the fire, Fire Chief Sean McCarrick said during a press conference on Tuesday, November 28, that they had ruled out arson. According to project architect Mark Thaler, there was nothing flammable in the lab that could have caused the fire, although the back buildings had wood-frame roofs. The original brick building, designed by Stanford White, is still standing, although there is considerable damage to the structure of the roof, steel girders, chimney, cupola, and a portion of a wall. Some elements have been irreparably destroyed, but fortunately all museum artifacts in TSC's collection were stored offsite. The most pressing concern is that water from the firehoses saturated the brick walls, according to Thaler, since the upcoming colder winter temperatures could freeze that moisture and cause the brick work to break apart and collapse. The engineers have also recommended adding strategic wall supports to both the interior and exterior to shore up the structure.

All of this comes with a hefty price tag: $3 million for immediate remediation to seal the roof and dry the building in order to stave off further damage. The building was insured, but that insurance won't come close to covering the cost. The TSC group has set up a 60-day Indiegogo campaign to raise those funds, which is separate from the $14 million it had already raised toward their targeted $20 million goal. "The best way to help right now is to donate if you can," said TSC Executive Director Marc Alessi. "We've never needed it more. We need to secure this lab, stop the water intrusion and future damage. And then we need to complete this project." [...]
"Buildings burn down and can then be rebuilt," said John Gaiman, deputy county executive for Suffolk County. "The ideas behind them, the person, the history, the narrative that was created over 100 years ago still exists, and that will continue."
AI

Tech Conference Collapses After Organizer Admits To Making Fake 'Auto-Generated' Female Speaker (404media.co) 158

Samantha Cole reports via 404 Media: The founder of a software developer conference has been accused of creating fake female speakers to bolster diversity numbers -- and some speakers are dropping out, with the event just nine days away. Devternity is an online conference for developers that's invite-only for speakers. In the past, it reportedly drew hundreds of attendees both when it was in-person in Latvia and even more after it moved online. Eduard Sizovs founded the event in 2015.

Engineer Gergely Orosz tweeted on Thursday that he'd discovered fake speakers listed on the Devternity site. Two women -- Anna Boyko, listed as a staff engineer at Coinbase, and Natalie Stadler, a "software craftswoman" at Coinbase -- were included on the site as speakers but appear not to exist in real life. Neither has an online presence beyond the Devternity website itself. Orosz found archived versions of the Devternity site where Boyko and Stadler were listed; Stadler's listing was up for years, according to archives from 2021.

Sizovs responded to these claims in a 916-word tweet, admitting that he'd made at least one fake speaker, Stadler, in the process of building the Devternity site and then left her up. He said that the profile was "auto-generated, with a random title, random Twitter handle, random picture," and that while he noticed it was still on the site, he delayed taking it off because it wasn't a "quick fix" and that "it's better to have that demo persona while I am searching for the replacement speakers," he wrote. In his tweet, Sizovs did not elaborate on why he believed this was "better." Sizovs wrote that after this year's upcoming conference "achieved a worse-than-expected level of diversity of speakers," author and programmer Sandi Metz, "Software Craftswoman, Tech Influencer @ Instagram" Julia Kirsina, and head of developer relations at Amazon Web Services Kristine Howard were the only three women he was able to bring on as speakers. But two of the three dropped out, he said [...].

Security

Researchers Figure Out How To Bypass Fingerprint Readers In Most Windows PCs (arstechnica.com) 25

An anonymous reader quotes a report from Ars Technica: [L]ast week, researchers at Blackwing Intelligence published an extensive document showing how they had managed to work around some of the most popular fingerprint sensors used in Windows PCs. Security researchers Jesse D'Aguanno and Timo Teras write that, with varying degrees of reverse-engineering and using some external hardware, they were able to fool the Goodix fingerprint sensor in a Dell Inspiron 15, the Synaptics sensor in a Lenovo ThinkPad T14, and the ELAN sensor in one of Microsoft's own Surface Pro Type Covers. These are just three laptop models from the wide universe of PCs, but one of these three companies usually does make the fingerprint sensor in every laptop we've reviewed in the last few years. It's likely that most Windows PCs with fingerprint readers will be vulnerable to similar exploits.

Blackwing's post on the vulnerability is also a good overview of exactly how fingerprint sensors in a modern PC work. Most Windows Hello-compatible fingerprint readers use "match on chip" sensors, meaning that the sensor has its own processors and storage that perform all fingerprint scanning and matching independently without relying on the host PC's hardware. This ensures that fingerprint data can't be accessed or extracted if the host PC is compromised. If you're familiar with Apple's terminology, this is basically the way its Secure Enclave is set up. Communication between the fingerprint sensor and the rest of the system is supposed to be handled by the Secure Device Connection Protocol (SDCP). This is a Microsoft-developed protocol that is meant to verify that fingerprint sensors are trustworthy and uncompromised, and to encrypt traffic between the fingerprint sensor and the rest of the PC.

Each fingerprint sensor was ultimately defeated by a different weakness. The Dell laptop's Goodix fingerprint sensor implemented SDCP properly in Windows but used no such protections in Linux. Connecting the fingerprint sensor to a Raspberry Pi 4, the team was able to exploit the Linux support plus "poor code quality" to enroll a new fingerprint that would allow entry into a Windows account. As for the Synaptics and ELAN fingerprint readers used by Lenovo and Microsoft (respectively), the main issue is that both sensors supported SDCP but that it wasn't actually enabled. Synaptics' sensor used a custom TLS implementation for communication that the Blackwing team was able to exploit, while the Surface fingerprint reader used cleartext communication over USB. "In fact, any USB device can claim to be the ELAN sensor (by spoofing its VID/PID) and simply claim that an authorized user is logging in," wrote D'Aguanno and Teras.
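The ELAN weakness boils down to trust based on self-reported USB identifiers. A toy sketch of the difference between that check and a challenge-based one — the IDs, key, and function names here are all illustrative, not the real protocol:

```python
import hashlib
import hmac
import os

# Illustrative USB identifiers -- not the real sensor's values.
SENSOR_VID_PID = (0x04F3, 0x0C4B)

def host_trusts_naive(device):
    """Cleartext USB: the host believes whatever IDs the device
    reports, so a spoofed VID/PID is enough to be accepted."""
    return (device["vid"], device["pid"]) == SENSOR_VID_PID

def host_trusts_authenticated(device, shared_secret):
    """Challenge-response in the spirit of Microsoft's protocol: the
    device must also MAC a fresh random challenge with a provisioned
    key that a VID/PID spoofer doesn't have."""
    challenge = os.urandom(16)
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return (host_trusts_naive(device)
            and hmac.compare_digest(device["respond"](challenge), expected))
```

A spoofing device passes the naive check trivially, but fails the authenticated one because it cannot compute the MAC over a challenge it has never seen before.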
"Though all of these exploits ultimately require physical access to a device and an attacker who is determined to break into your specific laptop, the wide variety of possible exploits means that there's no single fix that can address all of these issues, even if laptop manufacturers are motivated to implement them," concludes Ars.

Blackwing recommends all Windows Hello fingerprint sensors enable SDCP, the protocol Microsoft developed to try to prevent this exploit. PC makers should also "have a qualified expert third party audit [their] implementation" to improve code quality and security.
United States

Fewer People Moving in California Are Moving Into the State Than Anywhere Else (sfgate.com) 265

America's Census Bureau looked at how many people relocated into each state from another state, compared to the total number of people making a move in that state. The state with the lowest "inmigration" ratio? California.

From 2021 through 2022, "California's inmigration rate was 11.1% last year..." reports SFGate. "For comparison, nearby Oregon had an inmigration rate of 21%."

But the Census Bureau cautions that California — America's most populous state — "also had a relatively large base of movers overall" — over 4 million — which could help explain its low ratio in several statistics. SFGate reports: California's outmigration rate — defined as the "number of people moving out of a state as a share of that state's total number of movers" — was also below the national migration average. Texas had the country's lowest outmigration rate, at 11.7%, according to the Census Bureau's analysis.
California and Texas are America's two most populous states. (The total population of California is 39 million — roughly 11.7% of America's population — while Texas has another 30 million. Oregon's population is just 4,240,137.) Interestingly, most people moving to California arrived from... Texas (44,279). At the same time, 102,422 people moved from California to Texas, with another 74,157 moving from California to Arizona.
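The "inmigration rate" statistic is just a ratio, which makes the caveat about California's large mover base easy to see in a few lines. The 444,000 figure below is back-calculated from the article's 11.1% and "over 4 million" movers, not a number the Census Bureau reported directly:

```python
def inmigration_rate(interstate_arrivals, total_movers):
    """'Inmigration rate' as the analysis defines it: people arriving
    from another state as a share of all people who moved in that state."""
    return interstate_arrivals / total_movers

# California's huge base of movers dilutes the ratio: the same number
# of interstate arrivals would look far larger in a smaller state.
print(f"{inmigration_rate(444_000, 4_000_000):.1%}")  # 11.1%
```

The same 444,000 arrivals against Oregon-sized mover totals would produce a rate several times higher, which is the Census Bureau's point about the large denominator.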

New York state also lost 91,201 people to Florida, and another 75,103 people to New Jersey. The second-highest number of people (31,225) who moved from a different state to California came from New York...

According to the San Francisco Chronicle, California saw a net loss of 340,000 residents between 2021 and 2022, with most of the people who left heading to Florida or Arizona.
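The Bureau's ratios are simple arithmetic: each rate is the number of movers crossing the state line divided by that state's total number of movers. A quick sketch using the figures quoted above (the mover counts here are back-calculated for illustration, not the Bureau's exact tallies):

```python
def migration_rate(cross_state_movers: int, total_movers: int) -> float:
    """Movers entering (or leaving) a state as a share of all movers in that state."""
    return 100 * cross_state_movers / total_movers

# California had over 4 million movers in total; an 11.1% inmigration
# rate implies roughly 0.111 * 4,000,000 arrivals from other states.
ca_total_movers = 4_000_000
ca_inmovers = round(0.111 * ca_total_movers)  # ~444,000
print(f"CA inmigration rate: {migration_rate(ca_inmovers, ca_total_movers):.1f}%")
# → CA inmigration rate: 11.1%
```

This is also why the Bureau's caveat matters: a huge denominator (total movers) pushes the ratio down even when the absolute number of arrivals is large.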

Power

US Energy Department Funds Next-Gen Semiconductor Projects to Improve Power Grids (energy.gov) 20

America's long-standing Advanced Research Projects Agency (or ARPA) developed the foundational technologies for the internet.

This week its energy division announced $42 million for projects enabling a "more secure and reliable" energy grid, "allowing it to utilize more solar, wind, and other clean energy." Specifically, it funded 15 projects across 11 states to improve the reliability, resiliency, and flexibility of the grid "through next-generation semiconductor technologies." Streamlining the coordinated operation of electricity supply and demand will improve operational efficiency, prevent unforeseen outages, allow faster recovery, minimize the impacts of natural disasters and climate-change fueled extreme weather events, and reduce grid operating costs and carbon intensity.
Some highlights:
  • The Georgia Institute of Technology will develop a novel semiconductor switching device to improve grid control, resilience, and reliability.
  • Michigan's Great Lakes Crystal Technologies will develop a diamond semiconductor transistor to support the control infrastructure needed for an energy grid with more distributed generation sources and more variable loads.
  • Lawrence Livermore National Laboratory will develop an optically-controlled semiconductor transistor to enable future grid control systems to accommodate higher voltage and current than state-of-the-art devices.
  • California's Opcondys will develop a light-controlled grid protection device to suppress destructive, sudden transient surges on the grid caused by lightning or electromagnetic pulses.
  • Albuquerque's Sandia National Laboratories will develop a novel solid-state surge arrester protecting the grid from very fast electromagnetic pulses that threaten grid reliability and performance.

America's Secretary of Energy said the new investment "will support project teams across the country as they develop the innovative technologies we need to strengthen our grid security and bring reliable clean electricity to more families and businesses — all while combatting the climate crisis."


Businesses

How to Support Local Retailers on 'Small Business Saturday' (nbcnews.com) 34

America celebrates "Small Business Saturday" today with special celebrations everywhere from Houston, Texas to Buffalo, New York.

NBC News reports: Sandwiched between Black Friday and Cyber Monday — historically the biggest and busiest retail days of the year — there's another standout shopping event: Small Business Saturday. Started by American Express in 2010 and co-sponsored by the U.S. Small Business Administration since 2011, Small Business Saturday aims to create awareness about the impact shoppers have when they buy "small" year round, whether they physically visit stores or shop online.

This year, 85% of consumers say they're likely to shop "small" during the holiday season, according to the American Express 2023 Shop Small Impact Study. That represents a multibillion-dollar opportunity — consumers are expected to spend an estimated $125 billion at small businesses this holiday season, up 42% from $88 billion in 2022, as reported by Intuit QuickBooks.

Like CBS News, NBC has compiled its list of small businesses that can ship their products to you — and suggests leaving positive reviews online for your favorite small businesses. ("Amazon, for example, now adds badges to product pages on its site if items are sold by small businesses.")
They also recommend interacting with your favorite small businesses on social media — while "the American Express small-business map allows you to input your zip code so it can recommend local shops in your area and beyond. Google also has a 'small business' filter on desktop and mobile, and one for Google Maps on mobile."

The UK's "Small Business Saturday" will happen next week, on the first Saturday in December.
AI

Microsoft Touted OpenAI's Independence Nine Days Before Hiring Top Talent 43

theodp writes: In a panel on AI at the Paris Peace Forum just 10 days ago, Microsoft President Brad Smith gave Meta Chief AI Scientist Yann LeCun a lecture on the importance of OpenAI's nonprofit independence.

"Meta is owned by shareholders," Smith argued. "OpenAI is owned by a nonprofit. Which would you have more confidence in? Getting your technology from a nonprofit? Or a for-profit company that is entirely controlled by one human being?"

But on Sunday, Microsoft CEO Satya Nadella pretty much trashed Smith's argument with his announcement that Microsoft was hiring OpenAI's co-founders and some of its top talent to head up a "new advanced AI research team." Another case of Embrace, Extend, and Extinguish?
Idle

1993's 'Second Reality' Demo Recreated for the Apple II (deater.net) 34

Long-time Slashdot reader deater writes: The Second Reality demoscene demo from 1993 is one of the most well known demos of all time, pushing a 486 running DOS to its limits. There have been remakes for other architectures over the years, including the Atari ST, Game Boy Color, and Commodore 64. At this past Demosplash 2023 demoparty a version for the Apple II was released (and won 1st place), which was quite a challenge as the Apple II graphics have essentially none of the hardware acceleration available on the other platforms.
Science

Archaeologists Unearth a Secret Lost Language From 3,000 Years Ago (sciencealert.com) 123

"And no, it's not COBOL," jokes long-time Slashdot reader schwit1, sharing this report from ScienceAlert: A secret text has been discovered in Türkiye, scattered among tens of thousands of ancient clay tablets, which were written in the time of the Hittite Empire during the second millennium BCE. No one yet knows what the curious cuneiform script says, but it seems to be a long-lost language from more than 3,000 years ago.

Experts say the mysterious idiom is unlike any other ancient written language found in the Middle East, although it seems to share roots with other Anatolian-Indo-European languages. The sneaky scrawlings start at the end of a cultic ritual text written in Hittite — the oldest known Indo-European tongue — after an introduction that essentially translates to: "From now on, read in the language of the country of Kalasma"... Currently, there are no available photos of the newly discovered tablet with Kalasmaic writings, as experts are still working out how to translate it. Hittitologist Daniel Schwemer and his colleagues hope to publish their results along with images of their discovery sometime next year.

Role Playing (Games)

Source Code To Infocom's Text Adventure Interpreters Now Available 19

Slashdot reader Mononymous writes: Back in 2019, digital archivist Jason Scott released the source code to Infocom's classic text adventures. Now the other piece of the puzzle is available: the source code (mostly in assembly, with some C and Pascal) to their microcomputer interpreters.

Infocom, publisher of the best-selling Zork series, ported their text adventures to most of the diverse microcomputer platforms of the 1980s by using an early virtual machine, known as the Z-machine or ZIP. This enabled them to sell games simultaneously for everything from the TI-99/4A to the Commodore 128. Hobbyists reverse-engineered the technology in the 1990s to create modern implementations, but now the original source code can be studied directly.
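The portability trick described above is that each game ships as bytecode for a single abstract machine, so only a small interpreter has to be rewritten per platform. A toy dispatch loop in the spirit of — though vastly simpler than — the Z-machine (these opcodes are invented for illustration, not real ZIP instructions):

```python
# Toy stack-based VM: the "game" is portable bytecode; only this
# interpreter loop would need porting to each 1980s micro.
def run(bytecode: list) -> list:
    stack, pc, out = [], 0, []
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == "PUSH":                    # push the next literal
            stack.append(bytecode[pc]); pc += 1
        elif op == "ADD":                   # pop two, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":                 # pop and emit a value
            out.append(stack.pop())
        elif op == "HALT":
            break
    return out

# "2 + 3", compiled once, runnable wherever an interpreter exists.
assert run(["PUSH", 2, "PUSH", 3, "ADD", "PRINT", "HALT"]) == [5]
```

The real Z-machine adds text encoding, objects, and save-state handling on top of this basic fetch-decode-execute loop, which is exactly what the newly released interpreter sources implement in assembly for each target machine.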
Games

Open-Source 4K Dungeon Keeper Remake Spent 15 Years In the Making (pcgamer.com) 55

Rick Lane reports via PC Gamer: KeeperFX has been in the process of rescuing Dungeon Keeper for a decade and a half. The project originally started in 2008, and experienced something of a bumpy road up until 2016. Since then, though, it has gradually added support for Windows 7, 10, and 11, support for hi-res and 4K screens, modernized controls, and even additional campaigns. With this latest version, KeeperFX's developers say "all original Dungeon Keeper code has been rewritten, establishing KeeperFX as a true open-source standalone game." 1.0 also introduces some new features, such as higher framerates, AI that is better at digging and less likely to "instantly" throw its entire army at you, and "higher quality landview speeches" for the additional campaigns. That refers to the introductions and epilogues to missions which, in the game's original campaign, were voiced by Richard Ridings, aka Daddy Pig.

Perhaps most intriguing of all, KeeperFX's 1.0 adds a couple of new units to play with. First up is the Druid, a sort-of color-flipped version of the Warlock who uses ice spells rather than fire. The other unit is the excitingly named Time Mage, a recolor of the Wizard who can cast teleport and speed spells, and also turn enemy units into chickens (presumably through rapid devolution). You won't find these units in the original campaign, but you will encounter them in the custom campaigns bundled with the 1.0 version.
You can download KeeperFX here, although it still requires you to own Dungeon Keeper "for copyright reasons."
Displays

iOS Beta Adds 'Spatial Video' Recording. Blogger Calls Them 'Astonishing', 'Breathtaking', 'Compelling' (daringfireball.net) 95

MacRumors writes that the second beta of iOS 17.2 "adds a new feature that allows an iPhone 15 Pro or iPhone 15 Pro Max to record Spatial Video" — that is, in the immersive 3D format for the yet-to-be-released Apple Vision Pro (where it can be viewed in the "Photos" app): Spatial Video recording can be enabled by going to the Settings app, tapping into the Camera section, selecting Formats, and toggling on "Spatial Video for Apple Vision Pro..." Spatial Videos taken with an iPhone 15 Pro can be viewed on the iPhone as well, but the video appears to be a normal video and not a Spatial Video.
Tech blogger John Gruber got to test the technology, watching the videos on a (still yet-to-be-released) Vision Pro headset. "I'm blown away once again," he wrote, calling the experience "astonishing."

"Before my demo, I provided Apple with my eyeglasses prescription, and the Vision Pro headset I used had appropriate corrective lenses in place. As with my demo back in June, everything I saw through the headset looked incredibly sharp..." The Vision Pro experience is highly dependent upon foveated rendering, which Wikipedia succinctly describes as "a rendering technique which uses an eye tracker integrated with a virtual reality headset to reduce the rendering workload by greatly reducing the image quality in the peripheral vision (outside of the zone gazed by the fovea)..." It's just incredible, though, how detailed and high resolution the overall effect is...
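Foveated rendering, as Wikipedia describes it, boils down to a quality falloff with angular distance from the tracked gaze point. A minimal sketch of that idea — the thresholds and shading rates here are made up for illustration, not Apple's actual values:

```python
import math

def render_quality(pixel, gaze, fovea_deg=5.0, mid_deg=15.0):
    """Pick a shading rate from a pixel's angular distance to the gaze point.

    pixel and gaze are (x, y) positions in degrees of visual field;
    the two thresholds are illustrative, not Vision Pro's real numbers.
    """
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist <= fovea_deg:
        return 1.0    # full resolution where the fovea is pointed
    if dist <= mid_deg:
        return 0.5    # half-rate shading in the near periphery
    return 0.25       # quarter-rate in the far periphery

gaze = (0.0, 0.0)
assert render_quality((2.0, 1.0), gaze) == 1.0   # center: full quality
assert render_quality((10.0, 0.0), gaze) == 0.5  # near periphery
assert render_quality((30.0, 0.0), gaze) == 0.25 # far periphery
```

Because the eye tracker moves the high-quality zone with the wearer's gaze, the full-resolution region is always exactly where the viewer is looking — which is why the overall effect reads as uniformly sharp even though most of the frame is rendered at reduced quality.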

Plain old still photos look amazing. You can resize the virtual window in which you're viewing photos to as large as you can practically desire. It's not merely like having a 20-foot display — a size far more akin to that of a movie theater screen than a television. It's like having a 20-foot display with retina quality resolution, and the best brightness and clarity of any display you've ever used... And then there are panoramic photos... Panoramic photos viewed using Vision Pro are breathtaking. There is no optical distortion at all, no fish-eye look. It just looks like you're standing at the place where the panoramic photo was taken — and the wider the panoramic view at capture, the more compelling the playback experience is. It's incredible...

As a basic rule, going forward, I plan to capture spatial videos of people, especially my family and dearest friends, and panoramic photos of places I visit. It's like teleportation... When you watch regular (non-spatial) videos using Vision Pro, or view regular still photography, the image appears in a crisply defined window in front of you. Spatial videos don't appear like that at all. I can't describe it any better today than I did in June: it's like watching — and listening to — a dream, through a hazy-bordered portal opened into another world...

Nothing you've ever viewed on a screen, however, can prepare you for the experience of watching these spatial videos, especially the ones you will have shot yourself, of your own family and friends. They truly are more like memories than videos... [T]he ones I shot myself were more compelling, and took my breath away... Prepare to be moved, emotionally, when you experience this.

Slashdot Top Deals