Books

Ian Fleming Published the James Bond Novel 'Moonraker' 70 Years Ago Today (cbr.com) 61

"The third James Bond novel was published on this day in 1955," writes long-time Slashdot reader sandbagger. Film buff Christian Petrozza shares some history: In 1979, studios were racing to make the next big space opera. Star Wars had blown up the box office in 1977, with Alien soon following, and while audiences eagerly awaited George Lucas' next installment, The Empire Strikes Back, Hollywood was buzzing with spacesuits, lasers, and ships that cruised the stars. Politically, the Cold War between the United States and the Soviet Union was still a hot topic, with the James Bond franchise fanning the flames in the media entertainment sector. Moon missions had just finished their run in the early '70s, and the space race was still generationally fresh. With all this in mind, as well as the successful run of Roger Moore's fun and campy Bond, the time seemed ripe to boldly take the globe-trotting Bond where no spy had gone before.

Thus, 1979's Moonraker blasted off to theatres full of chrome spacesuits, laser guns, and jetpacks; the franchise went full-bore science fiction to keep up with the Joneses in Hollywood's hottest genre. The film was a commercial smash hit, grossing $210 million worldwide. Despite some mixed reviews from critics, audiences seemed jazzed about seeing James Bond in space.

When it comes to adaptations of Ian Fleming's novel of the same name, Moonraker couldn't be further from its source material, and may as well be renamed completely to avoid any association... Fleming's original Moonraker was more of a post-war commentary on the domestic fear that modern weapons could be turned on Europe by new foes employing the scientists of old ones. With Nazi scientists being hired by both the U.S. and Russia to build weapons of mass destruction after World War II, it was less sci-fi and much more a cautionary tale.

He argues that filming a new version of Moonraker could "find a happy medium between the glamor and the grit of the James Bond franchise..."
Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com) 18

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project and a contributor to FQ-CoDel and CAKE in the Linux kernel."

Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
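For readers unfamiliar with the queue disciplines mentioned above: on any recent Linux kernel they can be enabled with the standard `tc` tool. A minimal sketch, assuming a router whose WAN interface is named `eth0` and whose uplink is roughly 20 Mbit (both assumptions; substitute your own values):

```shell
# Replace the default queue discipline with fq_codel (in mainline since 3.5)
tc qdisc replace dev eth0 root fq_codel

# Or use CAKE, shaping to just under the uplink rate so queues build here,
# where CoDel's algorithm can manage them, rather than in the modem's buffers
tc qdisc replace dev eth0 root cake bandwidth 19Mbit

# Inspect the active qdisc and its drop/mark statistics
tc -s qdisc show dev eth0
```

Shaping slightly below the true link rate is the key trick: it moves the bottleneck queue onto the router, where the latency-aware qdisc controls it.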
Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I knew him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death, Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
Wikipedia

Wikimedia Drowning in AI Bot Traffic as Crawlers Consume 65% of Resources 73

Web crawlers collecting training data for AI models are overwhelming Wikipedia's infrastructure, with bot traffic growing exponentially since early 2024, according to the Wikimedia Foundation. According to data released April 1, bandwidth for multimedia content has surged 50% since January, primarily from automated programs scraping Wikimedia Commons' 144 million openly licensed media files.

This unprecedented traffic is causing operational challenges for the non-profit. When Jimmy Carter died in December 2024, his Wikipedia page received 2.8 million views in a day, while a 1.5-hour video of his 1980 presidential debate caused network traffic to double, resulting in slow page loads for some users.

Analysis shows 65% of the foundation's most resource-intensive traffic comes from bots, despite bots accounting for only 35% of total pageviews. The foundation's Site Reliability team now routinely blocks overwhelming crawler traffic to prevent service disruptions. "Our content is free, our infrastructure is not," the foundation said, announcing plans to establish sustainable boundaries for automated content consumption.
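The blocks target crawlers that ignore a site's published crawl rules. A well-behaved bot checks robots.txt before each fetch; here is a minimal sketch using Python's standard library (the rules shown are illustrative, in the spirit of a wiki's robots.txt, not Wikimedia's actual file):

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules: rendered article pages are fair game,
# expensive raw-script paths are not.
rules = """\
User-agent: *
Allow: /wiki/
Disallow: /w/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Rendered article page: allowed
print(rp.can_fetch("ExampleBot", "https://en.wikipedia.org/wiki/Bufferbloat"))
# Raw script path (expensive for the servers): disallowed
print(rp.can_fetch("ExampleBot", "https://en.wikipedia.org/w/index.php?diff=123"))
```

The AI scrapers drawing the foundation's ire are precisely the ones that skip this check, or spoof their user agent to evade per-bot rules.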
AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails 76

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog post on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna" and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs's AI-generated voice to narrate the instructions so the user doesn't have to touch a device with tomato-sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and Elevenlabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the OpenAI Ruby library, it quickly fell back to a raw ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kind of recipes are and are not allowed, and that the site kept those results and showed them to other users.
AI

Copilot Can't Beat a 2013 'TouchDevelop' Code Generation Demo for Windows Phone 18

What happens when you ask Copilot to "write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone"?

That's what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research 'SmartSynth' natural language code generation demo. ("Write scripts by tapping on the screen.")

Long-time Slashdot reader theodp reports on what happens when, a dozen years later, you pose the same question to Copilot: "You'll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10-line TouchDevelop program.) It's a good reminder that just because GenAI can generate code, it doesn't necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results."
theodp also reminds us that TouchDevelop "was (like BASIC) abandoned by Microsoft..." Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone's functionality. It's a reminder of how lacking the iPhone and Android still are today when it comes to easy programmability for the masses. (When asked, Copilot replied that Apple's Shortcuts app wasn't up to the task.)
Science

A New Image File Format Efficiently Stores Invisible Light Data (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Imagine working with special cameras that capture light your eyes can't even see -- ultraviolet rays that cause sunburn, infrared heat signatures that reveal hidden writing, or specific wavelengths that plants use for photosynthesis. Or perhaps using a special camera designed to distinguish the subtle visible differences that make paint colors appear just right under specific lighting. Scientists and engineers do this every day, and they're drowning in the resulting data. A new compression format called Spectral JPEG XL might finally solve this growing problem in scientific visualization and computer graphics. Researchers Alban Fichet and Christoph Peters of Intel Corporation detailed the format in a recent paper published in the Journal of Computer Graphics Techniques (JCGT). It tackles a serious bottleneck for industries working with these specialized images. These spectral files can contain 30, 100, or more data points per pixel, causing file sizes to balloon into multi-gigabyte territory -- making them unwieldy to store and analyze.

[...] The current standard format for storing this kind of data, OpenEXR, wasn't designed with these massive spectral requirements in mind. Even with built-in lossless compression methods like ZIP, the files remain unwieldy for practical work as these methods struggle with the large number of spectral channels. Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller. Instead of storing the exact light intensity at every single wavelength (which creates huge files), it transforms this information into a different form. [...]
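The idea is easy to demonstrate: a smooth spectrum concentrates almost all of its energy in the first few DCT coefficients, so the rest can be discarded (or, in the real format, aggressively quantized). A toy sketch with made-up data follows; the 64-sample spectrum and 16-coefficient budget are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: each row is one cosine basis vector.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def truncate_spectrum(spectrum, keep):
    # Transform, zero the high-frequency coefficients, transform back.
    m = dct_matrix(len(spectrum))
    coeffs = m @ spectrum
    coeffs[keep:] = 0.0
    return m.T @ coeffs  # the transpose inverts an orthonormal transform

# A smooth, made-up 64-sample spectrum (intensity vs. wavelength).
wavelengths = np.linspace(0.0, 1.0, 64)
spectrum = 0.5 + 0.3 * np.sin(2 * np.pi * wavelengths) + 0.1 * wavelengths

approx = truncate_spectrum(spectrum, keep=16)  # 4x fewer numbers to store
max_error = np.max(np.abs(approx - spectrum))
print(f"worst-case reconstruction error: {max_error:.5f}")
```

Because real spectra vary smoothly in wavelength, most of the discarded coefficients are near zero, which is why the format can shrink files dramatically with little visible loss.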

According to the researchers, the massive file sizes of spectral images have reportedly been a real barrier to adoption in industries that would benefit from their accuracy. Smaller files mean faster transfer times, reduced storage costs, and the ability to work with these images more interactively without specialized hardware. The results reported by the researchers seem impressive -- with their technique, spectral image files shrink by 10 to 60 times compared to standard OpenEXR lossless compression, bringing them down to sizes comparable to regular high-quality photos. They also preserve key OpenEXR features like metadata and high dynamic range support.
The report notes that broader adoption "hinges on the continued development and refinement of the software tools that handle JPEG XL encoding and decoding."

Some scientific applications may also see JPEG XL's lossy approach as a drawback. "Some researchers working with spectral data might readily accept the trade-off for the practical benefits of smaller files and faster processing," reports Ars. "Others handling particularly sensitive measurements might need to seek alternative methods of storage."
The Internet

Why the Internet Archive is More Relevant Than Ever (npr.org) 64

It's "live-recording the World Wide Web," according to NPR, with a digital library that includes "hundreds of billions of copies of government websites, news articles and data."

They described the 29-year-old nonprofit Internet Archive as "more relevant than ever." Every day, about 100 terabytes of material are uploaded to the Internet Archive, or about a billion URLs, with the assistance of automated crawlers. Most of that ends up in the Wayback Machine, while the rest is digitized analog media — books, television, radio, academic papers — scanned and stored on servers. As one of the few large-scale archivists to back up the web, the Internet Archive finds itself in a singular position right now... Thousands of [U.S. government] datasets were wiped — mostly at agencies focused on science and the environment — in the days following Trump's return to the White House...

The Internet Archive is among the few efforts that exist to catch the stuff that falls through the digital cracks, while also making that information accessible to the public. Six weeks into the new administration, Wayback Machine director [Mark] Graham said, the Internet Archive had cataloged some 73,000 web pages that had existed on U.S. government websites that were expunged after Trump's inauguration...

According to Graham, based on the big jump in page views he's observed over the past two months, the Internet Archive is drawing many more visitors than usual to its services — journalists, researchers and other inquiring minds. Some want to consult the archive for information lost or changed in the purge, while others aim to contribute to the archival process.... "People are coming and rallying behind us," said Brewster Kahle, [the founder and current director of the Internet Archive], "by using it, by pointing at things, helping organize things, by submitting content to be archived — data sets that are under threat or have been taken down...."

A behemoth of link rot repair, the Internet Archive rescues a daily average of 10,000 dead links that appear on Wikipedia pages. In total, it's fixed more than 23 million rotten links on Wikipedia alone, according to the organization.
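That repair pipeline works because the Wayback Machine exposes a public availability endpoint that maps a dead URL to its closest archived snapshot. A minimal sketch follows; the endpoint is the one the Internet Archive documents, while the helper names are my own:

```python
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build a query for the closest archived snapshot of page_url.

    timestamp is an optional YYYYMMDD string that biases the search
    toward a date (e.g. when the now-dead link was originally cited).
    """
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(page_url, timestamp=None):
    """Return the closest archived snapshot URL, or None if unarchived."""
    with urllib.request.urlopen(availability_url(page_url, timestamp)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(availability_url("http://example.com/dead-page", "20160101"))
```

A bot fixing Wikipedia citations can call `closest_snapshot` for each dead link and swap in the returned archive URL.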

Though it receives some money for its preservation work for libraries, museums, and other organizations, it's also funded by donations. "From the beginning, it was important for the Internet Archive to be a nonprofit, because it was working for the people," explains founder Brewster Kahle on its donations page: Its motives had to be transparent; it had to last a long time. That's why we don't charge for access, sell user data, or run ads, even while we offer free resources to citizens everywhere. We rely on the generosity of individuals like you to pay for servers, staff, and preservation projects. If you can't imagine a future without the Internet Archive, please consider supporting our work. We promise to put your donation to good use as we continue to store over 99 petabytes of data, including 625 billion webpages, 38 million texts, and 14 million audio recordings.

Thanks to long-time Slashdot reader jtotheh for sharing the news.


Space

Another Large Black Hole In 'Our' Galaxy (arxiv.org) 49

RockDoctor (Slashdot reader #15,477) writes: A recent paper on arXiv reports a novel idea about the central regions of "our" galaxy.

Remember the hoopla a few years ago about radio-astronomical observations producing an "image" of our central black hole — or rather, an image of the accretion disc around the black hole — long designated by astronomers as "Sagittarius A*" (or Sgr A*)? If you remember the image published then, one thing should be striking — it's not very symmetrical. If you think about viewing a spinning object, then you'd expect to see something with a "mirror" symmetry plane where we would see the rotation axis (if someone had marked it). If anything, that published image has three bright spots on a fainter ring. And the spots are not even approximately the same brightness.

This paper suggests that the image we see is the result of the light (radio waves) from Sgr A* being "lensed" by another black hole, near (but not quite on) the line of sight between Sgr A* and us. By various modelling approaches, they then refine this idea to a "best fit" of a black hole with a mass around 1,000 times the Sun's, orbiting between the distance of the closest-observed star to Sgr A* ("S2" — most imaginative name, ever!) and around 10 times that distance. That's far enough to make a strong interaction with S2 unlikely within the lifetime of S2 before its accretion onto Sgr A*.

The region around Sgr A* is crowded. Within 25 parsecs (~80 light years, the distance to Regulus [in the constellation Leo] or Merak [in the Great Bear]) there is around 4 times more mass in several millions of "normal" stars than in the Sgr A* black hole. Finding a large (not "supermassive") black hole in such a concentration of matter shouldn't surprise anyone.

This proposed black hole is larger than anything yet detected by gravitational waves, but not immensely larger — only a factor of 15 or so. (The authors also anticipate the "what about these big black holes spiralling together?" question: "and the amplitude of gravitational waves generated by the binary black holes is negligible.")

Being so close to Sgr A*, the proposed black hole is likely to be moving rapidly across our line of sight. At the distance of S2, its orbital period would be around 26 years (though the "new" black hole is probably further out than that). That might explain some of the variability and "flickering" reported for Sgr A* ever since its discovery.

As always, more observations are needed. For Sgr A* these are being taken frequently, so improving (or ruling out) this explanation should happen fairly quickly. But it's a very interesting, and fun, idea.

Open Source

Developer Loads Steam On a $100 ARM Single Board Computer (interfacinglinux.com) 24

"There's no shortage of videos showing Steam running on expensive ARM single-board computers with discrete GPUs," writes Slashdot reader VennStone. "So I thought it would be worthwhile to make a guide for doing it on (relatively) inexpensive RK3588-powered single-board computers, using Box86/64 and Armbian." The guides I came across were out of date, had a bunch of extra steps thrown in, or were outright incorrect... Up first, we need to add the Box86 and Box64 ARM repositories [along with dependencies, ARMHF architecture, and the Mesa graphics driver]...
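The first steps look roughly like this. A hedged sketch only: the Box86/Box64 repository URLs and exact package names vary over time and per SoC build, so consult the projects' READMEs and the guide itself for the canonical commands:

```shell
# Steam's client (and most games) are x86 binaries; Box86 needs the
# 32-bit ARM foreign architecture and its native libraries available
sudo dpkg --add-architecture armhf
sudo apt update

# After adding the Box86/Box64 apt repositories (see the projects'
# READMEs for current URLs and signing keys), install the translators;
# the repos also ship SoC-optimized variants such as RK3588 builds
sudo apt install box86 box64
```

Box86 handles 32-bit x86 binaries and Box64 the 64-bit ones, which is why both are needed for a working Steam client.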
The guide closes with a multi-line script and advice to "Just close your eyes and run this. It's not pretty, but it will download the Steam Debian package, extract the needed bits, and set up a launch script." (And then the final step is sudo reboot now.)

"At this point, all you have to do is open a terminal, type 'steam', and tap Enter. You'll have about five minutes to wait... Check out the video to see how some of the tested games perform." At 720p, performance is all over the place, but the games I tested typically managed to stay above 30 FPS. This is better than I was expecting from a four-year-old SoC emulating x86 titles under ARM.

Is this a practical way to play your Steam games? Nope, not even a little bit. For now, this is merely an exercise in ludicrous neatness. Things might get a wee bit better, considering Collabora is working on upstream support for RK3588 and Valve is up to something ARM-related, but ya know, "Valve Time"...

"You might be tempted to enable Steam Play for your Windows games, but don't waste your time. I mean, you can try, but it ain't gonna work."
Unix

Rebooting A Retro PDP-11 Workstation - and Its Classic 'Venix' UNIX (blogspot.com) 36

This week the "Old Vintage Computing Research" blog published a 21,000-word exploration of the DEC PDP-11, the 16-bit minicomputer sold by Digital Equipment Corporation. Slashdot reader AndrewZX calls the blog post "an excellent deep dive" into the machine's history and capabilities "and the classic Venix UNIX that it ran." The blogger still owns a working 1984 DEC Professional 380, "a tank of a machine, a reasonably powerful workstation, and the most practical PDP-adjacent thing you can actually slap on a (large) desk."

But more importantly, "It runs PRO/VENIX, the only official DEC Unix option for the Pros." In that specific market it was almost certainly the earliest such licensed Unix (in 1983) and primarily competed against XENIX, Microsoft's dominant "small Unix," which first emerged for XT-class systems as SCO XENIX in 1984. You'd wonder how rogue processes could be prevented from stomping on each other in such systems when neither the Intel 8086/8088 nor the IBM PC nor the PC/XT had a memory management unit, and the answer was not to try and just hope for the best. It was for this reason that IBM's own Unix variant PC/IX, developed by Interactive Systems Corporation under contract as their intended AT&T killer, was multitasking but single-user since in such an architecture there could be no meaningful security guarantees...

One of Venix's interesting little idiosyncrasies, seen in all three Pro versions, was the SUPER> prompt when you've logged on as root (there is also a MAINT> prompt when you're single-user)...

Although Bill Gates had been their biggest nemesis early on, most of the little Unices that flourished in the 1980s and early 90s met their collective demise at the hands of another man: Linus Torvalds. The proliferation of free Unix alternatives like Linux on commodity PC hardware caused the bottom to fall out of the commercial Unix market.

The blogger even found a 1989 log for the computer's one and only guest login session — which seems to consist entirely of someone named tom trying to exit vi.

But the most touching part of the article comes when the author discovers a file named /thankyou that they're certain didn't come with the original Venix. It's an ASCII drawing of a smiling face, under the words "THANK YOU FOR RESCUING ME".

"It's among the last files created on the system before it came into my possession..."

It's all a fun look back to a time when advances in semiconductor density meant microcomputers could do nearly as much as the more expensive minicomputers (while taking up less space) — leaving corporations pondering the new world that was coming: As far back as 1974, an internal skunkworks unit had presented management with two small systems prototypes described as a PDP-8 in a VT50 terminal and a portable PDP-11 chassis.

Engineers were intrigued but sales staff felt these smaller versions would cut into their traditional product lines, and [DEC president Ken] Olsen duly cancelled the project, famously observing no one would want a computer in their home.

AI

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com) 121

Founded in 1979, the Association for the Advancement of AI is an international scientific society. Recently, 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers had discovered that the upcoming version of its GPT large language model showed significantly less improvement over its predecessor than previous versions had, and in some cases no improvement at all. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.

Facebook

After Meta Blocks Whistleblower's Book Promotion, It Becomes an Amazon Bestseller (thetimes.com) 39

After Meta convinced an arbitrator to temporarily prevent a whistleblower from promoting her book about the company (titled: Careless People), the book climbed to the top of Amazon's best-seller list. And the book's publisher Macmillan released a defiant statement that "The arbitration order has no impact on Macmillan... We will absolutely continue to support and promote it." (They added that they were "appalled by Meta's tactics to silence our author through the use of a non-disparagement clause in a severance agreement.")

Saturday the controversy was even covered by Rolling Stone: [Whistleblower Sarah] Wynn-Williams is a diplomat, policy expert, and international lawyer, with previous roles including serving as the Chief Negotiator for the United Nations on biosafety liability, according to her bio on the World Economic Forum...

Since the book's announcement, Meta has forcefully responded to the book's allegations in a statement... "Eight years ago, Sarah Wynn-Williams was fired for poor performance and toxic behavior, and an investigation at the time determined she made misleading and unfounded allegations of harassment. Since then, she has been paid by anti-Facebook activists and this is simply a continuation of that work. Whistleblower status protects communications to the government, not disgruntled activists trying to sell books."

But the negative coverage continues, with the Observer Sunday highlighting it as their Book of the Week. "This account of working life at Mark Zuckerberg's tech giant organisation describes a 'diabolical cult' able to swing elections and profit at the expense of the world's vulnerable..."

Though ironically, Wynn-Williams began her career at Facebook with optimism about its internet.org app. "Upon witnessing how the nascent Facebook kept Kiwis connected in the aftermath of the 2011 Christchurch earthquake, she believed that Mark Zuckerberg's company could make a difference — but in a good way — to social bonds, and that she could be part of that utopian project...

What internet.org involves for countries that adopt it is a Facebook-controlled monopoly of access to the internet, whereby to get online at all you have to log in to a Facebook account. When the scales fall from Wynn-Williams's eyes she realises there is nothing morally worthwhile in Zuckerberg's initiative, nothing empowering to the most deprived of global citizens, but rather his tool involves "delivering a crap version of the internet to two-thirds of the world". But Facebook's impact in the developing world proves worse than crap. In Myanmar, as Wynn-Williams recounts at the end of the book, Facebook facilitated the military junta to post hate speech, thereby fomenting sexual violence and attempted genocide of the country's Muslim minority. "Myanmar," she writes with a lapsed believer's rue, "would have been a better place if Facebook had not arrived." And what is true of Myanmar, you can't help but reflect, applies globally...

"Myanmar is where Wynn-Williams thinks the 'carelessness' of Facebook is most egregious," writes the Sunday Times: In 2018, UN human rights experts said Facebook had helped spread hate speech against Rohingya Muslims, about 25,000 of whom were slaughtered by the Burmese military and nationalists. Facebook is so ubiquitous in Myanmar, Wynn-Williams points out, that people think it is the entire internet. "It's no surprise that the worst outcome happened in the place that had the most extreme take-up of Facebook." Meta admits it was "too slow to act" on abuse in its Myanmar services....

After Wynn-Williams left Facebook, she worked on an international AI initiative, and says she wants the world to learn from the mistakes we made with social media, so that we fare better in the next technological revolution. "AI is being integrated into weapons," she explains. "We can't just blindly wander into this next era. You think social media has turned out with some issues? This is on another level."

The Courts

Climatologist Michael Mann Finally Won a $1M Defamation Suit - But Then a Judge Threw It Out (msn.com) 64

Slashdot has run nearly a dozen stories about Michael Mann, one of America's most prominent climate scientists and a co-creator of the famous "hockey stick" graph of spiking temperatures. In 2012 Mann sued two bloggers for defamation — and last year Mann finally won more than $1 million, reports the Washington Post. "A jury found that two conservative commentators had defamed him by alleging that he was like a child molester in the way he had 'molested and tortured' climate data."

But "Now, a year after that ruling, the case has taken a turn that leaves Mann in the position of the one who owes money." On Wednesday, a judge sanctioned Mann's legal team for "bad-faith trial misconduct" for overstating how much the scientist lost in potential grant funding as a result of reputational harm. The lawyers had shown jurors a chart that listed one grant amount Mann didn't get at $9.7 million, though in other testimony Mann said it was worth $112,000. And when comparing Mann's grant income before and after the negative commentary, the lawyers cited a disparity of $2.8 million, but an amended calculation pegged it at $2.37 million.


The climate scientist's legal team said it was preparing to fight the setbacks in court. Peter J. Fontaine, one of Mann's attorneys, wrote in an email that Mann "believes that the court committed errors of fact and law and will pursue these matters further." Fontaine emphasized that the original decision — that Mann was defamed by the commentary — still stands. "We have reviewed the recent rulings by the D.C. Superior Court and are pleased to note that the court has upheld the jury's verdict," he said.

Thanks to Slashdot reader UsuallyReasonable for sharing the news.
AI

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 96

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
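The task the assistant refused is itself straightforward to sketch. Here is a minimal, hypothetical illustration of skid-mark fade logic for a racing game; the class, names, and linear-decay model are assumptions for illustration, not the user's actual code:

```python
class SkidMark:
    """A single skid mark whose opacity fades over time."""

    def __init__(self, x, y, alpha=1.0, fade_rate=0.25):
        self.x = x
        self.y = y
        self.alpha = alpha          # current opacity; 1.0 = fully visible
        self.fade_rate = fade_rate  # opacity lost per second

    def update(self, dt):
        """Advance the fade by dt seconds, clamping opacity at zero."""
        self.alpha = max(0.0, self.alpha - self.fade_rate * dt)

    @property
    def expired(self):
        return self.alpha == 0.0


def prune_marks(marks, dt):
    """Advance all marks one frame and drop the fully faded ones."""
    for m in marks:
        m.update(dt)
    return [m for m in marks if not m.expired]
```

A game loop would call `prune_marks` once per frame with the frame's delta time, so old marks fade out and are released.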

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

Firefox

Mozilla Warns Users To Update Firefox Before Certificate Expires (bleepingcomputer.com) 28

Mozilla is urging Firefox users to update their browsers to version 128 or later (or ESR 115.13 for extended support users) before March 14, 2025, to avoid security risks and add-on disruptions caused by the expiration of a key root certificate. "On 14 March a root certificate (the resource used to prove an add-on was approved by Mozilla) will expire, meaning Firefox users on versions older than 128 (or ESR 115) will not be able to use their add-ons," warns a Mozilla blog post. "We want developers to be aware of this in case some of your users are on older versions of Firefox that may be impacted." BleepingComputer reports: A Mozilla support document explains that failing to update Firefox could expose users to significant security risks and practical issues, which, according to Mozilla, include:

- Malicious add-ons can compromise user data or privacy by bypassing security protections.
- Untrusted certificates may allow users to visit fraudulent or insecure websites without warning.
- Compromised password alerts may stop working, leaving users unaware of potential account breaches.

BleepingComputer notes that the problem affects Firefox on all platforms (Windows, Android, Linux, and macOS) except iOS, which has an independent root certificate management system. Mozilla says that users relying on older versions of Firefox may continue using their browsers after the certificate expires if they accept the security risks, but the software's performance and functionality may be severely impacted.
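The advisory's update rule (version 128 or later, or ESR 115.13 or later) reduces to a simple version comparison. The function name and parsing below are illustrative assumptions, not Mozilla's code:

```python
def needs_update(version: str, esr: bool = False) -> bool:
    """Return True if this Firefox version is affected by the root
    certificate expiration, per Mozilla's advisory: regular releases
    need >= 128, ESR builds need >= 115.13."""
    parts = [int(p) for p in version.split(".")]
    # Pad so comparisons work for short strings like "128".
    while len(parts) < 2:
        parts.append(0)
    if esr:
        return (parts[0], parts[1]) < (115, 13)
    return parts[0] < 128
```

For example, `needs_update("115.12", esr=True)` reports that an ESR build one point release behind the cutoff is still affected.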

Wikipedia

Photographers Are on a Mission to Fix Wikipedia's Famously Bad Celebrity Portraits (404media.co) 29

A volunteer group called WikiPortraits is working to address Wikipedia's issue of featuring outdated and unflattering portraits by providing high-quality, openly licensed images. Since 2024, they have covered global festivals, taken thousands of images, and improved representation of underrepresented individuals, though challenges with funding and media credentials remain. 404 Media reports: This portrait problem stems from Wikipedia's mission to provide free reliable information. All media on the site must be openly licensed, so that anyone can use it free of charge. That, in turn, means that most photos of notable people on the site are of notably poor quality. "No professional photographers ever have their photos on Wikipedia, because they want to make money from the photos," said Jay Dixit, a writing professor and amateur Wikipedia photographer. "It's actually the norm that most celebrities have poor photos on Wikipedia, if they have photos at all. It's just some civilian at an airport being like, 'Oh my god, it's Pete Davidson,' click with an iPhone."

Dixit is part of a team of volunteer photographers, called WikiPortraits, that's trying to fix that problem. "It's been in the back of our minds for quite a while now," said Kevin Payravi, one of WikiPortraits' cofounders. "Last year, finally, we decided to make this a reality, and we got a couple of credentials for Sundance 2024 [a major film festival]. We sent a couple photographers there, we set up a portrait studio, and that was our first organized effort here in the U.S. to take good quality photos of people for Wikipedia."

Since last January, WikiPortraits photographers have covered around 10 global festivals and award ceremonies, and taken nearly 5,000 freely-licensed photos of celebrity attendees. And the celebrity attendees are often quite excited about it. [...] WikiPortraits photos are currently used on Wikipedia articles in over 120 languages, and they're viewed up to 80 million times per month from those pages alone. In January, for example, Payravi said that over 1,500 WikiPortraits photos were used on articles that collectively received 140 million views. Many WikiPortraits photos have also been used by a variety of news outlets around the world, including CNN Brasil, Times of Israel, and multiple non-English-language smaller news organizations.
"[N]ot being an official news or photo agency means WikiPortraits sometimes faces problems getting media credentials to cover events," notes 404 Media. "Funding poses another main challenge."

"Photographers must already own a professional-quality camera, and usually have to cover the cost of getting to events and at least part of their lodging. Although WikiPortraits sometimes receives rapid grants from the Wikimedia Foundation and private donors to cover costs, Payravi said he still likes to run a 'tight ship.'"
AI

DuckDuckGo Is Amping Up Its AI Search Tool 21

An anonymous reader quotes a report from The Verge: DuckDuckGo has big plans for embedding AI into its search engine. The privacy-focused company just announced that its AI-generated answers, which appear for certain queries on its search engine, have exited beta and now source information from across the web -- not just Wikipedia. It will soon integrate web search within its AI chatbot, which has also exited beta. DuckDuckGo first launched AI-assisted answers -- originally called DuckAssist -- in 2023. The feature is billed as a less obnoxious version of tools like Google's AI Overviews, designed to offer more concise responses and let you adjust how often you see them, including turning the responses off entirely. If you have DuckDuckGo's AI-generated answers set to "often," you'll still only see them around 20 percent of the time, though the company plans on increasing the frequency eventually.

Some of DuckDuckGo's AI-assisted answers bring up a box for follow-up questions, redirecting you to a conversation with its Duck.ai chatbot. As is the case with its AI-assisted answers, you don't need an account to use Duck.ai, and it comes with the same emphasis on privacy. It lets you toggle between GPT-4o mini, o3-mini, Llama 3.3, Mistral Small 3, and Claude 3 Haiku, with the advantage being that you can interact with each model anonymously by hiding your IP address. DuckDuckGo also has agreements with the AI company behind each model to ensure your data isn't used for training.

Duck.ai also rolled out a feature called Recent Chats, which stores your previous conversations locally on your device rather than on DuckDuckGo's servers. Though Duck.ai is also leaving beta, that doesn't mean the flow of new features will stop. In the next few weeks, Duck.ai will add support for web search, which should enhance its ability to respond to questions. The company is also working on adding voice interaction on iPhone and Android, along with the ability to upload images and ask questions about them. ... [W]hile Duck.ai will always remain free, the company is considering including access to more advanced AI models with its $9.99 per month subscription.
AI

What Happened When Conspiracy Theorists Talked to OpenAI's GPT-4 Turbo? (washingtonpost.com) 134

A "decision science partner" at a seed-stage venture fund (who is also a cognitive-behavioral decision science author and professional poker player) explored what happens when GPT-4 Turbo converses with conspiracy theorists: Researchers have struggled for decades to develop techniques to weaken the grip of conspiracy theories and cult ideology on adherents. This is why a new paper in the journal Science by Thomas Costello of MIT's Sloan School of Management, Gordon Pennycook of Cornell University and David Rand, also of Sloan, is so exciting... In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

The reduction in belief held across a range of topics... Even more encouraging, participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy. And the results appear to be durable, holding up in evaluations 10 days and two months later... Why was AI able to persuade people to change their minds? The authors posit that it "simply takes the right evidence," tailored to the individual, to effect belief change, noting: "From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence...."

It is hard to walk away from who you are, whether you are a QAnon believer, a flat-Earther, a truther of any kind or just a stock analyst who has taken a position that makes you stand out from the crowd. And that's why the AI approach might work so well. The participants were not interacting with a human, which, I suspect, didn't trigger identity in the same way, allowing the participants to be more open-minded. Identity is such a huge part of these conspiracy theories in terms of distinctiveness, putting distance between you and other people. When you're interacting with AI, you're not arguing with a human being whom you might be standing in opposition to, a dynamic that could otherwise make you less open-minded.

Answering questions from Slashdot readers in 2005, Wil Wheaton described playing poker against the cognitive-behavioral decision science author who wrote this article...
Programming

Google Calls for Measurable Memory-Safety Standards for Software (googleblog.com) 44

Memory safety bugs are "eroding trust in technology and costing billions," argues a new post on Google's security blog — adding that "traditional approaches, like code auditing, fuzzing, and exploit mitigations — while helpful — haven't been enough to stem the tide."

So the blog post calls for a "common framework" for "defining specific, measurable criteria for achieving different levels of memory safety assurance." The hope is this gives policy makers "the technical foundation to craft effective policy initiatives and incentives promoting memory safety" leading to "a market in which vendors are incentivized to invest in memory safety." ("Customers will be empowered to recognize, demand, and reward safety.")

In January the same Google security researchers co-wrote an article noting there are now strong memory-safety "research technologies" that are sufficiently mature: memory-safe languages (including "safer language subsets like Safe Buffers for C++"), mathematically rigorous formal verification, software compartmentalization, and hardware and software protections. (Hardware protections include things like ARM's Memory Tagging Extension and the Capability Hardware Enhanced RISC Instructions, or "CHERI," architecture.) Google's security researchers are now calling for "a blueprint for a memory-safe future" — though, importantly, the idea is "defining the desired outcomes rather than locking ourselves into specific technologies."
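The framework deliberately defines outcomes rather than checklists, but the shape of "measurable criteria for different assurance levels" can be sketched. Everything below, including the level names and criteria, is a hypothetical illustration, not Google's actual framework:

```python
# Hypothetical rubric: each assurance level requires a set of practices.
# These criteria are invented for illustration only; the real framework
# is still being defined by Google and its industry/academic partners.
LEVELS = [
    ("L1", {"fuzzing", "code_audit"}),
    ("L2", {"fuzzing", "code_audit", "memory_safe_language"}),
    ("L3", {"fuzzing", "code_audit", "memory_safe_language",
            "formal_verification"}),
]

def assurance_level(practices: set) -> str:
    """Return the highest hypothetical level whose criteria are all met."""
    achieved = "L0"  # no assurance demonstrated
    for name, required in LEVELS:
        if required <= practices:  # set containment: all criteria present
            achieved = name
    return achieved
```

The point of such a rubric is that it is objectively assessable: a vendor's claimed level can be checked against concrete criteria rather than marketing language.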

Their blog post this week again urges a practical/actionable framework that's commonly understood, but one that supports different approaches (allowing tailoring to specific needs) while enabling objective assessment: At Google, we're not just advocating for standardization and a memory-safe future, we're actively working to build it. We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process... This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, widespread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.

This effort isn't about picking winners or dictating solutions. It's about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement... The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design.

The security researchers' post calls for "a collective commitment" to eliminate memory-safety bugs, "anchored on secure-by-design practices..." One of the blog post's subheadings? "Let's build a memory-safe future together."

And they're urging changes "not just for ourselves but for the generations that follow."
Space

Earth Safe From 'City-Killer' Asteroid 2024 YR4 34

Asteroid 2024 YR4, once considered a significant impact risk, has been reassigned to Torino Scale Level Zero and therefore poses no hazard to Earth. "The NASA JPL Center for Near-Earth Object Studies (CNEOS) now lists the 2024 YR4 impact probability as 0.00005 (0.005%) or 1-in-20,000 for its passage by Earth in 2032," Richard Binzel, Professor of Planetary Science at the Massachusetts Institute of Technology (MIT) and creator of the Torino scale, exclusively told Space.com. "That's impact probability zero folks!" From the report: Discovered in Dec. 2024, 2024 YR4 quickly climbed to the top of NASA's Sentry Risk table, at one point having a 1 in 32 chance of hitting Earth. This elevated it to Level 3 on the Torino scale, a system used since 1999 to categorize potential Earth impact events. Level 3, which falls within the yellow band of the Torino Scale, is described as: "A close encounter, meriting attention by astronomers. Current calculations give a 1% or greater chance of collision capable of localized destruction."

This conforms to the second part of the Torino scale level 3 description, which states: "Most likely, new telescopic observations will lead to re-assignment to Level 0. Attention by public and by public officials is merited if the encounter is less than a decade away." "Asteroid 2024 YR4 has now been reassigned to Torino Scale Level Zero, the level for 'No Hazard' as additional tracking of its orbital path has reduced its possibility of intersecting the Earth to below the 1-in-1000 threshold," Binzel continued. "1-in-1000 is the threshold established for downgrading to Level 0 for any object smaller than 100 meters; YR4 has an estimated size of 164 feet (50 meters)."
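The downgrade rule Binzel describes reduces to a simple threshold test. The function below is an illustrative sketch of that rule as quoted, not an official CNEOS calculation:

```python
def downgraded_to_level_zero(impact_probability: float, size_m: float) -> bool:
    """Per the rule quoted above: objects smaller than 100 meters drop to
    Torino Level 0 once impact probability falls below 1-in-1000."""
    return size_m < 100 and impact_probability < 1 / 1000

# 2024 YR4: roughly 50 m across, impact probability now 1-in-20,000 (0.00005),
# so the rule yields Level 0 -- compared with its earlier 1-in-32 peak.
```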

[...] While 2024 YR4 poses no threat, it will still have a major scientific impact when it passes Earth in 2028 and again in 2032. On Dec. 17, 2028, the asteroid will come within 5 million miles of Earth. Then, on Dec. 22, 2032, 2024 YR4 will pass within just 167,000 miles of our planet. For context, the moon is 238,855 miles away.

Slashdot Top Deals