Oracle

Oracle Is Moving Its World Headquarters To Nashville (cnbc.com) 48

Oracle Chairman Larry Ellison said Tuesday that the company is moving its world headquarters to Nashville, Tennessee, to be closer to a major health-care epicenter. CNBC reports: In a wide-ranging conversation with Bill Frist, a former U.S. Senate Majority Leader, Ellison said Oracle is moving a "huge campus" to Nashville, "which will ultimately be our world headquarters." He said Nashville is an established health center and a "fabulous place to live," one that Oracle employees are excited about. "It's the center of the industry we're most concerned about, which is the health-care industry," Ellison said. The announcement was seemingly spur-of-the-moment. "I shouldn't have said that," Ellison told Frist, a longtime health-care industry veteran who represented Tennessee in the Senate. The pair spoke during a fireside chat at the Oracle Health Summit in Nashville.

Nashville has been a major player in the health-care scene for decades, and the city is now home to a vibrant network of health systems, startups and investment firms. The city's reputation as a health-care hub was catalyzed when HCA Healthcare, one of the first for-profit hospital companies in the U.S., was founded there in 1968. HCA helped attract troves of health-care professionals to Nashville, and other organizations quickly followed suit. Oracle has been developing its new $1.2 billion campus in the city for about three years, according to The Tennessean. "Our people love it here, and we think it's the center of our future," Ellison said.

Privacy

96% of US Hospital Websites Share Visitor Info With Meta, Google, Data Brokers (theregister.com) 21

An anonymous reader quotes a report from The Register: Hospitals -- despite being places where people implicitly expect to have their personal details kept private -- frequently use tracking technologies on their websites to share user information with Google, Meta, data brokers, and other third parties, according to research published today. Academics at the University of Pennsylvania analyzed a nationally representative sample of 100 non-federal acute care hospitals -- essentially traditional hospitals with emergency departments -- and found that 96 percent of their websites transmitted user data to third parties. Additionally, not all of these websites even had a privacy policy. And of the 71 percent that did, 56 percent disclosed specific third-party companies that could receive user information.

The researchers' latest work builds on a study they published a year ago of 3,747 US non-federal hospital websites. That found 98.6 percent tracked and transferred visitors' data to large tech and social media companies, advertising firms, and data brokers. To find the trackers on websites, the team checked out each hospital's homepage on January 26 using webXray, an open source tool that detects third-party HTTP requests and matches them to the organizations receiving the data. They also recorded the number of third-party cookies per page. One name in particular stood out, in terms of who was receiving website visitors' information. "In every study we've done, in any part of the health system, Google, whose parent company is Alphabet, is on nearly every page, including hospitals," [Dr Ari Friedman, an assistant professor of emergency medicine at the University of Pennsylvania] observed. "From there, it declines," he continued. "Meta was on a little over half of hospital webpages, and the Meta Pixel is notable because it seems to be one of the grabbier entities out there in terms of tracking."

Both Meta and Google's tracking technologies have been the subject of criminal complaints and lawsuits over the years -- as have some healthcare companies that shared data with these and other advertisers. In addition, between 20 and 30 percent of the hospitals share data with Adobe, Friedman noted. "Everybody knows Adobe for PDFs. My understanding is they also have a tracking division within their ad division." Others include telecom and digital marketing companies like The Trade Desk and Verizon, plus tech giants Oracle, Microsoft, and Amazon, according to Friedman. There are also analytics firms, including Hotjar, and data brokers such as Acxiom. "And two thirds of hospital websites had some kind of data transfer to a third-party domain that we couldn't even identify," he added. Of the 71 hospital website privacy policies that the team found, 69 addressed the types of user information that were collected. The most common were IP addresses (80 percent), web browser name and version (75 percent), pages visited on the website (73 percent), and the website from which the user arrived (73 percent). Only 56 percent of these policies identified the third-party companies receiving user information.
In the absence of a federal data privacy law in the U.S., Friedman recommends that users protect their personal information via the browser-based tools Ghostery and Privacy Badger, which identify and block transfers to third-party domains.
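As a purely illustrative aside: webXray itself is an open source Python tool, and the sketch below is not its code. It is a hypothetical, heavily simplified Java rendering of the underlying idea -- compare the registrable domain of every request a page makes against the page's own domain, and map known tracker domains to the organizations receiving the data. The hospital URL and the tiny domain-to-organization table are invented for the example.

    import java.net.URI;
    import java.util.List;
    import java.util.Map;

    // Hypothetical, simplified illustration of third-party request detection.
    // Real scanners like webXray use a large curated domain-ownership database;
    // the small table below is invented for the example.
    public class ThirdPartyRequestCheck {

        static final Map<String, String> KNOWN_TRACKERS = Map.of(
                "google-analytics.com", "Google/Alphabet",
                "doubleclick.net", "Google/Alphabet",
                "facebook.net", "Meta",
                "demdex.net", "Adobe");

        // Crude "registrable domain": the last two labels of the host name.
        static String registrableDomain(String url) {
            String host = URI.create(url).getHost();
            String[] labels = host.split("\\.");
            return labels.length < 2 ? host
                    : labels[labels.length - 2] + "." + labels[labels.length - 1];
        }

        public static void main(String[] args) {
            String page = "https://www.example-hospital.org/";
            List<String> requests = List.of(
                    "https://www.example-hospital.org/css/site.css",
                    "https://www.google-analytics.com/collect?v=1",
                    "https://connect.facebook.net/en_US/fbevents.js");

            String firstParty = registrableDomain(page);
            for (String url : requests) {
                String domain = registrableDomain(url);
                if (!domain.equals(firstParty)) {
                    String org = KNOWN_TRACKERS.getOrDefault(domain, "unidentified third party");
                    System.out.println("third-party request to " + domain + " -> " + org);
                }
            }
        }
    }

Run against a real homepage's request log, the same comparison is what produces findings like "Google on nearly every page."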
Open Source

Linux Foundation Launches Valkey As A Redis Fork (phoronix.com) 12

Michael Larabel reports via Phoronix: Given Redis's recent switch to dual source-available licensing for all releases going forward (the Redis Source Available License v2 and the Server Side Public License v1), the Linux Foundation announced today that it is backing Valkey, a fork of the Redis in-memory store that will remain an open-source alternative. Because of the licensing change, Valkey forks from Redis 7.2.4 and will keep the BSD 3-clause license. Google, AWS, Oracle, and others are helping form the new Valkey project.

The Linux Foundation press release shares: "To continue improving on this important technology and allow for unfettered distribution of the project, the community created Valkey, an open source high performance key-value store. Valkey supports the Linux, macOS, OpenBSD, NetBSD, and FreeBSD platforms. In addition, the community will continue working on its existing roadmap including new features such as a more reliable slot migration, dramatic scalability and stability improvements to the clustering system, multi-threaded performance improvements, triggers, new commands, vector search support, and more. Industry participants, including Amazon Web Services (AWS), Google Cloud, Oracle, Ericsson, and Snap Inc. are supporting Valkey. They are focused on making contributions that support the long-term health and viability of the project so that everyone can benefit from it."

Open Source

Linux Distributors' Alliance Continues Long-Term Support for Linux 4.14 (zdnet.com) 19

"Until recently, Linux kernel developers have been the ones keeping long-term support (LTS) versions of the Linux kernel patched and up to date," writes ZDNet.

"Then, because it was too much work with too little support, the Linux kernel developers decided to no longer support the older kernels." Greg Kroah-Hartman, the Linux kernel maintainer for the stable branch, announced that the Linux 4.14.336 release was the last maintenance update to the six-year-old LTS Linux 4.14 kernel series. It was the last of the line for 4.14. Or was it?

Kroah-Hartman had stated, "All users of the 4.14 kernel series must upgrade." Maybe not. OpenELA, a trade association of the Linux distributors CIQ (the company backing Rocky Linux), Oracle, and SUSE, is now offering — via its kernel-lts — a new lease on life for 4.14.

This renewed version, tagged with the format x.y.z-openela, is already out as v4.14.339-openela. OpenELA acknowledges the large debt it owes to Kroah-Hartman and Sasha Levin of the Linux kernel stable project, but underlines that its project is not affiliated with them or with any of the other upstream stable maintainers. That said, the OpenELA team will automatically pull most LTS-maintained stable-tree patches from the upstream stable branches. Where patches can't be applied cleanly, OpenELA kernel-lts maintainers will deal with the issues themselves. In addition, a digest of non-applied patches, in mbox format, will accompany each release of its LTS kernel.

"The OpenELA kernel-lts project is the first forum for enterprise Linux distribution vendors to pool our resources," an Oracle Linux SVP tells ZDNet, "and collaborate on those older kernels after upstream support for those kernels has ended." And the CEO of CIQ adds that after community support has ended, "We believe that open collaboration is the best way to maintain foundational enterprise infrastructure.

"Through OpenELA, vendors, users, and the open source community at large can work together to provide the longevity that professional IT organizations require for enterprise Linux."
China

China Intensifies Push To 'Delete America' From Its Technology (wsj.com) 160

A directive known as Document 79 ramps up Beijing's effort to replace U.S. tech with homegrown alternatives. From a report: For American tech companies in China, the writing is on the wall. It's also on paper, in Document 79. The 2022 Chinese government directive expands a drive that is muscling U.S. technology out of the country -- an effort some refer to as "Delete A," for Delete America. Document 79 was so sensitive that high-ranking officials and executives were only shown the order and weren't allowed to make copies, people familiar with the matter said. It requires state-owned companies in finance, energy and other sectors to replace foreign software in their IT systems by 2027.

American tech giants had long thrived in China as they hot-wired the country's meteoric industrial rise with computers, operating systems and software. Chinese leaders want to sever that relationship, driven by a push for self-sufficiency and concerns over the country's long-term security. The first targets were hardware makers. Dell, International Business Machines and Cisco Systems have gradually seen much of their equipment replaced by products from Chinese competitors.

Document 79, named for the numbering on the paper, targets companies that provide the software -- enabling daily business operations from basic office tools to supply-chain management. The likes of Microsoft and Oracle are losing ground in the field, one of the last bastions of foreign tech profitability in the country. The effort is just one salvo in a yearslong push by Chinese leader Xi Jinping for self-sufficiency in everything from critical technology such as semiconductors and fighter jets to the production of grain and oilseeds. The broader strategy is to make China less dependent on the West for food, raw materials and energy, and instead focus on domestic supply chains.

Oracle

Oracle's Plans for Java in 2024 (infoworld.com) 75

"Oracle's plans to evolve Java in 2024 involve OpenJDK projects," writes InfoWorld, citing a recent video by Oracle Java developer relations representative Nicolai Parlog. (Though many improvements may not be usable until 2025 or later...) - For Project Babylon, Parlog cited plans for code reflection, expanding the reflection API, and allowing transformation of Java code inside a method. The goal is to allow developers to write Java code that libraries then can interpret as a mathematical function, for example. The Babylon team in coming weeks plans to publish work on use cases such as auto-differentiating, C# LINQ emulation, and GPU programming.

- In Project Leyden, which is aimed at improving startup times, plans for 2024 involve refining the concept of condensers and working toward the production-readiness of prototype condensers.

- In Project Amber, current features in preview include string templates, a simplified main method, and statements before this() and super(). "I expect all three to finalize in 2024," said Parlog. Under exploration are capabilities such as primitive types in patterns and with expressions. (A short sketch of these preview features appears after this list.)

- In Project Valhalla, work will focus on value classes and objects, which provide class instances that have only final instance fields and lack object identity [to] significantly reduce the run time overhead of boxed Integer, Double, and Byte objects...

- In Project Lilliput, aimed at downsizing Java object headers in the HotSpot JVM and reducing Java's memory footprint, work now centers on polishing a fast-locking scheme.

- Project Panama, for interconnecting JVM and native C code, "has three irons in the fire," Parlog said. (A foreign-function sketch also appears below.)
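To make the Project Amber item concrete, here is a minimal, hedged sketch of two of the preview features Parlog mentioned: string templates and statements before super(). It assumes JDK 22 with preview features enabled; the class and field names are invented for the example.

    // Minimal sketch of two Project Amber preview features. Compile and run with:
    //   javac --enable-preview --release 22 AmberPreviewSketch.java
    //   java --enable-preview AmberPreviewSketch
    public class AmberPreviewSketch {

        static class Temperature {
            final double celsius;

            Temperature(double celsius) {
                // Statements before super() (preview): validate the argument
                // before the superclass constructor runs.
                if (Double.isNaN(celsius)) {
                    throw new IllegalArgumentException("celsius must be a number");
                }
                super();
                this.celsius = celsius;
            }
        }

        public static void main(String[] args) {
            Temperature t = new Temperature(21.5);
            // String templates (preview): embed expressions with \{...} via the STR processor.
            String report = STR."Current temperature: \{t.celsius} C";
            System.out.println(report);
        }

        // The "simplified main method" preview (unnamed classes and instance main methods)
        // would let a beginner's whole program be just:
        //     void main() { System.out.println("hi"); }
    }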
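And for the Project Panama item, here is a minimal sketch of the foreign function and memory API (java.lang.foreign) that the project develops, calling the C library's strlen without JNI. It assumes JDK 22, where the API is finalized (on JDK 21 it is a preview and the string-allocation method is named allocateUtf8String rather than allocateFrom).

    import java.lang.foreign.Arena;
    import java.lang.foreign.FunctionDescriptor;
    import java.lang.foreign.Linker;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.ValueLayout;
    import java.lang.invoke.MethodHandle;

    // Minimal sketch: call the C standard library's strlen via the FFM API (JDK 22).
    public class PanamaStrlenSketch {
        public static void main(String[] args) throws Throwable {
            Linker linker = Linker.nativeLinker();
            MethodHandle strlen = linker.downcallHandle(
                    linker.defaultLookup().find("strlen").orElseThrow(),
                    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

            try (Arena arena = Arena.ofConfined()) {
                // Allocate a NUL-terminated C string in native memory.
                MemorySegment cString = arena.allocateFrom("Project Panama");
                long length = (long) strlen.invokeExact(cString);
                System.out.println(length);   // prints 14
            }   // the confined arena frees the native memory here
        }
    }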

Open Source

Hans Reiser Sends a Letter From Prison (arstechnica.com) 181

In 2003, Hans Reiser answered questions from Slashdot's readers...

Today Wikipedia describes Hans Reiser as "a computer programmer, entrepreneur, and convicted murderer... Prior to his incarceration, Reiser created the ReiserFS computer file system, which may be used by the Linux kernel but which is now scheduled for removal in 2025, as well as its attempted successor, Reiser4."

This week alanw (Slashdot reader #1,822) spotted a development on the Linux kernel mailing list. "Hans Reiser (imprisoned for the murder of his wife) has written a letter, asking it to be published to Slashdot." Reiser writes: I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don't post directly because I am in prison for killing my wife Nina in 2006.

I am very sorry for my crime — a proper apology would be off topic for this forum, but available to any who ask.

A detailed apology for how I interacted with the Linux kernel community, and some history of V3 and V4, are included, along with descriptions of what the technical issues were. I have been attending prison workshops, and working hard on improving my social skills to aid my becoming less of a danger to society. The man I am now would do things very differently from how I did things then.

Click here for the rest of Reiser's introduction, along with a link to the full text of the letter...

The letter is dated November 26, 2023, and ends with an address where Reiser can be mailed. Ars Technica has a good summary of Reiser's lengthy letter from prison — along with an explanation for how it came to be. With the ReiserFS recently considered obsolete and slated for removal from the Linux kernel entirely, Fredrick R. Brennan, font designer and (now regretful) founder of 8chan, wrote to the filesystem's creator, Hans Reiser, asking if he wanted to reply to the discussion on the Linux Kernel Mailing List (LKML). Reiser, 59, serving a potential life sentence in a California prison for the 2006 murder of his estranged wife, Nina Reiser, wrote back with more than 6,500 words, which Brennan then forwarded to the LKML. It's not often you see somebody apologize for killing their wife, explain their coding decisions around balanced trees versus extensible hashing, and suggest that elementary schools offer the same kinds of emotional intelligence curriculum that they've worked through in prison, in a software mailing list. It's quite a document...

It covers, broadly, why Reiser believes his system failed to gain mindshare among Linux users, beyond the most obvious reason. This leads Reiser to detail the technical possibilities, his interpersonal and leadership failings and development, some lingering regrets about dealings with SUSE and Oracle and the Linux community at large, and other topics, including modern Russian geopolitics... Reiser asks that a number of people who worked on ReiserFS be included in "one last release" of the README, and to "delete anything in there I might have said about why they were not credited." He says prison has changed him in conflict resolution and with his "tendency to see people in extremes...."

Reiser writes that he understood the difficulty ahead in getting the Linux world to "shift paradigms" but lacked the understanding of how to "make friends and allies of people" who might initially have felt excluded. This is followed by a heady discussion of "balanced trees instead of extensible hashing," Oracle's history with implementing balanced trees, getting synchronicity just right, I/O schedulers, block size, seeks and rotational delays on magnetic hard drives, and tails. It leads up to a crucial decision in ReiserFS' development, the hard non-compatible shift from V3 to Reiser 4. Format changes, Reiser writes, are "unwanted by many for good reasons." But "I just had to fix all these flaws, fix them and make a filesystem that was done right. It's hard to explain why I had to do it, but I just couldn't rest as long as the design was wrong and I knew it was wrong," he writes. SUSE didn't want a format change, but Reiser, with hindsight, sees his pushback as "utterly inarticulate and unsociable." The push for Reiser 4 in the Linux kernel was similar, "only worse...."

He encourages people to "allow those who worked so hard to build a beautiful filesystem for the users to escape the effects of my reputation." Under a "Conclusion" sub-heading, Reiser is fairly succinct in summarizing a rather wide-ranging letter, minus the minutiae about filesystem architecture.

I wish I had learned the things I have been learning in prison about talking through problems, and believing I can talk through problems and doing it, before I had married or joined the LKML. I hope that day when they teach these things in Elementary School comes.

I thank Richard Stallman for his inspiration, software, and great sacrifices,

It has been an honor to be of even passing value to the users of Linux. I wish all of you well.



It both is and is not a response to Brennan's initial prompt, asking how he felt about ReiserFS being slated for exclusion from the Linux kernel. There is, at the moment, no reply to the thread started by Brennan.

The Almighty Buck

The World Could Get Its First Trillionaire Within 10 Years (apnews.com) 287

An anonymous reader quotes a report from the Associated Press: The world could have its first trillionaire within a decade, anti-poverty organization Oxfam International said Monday in its annual assessment of global inequalities timed to the gathering of political and business elites at the Swiss ski resort of Davos. Oxfam, which for years has been trying to highlight the growing disparities between the super-rich and the bulk of the global population during the World Economic Forum's annual meeting, reckons the gap has been "supercharged" since the coronavirus pandemic.

The group said the fortunes of the five richest men -- Tesla CEO Elon Musk, Bernard Arnault and his family of luxury company LVMH, Amazon founder Jeff Bezos, Oracle founder Larry Ellison and investment guru Warren Buffett -- have spiked by 114% in real terms since 2020, when the world was reeling from the pandemic. Oxfam's interim executive director said the report showed that the world is entering a "decade of division." "We have the top five billionaires, they have doubled their wealth. On the other hand, almost 5 billion people have become poorer," Amitabh Behar said in an interview in Davos, Switzerland, where the forum's annual meeting takes place this week.

"Very soon, Oxfam predicts that we will have a trillionaire within a decade," Behar said, referring to a person who has a thousand billion dollars. "Whereas to fight poverty, we need more than 200 years." If someone does reach that trillion-dollar milestone -- and it could be someone not even on any list of richest people right now -- he or she would have the same value as oil-rich Saudi Arabia. [...] To calculate the top five richest billionaires, Oxfam used figures from Forbes as of November 2023. Their total wealth then was $869 billion, up from $340 billion in March 2020, a nominal increase of 155%. For the bottom 60% of the global population, Oxfam used figures from the UBS Global Wealth Report 2023 and from the Credit Suisse Global Wealth Databook 2019. Both used the same methodology.
Some of the measures Oxfam said should be considered to reduce global inequality include the permanent taxation of the wealthiest in every country, more effective taxation of big corporations and a renewed drive against tax avoidance. "To end extreme inequality, governments must radically redistribute the power of billionaires and corporations back to ordinary people," reports Oxfam. "A more equal world is possible if governments effectively regulate and reimagine the private sector."
Operating Systems

Biggest Linux Kernel Release Ever Welcomes bcachefs File System, Jettisons Itanium (theregister.com) 52

Linux kernel 6.7 has been released, including support for the new next-gen copy-on-write (COW) bcachefs file system. The Register reports: Linus Torvalds announced the release on Sunday, noting that it is "one of the largest kernel releases we've ever had." Among the bigger and more visible changes are a whole new file system, along with fresh functionality for several existing ones; improved graphics support for several vendors' hardware; and the removal of an entire CPU architecture. [...] The single biggest feature of 6.7 is the new bcachefs file system, which we examined in March 2022. As this is the first release of Linux to include the new file system, it definitely would be premature to trust any important data to it yet, but this is a welcome change. The executive summary is that bcachefs is a next-generation file system that, like Btrfs and ZFS, provides COW functionality. COW enables the almost instant creation of "snapshots" of all or part of a drive or volume, which enables the OS to make disk operations transactional: In other words, to provide an "undo" function for complex sets of disk write operations.
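As an aside, the copy-on-write principle described above is not unique to file systems, and a standard Java class shows it in miniature. The sketch below is in no way bcachefs code; it only illustrates why COW makes snapshots cheap: readers keep the version they started with, because every write produces a fresh copy instead of mutating in place. A COW file system applies the same idea to disk blocks.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Toy illustration of copy-on-write semantics (not file system code).
    public class CowSketch {
        public static void main(String[] args) {
            List<String> volume = new CopyOnWriteArrayList<>(List.of("blockA", "blockB"));

            var snapshot = volume.iterator();   // a cheap "snapshot" of the current state
            volume.set(1, "blockB-modified");   // the write goes into a new copy of the array

            snapshot.forEachRemaining(System.out::println);  // still prints blockA, blockB
            System.out.println(volume);                      // [blockA, blockB-modified]
        }
    }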

Having a COW file system on Linux isn't new. The existing next-gen file system in the kernel, Btrfs, also supports COW snapshots. The version in 6.7 sees several refinements. It inherits a feature implemented for Steam OS: Two Btrfs file systems with the same ID can be mounted simultaneously, for failover scenarios. It also has improved quota support and a new raid_stripe_tree that improves handling of arrays of dissimilar drives. Btrfs remains somewhat controversial. Red Hat banished it from RHEL years ago (although Oracle Linux still offers it) but SUSE's distros depend heavily upon it. It will be interesting to see how quickly SUSE's Snapper tool gains support for bcachefs: This new COW contender may reveal unquestioned assumptions built into the code. Since Snapper is also used in several non-SUSE distros, including Spiral Linux, Garuda, and siduction, they're tied to Btrfs as well.

The other widely used FOSS next-gen file system, OpenZFS, also supports COW, but licensing conflicts prevent ZFS being fully integrated into the Linux kernel. So although multiple distros (such as NixOS, Proxmox, TrueNAS Scale, Ubuntu, and Void Linux) support ZFS, it must remain separate and distinct. This results in limitations, such as the ZFS Advanced Read Cache being separate from Linux's page cache. Bcachefs is all-GPL and doesn't suffer from such limitations. It aims to supply the important features of ZFS, such as integrated volume management, while being as fast as ext4 or XFS, and also surpass Btrfs in both performance and, crucially, reliability.
A full list of changes in this release can be viewed via KernelNewbies.
AI

GPT and Other AI Models Can't Analyze an SEC Filing, Researchers Find (cnbc.com) 50

According to researchers from a startup called Patronus AI, ChatGPT and other chatbots that rely on large language models frequently fail to answer questions derived from Securities and Exchange Commission filings. CNBC reports: Even the best-performing artificial intelligence model configuration they tested, OpenAI's GPT-4-Turbo, when armed with the ability to read nearly an entire filing alongside the question, only got 79% of answers right on Patronus AI's new test, the company's founders told CNBC. Oftentimes, the so-called large language models would refuse to answer, or would "hallucinate" figures and facts that weren't in the SEC filings. "That type of performance rate is just absolutely unacceptable," Patronus AI co-founder Anand Kannappan said. "It has to be much much higher for it to really work in an automated and production-ready way." [...]

Patronus AI worked to write a set of more than 10,000 questions and answers drawn from SEC filings from major publicly traded companies, which it calls FinanceBench. The dataset includes the correct answers, and also where exactly in any given filing to find them. Not all of the answers can be pulled directly from the text, and some questions require light math or reasoning. Co-founders Rebecca Qian and Kannappan say it's a test that gives a "minimum performance standard" for language AI in the financial sector. Patronus AI tested four language models: OpenAI's GPT-4 and GPT-4-Turbo, Anthropic's Claude 2 and Meta's Llama 2, using a subset of 150 of the questions it had produced. It also tested different configurations and prompts, such as one setting where the OpenAI models were given the exact relevant source text in the question, which it called "Oracle" mode. In other tests, the models were told where the underlying SEC documents would be stored, or given "long context," which meant including nearly an entire SEC filing alongside the question in the prompt.

GPT-4-Turbo failed at the startup's "closed book" test, where it wasn't given access to any SEC source document. It failed to answer 88% of the 150 questions it was asked, and only produced a correct answer 14 times. It was able to improve significantly when given access to the underlying filings. In "Oracle" mode, where it was pointed to the exact text for the answer, GPT-4-Turbo answered the question correctly 85% of the time, but still produced an incorrect answer 15% of the time. But that's an unrealistic test because it requires human input to find the exact pertinent place in the filing -- the exact task that many hope that language models can address. Llama 2, an open-source AI model developed by Meta, had some of the worst "hallucinations," producing wrong answers as much as 70% of the time, and correct answers only 19% of the time, when given access to an array of underlying documents. Anthropic's Claude 2 performed well when given "long context," where nearly the entire relevant SEC filing was included along with the question. It could answer 75% of the questions it was posed, gave the wrong answer for 21%, and failed to answer only 3%. GPT-4-Turbo also did well with long context, answering 79% of the questions correctly, and giving the wrong answer for 17% of them.

AMD

Meta and Microsoft To Buy AMD's New AI Chip As Alternative To Nvidia's (cnbc.com) 16

Meta, OpenAI, and Microsoft said at an AMD investor event today that they will use AMD's newest AI chip, the Instinct MI300X, as an alternative to Nvidia's expensive graphics processors. "If AMD's latest high-end chip is good enough for the technology companies and cloud service providers building and serving AI models when it starts shipping early next year, it could lower costs for developing AI models and put competitive pressure on Nvidia's surging AI chip sales growth," reports CNBC. From the report: "All of the interest is in big iron and big GPUs for the cloud," AMD CEO Lisa Su said Wednesday. AMD says the MI300X is based on a new architecture, which often leads to significant performance gains. Its most distinctive feature is that it has 192GB of a cutting-edge, high-performance type of memory known as HBM3, which transfers data faster and can fit larger AI models. Su directly compared the MI300X and the systems built with it to Nvidia's main AI GPU, the H100. "What this performance does is it just directly translates into a better user experience," Su said. "When you ask a model something, you'd like it to come back faster, especially as responses get more complicated."

The main question facing AMD is whether companies that have been building on Nvidia will invest the time and money to add another GPU supplier. "It takes work to adopt AMD," Su said. AMD on Wednesday told investors and partners that it had improved its software suite called ROCm to compete with Nvidia's industry standard CUDA software, addressing a key shortcoming that had been one of the primary reasons AI developers currently prefer Nvidia. Price will also be important. AMD didn't reveal pricing for the MI300X on Wednesday, but Nvidia's can cost around $40,000 for one chip, and Su told reporters that AMD's chip would have to cost less to purchase and operate than Nvidia's in order to persuade customers to buy it.

On Wednesday, AMD said it had already signed up some of the companies most hungry for GPUs to use the chip. Meta and Microsoft were the two largest purchasers of Nvidia H100 GPUs in 2023, according to a recent report from research firm Omdia. Meta said it will use MI300X GPUs for AI inference workloads such as processing AI stickers, image editing, and operating its assistant. Microsoft's CTO, Kevin Scott, said the company would offer access to MI300X chips through its Azure web service. Oracle's cloud will also use the chips. OpenAI said it would support AMD GPUs in one of its software products, called Triton, which isn't a large language model like GPT but is used in AI research to access chip features.

Programming

Java Tries a New Way to Use Multithreading: Structured Concurrency (infoworld.com) 96

"Structured concurrency is a new way to use multithreading in Java," reports InfoWorld.

"It allows developers to think about work in logical groups while taking advantage of both traditional and virtual threads." Available in preview in Java 21, structured concurrency is a key aspect of Java's future, so now is a good time to start working with it... Java's thread model makes it a strong contender among concurrent languages, but multithreading has always been inherently tricky. Structured concurrency allows you to use multiple threads with structured programming syntax. In essence, it provides a way to write concurrent software using familiar program flows and constructs. This lets developers focus on the business at hand, instead of the orchestration of threading.

As the JEP for structured concurrency says, "If a task splits into concurrent subtasks then they all return to the same place, namely the task's code block." Virtual threads, now an official feature of Java, create the possibility of cheaply spawning threads to gain concurrent performance. Structured concurrency provides the simple syntax to do so. As a result, Java now has a unique and highly-optimized threading system that is also easy to understand...

Between virtual threads and structured concurrency, Java developers have a compelling new mechanism for breaking up almost any code into concurrent tasks without much overhead... Any time you encounter a bottleneck where many tasks are occurring, you can easily hand them all off to the virtual thread engine, which will find the best way to orchestrate them. The new thread model with structured concurrency also makes it easy to customize and fine-tune this behavior. It will be very interesting to see how developers use these new concurrency capabilities in our applications, frameworks, and servers going forward.

It involves a new class StructuredTaskScope located in the java.util.concurrent library. (InfoWorld points out that "you'll need to use --enable-preview and --source 21 or --source 22 to enable structured concurrency.")

Their reporter shared an example on GitHub, and there's more examples in the Java 21 documentation. "The structured concurrency documentation includes an example of collecting subtask results as they succeed or fail and then returning the results."
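Separately from the GitHub example mentioned above, here is a minimal sketch of the API in use, assuming JDK 21 with preview features enabled as described. The fetchUser and fetchOrders methods are invented stand-ins for real blocking calls.

    import java.util.concurrent.StructuredTaskScope;

    // Minimal structured-concurrency sketch (Java 21 preview; run with --enable-preview).
    // Both subtasks are forked inside one scope; join() waits for them, throwIfFailed()
    // propagates the first failure, and no subtask can outlive the try-with-resources block.
    public class StructuredConcurrencySketch {

        public static void main(String[] args) throws Exception {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var user   = scope.fork(StructuredConcurrencySketch::fetchUser);
                var orders = scope.fork(StructuredConcurrencySketch::fetchOrders);

                scope.join().throwIfFailed();   // wait for both; rethrow if either failed

                // Both subtasks succeeded; combine their results on the current thread.
                System.out.println(user.get() + " | " + orders.get());
            }
        }

        static String fetchUser() throws InterruptedException {
            Thread.sleep(100);                  // simulate a slow network call
            return "user: alice";
        }

        static String fetchOrders() throws InterruptedException {
            Thread.sleep(150);
            return "orders: 3";
        }
    }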
AI

Nvidia Upgrades Processor as Rivals Challenge Its AI Dominance (bloomberg.com) 39

Nvidia, the world's most valuable chipmaker, is updating its H100 artificial intelligence processor, adding more capabilities to a product that has fueled its dominance in the AI computing market. From a report: The new model, called the H200, will get the ability to use high-bandwidth memory, or HBM3e, allowing it to better cope with the large data sets needed for developing and implementing AI, Nvidia said Monday. Amazon's AWS, Alphabet's Google Cloud and Oracle's Cloud Infrastructure have all committed to using the new chip starting next year.

The current version of the Nvidia processor -- known as an AI accelerator -- is already in famously high demand. It's a prized commodity among technology heavyweights like Larry Ellison and Elon Musk, who boast about their ability to get their hands on the chip. But the product is facing more competition: AMD is bringing its rival MI300 chip to market in the fourth quarter, and Intel claims that its Gaudi 2 model is faster than the H100. With the new product, Nvidia is trying to keep up with the size of data sets used to create AI models and services, it said. Adding the enhanced memory capability will make the H200 much faster at bombarding software with data -- a process that trains AI to perform tasks such as recognizing images and speech.

Red Hat Software

How Red Hat Divided the Open Source Community (msn.com) 191

In Raleigh, North Carolina — the home of Red Hat — local newspaper the News & Observer takes an in-depth look at the "announcement that split the open source software community." (Alternate URL here.) [M]any saw Red Hat's decision to essentially paywall Red Hat Enterprise Linux, or RHEL, as sacrilegious... Red Hat employees were also conflicted about the new policy, [Red Hat Vice President Mike] McGrath acknowledged. "I think a lot of even internal associates didn't fully understand what we had announced and why," he said...

At issue, he wrote, were emerging competitors who copied Red Hat Enterprise Linux, down to even the code's mistakes, and then offered these Red Hat-replicas to customers for free. These weren't community members adding value, he contended, but undercutting rivals. And in a year when Red Hat laid off 4% of its total workforce, McGrath said, the company could not justify allowing this to continue. "I feel that while this was a difficult decision between community and business, we're still on the right side of it," he told the News & Observer. Not everyone agrees...

McGrath offered little consolation to customers who were relying on one-for-one versions of RHEL. They could stay with the downstream distributions, find another provider, or pay for Red Hat. "I think (people) were just so used to the way things work," he said. "There's a vocal group of people that probably need Red Hat's level of support, but simply don't want to pay for it. And I don't really have... there's not much we can tell them."

Since its RHEL decision, Red Hat has secured several prominent partnerships. In September, the cloud-based software company Salesforce moved 200,000 of its systems from the free CentOS Linux to Red Hat Enterprise Linux. The same month, Red Hat announced RHEL would begin to support Oracle's cloud infrastructure. Oracle was one of the few major companies this summer to publicly criticize Red Hat for essentially paywalling its most popular code. On Oct. 24, Red Hat notched another win when the data security firm Cohesity said it would also ditch CentOS Linux for RHEL.

The article delves into the history of Red Hat — and of Linux — before culminating with this quote from McGrath. "I think long gone are the times of that sort of romantic view of hobbyists working in their spare time to build open source. I think there's still room for that — we still have that — but quite a lot of open source is now built from people that are paid full time."

Red Hat likes to point out that 90% of Fortune 500 companies use its services, according to the article. But it also quotes Jonathan Wright, infrastructure team lead at the nonprofit AlmaLinux, as saying that Red Hat played "fast and loose" with the GPL. The newspaper then adds that "For many open source believers, such a threat to its hallowed text isn't forgivable."
Linux

OpenELA Drops First RHEL, 'Enterprise Linux' Compatible Source Code (theregister.com) 39

Long-time Slashdot reader williamyf writes: In the ongoing battle between Red Hat and other "Enterprise Linux -- RHEL compatible" distros, today the OpenELA (Open Enterprise Linux Association), a body consisting of CIQ (stewards of Rocky Linux), Oracle, and SUSE, released source code for a generic "Enterprise Linux" distro (sources are available for RHEL 8 and RHEL 9). A steering committee for the foundation was also formed.

War between Red Hat and what they call "clones" (mostly Oracle; CentOS, Rocky, Alma and others seem to be collateral damage) has been raging for years. First, in 2011, Red Hat changed the way it distributed kernel patches. Then, in 2014, Red Hat absorbed CentOS. In 2019, Red Hat transformed CentOS into CentOS Stream and shortened the support timetable for CentOS 8, all out of the blue. Then, in 2023, Red Hat severely restricted source code access for non-customers.

What will Red Hat's reaction to this development be? My bet is that they will stop releasing the source code of distro modules under the BSD, MIT, Apache, and MPL licenses for RHEL and, in certain windows, for CentOS Stream. What is your bet? Let us know in the comments.

Red Hat Software

CIQ, Oracle and SUSE Unite Behind OpenELA To Take on Red Hat Enterprise Linux (zdnet.com) 18

An anonymous reader shares a report: When Mike McGrath, Red Hat's vice president of Core Platforms, announced that Red Hat was putting new restrictions on who could access Red Hat Enterprise Linux (RHEL)'s code, other Linux companies that depended on RHEL's code for their own distro releases were, in a word, unhappy. Three of them, CIQ, Oracle, and SUSE, came together to form the Open Enterprise Linux Association (OpenELA). Their united goal was to foster "the development of distributions compatible with Red Hat Enterprise Linux (RHEL) by providing open and free enterprise Linux source code." Now, the first OpenELA code release is available.

As Thomas Di Giacomo, SUSE's chief technology and product officer, said in a statement, "We're pleased to deliver on our promise of making source code available and to continue our work together to provide choice to our customers while we ensure that Enterprise Linux source code remains freely accessible to the public." Why are they doing this? Gregory Kurtzer, CIQ's CEO, and Rocky Linux's founder, explained: "Organizations worldwide standardized on CentOS because it was freely available, followed the Enterprise Linux standard, and was well supported. After CentOS was discontinued, it left not only a gaping hole in the ecosystem but also clearly showed how the community needs to come together and do better. OpenELA is exactly that -- the community's answer to ensuring a collaborative and stable future for all professional IT departments and enterprise use cases."

Open Source

AlmaLinux Stays Red Hat Enterprise Linux Compatible Without Red Hat Code (zdnet.com) 34

AlmaLinux is building a Red Hat Enterprise Linux (RHEL)-compatible distribution without any Red Hat code. Instead, AlmaLinux OS will aim to be Application Binary Interface (ABI) compatible and use the CentOS Stream source code that Red Hat continues to offer. Additional code is pulled from Red Hat Universal Base Images and from upstream Linux code. Benny Vasquez, chairperson of the AlmaLinux OS Foundation, explained how all this works at the open-source community convention All Things Open. ZDNet's Steven Vaughan-Nichols reports: The hardest part is Red Hat's Linux kernel updates because, added Vasquez, "you can't get those kernel updates without violating Red Hat's licensing agreements." Therefore, she continued, "What we do is we pull the security patches from various other sources, and, if nothing else, we can find them when Oracle releases them." Vasquez did note one blessing from this change in production: AlmaLinux, no longer bound to Red Hat's releases, has been able to release upstream security fixes faster than Red Hat. "For example, the AMD microcode exploits were patched before Red Hat because they took a little bit of extra time to get out the door. We then pulled in, tested, and out the door about a week ahead of them." The overall goal remains to maintain RHEL compatibility. "Any breaking changes between RHEL and AlmaLinux, any application that stops working, is a bug and must be fixed."

That's not to say AlmaLinux will be simply an excellent RHEL clone going forward. It plans to add features of its own. For instance, Red Hat users who want programs not bundled in RHEL often turn to Extra Packages for Enterprise Linux (EPEL). These typically are programs included in Fedora Linux. Besides supporting EPEL software, AlmaLinux has its own extra software repository -- called Synergy -- which holds programs that the AlmaLinux community wants but are not available in either EPEL or RHEL. If one such program is subsequently added to EPEL or RHEL, AlmaLinux drops it from Synergy to prevent confusion and duplication of effort.

This has not been an easy road for AlmaLinux. Even a 1% code difference is a lot to write and maintain. For example, when AlmaLinux tried to patch CentOS Stream code to fix a problem, Red Hat was downright grumpy about AlmaLinux's attempt to fix a security hole. Vasquez acknowledged it was tough sledding at first, but noted: "The good news is that they have been improving the process, and things will look a little bit smoother." AlmaLinux, she noted, is also not so much worried as aware that Red Hat may throw a monkey wrench into their efforts. Vasquez added: "Internally, we're working on stopgap things we'd need to do to anticipate Red Hat changing everything terribly." She doesn't think Red Hat will do it, but "we want to be as prepared as possible."

Java

C# Challenges Java in Programming Language Popularity (infoworld.com) 109

"The gap between C# and Java never has been so small," according to October's update for TIOBE's "Programming Community Index".

"Currently, the difference is only 1.2%, and if the trends remain this way, C# will surpass Java in about 2 month's time." Java shows the largest decline of -3.92% and C# the largest gain of +3.29% of all programming languages (annually).

The two languages have always been used in similar domains and thus have been competitors for more than 2 decades now. Java's decline in popularity is mainly caused by Oracle's decision to introduce a paid license model after Java 8. Microsoft took the opposite approach with C#. In the past, C# could only be used as part of commercial tool Visual Studio. Nowadays, C# is free and open source and it's embraced by many developers.

There are also other reasons for Java's decline. First of all, the Java language definition has not changed much the past few years and Kotlin, its fully compatible direct competitor, is easier to use and free of charge.

"Java remains a critical language in enterprise computing," argues InfoWorld, "with Java 21 just released last month and Java 22 due next March. And free open source binaries of Java still are available via OpenJDK." InfoWorld also notes TIOBE's ranking is different than other indexes. TIOBE's top 10:
  1. Python (14.82%)
  2. C (12.08%)
  3. C++ (10.67%)
  4. Java (8.92%)
  5. C# (7.71%)
  6. JavaScript (2.91%)
  7. Visual Basic (2.13%)
  8. PHP (1.9%)
  9. SQL (1.78%)
  10. Assembly (1.64%)

And here's the Pypl Popularity of Programming Language (based on searches for language tutorials on Google):

  1. Python, with a 28.05% share
  2. Java (15.88%)
  3. JavaScript (9.27%)
  4. C# (6.79%)
  5. C/C++ (6.59%)
  6. PHP (4.86%)
  7. R (4.45%)
  8. TypeScript (2.93%)
  9. Swift (2.69%)
  10. Objective-C (2.29%)

Privacy

Password-Stealing Linux Malware Served For 3 Years and No One Noticed (arstechnica.com) 54

An anonymous reader quotes a report from Ars Technica: A download site surreptitiously served Linux users malware that stole passwords and other sensitive information for more than three years until it finally went quiet, researchers said on Tuesday. The site, freedownloadmanager[.]org, offered a benign version of a Linux offering known as the Free Download Manager. Starting in 2020, the same domain at times redirected users to the domain deb.fdmpkg[.]org, which served a malicious version of the app. The version available on the malicious domain contained a script that downloaded two executable files to the /var/tmp/crond and /var/tmp/bs file paths. The script then used the cron job scheduler to cause the file at /var/tmp/crond to launch every 10 minutes. With that, devices that had installed the booby-trapped version of Free Download Manager were permanently backdoored.

After accessing an IP address for the malicious domain, the backdoor launched a reverse shell that allowed the attackers to remotely control the infected device. Researchers from Kaspersky, the security firm that discovered the malware, then ran the backdoor on a lab device to observe how it behaved. "This stealer collects data such as system information, browsing history, saved passwords, cryptocurrency wallet files, as well as credentials for cloud services (AWS, Google Cloud, Oracle Cloud Infrastructure, Azure)," the researchers wrote in a report on Tuesday. "After collecting information from the infected machine, the stealer downloads an uploader binary from the C2 server, saving it to /var/tmp/atd. It then uses this binary to upload stealer execution results to the attackers' infrastructure."

Oracle

Largest Local Government Body In Europe Goes Under Amid Oracle Disaster (theregister.com) 110

Birmingham City Council, the largest local authority in Europe, has declared itself in financial distress after troubled Oracle project costs ballooned from $25 million to around $125.5 million. The Register reports: Contributing to the publication of a legal Section 114 Notice, which says the $4.3 billion revenue organization is unable to balance the books, is a bill of up to $954 million to settle equal pay claims. In a statement today, councillors John Cotton and Sharon Thompson, leader and deputy leader respectively, said the authority was also hit by financial stress owing to issues with the implementation of its Oracle IT system. The council has made a request to the Local Government Association for additional strategic support, the statement said.

In May, Birmingham City Council said it was set to pay up to $125.5 million for its Oracle ERP system -- potentially a fourfold increase on initial estimated expenses -- in a project suffering from delays, cost over-runs, and a lack of controls. After grappling with the project to replace SAP for core HR and finance functions since 2018, the council reviewed the plan in 2019, 2020, and again in 2021, when the total implementation cost for the project almost doubled to $48.5 million. The project, dubbed Financial and People, was "crucial to an organisation of Birmingham City Council's size," a spokesperson said at the time. Cotton said the system had a problem with how it was "tracking our financial transactions and HR transactions issues as well. That's got to be fixed," he said.

Earlier this year, one insider told The Register that Oracle Fusion, the cloud-based ERP system the council is moving to, "is not a product that is suitable for local authorities, because it's very much geared towards a manufacturing/trading organization." They said the previous SAP system had been heavily customized to meet the council's needs and it was struggling to recreate these functions in Oracle.
