AI

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels? (msn.com) 56

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..."

They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a Google search. One large data center complex in Iowa owned by Meta uses as much power annually as 7 million laptops running eight hours every day, based on data shared publicly by the company...
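The laptop comparison can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the 60 W average laptop draw is an assumption, not a figure from the article.

```python
# Back-of-the-envelope check of the "7 million laptops" comparison.
# ASSUMPTION: an average laptop draws about 60 W while in use; the article
# does not state a wattage, so this is purely illustrative.
LAPTOPS = 7_000_000
HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365
WATTS = 60  # assumed average draw

watt_hours = LAPTOPS * HOURS_PER_DAY * DAYS_PER_YEAR * WATTS
terawatt_hours = watt_hours / 1e12
print(f"{terawatt_hours:.2f} TWh per year")  # prints "1.23 TWh per year"
```

Under that assumed draw, the complex would consume on the order of 1.2 TWh annually. The exact number depends entirely on the assumed wattage; the point is the order of magnitude, which is comparable to a small country's residential usage.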

[Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement.

The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show: heavily polluting plants that become necessary to stabilize the overall power grid and make sure everyone has enough electricity.

The article quotes a project director at the nonprofit Data & Society, which tracks the effects of AI; the director accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment."

The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country."
Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
Math

Mathematician Reveals 'Equals' Has More Than One Meaning In Math (sciencealert.com) 118

"It turns out that mathematicians actually can't agree on the definition of what makes two things equal, and that could cause some headaches for computer programs that are increasingly being used to check mathematical proofs," writes Clare Watson via ScienceAlert. The issue has prompted British mathematician Kevin Buzzard to re-examine the concept of equality to "challenge various reasonable-sounding slogans about equality." The research has been posted on arXiv. From the report: In familiar usage, the equals sign sets up equations that describe different mathematical objects that represent the same value or meaning, something which can be proven with a few switcharoos and logical transformations from side to side. For example, the integer 2 can describe a pair of objects, as can 1 + 1. But a second definition of equality has been used amongst mathematicians since the late 19th century, when set theory emerged. Set theory has evolved and with it, mathematicians' definition of equality has expanded too. A set like {1, 2, 3} can be considered 'equal' to a set like {a, b, c} because of an implicit understanding called canonical isomorphism, which compares similarities between the structures of groups.
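The distinction the article is pointing at can be sketched in code. Literal equality and "same structure up to relabeling" are genuinely different predicates; the function names below are made up for illustration, and bare cardinality is only the simplest stand-in for what mathematicians mean by canonical isomorphism.

```python
# Two notions of "equal" for finite sets:
#   1. literal equality: the sets contain the same elements
#   2. isomorphism: a bijection (one-to-one pairing) exists between them;
#      for structureless finite sets this reduces to equal cardinality
def literally_equal(a: set, b: set) -> bool:
    return a == b

def isomorphic(a: set, b: set) -> bool:
    # Any pairing of elements works for bare sets, so a bijection
    # exists exactly when the sizes match.
    return len(a) == len(b)

x = {1, 2, 3}
y = {"a", "b", "c"}
print(literally_equal(x, y))  # False: the elements differ
print(isomorphic(x, y))       # True: 1->a, 2->b, 3->c is a bijection
```

Canonical isomorphism in actual mathematics carries far more structure than a size check, but the tension Buzzard describes is exactly that these two predicates disagree, and a proof assistant has to commit to one of them.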

"These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well," Buzzard told New Scientist's Alex Wilkins. However, taking canonical isomorphism to mean equality is now causing "some real trouble," Buzzard writes, for mathematicians trying to formalize proofs -- including decades-old foundational concepts -- using computers. "None of the [computer] systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol," Buzzard told Wilkins, referring to Alexander Grothendieck, a leading mathematician of the 20th century who relied on set theory to describe equality.

Some mathematicians think they should just redefine mathematical concepts to formally equate canonical isomorphism with equality. Buzzard disagrees. He thinks the incongruence between mathematicians and machines should prompt math minds to rethink what exactly they mean by mathematical concepts as foundational as equality so computers can understand them. "When one is forced to write down what one actually means and cannot hide behind such ill-defined words," Buzzard writes, "one sometimes finds that one has to do extra work, or even rethink how certain ideas should be presented."

AI

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo (venturebeat.com) 108

Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting way ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities.

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but only supported 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 and context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.

Microsoft

The Verge's David Pierce Reports On the Excel World Championship From Vegas (theverge.com) 29

In a featured article for The Verge, David Pierce explores the world of competitive Excel, highlighting its rise from a hobbyist activity to a potential esport, showcased during the Excel World Championship in Las Vegas. Top spreadsheet enthusiasts competed at the MGM Grand to solve complex Excel challenges, emphasizing the transformative power and ubiquity of spreadsheets in both business and entertainment. An anonymous reader quotes an excerpt from the report: Competitive Excel has been around for years, but only in a hobbyist way. Most of the people in this room full of actuaries, analysts, accountants, and investors play Excel the way I play Scrabble or do the crossword -- exercising your brain using tools you understand. But last year's competition became a viral hit on ESPN and YouTube, and this year, the organizers are trying to capitalize. After all, someone points out to me, poker is basically just math, and it's all over TV. Why not spreadsheets? Excel is a tool. It's a game. Now it hopes to become a sport. I've come to realize in my two days in this ballroom that understanding a spreadsheet is like a superpower. The folks in this room make their living on their ability to take some complex thing -- a company's sales, a person's lifestyle, a region's political leanings, a race car -- and pull it apart into its many component pieces. If you can reduce the world down to a bunch of rows and columns, you can control it. Manipulate it. Build it and rebuild it in a thousand new ways, with a couple of hotkeys and an undo button at the ready. A good spreadsheet shows you the universe and gives you the ability to create new ones. And the people in this room, in their dad jeans and short-sleeved button-downs, are the gods on Olympus, bending everything to their will.

There is one inescapably weird thing about competitive Excel: spreadsheets are not fun. Spreadsheets are very powerful, very interesting, very important, but they are for work. Most of what happens at the FMWC is, in almost every practical way, indistinguishable from the normal work that millions of people do in spreadsheets every day. You can gussy up the format, shorten the timelines, and raise the stakes all you want -- the reality is you're still asking a bunch of people who make spreadsheets for a living to just make more spreadsheets, even if they're doing it in Vegas. You really can't overstate how important and ubiquitous spreadsheets really are, though. "Electronic spreadsheets" actually date back earlier than computers and are maybe the single most important reason computers first became mainstream. In the late 1970s, a Harvard MBA student named Dan Bricklin started to dream up a software program that could automatically do the math he was constantly doing and re-doing in class. "I imagined a magic blackboard that if you erased one number and wrote a new thing in, all of the other numbers would automatically change, like word processing with numbers," he said in a 2016 TED Talk. This sounds quaint and obvious now, but it was revolutionary then. [...]


IOS

Apple Made an iPad Calculator App After 14 Years (theverge.com) 62

Jay Peters reports via The Verge: The iPad is finally getting a Calculator app as part of iPadOS 18. The long-requested app was just announced by Apple at WWDC 2024. On its face, the app looks a lot like the calculator you might be familiar with from iOS. But it also supports Apple Pencil, meaning that you can write down math problems and the app will solve them thanks to a feature Apple calls Math Notes. Other features included in iPadOS 18 include a new, customizable floating tab bar; enhanced SharePlay functionality for easier screen sharing and remote control of another person's iPad; and Smart Script, a handwriting feature that refines and improves legibility using machine learning.
Desktops (Apple)

Apple Unveils macOS 15 'Sequoia' at WWDC, Introduces Window Tiling and iPhone Mirroring (arstechnica.com) 35

At its Worldwide Developers Conference, Apple formally introduced macOS 15, codenamed "Sequoia." The new release combines features from iOS 18 with Mac-specific improvements. One notable addition is automated window tiling, allowing users to arrange windows on their screen without manual resizing or switching to full-screen mode. Another feature, iPhone Mirroring, streams the iPhone's screen to the Mac, enabling app use with the Mac's keyboard and trackpad while keeping the phone locked for privacy.

Gamers will appreciate the second version of Apple's Game Porting Toolkit, simplifying the process of bringing Windows games to macOS and Mac games to iPhone and iPad. Sequoia also incorporates changes from iOS and iPadOS, such as RCS support and expanded Tapback reactions in Messages, a redesigned Calculator app, and the Math Notes feature for typed equations in Notes. Additionally, all Apple platforms and Windows will receive a new Passwords app, potentially replacing standalone password managers. A developer beta of macOS Sequoia is available today, with refined public betas coming in July and a full release planned for the fall.
Math

Crows Can 'Count' Out Loud, Study Shows (sciencealert.com) 39

An anonymous reader quotes a report from ScienceAlert: A team of scientists has shown that crows can 'count' out loud -- producing a specific and deliberate number of caws in response to visual and auditory cues. While other animals such as honeybees have shown an ability to understand numbers, this specific manifestation of numeric literacy has not yet been observed in any other non-human species. "Producing a specific number of vocalizations with purpose requires a sophisticated combination of numerical abilities and vocal control," writes the team of researchers led by neuroscientist Diana Liao of the University of Tübingen in Germany. "Whether this capacity exists in animals other than humans is yet unknown. We show that crows can flexibly produce variable numbers of one to four vocalizations in response to arbitrary cues associated with numerical values."

The ability to count aloud is distinct from understanding numbers. It requires not only that understanding, but purposeful vocal control with the aim of communication. Humans are known to use speech to count numbers and communicate quantities, an ability taught from a young age. [...] "Our results demonstrate that crows can flexibly and deliberately produce an instructed number of vocalizations by using the 'approximate number system', a non-symbolic number estimation system shared by humans and animals," the researchers write in their paper. "This competency in crows also mirrors toddlers' enumeration skills before they learn to understand cardinal number words and may therefore constitute an evolutionary precursor of true counting where numbers are part of a combinatorial symbol system."
The findings have been published in the journal Science.
Bitcoin

MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says (arstechnica.com) 112

An anonymous reader quotes a report from Ars Technica: Within approximately 12 seconds, two highly educated brothers allegedly stole $25 million by tampering with the ethereum blockchain in a never-before-seen cryptocurrency scheme, according to an indictment that the US Department of Justice unsealed Wednesday. In a DOJ press release, US Attorney Damian Williams said the scheme was so sophisticated that it "calls the very integrity of the blockchain into question."

"The brothers, who studied computer science and math at one of the most prestigious universities in the world, allegedly used their specialized skills and education to tamper with and manipulate the protocols relied upon by millions of ethereum users across the globe," Williams said. "And once they put their plan into action, their heist only took 12 seconds to complete." Anton, 24, and James Peraire-Bueno, 28, were arrested Tuesday, charged with conspiracy to commit wire fraud, wire fraud, and conspiracy to commit money laundering. Each brother faces "a maximum penalty of 20 years in prison for each count," the DOJ said. The indictment goes into detail explaining that the scheme allegedly worked by exploiting the ethereum blockchain in the moments after a transaction was conducted but before the transaction was added to the blockchain.
To uncover the scheme, the special agent in charge, Thomas Fattorusso of the IRS Criminal Investigation (IRS-CI) New York Field Office, said that investigators "simply followed the money."

"Regardless of the complexity of the case, we continue to lead the effort in financial criminal investigations with cutting-edge technology and good-ol'-fashioned investigative work, on and off the blockchain," Fattorusso said.
Power

A Coal Billionaire is Building the World's Biggest Clean Energy Plant - Five Times the Size of Paris (cnn.com) 79

An anonymous reader shared this report from CNN: Five times the size of Paris. Visible from space. The world's biggest energy plant. Enough electricity to power Switzerland. The scale of the project transforming swathes of barren salt desert on the edge of western India into one of the most important sources of clean energy anywhere on the planet is so overwhelming that the man in charge can't keep up. "I don't even do the math any more," Sagar Adani told CNN in an interview last week.

Adani is executive director of Adani Green Energy Limited (AGEL). He's also the nephew of Gautam Adani, Asia's second richest man, whose $100 billion fortune stems from the Adani Group, India's biggest coal importer and a leading miner of the dirty fuel. Founded in 1988, the conglomerate has businesses in fields ranging from ports and thermal power plants to media and cements. Its clean energy unit AGEL is building the sprawling solar and wind power plant in the western Indian state of Gujarat at a cost of about $20 billion.

It will be the world's biggest renewable park when it is finished in about five years, and should generate enough clean electricity to power 16 million Indian homes... [T]he park will cover more than 200 square miles and be the planet's largest power plant regardless of the energy source, AGEL said.

CNN adds that the company "plans to invest $100 billion into energy transition over the next decade, with 70% of the investments ear-marked for clean energy."
XBox (Games)

Xbox Console Sales Are Tanking As Microsoft Brings Games To PS5 (kotaku.com) 25

In its third-quarter earnings call on Thursday, Microsoft reported a 30% drop in Xbox console sales, after reporting a 30% drop last April. "It blamed the nosedive on a 'lower volume of consoles sold' during the start of 2024," reports Kotaku. From the report: In February, Grand Theft Auto VI parent company Take-Two claimed in a presentation to investors that there were roughly 77 million "gen 9" consoles in people's homes. It didn't take fans long to do the math and speculate that Microsoft had only sold around 25 million Xbox Series X/S consoles to-date. That puts it ahead of the GameCube but behind the Nintendo 64, at least for now. Given the results this quarter as well, it doesn't seem like Game Pass and Starfield have moved the needle much. Maybe that will change once Call of Duty, which Microsoft acquired last fall along with the rest of Activision Blizzard, finally makes its way to Game Pass. Diablo IV only just arrived on the Netflix-like subscription platform this month. But given the fact that the fate of Xbox Series X/S appears to be locked in at this point, it's easy to see why Microsoft is looking at other places it can put its games.
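The fan arithmetic referenced above is simple subtraction. The PS5 figure below is an assumption for illustration only; Take-Two's presentation gave the combined gen-9 total without a breakdown, and the article does not state a PS5 number.

```python
# Reconstructing the fan speculation about Xbox Series X/S sales.
# ASSUMPTION: roughly 52 million PS5s sold by early 2024; this figure is
# not from the article and exists only to show the subtraction.
GEN9_TOTAL = 77_000_000    # Take-Two's estimate of gen-9 consoles in homes
PS5_ESTIMATE = 52_000_000  # assumed, for illustration

xbox_series_estimate = GEN9_TOTAL - PS5_ESTIMATE
print(f"Implied Xbox Series X/S: about {xbox_series_estimate:,}")
```

Any estimate produced this way inherits the uncertainty of both inputs, which is why the article frames the 25 million figure as fan speculation rather than a reported number.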

Sea of Thieves, the last of four games in this initial volley to come to PS5, dominated the PlayStation Store's top sellers list last week on pre-orders alone. CEO Satya Nadella specifically called this out during a call with investors, noting that Microsoft had more games in the top 25 best sellers on PS5 than any other publisher. "We are committed to meeting players where they are by bringing great games to more people on more devices," he said. If players there continue to flock to the live-service pirate sim, it's not hard to imagine Microsoft bringing another batch of its first-party exclusives to the rival platform. Whether that means more recent blockbusters like Starfield or the upcoming Indiana Jones game will someday make the journey remains to be seen.

Math

A Chess Formula Is Taking Over the World (theatlantic.com) 28

An anonymous reader quotes a report from The Atlantic: In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard's online dorm directories, gathered a massive collection of students' headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash -- and, by extension, set Zuckerberg on the path to building the world's dominant social-media empire -- was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports -- soccer, football, basketball -- and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. [...]
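The mechanics described above can be sketched directly. This is the standard logistic Elo update with the usual 400-point scale; the K-factor of 32 is a common convention, not something specified in the article.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, expected: float, actual: float, k: float = 32) -> float:
    """New rating after one game: actual is 1 for a win, 0 for a loss, 0.5 for a draw."""
    return rating + k * (actual - expected)

# Beating an equal opponent moves your rating a little...
gain_vs_equal = update(1500, expected_score(1500, 1500), 1) - 1500     # 16.0
# ...while beating a much higher-rated opponent moves it a lot.
gain_vs_stronger = update(1500, expected_score(1500, 1800), 1) - 1500  # about 27.2
print(gain_vs_equal, gain_vs_stronger)
```

The loser's rating falls by the same amount the winner's rises, which is what makes the system zero-sum and applicable to any head-to-head competition.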

Elo ratings don't inherently have anything to do with chess. They're based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition -- which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams -- a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver's 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system's shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. "It is a measuring tool, not a device of reward or punishment," he once remarked. "It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior."
Microsoft

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which comprises China-based researchers in Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.

Math

73-Year-Old Clifford Stoll Is Now Selling Klein Bottles (berkeley.edu) 47

O'Reilly's "Tech Trends" newsletter included an interesting item this month: Want your own Klein Bottle? Made by Cliff Stoll, author of the cybersecurity classic The Cuckoo's Egg, who will autograph your bottle for you (and may include other surprises).
First described in 1882 by the mathematician Felix Klein, a Klein bottle (like a Möbius strip) has a one-sided surface. ("Need a zero-volume bottle...?" asks Stoll's web site. "Want the ultimate in non-orientability...? A mathematician's delight, handcrafted in glass.")

But how the legendary cyberbreach detective started the company is explained in this 2016 article from a U.C. Berkeley alumni magazine. Its headline? "How a Berkeley Eccentric Beat the Russians — and Then Made Useless, Wondrous Objects." The reward for his cloak-and-dagger wizardry? A certificate of appreciation from the CIA, which is stashed somewhere in his attic... Stoll published a best-selling book, The Cuckoo's Egg, about his investigation. PBS followed it with a NOVA episode entitled "The KGB, the Computer, and Me," a docudrama starring Stoll playing himself and stepping through the "fourth wall" to double as narrator. Stoll had stepped through another wall, as well, into the numinous realm of fame, as the burgeoning tech world went wild with adulation... He was more famous than he ever could have dreamed, and he hated it. "After a few months, you realize how thin fame is, and how shallow. I'm not a software jockey; I'm an astronomer. But all people cared about was my computing."

Stoll's disenchantment also arose from what he perceived as the false religion of the Internet... Stoll articulated his disenchantment in his next book, Silicon Snake Oil, published in 1995, which urged readers to get out from behind their computer screens and get a life. "I was asking what I thought were reasonable questions: Is the electronic classroom an improvement? Does a computer help a student learn? Yes, but what it teaches you is to go to the computer whenever you have a question, rather than relying on yourself. Suppose I was an evil person and wanted to eliminate the curiosity of children. Give the kid a diet of Google, and pretty soon the child learns that every question he has is answered instantly. The coolest thing about being human is to learn, but you don't learn things by looking it up; you learn by figuring it out." It was not a popular message in the rise of the dot-com era, as Stoll soon learned...

Being a Voice in the Wilderness doesn't pay well, however, and by this time Stoll had taken his own advice and gotten a life; namely, marrying and having two children. So he looked around for a way to make some money. That ushered in his third — and current — career as President and Chief Bottle Washer of the aforementioned Acme Klein Bottle company... At first, Stoll had a hard time finding someone to make Klein bottles. He tried a bong peddler on Telegraph Avenue, but the guy took Cliff's money and disappeared. "I realized that the trouble with bong makers is that they're also bong users."

Then in 1994, two friends of his, Tom Adams and George Chittenden, opened a shop in West Berkeley that made glassware for science labs. "They needed help with their computer program and wanted to pay me," Stoll recalls. "I said, 'Nah, let's make Klein bottles instead.' And that's how Acme Klein Bottles was born."

UPDATE: Turns out Stoll is also a long-time Slashdot reader, and shared comments this weekend on everything from watching the eclipse to his VIP parking pass for CIA headquarters and "this CIA guy's rubber-stamp collection."

"I am honored by the attention and kindness of fellow nerds and online friends," Stoll added Saturday. "When I first started on that chase in 1986, I had no idea where it would lead me... To all my friends: May your burdens be light and your purpose high. Stay curious!"
AI

OpenAI Makes ChatGPT 'More Direct, Less Verbose' (techcrunch.com) 36

Kyle Wiggers reports via TechCrunch: OpenAI announced today that premium ChatGPT users -- customers paying for ChatGPT Plus, Team or Enterprise -- can now leverage an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience. This new model ("gpt-4-turbo-2024-04-09") brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off. "When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language," OpenAI writes in a post on X.
Education

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org). Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be at least in the near term a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.
Intel

Intel Discloses $7 Billion Operating Loss For Chip-Making Unit (reuters.com) 82

Intel on Tuesday disclosed $7 billion in operating losses for its foundry business in 2023, "a steeper loss than the $5.2 billion in operating losses the year before," reports Reuters. "The unit had revenue of $18.9 billion for 2023, down 31% from $27.49 billion the year before." From the report: Intel shares were down 4.3% after the documents were filed with the U.S. Securities and Exchange Commission (SEC). During a presentation for investors, Chief Executive Pat Gelsinger said that 2024 would be the year of worst operating losses for the company's chipmaking business and that it expects to break even on an operating basis by about 2027. Gelsinger said the foundry business was weighed down by bad decisions, including one years ago against using extreme ultraviolet (EUV) machines from Dutch firm ASML. While those machines can cost more than $150 million, they are more cost-effective than earlier chip making tools.

Partially as a result of the missteps, Intel has outsourced about 30% of the total number of wafers to external contract manufacturers such as TSMC, Gelsinger said. It aims to bring that number down to roughly 20%. Intel has now switched over to using EUV tools, which will cover more and more production needs as older machines are phased out. "In the post EUV era, we see that we're very competitive now on price, performance (and) back to leadership," Gelsinger said. "And in the pre-EUV era we carried a lot of costs and (were) uncompetitive."
Editor's note: This story has been corrected to change the 2022 revenue figure for Intel Foundry to $27.49 billion, as reflected in the source article. We apologize for the math error.
AI

Databricks Claims Its Open Source Foundational LLM Outsmarts GPT-3.5 (theregister.com) 17

Lindsay Clark reports via The Register: Analytics platform Databricks has launched an open source foundational large language model, hoping enterprises will opt to use its tools to jump on the LLM bandwagon. The biz, founded around Apache Spark, published a slew of benchmarks claiming its general-purpose LLM -- dubbed DBRX -- beat open source rivals on language understanding, programming, and math. The developer also claimed it beat OpenAI's proprietary GPT-3.5 across the same measures.

DBRX was developed by Mosaic AI, which Databricks acquired for $1.3 billion, and trained on Nvidia DGX Cloud. Databricks claims it optimized DBRX for efficiency with what it calls a mixture-of-experts (MoE) architecture, where multiple expert networks or learners divide up a problem. Databricks explained that the model possesses 132 billion parameters, but only 36 billion are active on any one input. Joel Minnick, Databricks marketing vice president, told The Register: "That is a big reason why the model is able to run as efficiently as it does, but also runs blazingly fast. In practical terms, if you use any kind of major chatbots that are out there today, you're probably used to waiting and watching the answer get generated. With DBRX it is near instantaneous."
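The routing idea behind that 36-of-132-billion figure is straightforward to sketch. The toy below is illustrative only: the expert count, dimensions, and scalar "experts" are stand-ins chosen for brevity, not DBRX's actual architecture. A learned router scores every expert for each token, and only the top-scoring few actually run, so most parameters stay idle on any one input.

```python
import math
import random

random.seed(0)

N_EXPERTS, D_MODEL, TOP_K = 16, 8, 4   # illustrative sizes, not DBRX's real ones

# Toy "experts": each is just a scaling factor here; in a real MoE layer each
# expert is a full feed-forward network holding billions of parameters.
experts = [random.uniform(0.5, 1.5) for _ in range(N_EXPERTS)]
router = [[random.gauss(0, 1) for _ in range(N_EXPERTS)] for _ in range(D_MODEL)]

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    # One gating score per expert (a linear "router" applied to the token).
    scores = [sum(x[d] * router[d][e] for d in range(D_MODEL))
              for e in range(N_EXPERTS)]
    chosen = sorted(range(N_EXPERTS), key=lambda e: scores[e])[-TOP_K:]
    z = [math.exp(scores[e]) for e in chosen]
    weights = [v / sum(z) for v in z]       # softmax over the chosen experts only
    # Only the chosen experts run; the remaining ones cost no compute this token.
    return [sum(w * experts[e] * xi for w, e in zip(weights, chosen)) for xi in x]

token = [random.gauss(0, 1) for _ in range(D_MODEL)]
out = moe_forward(token)
assert len(out) == D_MODEL
```

The efficiency claim follows directly from the structure: memory holds all experts, but per-token compute scales with TOP_K rather than N_EXPERTS.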

But the performance of the model itself is not the point for Databricks. The biz is, after all, making DBRX available for free on GitHub and Hugging Face. Databricks is hoping customers use the model as the basis for their own LLMs. If that happens it might improve customer chatbots or internal question answering, while also showing how DBRX was built using Databricks's proprietary tools. Databricks put together the dataset from which DBRX was developed using Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management and governance, and MLflow for experiment tracking.

Math

Pythagoras Was Wrong: There Are No Universal Musical Harmonies, Study Finds (cam.ac.uk) 73

An anonymous reader shares a report: According to the Ancient Greek philosopher Pythagoras, 'consonance' -- a pleasant-sounding combination of notes -- is produced by special relationships between simple numbers such as 3 and 4. More recently, scholars have tried to find psychological explanations, but these 'integer ratios' are still credited with making a chord sound beautiful, and deviation from them is thought to make music 'dissonant,' unpleasant sounding.

But researchers from the University of Cambridge, Princeton and the Max Planck Institute for Empirical Aesthetics, have now discovered two key ways in which Pythagoras was wrong. Their study, published in Nature Communications, shows that in normal listening contexts, we do not actually prefer chords to be perfectly in these mathematical ratios. "We prefer slight amounts of deviation. We like a little imperfection because this gives life to the sounds, and that is attractive to us," said co-author, Dr Peter Harrison, from Cambridge's Faculty of Music and Director of its Centre for Music and Science.

The researchers also found that the role played by these mathematical relationships disappears when you consider certain musical instruments that are less familiar to Western musicians, audiences and scholars. These instruments tend to be bells, gongs, types of xylophones and other kinds of pitched percussion instruments. In particular, they studied the 'bonang,' an instrument from the Javanese gamelan built from a collection of small gongs.
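The integer ratios at issue are easy to make concrete. A Pythagorean perfect fifth is a 3:2 frequency ratio, while the fifth on a modern equal-tempered piano already deviates from it by about two cents, the kind of slight imperfection the study suggests listeners tolerate or even prefer. A quick sketch (illustrative arithmetic, not code from the study):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(ratio)

pure_fifth = 3 / 2               # Pythagoras's "consonant" integer ratio
tempered_fifth = 2 ** (7 / 12)   # seven equal-tempered semitones

deviation = cents(pure_fifth) - cents(tempered_fifth)
print(f"pure fifth:     {cents(pure_fifth):.2f} cents")      # 701.96
print(f"tempered fifth: {cents(tempered_fifth):.2f} cents")  # 700.00
print(f"deviation:      {deviation:.2f} cents")              # 1.96
```

Instruments like the bonang complicate the picture further, because their overtones are not whole-number multiples of the fundamental, so simple frequency ratios between notes no longer line up with simple ratios between the partials listeners actually hear.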

AI

Why Are So Many AI Chatbots 'Dumb as Rocks'? (msn.com) 73

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..."

"The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby.

In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations.

I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard.

But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.)

But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...?

"When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes.

"I just can't with all these AI junk bots that demand a lot of us and give so little in return."
Math

Pi Calculated to 105 Trillion Digits. (Stored on 1 Petabyte of SSDs) (solidigm.com) 95

Pi was calculated to 100 trillion decimal places in 2022 by a Google team led by cloud developer advocate Emma Haruka Iwao.

But 2024's "pi day" saw a new announcement... After successfully breaking the speed record for calculating pi to 100 trillion digits last year, the team at StorageReview has taken it up a notch, revealing Pi out to 105 trillion digits! Spoiler: the 105 trillionth digit of Pi is 6!

Owner and Editor-in-Chief Brian Beeler led the team, which used 36 Solidigm SSDs (nearly a petabyte) for the capacity and reliability required to store the calculated digits of Pi. Although there is no practical application for this many digits, the exercise underscores the astounding capabilities of modern hardware and marks an achievement in computational and storage technology...

For an undertaking of this size, which took 75 days, the role of storage cannot be overstated. "For the Pi computation, we're entirely restricted by storage," says Beeler. "Faster CPUs will help accelerate the math, but the limiting factor to many new world records is the amount of local storage in the box. For this run, we're again leveraging Solidigm D5-P5316 30.72TB SSDs to help us get a little over 1P flash in the system.

"These SSDs are the only reason we could break through the prior records and hit 105 trillion Pi digits."

"Leveraging a combination of open-source and proprietary software, the team at StorageReview optimized the algorithmic process to fully exploit the hardware's capabilities, reducing computational time and enhancing efficiency," Beeler says in the announcement.

There's a video on YouTube where the team discusses their effort.
