Programming

Mistral Releases Codestral, Its First Generative AI Model For Code (techcrunch.com) 27

Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. From a report: Codestral, like other code-generating models, is designed to help developers write and interact with code. It was trained on over 80 programming languages, including Python, Java, C++ and JavaScript, explains Mistral in a blog post. Codestral can complete coding functions, write tests and "fill in" partial code, as well as answer questions about a codebase in English. Mistral describes the model as "open," but that's up for debate. The startup's license prohibits the use of Codestral and its outputs for any commercial activities. There's a carve-out for "development," but even that has caveats: the license goes on to explicitly ban "any internal usage by employees in the context of the company's business activities." The reason could be that Codestral was trained partly on copyrighted content. Codestral might not be worth the trouble, in any case. At 22 billion parameters, the model requires a beefy PC in order to run.
Hardware

Arm Says Its Next-Gen Mobile GPU Will Be Its Most 'Performant and Efficient' (theverge.com) 29

IP core designer Arm announced its next-generation CPU and GPU designs for flagship smartphones: the Cortex-X925 CPU and Immortalis G925 GPU. Both are direct successors to the Cortex-X4 and Immortalis G720 that currently power MediaTek's Dimensity 9300 chip inside flagship smartphones like the Vivo X100 and X100 Pro and Oppo Find X7. From a report: Arm changed the naming convention for its Cortex-X CPU design to highlight what it says is a much faster CPU design. It claims the X925's single-core performance is 36 percent faster than the X4's (as measured in Geekbench). Arm also says AI workload performance, measured as time to token, improves by 41 percent, aided by up to 3MB of private L2 cache. The Cortex-X925 brings a new generation of Cortex-A microarchitectures ("little" cores) with it, too: the Cortex-A725, which Arm says has 35 percent better performance efficiency than last-gen's A720, plus a refreshed Cortex-A520 that is 15 percent more power-efficient.

Arm's new Immortalis G925 GPU is its "most performant and efficient GPU" to date, it says. It's 37 percent faster on graphics applications than the last-gen G720, improves ray-tracing performance with intricate objects by 52 percent, and speeds up AI and ML workloads by 34 percent -- all while using 30 percent less power. For the first time, Arm will offer "optimized layouts" of its new CPU and GPU designs that it says will be easier for device makers to "drop," or implement, into their own system-on-chip (SoC) layouts. Arm says this new physical implementation solution will help other companies get their devices to market faster, which, if true, means we could see more devices with the Arm Cortex-X925 and/or Immortalis G925 than the few that shipped with its last-gen designs.

Chrome

Chromebooks Will Get Gemini and New Google AI Features (wired.com) 9

Google is introducing the Gemini AI chatbot to Chromebook Plus models, enhancing features like text rewriting, image editing, and hands-free control. Here are a few of the top new features coming to ChromeOS, as summarized by Wired: The first notable feature is Help Me Write, which works in any text box. Select text in any text box and right-click -- you'll see a box next to the standard right-click context menu. You can ask Google's AI to rewrite the selected text, rephrase it in a specific way, or change the tone. I tried to use it on a few sentences in this story but did not like any of the suggestions it gave me, so your mileage may vary. Or maybe I'm a better writer than Google's AI. Who knows?

Google's bringing the same generative AI wallpaper system you'll find in Android to ChromeOS. You can access this feature in ChromeOS's wallpaper settings and generate images based on specific parameters. Weirdly, you can create these when you're in a video-calling app too. You'll see a menu option next to the system tray whenever the microphone and video camera are being accessed -- tap on it and click "Create with AI" and you can generate an image for your video call's background. I'm not sure why I'd want a background of a "surreal bicycle made of flowers in pink and purple," but there you go. AI!

Here's something a little more useful: Magic Editor in Google Photos. Yep, the same feature that debuted in Google's Pixel 8 smartphones is now available on Chromebook Plus laptops. In the Google Photos app, you can press Edit on a photo and you'll see the option for Magic Editor. (You'll need to download more editing tools to get started.) This feature lets you erase unwanted objects in your photos, move a subject to another area of the frame, and fill in the backgrounds of photos. I successfully erased a paint can in the background of a photo of my dog, and it worked pretty quickly.

Then there's Gemini. It's available as a stand-alone app, and you can ask it to do pretty much anything. Write a cover letter, break down complex topics, ask for travel tips for a specific country. Just, you know, double-check the results and make sure there aren't any hallucinations. If you want to tap into Google's Gemini Advanced model, the company says it is offering 12 months free for new Chromebook Plus owners through the end of the year, so you have some time to redeem that offer. This is technically an upgrade from Google One, and it nets you Gemini for Workspace, 2 terabytes of storage, and a few other perks.
New features coming to all Chromebooks include easy setup with Android phones via QR code for sharing Wi-Fi credentials, integration of Google Tasks into the system tray, a Game Dashboard for mapping controls and recording gameplay as GIFs, and a built-in screen recorder tool. Upcoming enhancements also include Hands-Free Control using face gestures, the Help Me Read feature with Gemini for summarizing websites and PDFs, and an Overview screen to manage open browser windows, tabs, and apps.

You can check if your Chromebook is compatible with the Chromebook Plus OS update here.
Security

Instead of 'Auth,' We Should Say 'Permissions' and 'Login' (ntietz.com) 101

The term "auth" is ambiguous, often meaning either authentication (authn) or authorization (authz), which leads to confusion and poor system design. Instead, Nicole Tietz-Sokolskaya, a software engineer at AI market research platform Remesh, argues that the industry adopt the terms "login" for authentication and "permissions" for authorization, as these are clearer and help maintain distinct, appropriate abstractions for each concept. From their blog post: We should always use the most clear terms we have. Sometimes there's not a great option, but here, we have wonderfully clear terms. Those are "login" for authentication and "permissions" for authorization. Both are terms that will make sense with little explanation (in contrast to "authn" and "authz", which are confusing on first encounter) since almost everyone has logged into a system and has run into permissions issues. There are two ways to use "login" here: the noun and the verb form. The noun form is "login", which refers to the information you enter to gain access to the system. And the verb form is "log in", which refers to the action of entering your login to use the system. "Permissions" is just the noun form. To use a verb, you would use "check permissions." While this is long, it's also just... fine? It hasn't been an issue in my experience.

Both of these are abundantly clear even to our peers in disciplines outside software engineering. This to me makes it worth using them from a clarity perspective alone. But then we have the big benefit to abstractions, as well. When we call both by the same word, there's often an urge to combine them into a single module just by dint of the terminology. This isn't necessarily wrong -- there is certainly some merit to putting them together, since permissions typically require a login. But it's not necessary, either, and our designs will be stronger if we don't make that assumption and instead make a reasoned choice.
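The separation the post argues for can be sketched in a few lines of Python. This is a minimal illustration, not code from the post; all names (`log_in`, `check_permission`, the demo dictionaries) are made up for the example:

```python
# Keeping login (authentication) and permissions (authorization) as
# separate concerns, each independently testable and replaceable.

USERS = {"nicole": "hunter2"}               # username -> password (demo only)
PERMISSIONS = {"nicole": {"read", "write"}}  # username -> allowed actions

def log_in(username: str, password: str) -> bool:
    """Authentication: is this user who they claim to be?"""
    return USERS.get(username) == password

def check_permission(username: str, action: str) -> bool:
    """Authorization: is this user allowed to perform this action?"""
    return action in PERMISSIONS.get(username, set())

# A permissions check typically presumes a successful login, but nothing
# here forces the two into one "auth" module.
```

The point of the sketch is structural: a caller can swap out how logins work (passwords, SSO, tokens) without touching the permissions model, and vice versa.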

Piracy

Nvidia Denies Pirate e-Book Sites Are 'Shadow Libraries' To Shut Down Lawsuit (arstechnica.com) 105

An anonymous reader quotes a report from Ars Technica: Some of the most infamous so-called shadow libraries have increasingly faced legal pressure to either stop pirating books or risk being shut down or driven to the dark web. Among the biggest targets are Z-Library, which the US Department of Justice has charged with criminal copyright infringement, and Library Genesis (Libgen), which was sued by textbook publishers last fall for allegedly distributing digital copies of copyrighted works "on a massive scale in willful violation" of copyright laws. But now these shadow libraries and others accused of spurning copyrights have seemingly found an unlikely defender in Nvidia, the AI chipmaker among those profiting most from the recent AI boom.

Nvidia seemed to defend the shadow libraries as a valid source of information online when responding to a lawsuit from book authors over the list of data repositories that were scraped to create the Books3 dataset used to train Nvidia's AI platform NeMo. That list includes some of the most "notorious" shadow libraries -- Bibliotik, Z-Library (Z-Lib), Libgen, Sci-Hub, and Anna's Archive, authors argued. However, Nvidia hopes to invalidate authors' copyright claims partly by denying that any of these controversial websites should even be considered shadow libraries.

"Nvidia denies the characterization of the listed data repositories as 'shadow libraries' and denies that hosting data in or distributing data from the data repositories necessarily violates the US Copyright Act," Nvidia's court filing said. The chipmaker did not go into further detail to define what counts as a shadow library or what potentially absolves these controversial sites from key copyright concerns raised by various ongoing lawsuits. Instead, Nvidia kept its response brief while also curtly disputing authors' petition for class-action status and defending its AI training methods as fair use. "Nvidia denies that it has improperly used or copied the alleged works," the court filing said, arguing that "training is a highly transformative process that may include adjusting numerical parameters including 'weights,' and that outputs of an LLM may be based, at least in part, on such 'weights.'"
"Nvidia's argument likely depends on the court agreeing that AI models ingesting published works in order to transform those works into weights governing AI outputs is fair use," notes Ars. "However, authors have argued that 'these weights are entirely and uniquely derived from the protected expression in the training dataset' that has been copied without getting authors' consent or providing authors with compensation."

"Authors suing Nvidia have taken the next step, linking the chipmaker to shadow libraries by arguing that 'these shadow libraries have long been of interest to the AI-training community because they host and distribute vast quantities of unlicensed copyrighted material. For that reason, these shadow libraries also violate the US Copyright Act.'"
AI

Anthropic Hires Former OpenAI Safety Lead To Head Up New Team (techcrunch.com) 5

Jan Leike, one of OpenAI's "superalignment" leaders, who resigned last week due to AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports: A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as Leike's team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently-dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years, but often found itself hamstrung by OpenAI's leadership. Anthropic has often attempted to position itself as more safety-focused than OpenAI.
AI

Klarna Using GenAI To Cut Marketing Costs By $10 Million Annually (reuters.com) 33

Fintech firm Klarna, one of the early adopters of generative AI, said on Tuesday it is using AI for purposes such as running marketing campaigns and generating images, saving about $10 million in costs annually. From a report: The company has cut its sales and marketing budget by 11% in the first quarter, with AI responsible for 37% of the cost savings, while increasing the number of campaigns, the company said. Using GenAI tools like Midjourney, DALL-E, and Firefly for image generation, Klarna said it has reduced image production costs by $6 million.
Businesses

PayPal Is Planning an Ad Business Using Data on Its Millions of Shoppers (wsj.com) 35

PayPal hopes to boost its growth by starting an ad network [non-paywalled link] juiced with something it already owns: data on its millions of users. From a report: The digital payments company plans to build an ad sales business around the reams of data it generates from tracking the purchases as well as the broader spending behaviors of millions of consumers who use its services, which include the more socially-enabled Venmo app. PayPal has hired Mark Grether, who formerly led Uber's advertising business, to lead the effort as senior vice president and general manager of its newly-created PayPal Ads division. In his new role, he will be responsible for developing new ad formats, overseeing sales and hiring staff to fill out the division, he said.

PayPal in January introduced Advanced Offers, its first ad product, which uses AI and the company's data to help merchants target PayPal users with discounts and other personalized promotions. Advanced Offers only charges advertisers when consumers make a purchase. Online marketplaces eBay and Zazzle have begun testing it, according to a PayPal spokesman. But PayPal now aims to sell ads not only to its own customers, but to so-called non-endemic advertisers, or those that don't sell products or services through PayPal. Those companies might use PayPal data to target consumers with ads that could be displayed elsewhere, for instance, on other websites or connected TV sets.

AI

OpenAI Says It Has Begun Training a New Flagship AI Model (nytimes.com) 40

OpenAI said on Tuesday that it has begun training a new flagship AI model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT. From a report: The San Francisco start-up, which is one of the world's leading A.I. companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines and image generators.

OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said. OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.

Microsoft

Microsoft's Automatic Super Resolution Arrives To Improve Gaming Performance (tomshardware.com) 53

Microsoft has announced Auto SR, an AI-powered image upscaling solution for Windows 11 on Arm devices. The feature, exclusive to Qualcomm's Snapdragon X CPUs, aims to enhance gaming performance on Arm-based systems. Auto SR, however, comes with notable restrictions, including compatibility limitations with certain DirectX versions and the inability to work simultaneously with HDR.
Google

Google's AI Feeds People Answers From The Onion (avclub.com) 125

An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.

Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.

Robotics

'Technical Issues' Stall MLB's Adoption of Robots to Call Balls and Strikes (cbssports.com) 39

Will Major League Baseball games use "automated" umpires next year to watch pitches from home plate and call balls and strikes?

"We still have some technical issues," baseball Commissioner Rob Manfred said Thursday. NBC News reports: "We haven't made as much progress in the minor leagues this year as we sort of hoped at this point. I think it's becoming more and more likely that this will not be a go for '25."

Major League Baseball has been experimenting with the automated ball-strike system in minor leagues since 2019. It is being used at all Triple-A parks this year for the second straight season, the robot alone for the first three games of each series and a human with a [robot-assisted] challenge system in the final three.

In "challenge-system" games, robo-umpires are only used for quickly ruling on challenges to calls from human umpires. (As demonstrated in this 11-second video.)

CBS Sports explains: Each team is given a limited number of "incorrect" challenges per game, which incentivizes judicious use of challenges... In some ways, the challenge system is a compromise between the traditional method of making ball-strike calls and the fully automated approach. That middle ground may make approval by the various stakeholders more likely to happen and may lay the foundation for full automation at some future point.
Manfred cites "a growing consensus in large part" from Major League players that that's how they'd want to see robo-umpiring implemented, according to a post on X.com from The Athletic's Evan Drellich. (NBC notes one concern is eliminating the artful way catchers "frame" caught pitches to convince umpires a pitch passed through the strike zone.)

But umpires face greater challenges today, adds CBS Sports: The strong trend, stretching across years, of increased pitch velocity in the big leagues has complicated the calling of balls and strikes, as has the emphasis on high-spin breaking pitches. Discerning balls from strikes has always been challenging, and the stuff of the contemporary major-league pitcher has made anything like perfect accuracy beyond the capabilities of the human eye. Big-league umpires are highly skilled, but the move toward ball-strike automation and thus a higher tier of accuracy is likely inevitable. Manfred's Wednesday remarks reinforce that perception.
AI

Mojo, Bend, and the Rise of AI-First Programming Languages (venturebeat.com) 26

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities...

At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation...

Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.

According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.

Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.

The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion...

The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more.

The future of AI development and computing itself are being reshaped by the languages and tools we create today.

In 2017, Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.
Sci-Fi

Netflix's Sci-Fi Movie 'Atlas': AI Apocalypse Blockbuster Gets 'Shocking' Reviews (tomsguide.com) 94

Space.com calls it a movie "adding more combustible material to the inferno of AI unease sweeping the globe." Its director tells them James Cameron was a huge inspiration, saying Atlas "has an Aliens-like vibe because of the grounded, grittiness to it." (You can watch the movie's trailer here...)

But Tom's Guide says "the reviews are just as shocking as the movie's AI." Its "audience score" on Rotten Tomatoes is 55% — but its aggregate score from professional film critics is 16%. The Hollywood Reporter called it "another Netflix movie to half-watch while doing laundry." ("The star plays a data analyst forced to team up with an AI robot in order to prevent an apocalypse orchestrated by a different AI robot...") The site Giant Freakin Robot says "there seems to be a direct correlation between how much money the streaming platform spends on green screen effects and how bad the movie is" (noting the film's rumored budget of $100 million)...

But Tom's Guide defends it as a big-budget sci-fi thriller that "has an interesting premise that makes you think about the potential dangers of AI progression." Our world has always been interested in computers and machines, and the very idea of technology turning against us is unsettling. That's why "Atlas" works as a movie, but professional critics have other things to say. Ross McIndoe from Slant Magazine said: "Atlas seems like a story that should have been experienced with a gamepad in hand...." Todd Gilchrist from Variety didn't enjoy the conventional structure that "Atlas" followed...

However, even though the score is low and the reviews are pretty negative, I don't want to completely bash this movie... If I'm being completely honest, most movies and TV shows nowadays are taken too seriously. The more general blockbusters are supposed to be entertaining and fun, with visually pleasing effects that keep you hooked on the action. This is much like "Atlas", which is a fun watch with an unsettling undertone focused on the dangers of evolving AI...

Being part of the audience, we're supposed to just take it in and enjoy the movie as a casual viewer. This is why I think you should give "Atlas" a chance, especially if you're big into dramatic action sequences and have enjoyed movies like "Terminator" and "Pacific Rim".

AI

How A US Hospital is Using AI to Analyze X-Rays - With Help From Red Hat (redhat.com) 19

This week Red Hat announced one of America's leading pediatric hospitals is using AI to analyze X-rays, "to improve image quality and the speed and accuracy of image interpretation."

Red Hat's CTO said the move exemplifies "the positive impact AI can have in the healthcare field". Before Boston Children's Hospital began piloting AI in radiology, quantitative measurements had to be done manually, which was a time-consuming task. Other, more complex image analyses were performed completely offline and outside of the clinical workflow. In a field where time is of the essence, the hospital is piloting Red Hat OpenShift via the ChRIS Research Integration Service, a web-based medical image platform. The AI application running in ChRIS on the Red Hat OpenShift foundation has the potential to automatically examine x-rays, identify the most valuable diagnostic images among the thousands taken and flag any discrepancies for the radiologist. This decreases the interpretation time for radiologists.
But it also seems to be a big win for openness: Innovation developed internally is immediately transferable to public research clouds such as the Massachusetts Open Cloud, where large-scale data sharing and additional innovation can be fostered. Boston Children's Hospital aims to extend the reach of advanced healthcare solutions globally through this approach, amplifying their impact on patient well-being worldwide.
"Red Hat believes open unlocks the world's potential," the announcement concludes, "including the potential to share knowledge and build upon each other's discoveries. Additionally, Red Hat believes innovation — including AI — should be available everywhere, making any application, anywhere a reality.

"With open source, enabling AI-fueled innovation across hybrid IT environments that can lead to faster clinical breakthroughs and better patient outcomes is a reality."
AI

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs (cnn.com) 289

In the future, "Probably none of us will have a job," Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we'd do like a hobby — "But otherwise, AI and the robots will provide any goods and services that you want."

CNN reports that Musk added this would require "universal high income" — and "There would be no shortage of goods or services." In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. "The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?" he said. "I do think there's perhaps still a role for humans in this — in that we may give AI meaning."
CNN accompanied their article with this counterargument: In January, researchers at MIT's Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs that require a high emotional intelligence and human interaction will not need replacing, such as mental health professionals, creatives and teachers.
CNN notes that Musk "also used his stage time to urge parents to limit the amount of social media that children can see because 'they're being programmed by a dopamine-maximizing AI'."
AI

Robotaxis Face 'Heightened Scrutiny' While the Industry Plans Expansion (msn.com) 19

Besides investigations into Cruise and Waymo, America's National Highway Traffic Safety Administration (NHTSA) also announced it's examining two rear-end collisions between motorbikes and Amazon's steering-wheel-free Zoox vehicles being tested in San Francisco, Seattle, and Las Vegas.

This means all three major self-driving vehicle companies "are facing federal investigations over potential flaws linked to dozens of crashes," notes the Washington Post, calling it "a sign of heightened scrutiny as the fledgling industry lays plans to expand nationwide." The industry is poised for growth: About 40 companies have permits to test autonomous vehicles in California alone. The companies have drawn billions of dollars in investment, and supporters say they could revolutionize how Americans travel... Dozens of companies are testing self-driving vehicles in at least 10 states, with some offering services to paying passengers, according to the Autonomous Vehicle Industry Association. The deployments are concentrated in a handful of Western states, especially those with good weather and welcoming governors.

According to a Washington Post analysis of California data, the companies in test mode in San Francisco collectively report millions of miles on public roads every year, along with hundreds of mostly minor collisions. An industry association says autonomous vehicles have logged a total of 70 million miles, a figure that it compares with 293 trips to the moon and back. But it's a tiny fraction of the almost 9 billion miles that Americans drive every day. The relatively small number of miles the vehicles have driven makes it difficult to draw broad conclusions about their safety.

Key quotes from the article:
  • "Together, the three investigations opened in the past year examine more than two dozen collisions potentially linked to defective technology. The bulk of the incidents were minor and did not result in any injuries..."
  • "But robotic cars are still very much in their infancy, and while the bulk of the collisions flagged by NHTSA are relatively minor, they call into question the companies' boasts of being far safer than human drivers..."
  • "The era of unrealistic expectations and hype is over," said Matthew Wansley, a professor at the Cardozo School of Law in New York who specializes in emerging automotive technologies. "These companies are under a microscope, and they should be. Private companies are doing an experiment on public roads."
  • "Innocent people are on the roadways, and they're not being protected as they need to be," said Cathy Chase, the president of Advocates for Highway and Auto Safety.

Windows

Satya Nadella Says Microsoft's AI-Focused Copilot+ Laptops Will Outperform Apple's MacBooks (msn.com) 86

"Apple's done a fantastic job of really innovating on the Mac," Microsoft CEO Satya Nadella told the Wall Street Journal in a video interview this week.

Then he said "We are gonna outperform them" with the upcoming Copilot+ laptops from Acer, ASUS, Dell, HP, Lenovo and Samsung that have been completely reengineered for AI — and begin shipping in less than four weeks. Satya Nadella: Qualcomm's got a new [ARM Snapdragon X] processor, which we've optimized Windows for. The battery life, I've been using it now — I mean, it's 22 hours of continuous video playback... [Apple also uses ARM chips in its MacBooks]. We finally feel we have a very competitive product between Surface Pro and the Surface laptops. We have essentially the best specs when it comes to ARM-based silicon and performance or the NPU performance.

WSJ: Microsoft says the Surfaces are 58% faster than the MacBook Air with M3 and have 20% longer battery life.

The video includes a demonstration of local live translation powered by "small language models" stored on the device. ("It can translate live video calls or in-person conversations from 44 different languages into English. And it's fast.")

And in an accompanying article, the Journal's reporter also tested out the AI-powered image generator coming to Microsoft Paint.

As a longtime MS Paint stick-figure and box-house artist, I was delighted by this new tool. I typed in a prompt: "A Windows XP wallpaper with a mountain and sky." Then, as I started drawing, an AI image appeared in a new canvas alongside mine. When I changed a color in my sketch, it changed a color in the generated image. Microsoft says it still sends the prompt to the cloud to ensure content safety.
The interview also touched on privacy. Discussing the AI-powered "Recall" search functionality, the Journal's reporter notes that users can stop it from taking screenshots of certain websites or apps, or turn it off entirely... But they point out "There could be this reaction from some people that this is pretty creepy. Microsoft is taking screenshots of everything I do."

Nadella reminds them that "it's all being done locally, right...? That's the promise... That's one of the reasons why Recall works as a magical thing: because I can trust it, that it is on my computer."

Copilot will be powered by OpenAI's new GPT-4o, the Journal notes — before showing Satya Nadella saying "It's kind of like a new browser effectively." Satya Nadella: So, it's right there. It sees the screen, it sees the world, it hears you. And so, it's kind of like that personal agent that's always there that you want to talk to. You can interrupt it. It can interrupt you.
Nadella says though the laptop is optimized for Copilot, that's just the beginning, and "I fully expect Copilot to be everywhere" — along with its innovatively individualized "personal agent" interface. "It's gonna be ambient.... It'll go on the phone, right? I'll use it on WhatsApp. I'll use it on any other messaging platform. It'll be on speakers everywhere." Nadella says combining GPT-4o with Copilot's interface is "the type of magic that we wanna bring — first to Windows and everywhere else... The future I see is a computer that understands me versus a computer that I have to understand."

The interview ends when the reporter holds up the result — their own homegrown rendition of Windows XP's default background image "Bliss."
AI

OpenAI Didn't Copy Scarlett Johansson's Voice for ChatGPT, Records Show (msn.com) 74

The Atlantic argued this week that OpenAI "just gave away the entire game... The Johansson scandal is merely a reminder of AI's manifest-destiny philosophy: This is happening, whether you like it or not."

But the Washington Post reports that OpenAI "didn't copy Scarlett Johansson's voice for ChatGPT, records show." [W]hile many hear an eerie resemblance between [ChatGPT voice] "Sky" and Johansson's "Her" character, an actress was hired in June to create the Sky voice, months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress's agent. The agent, who spoke on the condition of anonymity, citing the safety of her client, said the actress confirmed that neither Johansson nor the movie "Her" were ever mentioned by OpenAI. The actress's natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post...

[Joanne Jang, who leads AI model behavior for OpenAI], said she "kept a tight tent" around the AI voices project, making Chief Technology Officer Mira Murati the sole decision-maker to preserve the artistic choices of the director and the casting office. Altman was on his world tour during much of the casting process and not intimately involved, she said.... To Jang, who spent countless hours listening to the actress and keeps in touch with the human actors behind the voices, Sky sounds nothing like Johansson, although the two share a breathiness and huskiness. In a statement from the Sky actress provided by her agent, she wrote that at times the backlash "feels personal being that it's just my natural voice and I've never been compared to her by the people who do know me closely."

More from Northeastern University's news service: "The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said in a statement. "We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better..."

[Alexandra Roberts, a Northeastern University law and media professor] says she believes things will settle down and Johansson will probably not sue OpenAI since the company is no longer using the "Sky" voice. "If they stopped using it, and they promised her they're not going to use it, then she probably doesn't have a case," she says. "She probably doesn't have anything to sue on anymore, and since it was just a demo, and it wasn't a full release to the general public that offers the full range of services they plan to offer, it would be really hard for her to show any damages."

Maybe it's analogous to something Sam Altman said earlier this month on the All-In podcast. "Let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that... I was posing that as a thought experiment to musicians, and they were like, 'Well, I can't object to that on any principle basis at that point — and yet there's still something I don't like about it.'"

Altman added "Now, that's not a reason not to do it, um, necessarily, but..." and then talked about Apple's "Crush" ad and the importance of preserving human creativity. He concluded by saying that OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."
AI

FTC Chair: AI Models Could Violate Antitrust Laws (thehill.com) 42

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists' creations or people's personal information could be in violation of antitrust laws. At The Wall Street Journal's "Future of Everything Festival," Khan said the FTC is examining ways in which major companies' data scraping could hinder competition or potentially violate people's privacy rights. "The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices," Khan said at the event. "So, you can imagine, if somebody's content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition."

Khan said concern also lies in companies using people's data without their knowledge or consent, which can also raise legal concerns. "We've also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you're signing up to use them, but then are secretly or quietly using the data you're feeding them -- be it your personal data, be it, if you're a business, your proprietary data, your competitively significant data -- if they're then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns," she said.

Khan also recognized people's concerns about companies retroactively changing their terms of service to let them use customers' content, including personal photos or family videos, to feed into their AI models. "I think that's where people feel a sense of violation, that that's not really what they signed up for and oftentimes, they feel that they don't have recourse," Khan said. "Some of these services are essential for navigating day to day life," she continued, "and so, if the choice -- 'choice' -- you're being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that's a really tough spot to put people in." Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, "I think in Washington, there's increasingly a recognition that we can't, as a government, just be totally hands off and stand out of the way."
You can watch the interview with Khan here.