AMD

AMD Stops Certifying Monitors, TVs Under 144 Hz For FreeSync (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: AMD announced this week that it has ceased FreeSync certification for monitors or TVs whose maximum refresh rates are under 144 Hz. Previously, FreeSync monitors and TVs could have refresh rates as low as 60 Hz, allowing for screens with lower price tags and ones not targeted at serious gaming to carry the variable refresh-rate technology. AMD also boosted the refresh-rate requirements for its higher AdaptiveSync tiers, FreeSync Premium and FreeSync Premium Pro, from 120 Hz to 200 Hz.

The new minimum refresh-rate requirements, which are detailed in Ars Technica's report, haven't changed for laptops, and AMD will continue supporting already-certified FreeSync displays even if they don't meet the new requirements. Interestingly, AMD's minimum refresh-rate requirement for TVs goes beyond 120 Hz, the ceiling for many premium TVs today, since the current-generation Xbox and PlayStation support a maximum of 120 frames per second (FPS). Announcing the changes this week in a blog post, Oguzhan Andic, AMD FreeSync and Radeon product marketing manager, claimed the changes were necessary, noting that 60 Hz is no longer "considered great for gaming." Andic wrote that the majority of gaming monitors are now 144 Hz or higher, whereas in 2015, when FreeSync debuted, even 120 Hz was "a rarity."

AMD

Huawei's New CPU Matches Zen 3 In Single-Core Performance (tomshardware.com) 77

Long-time Slashdot reader AmiMoJo quotes Tom's Hardware: A Geekbench 6 result features what is likely the first-ever look at the single-core performance of the Taishan V120, developed by Huawei's HiSilicon subsidiary (via @Olrak29_ on X). The single-core score indicates that Taishan V120 cores are roughly on par with AMD's Zen 3 cores from late 2020, which could mean Huawei's technology isn't that far behind cutting-edge Western chip designers.

The Taishan V120 core was first spotted in Huawei's Kirin 9000s smartphone chip, which uses four of the cores alongside two efficiency-focused Arm Cortex-A510 cores. Since Kirin 9000s chips are produced on SMIC's second-generation 7nm node (which U.S. lawmakers say may make them illegal to sell internationally), it seems likely that the Taishan V120 core tested in Geekbench 6 is made on the same node.

The benchmark result doesn't really say much about what the actual CPU is, with the only hint being 'Huawei Cloud OpenStack Nova.' This implies it's a Kunpeng server CPU, which may either be the Kunpeng 916, 920, or 930. While we can only guess which one it is, it's almost certain to be the 930 given the high single-core performance shown in the result. By contrast, the few Geekbench 5 results for the Kunpeng 920 show it performing well behind AMD's first-generation Epyc Naples from 2017.

Microsoft

Microsoft is Working With Nvidia, AMD and Intel To Improve Upscaling Support in PC Games (theverge.com) 22

Microsoft has outlined a new Windows API designed to offer a seamless way for game developers to integrate super resolution AI-upscaling features from Nvidia, AMD, and Intel. From a report: In a new blog post, program manager Joshua Tucker describes Microsoft's new DirectSR API as the "missing link" between games and super resolution technologies, and says it should provide "a smoother, more efficient experience that scales across hardware."

"This API enables multi-vendor SR [super resolution] through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including Nvidia DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS," the post reads. The pitch seems to be that developers will be able to support this DirectSR API, rather than having to write code for each and every upscaling technology.

The blog post comes a couple of weeks after an "Automatic Super Resolution" feature was spotted in a test version of Windows 11, which promised to "use AI to make supported games play more smoothly with enhanced details." Now, it seems the feature will plug into existing super resolution technologies like DLSS, FSR, and XeSS rather than offering a Windows-level alternative.
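The post doesn't document the API surface itself, so the following is a hypothetical C++ sketch (every identifier is invented, not the actual DirectSR interface) of what "a single code path to activate a variety of solutions" amounts to for a game engine: the engine supplies one common set of inputs, and the runtime routes them to DLSS, FSR, or XeSS depending on the hardware.

// Hypothetical sketch only -- every name below is made up and this is NOT the
// real DirectSR interface. It illustrates the idea of one engine-side code path
// feeding common inputs (color, depth, motion vectors) to whichever vendor
// upscaler is available on the user's GPU.
#include <cstdio>

enum class UpscalerBackend { DLSS, FSR, XeSS, None };

struct SuperResInputs {
    const void* color;          // low-resolution color buffer
    const void* depth;          // depth buffer
    const void* motionVectors;  // per-pixel motion vectors
    int inWidth, inHeight;      // render resolution
    int outWidth, outHeight;    // target (display) resolution
};

// Imaginary runtime choice: a real implementation would query the driver.
UpscalerBackend SelectBackend() { return UpscalerBackend::FSR; }

void Upscale(const SuperResInputs& in, void* output, UpscalerBackend backend) {
    (void)in; (void)output;  // a real path would hand these to the vendor library
    switch (backend) {
        case UpscalerBackend::DLSS: puts("dispatching to Nvidia DLSS"); break;
        case UpscalerBackend::FSR:  puts("dispatching to AMD FidelityFX Super Resolution"); break;
        case UpscalerBackend::XeSS: puts("dispatching to Intel XeSS"); break;
        case UpscalerBackend::None: puts("falling back to a plain spatial upscale"); break;
    }
}

int main() {
    SuperResInputs inputs{nullptr, nullptr, nullptr, 1280, 720, 2560, 1440};
    Upscale(inputs, nullptr, SelectBackend());  // one vendor-agnostic call site
    return 0;
}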

IT

HDMI Forum Rejects Open-Source HDMI 2.1 Driver Support Sought By AMD (phoronix.com) 114

Michael Larabel, reporting at Phoronix: One of the limitations of AMD's open-source Linux graphics driver has been the inability to implement HDMI 2.1+ functionality because of legal requirements imposed by the HDMI Forum. AMD engineers had been working with the HDMI Forum on a way to provide HDMI 2.1+ capabilities in their open-source Linux kernel driver, but those efforts have, for now, ended without success. For three years there has been an open bug report about 4K @ 120 Hz being unavailable via HDMI 2.1 on the AMD Linux driver, along with similar reports of 5K @ 240 Hz not being possible with the AMD graphics driver on Linux.

As covered back in 2021, the HDMI Forum's decision to close public access to its specifications is hurting open-source support. AMD, along with the X.Org Foundation, has been engaging with the HDMI Forum to find a way to provide open-source implementations of the now-private HDMI specs. AMD Linux engineers spent months working with their legal team and evaluating every HDMI feature to determine if and how it could be exposed in the open-source driver. AMD had code working internally and spent the past few months waiting on approval from the HDMI Forum. Sadly, the HDMI Forum has turned down AMD's request for open-source driver support.

AMD

Despite Initial Claims, AMD Confirms Ryzen 8000G APUs Don't Support ECC RAM (tomshardware.com) 64

Slashdot reader ffkom shared this report from Tom's Hardware: When AMD formally introduced its Ryzen 8000G-series accelerated processing units for desktops in early January, the company mentioned that they supported ECC memory capability. Since then, the company has quietly removed mention of the technology from its website, as noted by Reddit users.

We asked AMD to clarify the situation and were told that the company has indeed removed mentions of ECC technology from the specifications of its Ryzen 3 8300G, Ryzen 5 8500G, Ryzen 5 8600G, and Ryzen 7 8700G. The technology also cannot be enabled on motherboards, so it looks like these processors indeed do not support ECC at all.

While it would be nice to have ECC support on AMD's latest consumer Ryzen 8000G APUs, this is a technology typically reserved for AMD's Ryzen Pro processors.

Open Source

AMD's CUDA Implementation Built On ROCm Is Now Open Source (phoronix.com) 29

Michael Larabel writes via Phoronix: While there have been efforts by AMD over the years to make it easier to port codebases targeting NVIDIA's CUDA API to run atop HIP/ROCm, it still requires work on the part of developers. The tooling has improved, such as with HIPIFY to help auto-generate HIP code, but it isn't a simple, instant, guaranteed solution -- especially when striving for optimal performance. Over the past two years, though, AMD has quietly been funding an effort to bring binary compatibility, so that many NVIDIA CUDA applications can run atop the AMD ROCm stack at the library level -- a drop-in replacement without the need to adapt source code. In practice, for many real-world workloads, it's a solution for end users to run CUDA-enabled software without any developer intervention. Here is more information on this "skunkworks" project that is now available as open source, along with some of my own testing and performance benchmarks of this CUDA implementation built for Radeon GPUs. [...]

For those wondering about the open-source code, it's dual-licensed under either Apache 2.0 or MIT. Rust fans will be excited to know the Rust programming language is leveraged for this Radeon implementation. [...] Those wanting to check out the new ZLUDA open-source code for Radeon GPUs can do so via GitHub.
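To make the "drop-in, no source changes" claim concrete: a stock CUDA program like the minimal sketch below only ever talks to NVIDIA's runtime libraries (cudaMalloc, cudaMemcpy, a kernel launch), and the premise of this project is that those library calls can be satisfied by a ROCm-backed reimplementation on Radeon hardware, so the same code and binaries run unmodified. The sketch assumes nothing beyond a standard CUDA toolchain.

// Minimal CUDA vector-add: the kind of unmodified CUDA runtime API usage
// (cudaMalloc, cudaMemcpy, kernel launch) that a library-level reimplementation
// such as ZLUDA has to satisfy for drop-in compatibility on Radeon GPUs.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);       // plain CUDA kernel launch
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                        // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}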

Microsoft

Microsoft Working On Its Own DLSS-like Upscaler for Windows 11 (theverge.com) 42

Microsoft appears to be readying its own DLSS-like AI upscaling feature for PC games. From a report: X user PhantomOcean3 discovered the feature inside the latest test versions of Windows 11 over the weekend, with Microsoft describing its automatic super resolution as a way to "use AI to make supported games play more smoothly with enhanced details." That sounds a lot like Nvidia's Deep Learning Super Sampling (DLSS) technology, which uses AI to upscale games and improve frame rates and image quality. AMD and Intel also offer their own variants, with FSR and XeSS both growing in popularity in recent PC game releases.
AI

AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says (idc.com) 70

IDC, in a press release: A new forecast from IDC shows shipments of artificial intelligence (AI) PCs -- personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally -- growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. [...] Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC's performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now added AI-specific blocks, called neural processing units (NPUs), to their SoCs to run these tasks more efficiently.

To date, IDC has identified three types of NPU-enabled AI PCs:
1. Hardware-enabled AI PCs include an NPU that offers less than 40 tera operations per second (TOPS) performance and typically enables specific AI features within apps to run locally. Qualcomm, Apple, AMD, and Intel are all shipping chips in this category today.

2. Next-generation AI PCs include an NPU with 40 to 60 TOPS performance and an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps. Qualcomm, AMD, and Intel have all announced future chips for this category, with delivery expected to begin in 2024. Microsoft is expected to roll out major updates (and updated system specifications) to Windows 11 to take advantage of these high-TOPS NPUs.

3. Advanced AI PCs are PCs that offer more than 60 TOPS of NPU performance. While no silicon vendors have announced such products, IDC expects them to appear in the coming years. This IDC forecast does not include advanced AI PCs, but they will be incorporated into future updates.
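Restated as a quick worked example, the tiers boil down to TOPS thresholds (the cut-offs below are IDC's own; the function is nothing more than the three categories above written out):

// IDC's three NPU-enabled AI PC tiers, restated as a trivial classifier.
// Thresholds come straight from the forecast: <40 TOPS, 40-60 TOPS, >60 TOPS.
#include <cstdio>
#include <initializer_list>

const char* ClassifyAiPc(double npuTops) {
    if (npuTops > 60.0)  return "advanced AI PC";
    if (npuTops >= 40.0) return "next-generation AI PC";
    if (npuTops > 0.0)   return "hardware-enabled AI PC";
    return "not an NPU-enabled AI PC";
}

int main() {
    for (double tops : {10.0, 45.0, 75.0})
        printf("%5.1f TOPS -> %s\n", tops, ClassifyAiPc(tops));
    return 0;
}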
Michael Dell, commenting on X: This is correct and might be underestimating it. AI PCs are coming fast and Dell is ready.
Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts while he was assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which came in at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still a comfortable 12 GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
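For readers wondering why io_submit shows up in a wallclock profile at all: with Linux's libaio interface, submission is nominally asynchronous, but when the device queue can't absorb new requests the io_submit() call itself blocks in the kernel, and the submitting thread spends its wall time there. Below is a minimal sketch of that path, assuming Linux, the libaio library (link with -laio), and a device path you can open with O_DIRECT; the path used here is only an example.

// Minimal libaio sketch: where io_submit() can block if the device queue is
// saturated. Assumes Linux, libaio, and read access to the example device path.
#include <libaio.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main() {
    const size_t kBlock = 4096;
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);   // example device path
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = nullptr;
    if (io_setup(128, &ctx) < 0) { perror("io_setup"); return 1; }

    void* buf = nullptr;
    posix_memalign(&buf, kBlock, kBlock);                  // O_DIRECT alignment

    struct iocb cb;
    struct iocb* cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, kBlock, 0);                // one 4 KiB read at offset 0

    // If the drive's queue is full, this nominally asynchronous call blocks here,
    // which is the kind of time the OSD wallclock profile attributed to io_submit.
    if (io_submit(ctx, 1, cbs) < 0) { perror("io_submit"); return 1; }

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, nullptr);                 // wait for completion
    printf("read %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}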

For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


Security

A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com) 22

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each other's data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, the researchers demonstrate an attack where a target asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
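The mechanism, in essence: a "listener" kernel declares a region of GPU local (workgroup-shared) memory, never writes to it, and copies whatever values are already sitting there out to a buffer the attacker controls. The sketch below is purely illustrative and uses CUDA syntax only because it is compact; as noted below, Nvidia GPUs were not found to be vulnerable, and the researchers' actual proof of concept targets affected Apple, Qualcomm, and AMD parts through APIs such as Metal, OpenCL, and Vulkan. All names here are made up.

// Illustrative "listener" in CUDA syntax: scrape uninitialized local/shared
// memory into a buffer the attacker can read back. On a vulnerable GPU stack,
// that memory can still hold data written by a kernel from another process.
// Nvidia parts were not found vulnerable to LeftoverLocals; this is a sketch
// of the mechanism only, not the researchers' proof of concept.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int kLocalWords = 4096;  // local-memory words scraped per block

__global__ void listener(unsigned int* dump) {
    __shared__ unsigned int scratch[kLocalWords];  // deliberately NOT initialized
    for (int i = threadIdx.x; i < kLocalWords; i += blockDim.x)
        dump[blockIdx.x * kLocalWords + i] = scratch[i];  // copy leftovers out
}

int main() {
    const int blocks = 8;
    const size_t words = size_t(blocks) * kLocalWords;
    unsigned int* d_dump = nullptr;
    cudaMalloc(&d_dump, words * sizeof(unsigned int));

    listener<<<blocks, 256>>>(d_dump);

    std::vector<unsigned int> dump(words);
    cudaMemcpy(dump.data(), d_dump, words * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);

    size_t nonzero = 0;
    for (unsigned int w : dump) nonzero += (w != 0);  // leftover data, if any
    printf("non-zero words scraped: %zu of %zu\n", nonzero, words);

    cudaFree(d_dump);
    return 0;
}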
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the third-generation iPad Air with an A12 chip appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."
AI

CES PC Makers Bet on AI To Rekindle Sales (reuters.com) 15

PC and microchip companies struggling to get consumers to replace pandemic-era laptops offered a new feature to crowds this week at CES: AI. From a report: PC and chipmakers including AMD and Intel are betting that the so-called "neural processing units" now found in the latest chip designs will encourage consumers to once again pay for higher-end laptops. Adding additional AI capabilities could help take market share from Apple. "The conversations I'm having with customers are about 'how do I get my PC ready for what I think is coming in AI and going to be able to deliver,'" said Sam Burd, Dell Technologies' president of its PC business. Chipmakers built the NPU blocks because they can achieve a high level of performance for AI functions with relatively modest power needs. Today there are few applications that might take full advantage of the new capabilities, but more are coming, said David McAfee, corporate vice president and general manager of the client channel business at AMD.

Among the few applications that can take advantage of such chips is the creative suite of software produced by Adobe. Intel hosted an "open house" where a handful of PC vendors showed off their latest laptops with demos designed to put the new capabilities on display. Machines from the likes of Dell and Lenovo were arrayed inside one of the cavernous ballrooms at the Venetian Convention Center on Las Vegas Boulevard.

AMD

AMD Proposes An FPGA Subsystem User-Space Interface For Linux (phoronix.com) 27

Michael Larabel reports via Phoronix: AMD engineers are proposing an FPGA Subsystem User-Space Interface to overcome current limitations of the Linux kernel's FPGA manager subsystem. AMD-Xilinx engineers are proposing a new sysfs interface for the FPGA subsystem that allows for more user-space control over FPGAs. The suggested interface would handle FPGA configuration, driver probe/remove, bridges, Device Tree Overlay file support for re-programming an FPGA while the operating system is running, and other capabilities for user-space not currently presented by the mainline kernel. [...] This proposal from AMD hopes to standardize the FPGA subsystem user-space interface in a manner that is suitable for upstreaming into the mainline Linux kernel.
Displays

Linux Is the Only OS To Support Diagonal PC Monitor Mode (tomshardware.com) 170

Melbourne-based developer xssfox has championed a unique "diagonal mode" for monitors by utilizing Linux's xrandr (x resize and rotate) tool, finding a 22-degree tilt to the left to be the ideal angle for software development on her 32:9 aspect ratio monitor. As Tom's Hardware notes, Linux is the "only OS to support a diagonal monitor mode, which you can customize to any tilt of your liking." It raises the question: could 2024 be the year of the Linux diagonal desktop? From the report: Xssfox devised a consistent method to appraise various screen rotations, working through the staid old landscape and portrait modes, before deploying xrandr to test rotations like the slightly skewed 1 degree and an indecisive 45 degrees. These produced mixed results of questionable benefits, so the search for the Goldilocks solution continued. It turns out that a 22-degree tilt to the left was the sweet spot for xssfox. This rotation delivered the best working screen space on what looks like a 32:9 aspect ratio monitor from Dell. "So this here, I think, is the best monitor orientation for software development," the developer commented. "It provides the longest line lengths and no longer need to worry about that pesky 80-column limit."

If you have a monitor with the same aspect ratio, the 22-degree angle might work well for you, too. However, people with other non-conventional monitor rotation needs can use xssfox's javascript calculator to generate the xrandr command for given inputs. People who own the almost perfectly square LG DualUp 28MQ780 might be tempted to try 'diamond mode,' for example. We note that Windows users with AMD and Nvidia drivers are currently shackled to applying screen rotations using 90-degree steps. MacOS users apparently face the same restrictions.
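For the curious, the heavy lifting is done by xrandr's --transform option, which takes the nine entries of a 3x3 coordinate-transform matrix. Below is a minimal C++ sketch that prints only the rotation block of such a matrix for a given tilt angle; a usable command also needs framebuffer sizing and translation offsets, which is exactly what xssfox's calculator generates, and the sign convention should be checked against it.

// Print the rotation block of the 3x3 matrix that xrandr's --transform expects,
// for an arbitrary tilt angle (22 degrees by default, xssfox's sweet spot).
// Sketch only: a working command also needs --fb sizing and translation terms,
// which xssfox's calculator derives from the monitor's mode.
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    const double kPi = 3.14159265358979323846;
    double degrees = (argc > 1) ? std::atof(argv[1]) : 22.0;
    double r = degrees * kPi / 180.0;
    double c = std::cos(r), s = std::sin(r);

    // Row-major 3x3 homogeneous transform: rotation only, no translation.
    printf("xrandr --output <OUTPUT> --transform %.6f,%.6f,0,%.6f,%.6f,0,0,0,1\n",
           c, -s, s, c);
    return 0;
}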

Intel

12VO Power Standard Appears To Be Gaining Steam, Will Reduce PC Cables and Costs (tomshardware.com) 79

An anonymous reader quotes a report from Tom's Hardware: The 12VO power standard (PDF), developed by Intel, is designed to reduce the number of power cables needed to power a modern PC, ultimately reducing cost. While industry uptake of the standard has been slow, a slew of new products from MSI indicates that 12VO is gaining traction.

MSI is gearing up with two 12VO-compliant motherboards, covering both Intel and AMD platforms, and a 12VO power supply that it's releasing simultaneously: the Pro B650 12VO WiFi, Pro H610M 12VO, and MSI 12VO PSU are all 'coming soon,' which presumably means they'll officially launch at CES 2024. HardwareLuxx got a pretty good look at MSI's offerings during its EHA (European Hardware Awards) tech tour, including the 'Project Zero' we covered earlier. One of the noticeable changes is the absence of a 24-pin ATX connector, as ATX12VO uses only a ten-pin connector. The publication also saw a 12VO-compliant FSP power supply in a compact system with a thick graphics card.

A couple of years ago, we reported on FSP's 650-watt and 750-watt SFX 12VO power supplies. Apart from that, there is one 6-pin ATX12VO connector, termed an 'extra board connector' in the manual, and one 8-pin 12V power connector for the CPU. There are also two smaller 4-pin connectors that provide the 5V power needed for SATA drives; it is likely each of these connectors powers two SATA-based drives. Intel proposed the ATX12VO standard several years ago, but adoption has been slow until now. The standard is designed to provide 12V exclusively, completely removing the direct 3.3V and 5V supplies. The success of the new standard will depend on the wide availability of compatible motherboards and power supplies.

AMD

Ryzen vs. Meteor Lake: AMD's AI Often Wins, Even On Intel's Hand-Picked Tests (tomshardware.com) 6

Velcroman1 writes: Intel's new generation of "Meteor Lake" mobile CPUs herald a new age of "AI PCs," computers that can handle inference workloads such as generating images or transcribing audio without an Internet connection. Officially named "Intel Core Ultra" processors, the chips are the first to feature an NPU (neural processing unit) that's purpose-built to handle AI tasks. But there are few ways to actually test this feature at present: software will need to be rewritten to specifically direct operations at the NPU.

Intel has steered testers toward its Open Visual Inference and Neural Network Optimization (OpenVINO) AI toolkit. With those benchmarks, Tom's Hardware tested the new Intel chips against AMD -- and surprisingly, AMD chips often came out on top, even on these hand-selected benchmarks. Clearly, optimization will take some time!

Intel

Intel Unveils New AI Chip To Compete With Nvidia and AMD (cnbc.com) 13

Intel unveiled new computer chips on Thursday, including Gaudi3, an AI chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power big and power-hungry AI models. From a report: The most prominent AI models, like OpenAI's ChatGPT, run on Nvidia GPUs in the cloud. It's one reason Nvidia stock has been up nearly 230% year-to-date while Intel shares are up 68%. And it's why companies like AMD and, now Intel, have announced chips that they hope will attract AI companies away from Nvidia's dominant position in the market.

While the company was light on details, Gaudi3 will compete with Nvidia's H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD's forthcoming MI300X, when it starts shipping to customers in 2024. Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

AMD

AMD Says Ryzen Threadripper 7000 Overclocking Triggers Hidden Fuse, Warranty Unaffected 45

Overclocking AMD's Ryzen Threadripper 7000 series blows a fuse, indicating modification. However, AMD has told Tom's Hardware that this does not automatically invalidate the warranty of these top-tier workstation CPUs. From the report: "Threadripper 7000 Series processors do contain a fuse that is blown when overclocking is enabled. To be clear, blowing this fuse does not void your warranty. Statements that enabling an overclocking/overvolting feature will 'void' the processor warranty are not correct. Per AMD's standard Terms of Sale, the warranty excludes any damage that results from overclocking/overvolting the processor. However, other unrelated issues could still qualify for warranty repair/replacement," an AMD representative told Tom's Hardware.

In summation, overclocking your Ryzen Threadripper Pro 7000 or non-Pro processor will not void the warranty -- only damages directly resulting from overclocking will. As always, AMD isn't against overclocking. If it was, the chipmaker wouldn't advertise overclocking support as one of the features of the WRX90 and TRX50 platforms. Only OEM systems lack overclocking support.
AMD

Meta and Microsoft To Buy AMD's New AI Chip As Alternative To Nvidia's (cnbc.com) 16

Meta, OpenAI, and Microsoft said at an AMD investor event today that they will use AMD's newest AI chip, the Instinct MI300X, as an alternative to Nvidia's expensive graphics processors. "If AMD's latest high-end chip is good enough for the technology companies and cloud service providers building and serving AI models when it starts shipping early next year, it could lower costs for developing AI models and put competitive pressure on Nvidia's surging AI chip sales growth," reports CNBC. From the report: "All of the interest is in big iron and big GPUs for the cloud," AMD CEO Lisa Su said Wednesday. AMD says the MI300X is based on a new architecture, which often leads to significant performance gains. Its most distinctive feature is that it has 192GB of a cutting-edge, high-performance type of memory known as HBM3, which transfers data faster and can fit larger AI models. Su directly compared the MI300X and the systems built with it to Nvidia's main AI GPU, the H100. "What this performance does is it just directly translates into a better user experience," Su said. "When you ask a model something, you'd like it to come back faster, especially as responses get more complicated."

The main question facing AMD is whether companies that have been building on Nvidia will invest the time and money to add another GPU supplier. "It takes work to adopt AMD," Su said. AMD on Wednesday told investors and partners that it had improved its software suite called ROCm to compete with Nvidia's industry standard CUDA software, addressing a key shortcoming that had been one of the primary reasons AI developers currently prefer Nvidia. Price will also be important. AMD didn't reveal pricing for the MI300X on Wednesday, but Nvidia's can cost around $40,000 for one chip, and Su told reporters that AMD's chip would have to cost less to purchase and operate than Nvidia's in order to persuade customers to buy it.

On Wednesday, AMD said it had already signed up some of the companies most hungry for GPUs to use the chip. Meta and Microsoft were the two largest purchasers of Nvidia H100 GPUs in 2023, according to a recent report from research firm Omdia. Meta said it will use MI300X GPUs for AI inference workloads such as processing AI stickers, image editing, and operating its assistant. Microsoft's CTO, Kevin Scott, said the company would offer access to MI300X chips through its Azure web service. Oracle's cloud will also use the chips. OpenAI said it would support AMD GPUs in one of its software products, called Triton, which isn't a big large language model like GPT but is used in AI research to access chip features.

Bug

Nearly Every Windows and Linux Device Vulnerable To New LogoFAIL Firmware Attack (arstechnica.com) 69

"Researchers have identified a large number of bugs to do with the processing of images at boot time," writes longtime Slashdot reader jd. "This allows malicious code to be installed undetectably (since the image doesn't have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack." Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year's worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. "Once arbitrary code execution is achieved during the DXE phase, it's game over for platform security," researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. "From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started." From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device -- a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June -- runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.

"A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo," reports Ars. "People who want to know if a specific device is vulnerable should check with the manufacturer."

"The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday's coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It's also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs."
Intel

Intel Calls AMD's Chips 'Snake Oil' (tomshardware.com) 189

Aaron Klotz, reporting for Tom's Hardware: Intel recently published a new playbook titled "Core Truths" that put AMD under direct fire for utilizing its older Zen 2 CPU architecture in its latest Ryzen 7000 mobile series CPU product stack. Intel later removed the document, but Tom's Hardware preserved the slides. The playbook is designed to educate customers about AMD's product stack and even calls it "snake oil."

Intel's playbook specifically targets AMD's latest Ryzen 5 7520U, criticizing the fact that it features AMD's Zen 2 architecture from 2019 even though it sports a Ryzen 7000 series model name. Further on in the playbook, the company accuses AMD of selling "half-truths" to unsuspecting customers, stressing that kids' education needs the best CPU performance from the latest and greatest CPU technologies available today. To make its point clear, Intel illustrated the playbook with "snake oil" imagery and pictures of used-car salesmen.

The playbook also criticizes AMD's new naming scheme for its Ryzen 7000 series mobile products, quoting ArsTechnica: "As a consumer, you're still intended to see the number 7 and think, 'Oh, this is new.'" Intel also published CPU benchmark comparisons of the 7520U against its 13th Gen Core i5-1335U to back up its points. Unsurprisingly, the 1335U was substantially faster than the Zen 2 counterpart.
