
Report: China to Pivot from AMD & Intel CPUs To Domestic Chips in Government PCs

China has initiated a policy shift to eliminate American processors from government computers and servers, reports the Financial Times. The decision is aimed at gradually phasing out processors from AMD and Intel in systems used by China's government agencies, which will mean lower sales for the U.S.-based chipmakers and higher sales of China's own CPUs.

The new procurement guidelines, introduced quietly at the end of 2023, mandate that government entities prioritize 'safe and reliable' processors and operating systems in their purchases. This directive is part of a concerted effort to bolster domestic technology and parallels a similar push within state-owned enterprises to embrace technology designed in China.

The list of approved processors and operating systems, published by China's Information Technology Security Evaluation Center, exclusively features Chinese companies. There are 18 approved processors that use a mix of architectures, including x86 and ARM, while the operating systems are based on open-source Linux software. Notably, the list includes chips from Huawei and Phytium, both of which are on the U.S. export blacklist.

This shift towards domestic technology is a cornerstone of China's national strategy for technological autonomy in the military, government, and state sectors. The guidelines provide clear and detailed instructions for exclusively using Chinese processors, marking a significant step in China's quest for self-reliance in technology.

State-owned enterprises have been instructed to complete their transition to domestic CPUs by 2027. Meanwhile, Chinese government entities must submit quarterly progress reports on their IT system overhauls. Although some foreign technology will still be permitted, the emphasis is clearly on adopting local alternatives.

The move away from foreign hardware is expected to have a measurable impact on American tech companies. China is a major market for both AMD (accounting for 15% of its sales last year) and Intel (commanding 27% of its revenue). Additionally, Microsoft, while not disclosing specific figures, has acknowledged that China accounts for a small percentage of its revenues. And while government sales are only a fraction of overall China sales (as compared to the larger commercial PC business), the Chinese government is by no means a small customer.

Analysts questioned by the Financial Times predict that the transition to domestic processors will advance more swiftly for server processors than for client PCs, due to the less complex software ecosystem needing replacement. They estimate that China will need to invest approximately $91 billion from 2023 to 2027 to overhaul the IT infrastructure in government and adjacent industries.

The DeepCool PX850G 850W PSU Review: Less Than Quiet, More Than Capable

DeepCool is one of the few veterans in the PC power & cooling components field still active today. The Chinese company was founded in 1996 and initially produced only coolers and cooling accessories, but quickly diversified into the PC case and power supply unit (PSU) markets. To this day, DeepCool stays almost entirely focused on PC power & cooling products, with input devices and mousepads being its latest diversification attempt.

Today's review turns the spotlight toward DeepCool's PSUs and, more specifically, the PX850G 850W ATX 3.0 PSU, which is currently their most popular power supply. The PX850G is engineered to balance all-around performance with reliability and cost, all while providing ATX 3.0 compliance. It is based on a highly popular high-output platform but, strangely, DeepCool rates the PX850G for operation at up to only 40°C.

AMD Announces FSR 3.1: Seriously Improved Upscaling Quality

AMD's FidelityFX Super Resolution 3 technology package introduced a plethora of enhancements to the FSR technology on Radeon RX 6000 and 7000-series graphics cards last September. But perfection has no limits, so this week, the company is rolling out its FSR 3.1 technology, which improves upscaling quality, decouples frame generation from AMD's upscaling, and makes it easier for developers to work with FSR.

Arguably, FSR 3.1's primary enhancement is its improved temporal upscaling image quality: compared to FSR 2.2, the image flickers less at rest and no longer ghosts in motion. This is a significant improvement, as flickering and ghosting artifacts are particularly annoying. Meanwhile, FSR 3.1 has to be implemented by game developers themselves, and the first title to support the new technology, sometime later this year, will be Ratchet & Clank: Rift Apart.

[Image comparison: Temporal Stability – AMD FSR 2.2 vs. AMD FSR 3.1]

[Image comparison: Ghosting Reduction – AMD FSR 2.2 vs. AMD FSR 3.1]

Another significant development brought by FSR 3.1 is its decoupling from the Frame Generation feature introduced by FSR 3. This capability relies on a form of AMD's Fluid Motion Frames (AFMF) optical flow interpolation. It uses temporal game data like motion vectors to add an additional frame between existing ones. This ability can lead to a performance boost of up to two times in compatible games, but it was initially tied to FSR 3 upscaling, which is a limitation. Starting from FSR 3.1, it will work with other upscaling methods, though AMD refrains from saying which methods and on which hardware for now. Also, the company does not disclose when it is expected to be implemented by game developers.
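
As a rough sketch of where the "up to two times" figure comes from (this is only the frame-count arithmetic, not AMD's actual motion-vector interpolation; `presented_fps` is a hypothetical helper name):

```python
def presented_fps(rendered_fps: float, generated_per_pair: int = 1) -> float:
    """Frame generation inserts generated_per_pair synthetic frames between
    each pair of rendered frames; over a long run this approaches a
    (1 + generated_per_pair) multiplier on the presented frame rate."""
    return rendered_fps * (1 + generated_per_pair)

# One interpolated frame per rendered pair: up to a 2x presented-fps boost.
print(presented_fps(60.0))  # 120.0
```

In practice the boost falls short of the full 2x, since generating and pacing the interpolated frames consumes GPU time of its own.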

In addition, AMD is bringing support for FSR 3 to Vulkan and the Xbox Game Development Kit, enabling game developers on these platforms to use it. It is also adding FSR 3.1 to the FidelityFX API, which simplifies debugging and enables forward compatibility with future versions of FSR.

Upon its release in September 2023, AMD FSR 3 was initially supported by two titles, Forspoken and Immortals of Aveum, with ten more games poised to join them at the time. Fast forward six months, and the lineup has expanded to an impressive roster of 40 games either currently supporting or set to incorporate FSR 3. As of March 2024, FSR 3 is supported by games such as Avatar: Frontiers of Pandora, Starfield, and The Last of Us Part I, while Cyberpunk 2077, Dying Light 2 Stay Human, Frostpunk 2, and Ratchet & Clank: Rift Apart will add support shortly.

Source: AMD

NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data

Already solidly in the driver’s seat of the generative AI accelerator market at this time, NVIDIA has long made it clear that the company isn’t about to slow down and check out the view. Instead, NVIDIA intends to continue iterating along its multi-generational product roadmap for GPUs and accelerators, to leverage its early advantage and stay ahead of its ever-growing coterie of competitors in the accelerator market. So while NVIDIA’s ridiculously popular H100/H200/GH200 series of accelerators are already the hottest ticket in Silicon Valley, it’s already time to talk about the next generation accelerator architecture to feed NVIDIA’s AI ambitions: Blackwell.

Asus Launches Low-Profile GeForce RTX 3050 6GB: A Tiny Graphics Card for All PCs

Asus this week has become the latest PC video card manufacturer to announce a sub-75W video card based on NVIDIA's recently-released low-power GeForce RTX 3050 6GB design. And going one step further for small form factor PC owners, Asus has used NVIDIA's low-power GPU configuration to produce a half-height video card that can fit into low-profile systems.

As Asus puts it, the GeForce RTX 3050 LP BRK 6GB GDDR6 is 'big productivity in a small package,' and for a low-profile, dual-slot graphics board, it indeed is. The unit has three display outputs – DVI-D, HDMI 2.1, and DisplayPort 1.4a with HDCP 2.3 support – which makes the graphics card a viable option both for a dual-display desktop and for a home theater PC (Nvidia's GA107 graphics processor can decode all popular codecs, including AV1, though it lacks AV1 encoding). Furthermore, the DVI-D output enables the card to drive outdated displays, which, even over half a decade after DVI was retired, still hang around as spare parts. Meanwhile, because the card only consumes around 70W, it does not require any auxiliary PCIe power connectors, which are at times not available in cheap systems from big PC makers.

Underlying this card is the aforementioned GeForce RTX 3050 6GB, which uses the GA107 GPU with 2304 CUDA cores and comes with 6GB of GDDR6 memory connected to a narrower 96-bit memory bus (down from 128 bits for the full 8GB version). With a lower boost clock of 1470 MHz (1500 MHz in OC mode), the RTX 3050 6GB also has reduced compute performance, delivering 6.77 FP32 TFLOPS versus 9.1 FP32 TFLOPS for the full-fledged RTX 3050.
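
As a sanity check, those TFLOPS figures follow directly from core count and clock, assuming the standard two FP32 operations per CUDA core per clock (one fused multiply-add). Note that the 1777 MHz boost clock used below for the full RTX 3050 is our assumption, not stated above:

```python
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # Each CUDA core retires one FMA (2 FP32 ops) per clock cycle.
    return cuda_cores * 2 * boost_ghz / 1000

print(round(fp32_tflops(2304, 1.47), 2))   # 6.77 -> RTX 3050 6GB
print(round(fp32_tflops(2560, 1.777), 1))  # 9.1  -> full RTX 3050 8GB (assumed clock)
```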

As a result, the low-profile GeForce RTX 3050 6 GB is very much an entry-level card, though the low power requirements for such a card are also what make it special. This should be plenty for low-end gaming – beating out integrated GPUs – though suffice it to say, it's not going to compete with high-end, power-hungry cards either.

With its diminutive size, the Asus GeForce RTX 3050 LP BRK 6 GB GDDR6 looks to be a nice candidate for upgrading cheap systems from OEMs as well as fixing outdated PCs. What remains to be seen is how price competitive it is going to be. The graphics board already has one low-profile rival from MSI — which costs $185 — so Asus is not the only vendor competing here.

Intel Announces Core i9-14900KS: Raptor Lake-R Hits Up To 6.2 GHz

For the last several generations of desktop processors from Intel, the company has released a higher clocked, special-edition SKU under the KS moniker, which the company positions as their no-holds-barred performance part for that generation. For the 14th Generation Core family, Intel is keeping that tradition alive and well with the announcement of the Core i9-14900KS, which has been eagerly anticipated for months and finally unveiled for launch today. The Intel Core i9-14900KS is a special edition processor with P-Core turbo clock speeds of up to 6.2 GHz, which makes it the fastest desktop processor in the world... at least in terms of advertised frequencies it can achieve.

With their latest KS processor, Intel is looking to further push the envelope on what can be achieved with the company's now venerable Raptor Lake 8+16 silicon. With a further 200 MHz increase in clockspeeds at the top end, Intel is looking to deliver unrivaled desktop performance for enthusiasts. At the same time, as this is the 4th iteration of the "flagship" configuration of the RPL 8+16 die, Intel is looking to squeeze out one more speed boost from the Alder/Raptor family in order to go out on a high note before the entire architecture starts to ride off into the sunset later this year. To get there, Intel will need quite a bit of electricity, and $689 of your savings.

The Arctic Liquid Freezer III 280 A-RGB White AIO Review: Refined Design Brings Stand-Out Cooler

ARCTIC GmbH, originally known as Arctic Cooling, first burst onto the PC cooling scene in 2001 and has since maintained its stature as a leader in cooling technologies. The company made its mark with top-notch thermal compounds and has since kept its focus on cooling solutions while also expanding into other tech accessories, including advanced monitor mounts and audio products.

With the introduction of the Liquid Freezer III series, ARCTIC has taken another significant step forward in the cooling market. This new lineup builds upon the success of the previous Liquid Freezer II series, whose great price-to-performance ratio made it a highly popular product. Today, we're delving into ARCTIC's latest offerings with the Liquid Freezer III series and, specifically, the 280 A-RGB White model. We'll assess the features, quality, and thermal performance of this AIO (All-In-One) cooler, part of the series with which ARCTIC hopes to dominate the bulk of the mainstream market.

The be quiet! Pure Power 12 M 650W PSU Review: Solid Gold

Be quiet! is renowned for its dedication to excellence in the realm of PC components, specializing in products that emphasize silence and performance. The brand's product lineup is extensive, encompassing high-quality power supply units (PSUs), cases, and cooling solutions, including air and liquid coolers. Be quiet! is particularly known for striving to achieve whisper-quiet operation across all its products, making it a favorite among PC enthusiasts who prioritize a noiseless computing environment. The brand's portfolio reflects a commitment to meeting the diverse needs of tech aficionados and professionals, with an array of products that emphasize noise reduction and efficiency.

This review shines a spotlight on the Be quiet! Pure Power 12 M 650W PSU, a standout product in Be quiet!'s PSU collection that illustrates the company's attitude towards product design. The Pure Power 12 M series is designed to provide dependable performance and quiet operation, catering to users who demand a good balance of power efficiency and acoustics with reliability and value. This model, in particular, strives to offer a compelling blend of performance and quality, making it an attractive option for individuals seeking a PSU that aligns with the requirements of both entry-level and advanced PC builds.

SiPearl's Rhea-2 CPU Added to Roadmap: Second-Gen European CPU for HPC

SiPearl, a processor designer supported by the European Processor Initiative, is about to start shipments of its very first Rhea processor for high-performance computing workloads. But the company is already working on its successor, currently known as Rhea-2, which is set to arrive sometime in 2026 in exascale supercomputers.

SiPearl's Rhea-1 datacenter-grade system-on-chip packs 72 off-the-shelf Arm Neoverse V1 cores designed for HPC and connected using a mesh network. The CPU has a hybrid memory subsystem that supports both HBM2E and DDR5 memory, providing both high memory bandwidth and decent memory capacity, and it supports PCIe interconnects with the CXL protocol on top. The CPU was designed by a contract chip designer and is made by TSMC on its N6 (6 nm-class) process technology.

The original Rhea is, to a large degree, a product aimed at proving that SiPearl, a European company, can deliver a datacenter-grade processor. This CPU now powers Jupiter, Europe's first exascale system, which uses nodes powered by four Rhea CPUs and NVIDIA's H200 AI and HPC GPUs. Given that Rhea is SiPearl's first processor, the project can be considered fruitful.

With its 2nd generation Rhea processors, SiPearl will have to develop something that is considerably more competitive. This is perhaps why Rhea-2 will use a dual-chiplet implementation. Such a design will enable SiPearl to pack more processing cores and therefore offer higher performance. Of course, it remains to be seen how many cores SiPearl plans to integrate into Rhea 2, but at least the CPU company is set to adopt the same design methodologies as AMD and Intel.

Given the timing of SiPearl's Rhea-2 and the company's natural wish to preserve software compatibility with Rhea-1, it is reasonable to expect the processor to adopt Arm's Neoverse V3 cores. Neoverse V3 offers quite a significant uplift compared to Neoverse V2 (and V1) and can scale up to 128 cores per socket, which should be quite decent for HPC applications in 2025 – 2026.

While SiPearl will continue developing CPUs, it remains to be seen whether EPI will manage to deliver AI and HPC accelerators that are competitive against those from NVIDIA, AMD, and Intel.

Intel CEO Pat Gelsinger to Deliver Computex Keynote, Showcasing Next-Gen Products

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, has announced that Pat Gelsinger, chief executive of Intel, will deliver a keynote at Computex 2024 on June 4, 2024. Focusing on the trade show's theme of artificial intelligence, he will showcase Intel's next-generation AI-enhanced products for client and datacenter computers.

According to TAITRA's press release, Pat Gelsinger will discuss how Intel's product lineup, including the AI-accelerated Intel Xeon, Intel Gaudi, and Intel Core Ultra processor families, opens up new opportunities for client PCs, cloud computing, datacenters, and network and edge applications. He will also discuss superior performance-per-watt and lower cost of ownership of Intel's Xeon processors, which enhance server capacity for AI workloads.

The most intriguing part of Intel's Computex keynote will of course be the company's next-generation AI-enhanced products for client and datacenter computers. At this point, Intel is prepping numerous products of considerable interest, including the following:

  • Arrow Lake and Lunar Lake processors made on next-generation process technologies for desktop and mobile PCs and featuring all-new microarchitectures;
  • Granite Rapids CPUs for datacenters based on a high-performance microarchitecture;
  • Sierra Forest processors with up to 288 cores for cloud workloads, based on energy-efficient cores codenamed Crestmont;
  • Gaudi 3 processors for AI workloads that promise to quadruple BF16 performance compared to Gaudi 2;
  • Battlemage graphics processing units.

All of these products are due to be released in 2024-2025, so Intel could well demonstrate them and showcase their performance advantages, or even formally launch some of them, at Computex. What remains to be seen is whether Intel will also give a glimpse at products that are further away, such as Clearwater Forest and Falcon Shores.

JEDEC Publishes GDDR7 Memory Spec: Next-Gen Graphics Memory Adds Faster PAM3 Signaling & On-Die ECC

JEDEC on Tuesday published the official specifications for GDDR7 DRAM, the latest iteration of the long-standing memory standard for graphics cards and other GPU-powered devices. The newest generation of GDDR brings a combination of memory capacity and memory bandwidth gains, with the latter being driven primarily by the switch to PAM3 signaling on the memory bus. The latest graphics RAM standard also boosts the number of channels per DRAM chip, adds new interface training patterns, and brings in on-die ECC to maintain the effective reliability of the memory.

“JESD239 GDDR7 marks a substantial advancement in high-speed memory design,” said Mian Quddus, JEDEC Board of Directors Chairman. “With the shift to PAM3 signaling, the memory industry has a new path to extend the performance of GDDR devices and drive the ongoing evolution of graphics and various high-performance applications.”

GDDR7 has been in development for a few years now, with JEDEC members making the first disclosures around the memory technology about a year ago, when Cadence revealed the use of PAM3 encoding as part of their validation tools. Since then we've heard from multiple memory manufacturers that we should expect the final version of the memory to launch in 2024, with JEDEC's announcement essentially coming right on schedule.

As previously revealed, the biggest technical change with GDDR7 comes with the switch from two-bit non-return-to-zero (NRZ) encoding on the memory bus to three-bit pulse amplitude modulating (PAM3) encoding. This change allows GDDR7 to transmit 3 bits over two cycles, 50% more data than GDDR6 operating at an identical clockspeed. As a result, GDDR7 can support higher overall data transfer rates, the critical component to making each generation of GDDR successively faster than its predecessor.
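
The 50% figure falls straight out of the encoding arithmetic, sketched below (a simplified model that ignores training and framing overhead on the real bus):

```python
import math

# NRZ: 2 voltage levels -> exactly 1 bit per symbol.
nrz_bits_per_symbol = 1.0

# PAM3: 3 voltage levels carry log2(3) ~= 1.58 bits in theory;
# GDDR7's practical coding packs 3 bits into every 2 symbols.
pam3_theoretical = math.log2(3)   # ~1.585
pam3_bits_per_symbol = 3 / 2      # 1.5, the rate GDDR7 actually uses

print(pam3_bits_per_symbol / nrz_bits_per_symbol)  # 1.5 -> 50% more data per cycle
```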

GDDR Generations
                          GDDR7               GDDR6X (Non-JEDEC)   GDDR6
B/W Per Pin               32 Gbps (Gen 1),    24 Gbps (Shipping)   24 Gbps (Sampling)
                          48 Gbps (Spec Max)
Chip Density              2 GB (16 Gb)        2 GB (16 Gb)         2 GB (16 Gb)
Total B/W (256-bit bus)   1024 GB/sec         768 GB/sec           768 GB/sec
DRAM Voltage              1.2 V               1.35 V               1.35 V
Data Rate                 QDR                 QDR                  QDR
Signaling                 PAM-3               PAM-4                NRZ (Binary)
Maximum Density           64 Gb               32 Gb                32 Gb
Packaging                 266 FBGA            180 FBGA             180 FBGA

The first generation of GDDR7 is expected to run at data rates around 32 Gbps per pin, and memory manufacturers have previously talked about rates up to 36 Gbps/pin as being easily attainable. However, the GDDR7 standard itself leaves room for even higher data rates – up to 48 Gbps/pin – with JEDEC going so far as touting GDDR7 memory chips "reaching up to 192 GB/s [32b @ 48Gbps] per device" in their press release. Notably, this is a significantly higher increase in bandwidth than what PAM3 signaling brings on its own, which means there are multiple levels of enhancements within GDDR7's design.
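
JEDEC's per-device figures are simple per-pin arithmetic across the chip's 32-bit interface; a quick sketch for illustration:

```python
def device_bandwidth_gb_s(interface_bits: int, gbps_per_pin: float) -> float:
    # per-pin data rate x interface width, divided by 8 bits per byte
    return interface_bits * gbps_per_pin / 8

print(device_bandwidth_gb_s(32, 32.0))  # 128.0 GB/s for first-gen 32 Gbps chips
print(device_bandwidth_gb_s(32, 48.0))  # 192.0 GB/s at the 48 Gbps spec ceiling
```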

Digging deeper into the specification, JEDEC has also once again subdivided a single 32-bit GDDR memory chip into a larger number of channels. Whereas GDDR6 offered two 16-bit channels, GDDR7 expands this to four 8-bit channels. The distinction is somewhat arbitrary from an end-user's point of view – it's still a 32-bit chip operating at 32Gbps/pin regardless – but it has a great deal of impact on how the chip works internally. Especially as JEDEC has kept the 256-bit per channel prefetch of GDDR5 and GDDR6, making GDDR7 a 32n prefetch design.


GDDR Channel Architecture. Original GDDR6-era Diagram Courtesy Micron

The net impact of all of this is that, by halving the channel width but keeping the prefetch size the same, JEDEC has effectively doubled the amount of data that is prefetched per cycle of the DRAM cells. This is a pretty standard trick to extend the bandwidth of DRAM memory, and is essentially the same thing JEDEC did with GDDR6 in 2018. But it serves as a reminder that DRAM cells are still very slow (on the order of hundreds of MHz) and aren't getting any faster. So the only way to feed faster memory buses is by fetching ever-larger amounts of data in a single go.
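
The prefetch doubling above can be checked with simple arithmetic: each access still fetches 256 bits per channel, so halving the channel width doubles the burst length (the "n" in 16n/32n):

```python
def burst_length(prefetch_bits: int, channel_width_bits: int) -> int:
    # The "n" in 16n/32n: bits fetched per access divided by channel width.
    return prefetch_bits // channel_width_bits

print(burst_length(256, 16))  # 16 -> GDDR6: 16-bit channels, 16n prefetch
print(burst_length(256, 8))   # 32 -> GDDR7: 8-bit channels, 32n prefetch
```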

The change in the number of channels per memory chip also has a minor impact on how multi-channel "clamshell" mode works for higher capacity memory configurations. Whereas GDDR6 accessed a single memory channel from each chip in a clamshell configuration, GDDR7 will access two channels – what JEDEC is calling two-channel mode. Specifically, this mode reads channels A and C from each chip. It is effectively identical to how clamshell mode behaved with GDDR6, and it means that while clamshell configurations remain supported in this latest generation of memory, there aren't any other tricks being employed to improve memory capacity beyond ever-increasing memory chip densities.

On that note, the GDDR7 standard officially adds support for 64Gbit DRAM devices, twice the 32Gbit max capacity of GDDR6/GDDR6X. Non-power-of-two capacities continue to be supported as well, allowing for 24Gbit and 48Gbit chips. Support for larger memory chips further pushes the maximum memory capacity of a theoretical high-end video card with a 384-bit memory bus to as high as 192GB of memory – a development that would no doubt be welcomed by datacenter operators in the era of large language AI models. With that said, however, we're still regularly seeing 16Gbit memory chips used on today's video cards, even though GDDR6 supports 32Gbit chips. Coupled with the fact that Samsung and Micron have already disclosed that their first generation of GDDR7 chips will top out at 16Gbit and 24Gbit respectively, it's safe to say that 64Gbit chips are pretty far off in the future right now (so don't sell off your 48GB cards quite yet).
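
The 192GB ceiling is straightforward to derive; a quick sketch, assuming one 32-bit device per bus segment, with clamshell mode doubling the device count as described above:

```python
def card_capacity_gb(bus_width_bits: int, chip_density_gbit: int,
                     clamshell: bool = False) -> float:
    chips = bus_width_bits // 32        # one GDDR7 device per 32-bit bus segment
    if clamshell:
        chips *= 2                      # two devices share each 32-bit segment
    return chips * chip_density_gbit / 8   # Gbit -> GB

print(card_capacity_gb(384, 64, clamshell=True))  # 192.0 -> the theoretical max
print(card_capacity_gb(384, 16))                  # 24.0  -> today's 16 Gbit chips
```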

For their latest generation of memory technology, JEDEC is also including several new-to-GDDR memory reliability features. Most notably, on-die ECC capabilities, similar to what we saw with the introduction of DDR5. And while we haven't been able to get an official comment from JEDEC on why they've opted to include ECC support now, its inclusion is not surprising given the reliability requirements for DDR5. In short, as memory chip densities have increased, it has become increasingly hard to yield a "perfect" die with no flaws; so adding on-chip ECC allows memory manufacturers to keep their chips operating reliably in the face of unavoidable errors.


This figure is reproduced, with permission, from JEDEC document JESD239, figure 124

Internally, the GDDR7 spec requires a minimum of 16 bits of parity data per 256 bits of user data (6.25%), with JEDEC giving an example implementation of a 9-bit single error correcting code (SEC) plus a 7-bit cyclic redundancy check (CRC). Overall, GDDR7 on-die ECC should be able to correct 100% of 1-bit errors, and detect 100% of 2-bit errors – falling to 99.3% in the rare case of 3-bit errors. Information about memory errors is also made available to the memory controller, via what JEDEC terms their on-die ECC transparency protocol. And while technically separate from ECC itself, GDDR7 also throws in another memory reliability feature with command address parity with command blocking (CAPARBLK), which is intended to improve the integrity of the command address bus.
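
Those parity numbers tally up quickly (the 9+7 split being JEDEC's example implementation, as noted above):

```python
USER_BITS = 256
SEC_BITS = 9          # single-error-correcting code (JEDEC's example)
CRC_BITS = 7          # cyclic redundancy check (JEDEC's example)

parity_bits = SEC_BITS + CRC_BITS
overhead = parity_bits / USER_BITS

print(parity_bits)        # 16 -- the spec's minimum parity per 256 user bits
print(f"{overhead:.2%}")  # 6.25%
```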

Otherwise, while the inclusion of on-die ECC isn't likely to have any more of an impact on consumer video cards than its inclusion had for DDR5 memory and consumer platforms there, it remains to be seen what this will mean for workstation and server video cards. The vendors there have used soft ECC on top of unprotected memory for several generations now; presumably this will remain the case for GDDR7 cards as well, but the regular use of soft ECC makes things a lot more flexible than in the CPU space.


This figure is reproduced, with permission, from JEDEC document JESD239, figure 152

Finally, GDDR7 is also introducing a suite of other reliability-related features, primarily related to helping PAM3 operation. This includes core independent LFSR (linear-feedback shift register) training patterns with eye masking and error counters. LFSR training patterns are used to test and adjust the interface (to ensure efficiency), eye masking evaluates signal quality, and error counters track the number of errors during training.
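
For readers unfamiliar with LFSR training, the core idea is that both ends of the link can independently regenerate the same deterministic pseudo-random bit stream and compare results to count errors. A minimal Fibonacci LFSR sketch follows; the 7-bit polynomial here is purely illustrative, not the one JESD239 actually specifies:

```python
def lfsr_stream(seed: int, taps: tuple, width: int, n: int) -> list:
    """Generate n output bits from a Fibonacci LFSR.
    taps are 0-indexed bit positions XORed together to form the feedback bit."""
    state, out = seed, []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

# 7-bit maximal-length LFSR (x^7 + x^6 + 1): repeats every 2^7 - 1 = 127 bits,
# so the receiver can regenerate and compare the exact same pattern.
bits = lfsr_stream(seed=0b1010101, taps=(0, 1), width=7, n=254)
print(bits[:127] == bits[127:])  # True -- the pattern is periodic and reproducible
```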

Technical matters aside, this week's announcement includes statements of support from all of the usual players on both sides of the aisle, including AMD and NVIDIA, and the Micron/Samsung/SK hynix trifecta. It goes without saying that all parties are keen to use or sell GDDR7, given the memory capacity and bandwidth improvements it will bring – and especially in this era where anything aimed at the AI market is selling like hotcakes.

No specific products are being announced at this time, but with Samsung and Micron having previously announced their intentions to ship GDDR7 memory this year, we should see new memory (and new GPUs to pair it with) later this year.

JEDEC standards and publications are copyrighted by the JEDEC Solid State Technology Association.  All rights reserved.

The Cooler Master MWE V2 Gold 750W PSU Review: Effective, But Limited By Aging Platform

Cooler Master, renowned for its pioneering role in cooling technologies, has evolved into a key player in the PC components industry, extending its expertise to include cases and power supply units (PSUs). The company's current catalog is a testament to its commitment to diversity, featuring over 75 PC cases, 90 coolers, and 120 PSUs, all designed to cater to the evolving demands of tech enthusiasts and professionals alike.

This review focuses on the Cooler Master MWE Gold V2 750W PSU, a key offering in Cooler Master's power supply lineup that embodies the brand's vision of combining quality and value. The MWE Gold V2 series is engineered to offer solid performance and reliability at a price point that appeals to system builders and gamers looking for an entry-level to mid-range solution. As a result, the MWE Gold V2 750W has been a consistently popular offering within Cooler Master's catalog, often cycling in and out of stock depending on what sales are going on. This makes the PSU a bit harder to track down in North America than in Europe, and quick to vanish when it does show up.

Tenstorrent Licenses RISC-V CPU IP to Build 2nm AI Accelerator for Edge

Tenstorrent this week announced that it had signed a deal to license its RISC-V CPU and AI processor IP to Japan's Leading-edge Semiconductor Technology Center (LSTC), which will use the technology to build its edge-focused AI accelerator. The most curious part of the announcement is that the accelerator will rely on a multi-chiplet design, with chiplets made by Japan's Rapidus on its 2nm fabrication process and then packaged by the same company.

Under the terms of the agreement, Tenstorrent will license its datacenter-grade Ascalon general-purpose processor IP to LSTC and will help implement the chiplet using Rapidus's 2nm fabrication process. Tenstorrent's Ascalon is a high-performance, out-of-order RISC-V CPU design featuring an eight-wide decode. The Ascalon core packs six ALUs, two FPUs, and two 256-bit vector units, and when combined with a 2nm-class process technology it promises to offer quite formidable performance.

The Ascalon was developed by a team led by legendary CPU designer Jim Keller, the current chief executive of Tenstorrent, who previously worked on successful projects at AMD, Apple, Intel, and Tesla.

In addition to general-purpose CPU IP licensing, Tenstorrent will co-design 'the chip that will redefine AI performance in Japan.' This apparently means that Tenstorrent does not plan to license its proprietary Tensix cores (tailored for neural network inference and training) to LSTC, but will instead help design a proprietary AI accelerator aimed generally at inference workloads.

"The joint effort by Tenstorrent and LSTC to create a chiplet-based edge AI accelerator represents a groundbreaking venture into the first cross-organizational chiplet development in semiconductor industry," said Wei-Han Lien, Chief Architect of Tenstorrent's RISC-V products. "The edge AI accelerator will incorporate LSTC's AI chiplet along with Tenstorrent's RISC-V and peripheral chiplet technology. This pioneering strategy harnesses the collective capabilities of both organizations to use the adaptable and efficient nature of chiplet technology to meet the increasing needs of AI applications at the edge."

Rapidus aims to start production of chips on its 2nm fabrication process, which is currently under development, sometime in 2027 – at least a year behind TSMC and a couple of years behind Intel. Yet, if it starts high-volume 2nm manufacturing in 2027, it will be a major breakthrough for Japan, which is trying hard to return to the ranks of the global semiconductor leaders.

Building an edge AI accelerator based on Tenstorrent's IP and Rapidus's 2nm-class production node is a big deal for LSTC, Tenstorrent, and Rapidus alike, as it is a testament to the technologies developed by all three companies.

"I am very pleased that this collaboration started as an actual project from the MOC conclusion with Tenstorrent last November," said Atsuyoshi Koike, president and CEO of Rapidus Corporation. "We will cooperate not only in the front-end process but also in the chiplet (back-end process), and work on as a leading example of our business model that realizes everything from design to back-end process in a shorter period of time ever."

Intel Brings vPro to 14th Gen Desktop and Core Ultra Mobile Platforms for Enterprise

As part of this week's MWC 2024 conference, Intel is announcing that it is adding support for its vPro security technologies to select 14th Generation Core series processors (Raptor Lake-R) and their latest Meteor Lake-based Core Ultra-H and U series mobile processors. As we've seen from more launches than we care to count of Intel's desktop and mobile platforms, they typically roll out their vPro platforms sometime after they've released their full stack of processors, including overclockable K series SKUs and lower-powered T series SKUs, and this year is no exception. Altogether, Intel is announcing vPro Essential and vPro Enterprise support for several 14th Gen Core series SKUs and Intel Core Ultra mobile SKUs.

Intel's vPro security features are something we've covered previously, and on that note, Intel has a new Silicon Security Engine, giving the chips the ability to authenticate the system's firmware. Intel also states that Intel Threat Detection within vPro has been enhanced and adds an additional layer for the NPU, with an xPU model (CPU/GPU/NPU) to help detect a variety of attacks, which also enables 3rd party security software to run faster. Intel claims this is the only AI-based security deployment within a Windows PC to date. Both the full vPro Enterprise and the cut-down vPro Essentials tiers bring hardware-level security to select 14th Gen Core series processors, as well as Intel's latest mobile-focused Meteor Lake processors with Arc graphics, which launched last year.

Intel 14th Gen vPro: Raptor Lake-R Gets Secured

As we've seen over the last few years with the global shift towards remote work due to the Coronavirus pandemic, the need for up-to-date security in businesses small and large is as critical as it has ever been. Remote and in-office employees alike must have access to the latest software and hardware frameworks to ensure the security of vital data, and that's where Intel vPro comes in.

To quickly recap the current state of affairs, let's take a look at the two tiers of Intel vPro security available, vPro Essentials and vPro Enterprise, and how they differ.

Intel's vPro Essentials was first launched back in 2022 and is a subset of Intel's complete vPro package, which is now commonly known as vPro Enterprise. The vPro Essentials security package is, as the name suggests, tailored for small businesses, providing a solid foundation in security without penalizing performance. It integrates hardware-enhanced security features, ensuring hardware-level protection against emerging threats right from installation. It also utilizes real-time intelligence for workload optimization and Intel's Threat Detection Technology, which adds an additional layer below the operating system that uses AI-based threat detection to mitigate OS-level threats and attacks.

Pivoting to the vPro Enterprise security features, this tier is designed to meet the high demands of large-scale business environments. It offers advanced security features and remote management capabilities, which are crucial for businesses operating with sensitive data and requiring high levels of cybersecurity. Additionally, the platform provides enhanced performance and reliability, making it suitable for intensive workloads and multitasking in a professional setting. Integrating these features from the vPro Enterprise platform ensures that large enterprises can maintain high productivity levels while ensuring data security and efficient IT management with the latest generations of processors, such as the Intel Core 14th Gen family.

Much like we saw when Intel announced their vPro for the 13th Gen Core series, it's worth noting that both the 14th and 13th Gen Core series are based on the same Raptor Lake architecture and, as such, are identical in every aspect bar base and turbo core frequencies.

Intel 14th Gen Core with vPro for Desktop (Raptor Lake-R)

AnandTech | Cores (P+E/T) | P-Core Base/Turbo (MHz) | E-Core Base/Turbo (MHz) | L3 Cache (MB) | Base W | Turbo W | vPro Support (Ent/Ess) | Price ($)
i9-14900K | 8+16/32 | 3200 / 6000 | 2400 / 4400 | 36 | 125 | 253 | Enterprise | $589
i9-14900 | 8+16/32 | 2000 / 5600 | 1500 / 4300 | 36 | 65 | 219 | Both | $549
i9-14900T | 8+16/32 | 1100 / 5500 | 800 / 4000 | 36 | 35 | 106 | Both | $549
i7-14700K | 8+12/28 | 3400 / 5600 | 2500 / 4300 | 33 | 125 | 253 | Enterprise | $409
i7-14700 | 8+12/28 | 2100 / 5400 | 1500 / 4200 | 33 | 65 | 219 | Both | $384
i7-14700T | 8+12/28 | 1300 / 5000 | 900 / 3700 | 33 | 35 | 106 | Both | $384
i5-14600K | 6+8/20 | 3500 / 5300 | 2600 / 4000 | 24 | 125 | 181 | Enterprise | $319
i5-14600 | 6+8/20 | 2700 / 5200 | 2000 / 3900 | 24 | 65 | 154 | Both | $255
i5-14500 | 6+8/20 | 2600 / 5000 | 1900 / 3700 | 24 | 65 | 154 | Both | $232
i5-14600T | 6+8/20 | 1800 / 5100 | 1200 / 3600 | 24 | 35 | 92 | Both | $255
i5-14500T | 6+8/20 | 1700 / 4800 | 1200 / 3400 | 24 | 35 | 92 | Both | $232

While Intel isn't technically launching any new chip SKUs (either desktop or mobile) with vPro support, the vPro desktop platform features are enabled through the use of specific motherboard chipsets, with the Q670 and W680 chipsets being the only ones to support vPro on 14th Gen. Unless one of the specific chips listed above is installed in a Q670 or W680 motherboard, neither vPro Essentials nor vPro Enterprise will be enabled.
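To make the chipset gating described above concrete, the rule reduces to a simple lookup. The SKU and tier pairings below come from Intel's published list, but the helper function itself is a hypothetical sketch for illustration, not an Intel API:

```python
# Hypothetical sketch of the 14th Gen desktop vPro gating rule described above.
# SKU/tier pairings follow Intel's published list; the function is illustrative.

VPRO_TIER = {
    "i9-14900K": {"Enterprise"},
    "i9-14900": {"Enterprise", "Essentials"},
    "i9-14900T": {"Enterprise", "Essentials"},
    "i7-14700K": {"Enterprise"},
    "i7-14700": {"Enterprise", "Essentials"},
    "i7-14700T": {"Enterprise", "Essentials"},
    "i5-14600K": {"Enterprise"},
    "i5-14600": {"Enterprise", "Essentials"},
    "i5-14500": {"Enterprise", "Essentials"},
    "i5-14600T": {"Enterprise", "Essentials"},
    "i5-14500T": {"Enterprise", "Essentials"},
}

VPRO_CHIPSETS = {"Q670", "W680"}  # the only 14th Gen chipsets with vPro support

def vpro_available(sku: str, chipset: str) -> set:
    """Return the vPro tiers usable for a given CPU/chipset pairing."""
    if chipset not in VPRO_CHIPSETS:
        return set()  # consumer chipsets expose no vPro at all
    # KF/F SKUs are absent from the table, so they fall through to "no vPro"
    return VPRO_TIER.get(sku, set())
```

For example, a Core i9-14900K in a consumer Z790 board reports no vPro tiers at all, while the same chip in a W680 board reports Enterprise support.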

As with the previous 13th Gen Core series family (Raptor Lake), the 14th Gen, which is a direct refresh of these, follows a similar pattern. Specific SKUs from the 14th Gen family include support only for the full-fledged vPro Enterprise, including the Core i5-14600K, the Core i7-14700K, and the flagship Core i9-14900K. Intel's vPro Enterprise security features are supported on both Q670 and W680 motherboards, giving users more choice in which board they opt for.

The rest of the above Intel 14th Gen Core series stack, including the non-monikered chips (e.g., the Core i5-14600) as well as the T series, which are optimized for efficient workloads with a lower TDP than the rest of the stack, all support both vPro Enterprise and vPro Essentials. This includes two processors from the Core i9 family, the Core i9-14900 and Core i9-14900T; two from the i7 series, the Core i7-14700 and Core i7-14700T; and four from the i5 series, the Core i5-14600, Core i5-14500, Core i5-14600T, and Core i5-14500T.


The ASRock Industrial IMB-X1231 W680 mini-ITX motherboard supports vPro Enterprise and Essentials

For the non-K processors mentioned above, different levels of vPro support are offered depending on the motherboard chipset: on a Q670 board, users can specifically opt for Intel's cut-down vPro Essentials security features. Intel states that the full vPro Enterprise security features are usable on both Q670 and W680 boards with the Enterprise-only chips, including the Core i9-14900K, the Core i7-14700K, and the Core i5-14600K. Outside of this, none of the 14th Gen SKUs with the KF (unlocked with no iGPU) or F (no iGPU) monikers are listed with support for vPro.

Intel Meteor Lake with vPro: Core Ultra H and U Series get Varied vPro Support

Further to the Intel 14th Gen Core series for desktops, Intel has also enabled vPro support for its latest Meteor Lake-based Core Ultra H and U series mobile processors. Unlike the desktop platform, things are a little different in the mobile space: Intel offers each mobile SKU with either vPro Enterprise or vPro Essentials, not both.

Intel Core Ultra H and U-Series Processors with vPro (Meteor Lake)

AnandTech | Cores (P+E+LP/T) | P-Core Turbo (MHz) | E-Core Turbo (MHz) | GPU | GPU Freq (MHz) | L3 Cache (MB) | vPro Support (Ent/Ess) | Base TDP | Turbo TDP
Core Ultra 9 185H | 6+8+2/22 | 5100 | 3800 | Arc Xe (8) | 2350 | 24 | Enterprise | 45 W | 115 W
Core Ultra 7 165H | 6+8+2/22 | 5000 | 3800 | Arc Xe (8) | 2300 | 24 | Enterprise | 28 W | 64/115 W
Core Ultra 7 155H | 6+8+2/22 | 4800 | 3800 | Arc Xe (8) | 2250 | 24 | Essentials | 28 W | 64/115 W
Core Ultra 7 165U | 2+8+2/14 | 4900 | 3800 | Arc Xe (4) | 2000 | 12 | Enterprise | 15 W | 57 W
Core Ultra 7 164U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1800 | 12 | Enterprise | 9 W | 30 W
Core Ultra 7 155U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1950 | 12 | Essentials | 15 W | 57 W
Core Ultra 5 135H | 4+8+2/18 | 4600 | 3600 | Arc Xe (7) | 2200 | 18 | Enterprise | 28 W | 64/115 W
Core Ultra 5 125H | 4+8+2/18 | 4500 | 3600 | Arc Xe (7) | 2200 | 18 | Essentials | 28 W | 64/115 W
Core Ultra 5 135U | 2+8+2/14 | 4400 | 3600 | Arc Xe (4) | 1900 | 12 | Enterprise | 15 W | 57 W
Core Ultra 5 134U | 2+8+2/14 | 4400 | 3800 | Arc Xe (4) | 1750 | 12 | Enterprise | 9 W | 30 W
Core Ultra 5 125U | 2+8+2/14 | 4300 | 3600 | Arc Xe (4) | 1850 | 12 | Essentials | 15 W | 57 W

The above table highlights not just the specifications of each Core Ultra 9, 7, and 5 SKU, but also denotes which model gets what level of vPro support. Starting with the Core Ultra 9 185H, the current mobile flagship chip on Meteor Lake, this chip supports vPro Enterprise. Along with the top-tier SKU from each of the Core Ultra 7 and 5 families, the Core Ultra 7 165H and the Core Ultra 5 135H, other chips with vPro Enterprise support include the Core Ultra 7 165U and Core Ultra 7 164U, as well as the Core Ultra 5 135U and Core Ultra 5 134U.

Intel's other Meteor Lake chips, including the Core Ultra 7 155H, the Core Ultra 7 155U, the Core Ultra 5 125H, and the Core Ultra 5 125U, only come with support for Intel's vPro Essentials features, not for Enterprise. This presents a slight 'dropping of the ball' from Intel, which we highlighted in our Intel 13th Gen Core gets vPro piece last year.

Intel vPro Support Announcement With No New Hardware, Why Announce Later?

It is worth noting that Intel's announcement of vPro support for its first wave of Meteor Lake Core Ultra SKUs isn't entirely new; Intel did highlight that Meteor Lake would support vPro last year within its Series 1 Product Brief dated 12/20/2023. Intel's formal announcement is more about which SKU has which level of support, and we feel this could pose problems for users who have already purchased Core Ultra series notebooks for business and enterprise use. Multiple outlets, including Newegg and HP directly, aren't mentioning vPro whatsoever.

This could mean that a user has purchased a notebook with, say, a Core Ultra 5 125H (vPro Essentials), perhaps as part of a bulk purchase by an SME, without being aware that the chip doesn't have vPro Enterprise, whose additional security features they could benefit from both personally and from a business standpoint. We reached out to Intel, and they sent us the following statement.

"Since we are launching vPro powered by Intel Core Ultra & Intel Core 14th Gen this week, prospective buyers will begin seeing the relevant system information on OEM and enterprise retail partner (eg. CDW) websites in the weeks ahead. This will include information on whether a system is equipped with vPro Enterprise or Essentials so that they can purchase the right system for their compute needs."

Intel Previews Sierra Forest with 288 E-Cores, Announces Granite Rapids-D for 2025 Launch at MWC 2024

At MWC 2024, Intel confirmed that Granite Rapids-D, the successor to its Ice Lake-D processors, will come to market sometime in 2025. Furthermore, Intel also provided an update on the 6th Gen Xeon family, codenamed Sierra Forest, which is set to launch later this year and will feature up to 288 cores, designed to give vRAN network operators improved performance per rack for 5G workloads.

These chips are designed for handling infrastructure, applications, and AI workloads and aim to capitalize on current and future AI and automation opportunities, enhancing operational efficiency and ownership costs in next-gen applications and reflecting Intel's vision of integrating 'AI Everywhere' across various infrastructures.

Intel Sierra Forest: Up to 288 Efficiency Cores, Set for 1H 2024

The first of Intel's announcements at MWC 2024 focuses on its upcoming Sierra Forest platform, which is scheduled for the first half of 2024. As initially announced in February 2022 during Intel's Investor Meeting, Intel is splitting its server roadmap into solutions featuring only performance (P) cores or only efficiency (E) cores. We already know that Sierra Forest's new chips feature an all-E-core architecture designed for maximum efficiency in scale-out, cloud-native, and containerized environments.

These chips utilize CPU chiplets built on the Intel 3 process alongside twin I/O chiplets based on the Intel 7 node. This combination allows for a scalable architecture, which can accommodate increasing core counts by adding more chiplets, optimizing performance for complex computing environments.

Sierra Forest, Intel's all-E-core Xeon processor family, is anticipated to significantly enhance power efficiency with up to 288 E-cores per socket. Intel also claims that Sierra Forest is expected to deliver 2.7 times the performance-per-rack compared to an unspecified platform from 2021; this could be either Ice Lake or Cascade Lake, but Intel didn't say which.

Additionally, Intel is promising savings of up to 30% in Infrastructure Power Management with Sierra Forest as their Infrastructure Power Manager (IPM) application is now available commercially for 5G cores. Power manageability and efficiency are growing challenges for network operators, so IPM is designed to allow network operators to optimize energy efficiency and TCO savings.

Intel also touched on vRAN, which is vital for modern mobile networks, as many operators are forgoing dedicated hardware and instead leaning towards virtualized radio access networks (vRANs). Using vRAN Boost, an integrated accelerator within Xeon processors, Intel states that the 4th Gen Xeon should be able to reduce power consumption by around 20% while doubling the available network capacity.

Intel's push for 'AI Everywhere' is also a constant focus here, with AI's role in vRAN management becoming more crucial. Intel has announced the vRAN AI Developer Kit, which is available to select partners. This allows partners and 5G network providers to develop AI models to optimize for vRAN applications, tailor their vRAN-based functions to more use cases, and adapt to changes within those scenarios.

Intel Granite Rapids-D: Coming in 2025 For Edge Solutions

Intel's Granite Rapids-D, designed for edge solutions, is set to bolster Intel's role in virtualized radio access network (vRAN) workloads in 2025. Intel also promises marked efficiency enhancements and some vRAN Boost optimizations similar to those expected on Sierra Forest. Set to follow on from the current Ice Lake-D for the edge, Intel is expected to take the performance (P) cores used within Granite Rapids server parts and optimize the V/F curve for the lower-powered edge platform. As outlined by Intel, the previous 4th Generation Xeon platform effectively doubled vRAN capacity, enhancing network capabilities while reducing power consumption by up to 20%.

Granite Rapids-D aims to further these advancements, utilizing Intel AVX for vRAN and integrated Intel vRAN Boost acceleration, thereby offering substantial cost and performance benefits on a global scale. While Intel hasn't provided a specific date (or month) of when we can expect to see Granite Rapids-D in 2025, Intel is currently in the process of sampling these next-gen Xeon-D processors with partners, aiming to ensure a market-ready platform at launch.

Related Reading

AMD Fixed the STAPM Throttling Issue, So We Retested The Ryzen 7 8700G and Ryzen 5 8600G

When we initially reviewed the latest Ryzen 8000G APUs from AMD last month, the Ryzen 7 8700G and Ryzen 5 8600G, we became aware of an issue that caused the APUs to throttle after a few minutes. This posed a problem for a couple of reasons: first, it compromised our data's ability to reflect the true capabilities of the processors; and second, it highlighted a feature that AMD forgot to disable when carrying its mobile Phoenix silicon (Ryzen 7040) over to the desktop.

We updated the data in our review of the Ryzen 7 8700G and Ryzen 5 8600G to reflect performance with STAPM on the initial firmware and with STAPM removed with the latest firmware. Our updated and full review can be accessed by clicking the link below:

As we highlighted in our Ryzen 8000G APU STAPM Throttling article, AMD, through AM5 motherboard vendors such as ASUS, has rolled out updated firmware that removes the STAPM limitation. To quickly recap: AMD introduced Skin Temperature-Aware Power Management (STAPM) in 2014 as a feature of its mobile processors. It extends on-die power management by considering both the processor's internal temperatures, taken by on-chip thermal diodes, and the laptop's surface temperature (i.e., the skin temperature).

The aim of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself. The fundamental issue with STAPM in the case of the Ryzen 8000G APUs, including the Ryzen 7 8700G and Ryzen 5 8600G, is that these are mobile processors packaged into a format for use with the AM5 desktop platform. As a desktop platform is built into a chassis that isn't placed on a user's lap, the STAPM feature becomes irrelevant.

As we saw when we ran a gaming load over a prolonged period of time on the Ryzen 7 8700G with the firmware available at launch, we hit power throttling (STAPM) after around 3 minutes. As we can see in the above chart, power dropped from a sustained value of 83-84 W down to around 65 W, a drop of around 22%. While we know Zen 4 is a very efficient architecture at lower power levels, overall performance will drop once this limit is hit. Unfortunately, AMD forgot to remove the STAPM limits when transitioning Phoenix to the AM5 platform.
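The throttling behavior we measured can be approximated with a toy model: the package runs at full power until a skin-temperature-driven budget is exhausted, then falls back to a sustained limit. This is a deliberately simplified sketch; the 84 W and 65 W figures come from our testing, but the 180-second budget and the model structure are illustrative assumptions, not AMD's actual algorithm:

```python
# Toy model of STAPM-style throttling as observed on the Ryzen 7 8700G:
# full package power is allowed until a skin-temperature budget drains,
# after which power falls to a sustained limit. The 84 W / 65 W values are
# measured; the 180-second budget is an assumption for illustration only.

def stapm_power(t_seconds: float,
                burst_w: float = 84.0,      # measured package power before throttling
                sustained_w: float = 65.0,  # measured post-throttle power
                budget_s: float = 180.0) -> float:
    """Package power at time t under a simplified STAPM model."""
    return burst_w if t_seconds < budget_s else sustained_w

# With STAPM removed (the fixed firmware), power simply stays at burst_w:
def fixed_power(t_seconds: float, burst_w: float = 84.0) -> float:
    return burst_w

# Relative power drop once the budget is exhausted: roughly the ~22% we measured
drop = 1 - stapm_power(600) / stapm_power(0)
```

Under this model, a 10-minute gaming run on launch firmware spends most of its time at the sustained limit, which is exactly the behavior the launch-firmware chart shows.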

Retesting the same game (F1 2023) at the same settings (720p High) with the firmware highlighting that STAPM had been removed, we can see that we aren't experiencing any of the power throttling we initially saw. We can see power is sustained for over 10 minutes of testing (we did test for double this), and we saw no drops in package power, at least not from anything related to STAPM. This means for users on the latest firmware on whatever AM5 motherboard is being used, power and, ultimately, performance remain consistent with what the Ryzen 7 8700G should have been getting at launch.

The key question is: does removing STAPM impact the initial results in our review of the Ryzen 7 8700G and Ryzen 5 8600G, and if so, by how much? We added the new data to our review of the Ryzen 7 8700G and Ryzen 5 8600G but kept the initial results so that users can see if there are any differences in performance. Ultimately, benchmark runs are limited to the time it takes to run them, but in real-world scenarios, tasks such as video rendering and other longer sustained loads are more likely to show gains in performance. After all, a drop of 22% in power is considerable, especially over a task that could take an hour.

(4-1d) Blender 3.6: Pabellon Barcelona (CPU Only)

Using one of our longer benchmarks, Blender 3.6, to highlight where performance gains are notable with the STAPM limitation removed, we saw an increase in performance of around 7.5% on the Ryzen 7 8700G on the latest firmware. In the same benchmark, we saw an increase of around 4% on the Ryzen 5 8600G.

Over all of the Blender 3.6 tests in the rendering section of our CPU performance suite, performance gains hovered between 2 and 4.4% on the Ryzen 5 8600G, and between 5 and 7.5% on the Ryzen 7 8700G. This isn't really free performance; it's the performance that should have been there at launch.
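For clarity, the percentages quoted in this section are simple relative changes between benchmark scores. A minimal sketch, using placeholder numbers rather than our actual Blender results:

```python
# Relative performance gain between two benchmark scores (higher = better).
# The scores below are illustrative placeholders, not our measured results.

def percent_gain(old_score: float, new_score: float) -> float:
    """Percentage change from old_score to new_score."""
    return (new_score - old_score) / old_score * 100.0

# e.g. a run that improves from 100 to 107.5 samples/min is a 7.5% gain
gain = percent_gain(100.0, 107.5)
```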

IGP World of Tanks - 768p Min - Average FPS

Looking at how STAPM affected our initial data, the difference in World of Tanks at 768p Minimum settings was marginal at best, around 1%. Given how CPU-intensive World of Tanks is, and combining this with integrated graphics, the AMD Ryzen APUs (5000G and 8000G) both shine compared to Intel's integrated UHD Graphics in gaming. Given that gaming benchmarks are typically time-limited runs, it's harder to identify performance gains. The key takeaway here is that with the STAPM limitation removed, performance shouldn't drop over sustained periods of time, so our figures above and our updated review data aren't compromised.

(i-3) Total War Warhammer 3 - 1440p Ultra - Average FPS

Regarding gaming with a discrete graphics card, we saw no drastic changes in performance, as highlighted by our Total War Warhammer 3 at 1440p Ultra benchmark. Across the board, in our discrete graphics results with both the Ryzen 7 8700G and the Ryzen 5 8600G, we saw nothing but marginal differences in performance (less than 1%). As we've mentioned, removing the STAPM limitations doesn't necessarily improve performance. Still, it allows the APUs to keep the same performance level for sustained periods, which is how it should have been at launch. With STAPM applied as with the initial firmware at launch on AM5 motherboards, power would drop by around 22%, limiting the full performance capability over prolonged periods.

As we've mentioned, we have updated our full review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs to reflect our latest data gathered from testing on the latest firmware. We can fully confirm that the STAPM issue has been fixed and that performance is as it should be on both chips.

You can access all of our updated data in our review of the Ryzen 7 8700G and Ryzen 5 8600G by clicking the link below.

AMD CEO Dr. Lisa Su to Deliver Opening Keynote at Computex 2024

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, announced today that Dr. Lisa Su, AMD's chief executive officer, will give the trade show's Opening Keynote. Su's speech is set for the morning of June 3, 2024, shortly before the formal start of the show. According to AMD, the keynote talk will be "highlighting the next generation of AMD products enabling new experiences and breakthrough AI capabilities from the cloud to the edge, PCs and intelligent end devices."

This year's Computex is focused on six key areas: AI computing, Advanced Connectivity, Future Mobility, Immersive Reality, Sustainability, and Innovations. As a leading developer of CPUs, AI and HPC GPUs, consumer GPUs, and DPUs, AMD can speak to most of these topics with authority.

As AMD is already mid-cycle on most of their product architectures, the company's most recent public roadmaps have them set to deliver major new CPU and GPU architectures before the end of 2024 with Zen 5 CPUs and RDNA 4 GPUs, respectively. AMD has not previously given any finer guidance on when in the year to expect this hardware, though AMD's overall plans for 2024 are notably more aggressive than the start of their last architecture cycle in 2022. Of note, the company has previously indicated that it intends to launch all 3 flavors of the Zen 5 architecture this year – not just the basic core, but also Zen 5c and Zen 5 with V-Cache – as well as a new mobile SoC (Strix Point). By comparison, it took AMD well into 2023 to do the same with Zen 4 after starting with a fall 2022 launch for those first products.


AMD 2022 Financial Analyst Day CPU Core Roadmap

This upcoming keynote will be Lisa Su's third Computex keynote after her speeches at Computex 2019 and Computex 2022. In both cases she also announced upcoming AMD products.

In 2019, she showcased performance improvements of then upcoming 3rd Generation Ryzen desktop processors and 7nm EPYC datacenter processors. Lisa Su also highlighted AMD's advancements in 7nm process technology, showcasing the world's first 7nm gaming GPU, the Radeon VII, and the first 7nm datacenter GPU, the Radeon Instinct MI60.

In 2022, the head of AMD offered a sneak peek at the then-upcoming Ryzen 7000-series desktop processors based on the Zen 4 architecture, promising significant performance improvements. She also teased the next generation of Radeon RX 7000-series GPUs with the RDNA 3 architecture.

Arm and Samsung to Co-Develop 2nm GAA-Optimized Cortex Cores

Arm and Samsung this week announced their joint design-technology co-optimization (DTCO) program for Arm's next-generation Cortex general-purpose CPU cores as well as Samsung's next-generation process technology featuring gate-all-around (GAA) multi-bridge-channel field-effect transistors (MBCFETs). 

"Optimizing Cortex-X and Cortex-A processors on the latest Samsung process node underscores our shared vision to redefine what’s possible in mobile computing, and we look forward to continuing to push boundaries to meet the relentless performance and efficiency demands of the AI era," said Chris Bergey, SVP and GM, Client Business at Arm.

Under the program, the companies aim to deliver tailored versions of Cortex-A and Cortex-X cores made on Samsung's 2 nm-class process technology for various applications, including smartphones, datacenters, infrastructure, and various customized systems-on-chips. For now, the companies do not say whether they aim to co-optimize Arm's Cortex cores for Samsung's first-generation 2 nm production node, called SF2 (due in 2025), or whether the plan is to optimize these cores for all SF2-series technologies, including SF2 and SF2P.

GAA nanosheet transistors, with channels that are surrounded by gates on all four sides, offer plenty of options for optimization. For example, nanosheet channels can be widened to increase drive current and boost performance, or narrowed to reduce power consumption and cost. Depending on the application, Arm and Samsung will have plenty of design choices.
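To first order, the knob being described is effective channel width: a nanosheet stack's drive current scales roughly with the summed width of its sheets, so widening sheets buys performance at the cost of area and power. The following is a schoolbook approximation for illustration, with made-up reference dimensions, not a Samsung SF2 device model:

```python
# First-order illustration of the nanosheet-width trade-off described above:
# drive current scales roughly with total effective channel width
# (number of sheets x width per sheet). Reference dimensions are invented
# for illustration; this is not a Samsung SF2 device model.

def effective_width_nm(n_sheets: int, sheet_width_nm: float) -> float:
    """Summed channel width of a nanosheet stack."""
    return n_sheets * sheet_width_nm

def relative_drive(n_sheets: int, sheet_width_nm: float,
                   ref_width_nm: float = 3 * 30.0) -> float:
    """Drive current relative to a (hypothetical) 3-sheet, 30 nm-wide stack."""
    return effective_width_nm(n_sheets, sheet_width_nm) / ref_width_nm

# Widening sheets from 30 nm to 45 nm boosts drive ~1.5x; narrowing to
# 20 nm cuts it to ~0.67x, trading performance for power and area.
```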

Keeping in mind that we are talking about Cortex-A cores aimed at a wide variety of applications, as well as Cortex-X cores designed specifically to deliver maximum performance, the results of the collaborative work promise to be noteworthy. In particular, we are looking forward to Cortex-X cores with maximized performance, Cortex-A cores with balanced performance and power consumption, and Cortex-A cores with minimized power consumption.

Nowadays, collaboration between IP (intellectual property) developers, such as Arm, and foundries, such as Samsung Foundry, is essential to maximize performance, reduce power consumption, and optimize transistor density. The joint work with Arm will ensure that Samsung's foundry partners have access to processor cores that can deliver exactly what they need.

Capsule Review: AlphaCool Apex Stealth Metal 120mm Fan

Alphacool, a renowned name in the realm of PC cooling solutions, recently launched their Apex Stealth Metal series of cooling fans. Prior to their launch, the new fans had amassed a significant amount of hype in the PC community, in part because of the unfortunate misconception that the entire fan would be made out of metal.

Regardless of whether they're made entirely out of metal or not, however, these fans are notable for their unique construction, combining a metallic frame with plastic parts that are decoupled from the metal. This design choice not only contributes to the fan's aesthetic appeal but also plays a role in its operational efficiency.

The series includes two distinct models, the Apex Stealth Metal 120 mm and the Apex Stealth Metal Power 120 mm, distinguished primarily by their maximum rotational speeds. The former reaches up to 2000 RPM, while the latter, designed for more demanding applications, can achieve a remarkable 3000 RPM. Available in four color options – White, Matte Black, Chrome, and Gold – these fans offer a blend of style and functionality, making them a versatile choice for various PC builds.

The Enermax LiqMaxFlo 360mm AIO Cooler Review: A Bit Bigger, A Bit Better

For established PC peripheral vendors, the biggest challenge in participating in the highly commoditized market is setting themselves apart from their numerous competitors. As designs for coolers and other peripherals have converged over the years into a handful of basic, highly-optimized designs, developing novel hardware for what is essentially a "solved" physics problem becomes harder and harder. So often then, we see vendors focus on adding non-core features to their hardware, such as RGB lighting and other aesthetics. But every now and then, we see a vendor go a little farther off of the beaten path with the physical design of their coolers.

Underscoring this point – and the subject of today's review – is Enermax's latest all-in-one (AIO) CPU cooler, the LiqMaxFlo 360mm. Designed to compete in the top-tier segment of the cooling market, Enermax has opted to play with the physics of their 360mm cooler a bit by making it 38mm thick, about 40% thicker than the industry average of 27mm. And while Enermax is hardly the first vendor to release a thick AIO cooler, they are in much more limited company here due to the design and compatibility trade-offs that come with using a thicker cooler – trade-offs that most other vendors opt to avoid.

The net result is that the LiqMaxFlo 360mm gets to immediately start off as differentiated from so many of the other 360mm coolers on the market, employing a design that can give Enermax an edge in cooling performance, at least so long as the cooler fits in a system. Otherwise, not resting on just building a bigger cooler, Enermax has also equipped the LiqMaxFlo 360mm with customizable RGB lighting, allowing it to also cater to the aesthetic preferences of modern advanced PC builders. All together, there's a little something for everyone with the LiqMaxFlo 360mm – and a lot of radiator to cram into a case. So let's get started.

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers as well as emerging devices like software defined vehicles (SDVs) there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney’s LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of her alleged business division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their own custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread, and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. The well-known GPU industry observers at Jon Peddie Research believe that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocketed datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and would certainly like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles – right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

Recall of CableMod's 12VHPWR Adapters Estimates Failure Rate of 1.07%

A recall on 12VHPWR angled adapters from CableMod has reached its next stage this week, with the publication of a warning document from the U.S. Consumer Product Safety Commission. Referencing the original recall for CableMod's V1.0 and V1.1 adapters, which kicked off back in December, the CPSC notice marks the first involvement of government regulators. And with it has come to light a bit more detail on just how big the recall is overall, along with an estimated failure rate for the adapters of a hair over 1%.

According to the CPSC notice, CableMod is recalling 25,300 adapters, which were sold between February 2023 and December 2023. Of those, at least 272 adapters failed, per reports and repair claims made to CableMod. That puts the failure rate for the angled adapters at 1.07% – if not a bit higher due to the underreporting that can happen with self-reported statistics. All told, the manufacturer has received at least $74,500 in property damage claims in the United States, accounting for the failed adapters themselves, as well as the video cards and anything else damaged in the process.
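The quoted failure rate follows directly from the recall numbers; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope failure rate from the CPSC recall figures.
recalled_units = 25_300   # adapters covered by the recall
reported_failures = 272   # failures reported/claimed to CableMod

failure_rate = reported_failures / recalled_units
print(f"Reported failure rate: {failure_rate:.3%}")  # → 1.075%
```

As noted, self-reporting means this is best treated as a floor rather than the true rate.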

As part of the recall, CableMod has asked owners of its angled 12VHPWR adapters V1.0 and V1.1 to stop using them immediately, and to destroy them to prevent future use. Buyers can opt for a full refund of $40, or a $60 store credit.

It is noteworthy that, despite the teething issues with the initial design of the 12VHPWR connector – culminating with the PCI-SIG replacing it with the upgraded 12V-2x6 standard – the issue with the CableMod adapters is seemingly distinct from those larger design flaws. Specifically, CableMod's recall cites issues with the male portion of their adapters, which was not altered in the 12V-2x6 update. Compared to 12VHPWR, 12V-2x6 only alters female plugs (such as those found on video cards themselves), calling for shorter sensing pins and longer conductor terminals. Male plugs, on the other hand, remain unchanged, which is why existing PSU cables made for the 12VHPWR remain compatible (and normally safe) with 12V-2x6 video cards. Though as cable mating is a two-way dance, it's unlikely having to plug into inadequate 12VHPWR female connectors did CableMod any favors here.

Sources: Consumer Product Safety Commission, HotHardware, CableMod

The Geometric Future Eskimo Junior 36 AIO Cooler Review: Subdued Minimalism

Today we're looking at an all-in-one closed loop cooler from a face that's new to AnandTech: Geometric Future. Founded in 2020, Geometric Future is a PC components manufacturer with a goal of setting themselves apart in the crowded PC marketplace by redefining modern aesthetics. Their approach to design emphasizes the application of geometric elements and minimalist philosophy, as reflected in their slogan, "Simplify". They regard themselves as a potential future backbone of China's design industry, starting with a small step in the IT sector.

For such a new company, Geometric Future has already made significant strides in the realm of PC power and cooling products. One of their most notable products – and what we're reviewing today – is the Eskimo Junior 36, an all-in-one CPU liquid cooler available in 240mm and 360mm sizes. This cooler is designed with a minimalist aesthetic in mind, featuring a simplistic CPU block and equipped with high-performance Squama 2503 fans. Geometric Future pitches the Eskimo Junior 36 as being engineered to provide an optimal balance of cooling efficiency and aesthetics, making it able to achieve excellent cooling capabilities while maintaining low noise levels.

But marketing claims aside, we shall see where it stands in today’s highly competitive market in this review.

German Court Bans Sales of Select Intel CPUs in Germany Over Patent Dispute

A German court has sided with R2 Semiconductor against Intel, ruling that the chip giant infringed one of R2's patents. This decision could lead to a sales ban on select Intel processors, as well as products based on them, in Germany. Intel, for its part, has accused R2 of being a patent troll wielding a low-quality patent, and has said that it will appeal the decision.

The regional court in Düsseldorf, Germany, ruled that Intel infringed a patent covering an integrated voltage regulator technology that belongs to Palo Alto, California-based R2 Semiconductor. The court on Wednesday issued an injunction against sales of Intel's Core-series 'Ice Lake,' 'Tiger Lake,' 'Alder Lake,' and Xeon Scalable 'Ice Lake Server' processors as well as PCs and servers based on these CPUs. Some of these processors have already been discontinued, but Alder Lake chips remain available at retail and inside many systems still on store shelves. That said, the ruling does not mean that these CPUs will disappear from the German market immediately.

Meanwhile, the injunction does not cover Intel's current-generation Core 'Raptor Lake' and Core Ultra 'Meteor Lake' processors for desktops and laptops, according to The Financial Times, so the impact of the injunction is set to be fairly limited.

Intel has expressed its disappointment with the verdict and announced its intention to challenge the decision. The company criticized R2 Semiconductor's litigation strategy, accusing it of pursuing serial lawsuits against big companies, particularly after Intel managed to invalidate one of R2's U.S. patents.

"R2 files serial lawsuits to extract large sums from innovators like Intel," a statement by Intel reads. "R2 first filed suit against Intel in the U.S., but after Intel invalidated R2's low-quality U.S. patent R2 shifted its campaign against Intel to Europe. Intel believes companies like R2, which appears to be a shell company whose only business is litigation, should not be allowed to obtain injunctions on CPUs and other critical components at the expense of consumers, workers, national security, and the economy."

In its lawsuit against Intel, R2 requested the court to halt sales of infringing processors, halt sales of products equipped with these CPUs, and mandate a recall of items containing these processors, as Intel revealed last September. Intel, for its part, contended that imposing an injunction would be an excessive response.

Meanwhile, it is important to note that in this legal battle Intel is safeguarding its customers by assuming responsibility for any legal expenses or compensation they may incur. Consequently, as of September, Intel was unable to provide a reliable estimate of the potential financial impact or the scope of losses that could result from the legal battle, as they could be vast.

In stark contrast with Intel, R2 has welcomed the court's decision and presented its own view of the legal dispute.

"We are delighted that the highly respected German court has issued an injunction and unequivocally found that Intel has infringed R2's patents for integrated voltage regulators," said David Fisher, CEO of R2. "We intend to enforce this injunction and protect our valuable intellectual property. The global patent system is here precisely for the purpose of protecting inventors like myself and R2 Semiconductor."

R2 claims that Intel planned to invest in R2 in 2015, about two years after the company first brought its Fully Integrated Voltage Regulator (FIVR) technology to market with its 4th Generation Core 'Haswell' processors, but then abandoned talks.

"R2 has been a semiconductor IP developer, similar to Arm and Rambus, for more than 15 years," Fisher said. "Intel is intimately familiar with R2's business — in fact, the companies were in the final stages of an investment by Intel into R2 in 2015 when Intel unilaterally terminated the process. R2 had asked if a technical paper Intel had just published about their approach to their FIVR technology, which had begun shipping in their chips, was accurate. The next and final communication was from Intel's patent counsel. That was when it became clear to me that Intel was using R2's patented technology in their chips without attribution or compensation."

The head of R2 states that Intel is the only company that R2 has ever sued, which contradicts Intel's accusation that R2 is a patent troll.

"That is how these lawsuits emerged, and Intel is the only entity R2 has ever accused of violating its patents," Fisher stated. "It is unsurprising but disappointing that Intel continues to peddle its false narratives rather than taking responsibility for its repeated and chronic infringement of our patents."

AMD Unveils Their Embedded+ Architecture, Ryzen Embedded with Versal Together

One area of AMD's product portfolio that doesn't get as much attention as the desktop and server parts is their Embedded platform. AMD's Embedded series has been important for on-the-edge devices, including industrial, automotive, healthcare, digital gaming machines, and thin client systems. Today, AMD has unveiled their latest Embedded architecture, Embedded+, which combines their Ryzen Embedded processors based on the Zen+ architecture with their Versal adaptive SoCs onto a single board.

The Embedded+ architecture integrates the capabilities of their Ryzen Embedded processors with their Versal AI Edge adaptive SoCs onto one packaged board. AMD targets key areas that require good computational power and power efficiency. This synergy enables Embedded+ to handle AI inferencing and manage complex sensor data in real-time, which is crucial for applications in dynamic and demanding environments.

Giving ODMs the ability to have both Ryzen Embedded and their Versal SoCs onto a single board is particularly beneficial for industries requiring low-latency response times between hardware and software, including autonomous vehicles, diagnostic equipment in healthcare, and precision machinery in industrial automation. The AMD Embedded+ architecture can also support various workloads across different processor types, including x86 and ARM, along with AI engines and FPGA fabric, which offers flexibility and scalability of embedded computing solutions within industries.

The Embedded+ platform from AMD offers plenty of compatibility with various sensor types and their corresponding interfaces. It facilitates direct connectivity with standard peripherals and industrial sensors through Ethernet, USB, and HDMI/DP interfaces. The AMD Ryzen Embedded processors within the architecture can handle inputs from traditional image sensors such as RGB, monochrome, and even advanced neuromorphic types while supporting industry-standard image sensor interfaces like MIPI and LVDS.

Further enhancing its capability, the AMD Versal AI Edge adaptive SoCs on the Embedded+ motherboard offer adaptable I/O options for real-time sensor input and industrial networking. This includes interfacing with LiDAR, RADAR, and other delicate and sophisticated sensors necessary for modern embedded systems in the industrial, medical, and automotive sectors. The platform's support for various product-level sensor interfaces, such as GMSL and Ethernet-based vision protocols, means it is designed and ready for integration into complex, sensor-driven systems.

AMD has also announced a new pre-integrated solution, which will be available for ODMs starting today. The Sapphire Technology VPR-4616-MB platform is a compact, Mini-ITX form factor motherboard that leverages the AMD Versal AI Edge 2302 SoC combined with an AMD Ryzen Embedded R2314 processor, which is based on Zen+ and has 4C/4T with 6 Radeon Vega compute units. It features a custom expansion connector for I/O boards, supporting a wide array of connectivity options, including dual DDR4 SO-DIMM slots with up to 64 GB capacity, one PCIe 3.0 x4 M.2 slot, and one SATA port for conventional HDDs and SSDs. The VPR-4616-MB also has a good array of networking capabilities, including 2.5 Gb Ethernet and an M.2 Key E 2230 PCIe x1 slot for a wireless interface. It also supports the Linux-based Ubuntu 22.04 operating system.

Also announced is a series of expansion boards that significantly broaden support for the Embedded+ architecture. The Octo GMSL Camera I/O board is particularly noteworthy for its ability to interface with multiple cameras simultaneously. It is undoubtedly suitable for high bandwidth vision-based systems, integral to sectors such as advanced driver-assistance systems (ADAS) and automated surveillance systems. These systems often require the integration of numerous image inputs for real-time processing and analysis, and the Octo GMSL board is engineered to meet this demand specifically.

Additionally, a dual Ethernet I/O board is available, capable of supporting 10/100/1000 Mb connections, catering to environments that demand high-speed network communications. For even higher bandwidth requirements, there is a dual 10 Gb SFP+ board, which also includes 16 GPIOs, providing ample data transfer rates for tasks like real-time video streaming and large-scale sensor data aggregation. These expansion options broaden the scope of what the Embedded+ architecture is capable of in edge and industrial scenarios.

The Sapphire VPR-4616-MB is available for customers to purchase now, in a complete system configuration that includes storage, memory, power supply, and chassis.

Sales of Client CPUs Soared in Q4 2023: Jon Peddie Research

Global client PC CPU shipments hit 66 million units in the fourth quarter of 2023, up both sequentially and year-over-year, a notable upturn in the PC processor market, according to the latest report from Jon Peddie Research. The data indicates that PC makers have depleted their CPU stocks and returned to purchasing processors during the quarter. This might also suggest that PC makers now have an optimistic business outlook.

AMD, Intel, and other suppliers shipped 66 million processors for client PCs during the fourth quarter of 2023, a 7% increase from the previous quarter (62 million) and a 22% rise from the year before (54 million). Despite a challenging global environment, the CPU market is showing signs of robust health.
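The growth percentages can be sanity-checked from the rounded unit counts above (whole-million rounding gives about 6% quarter-over-quarter versus JPR's reported 7% from unrounded data):

```python
# Sanity-check the growth figures from the rounded shipment counts.
q4_2023 = 66  # million client CPUs shipped, Q4 2023
q3_2023 = 62  # previous quarter
q4_2022 = 54  # year-ago quarter

qoq = (q4_2023 - q3_2023) / q3_2023  # quarter-over-quarter growth
yoy = (q4_2023 - q4_2022) / q4_2022  # year-over-year growth
print(f"QoQ: {qoq:.1%}, YoY: {yoy:.1%}")  # → QoQ: 6.5%, YoY: 22.2%
```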

70% of the client PC CPUs sold in Q4 2023 were aimed at notebooks, up significantly from the 63% share held by laptop CPUs in Q4 2022. Indeed, notebook PCs have been outselling desktop computers for years, so, unsurprisingly, the industry shipped more laptop-bound processors than desktop-bound CPUs. What is perhaps surprising is that desktop CPUs still accounted for as much as 37% of shipments in Q4 2022.

"Q4's increase in client CPU shipments from last quarter is positive news in what has been depressing news in general," said Jon Peddie, president of JPR. "The increase in upsetting news in the Middle East, combined with the ongoing war in Ukraine, the trade war with China, and the layoffs at many organizations, has been a torrent of bad news despite decreased inflation and increased GDP in the U.S. CPU shipments are showing continued gains and are a leading indicator."

Meanwhile, integrated graphics processors (iGPUs) also grew, with shipments reaching 60 million units, up by 7% quarter-to-quarter and 18% year-over-year. Because the majority of client CPUs now feature a built-in GPU in one form or another, it is reasonable to expect shipments of iGPUs to grow along with shipments of client CPUs. 

Jon Peddie Research predicts that iGPUs will dominate the PC segment, with their penetration expected to skyrocket to 98% within the next five years. This forecast may point to a future where integrated graphics become ubiquitous, though we would not expect discrete graphics cards to be extinct. 

Meanwhile, the server CPU segment painted a different picture in Q4 2023, with a modest 2.8% growth from the previous quarter but a significant 26% decline year-over-year, according to JPR. 

Despite these challenges, the overall positive momentum in the CPU market, as reported by Jon Peddie Research, suggests a sector that is adapting and thriving even amidst economic and geopolitical uncertainties.

Palit Releases Fanless Version of NVIDIA's New GeForce RTX 3050 6GB

NVIDIA today is quietly launching a new entry-level graphics card for the retail market, the GeForce RTX 3050 6GB. Based on a cut-down version of their budget Ampere-architecture GA107 GPU, the new card brings what was previously an OEM-only product to the retail market. Besides adding another part to NVIDIA's deep product stack, the launch of the RTX 3050 6GB also comes with another perk: lower power consumption thanks to this part targeting system installs where an external PCIe power connector would not be needed. NVIDIA's partners, in turn, have not wasted any time in taking advantage of this, and today Palit is releasing its first fanless KalmX board in years: the GeForce RTX 3050 KalmX 6GB.

The GeForce RTX 3050 6GB is based on the GA107 graphics processor with 2304 CUDA cores, which is paired with 6GB of GDDR6 attached to a petite 96-bit memory bus (versus 128-bit for the full RTX 3050 8GB). Coupled with a boost clock rating of just 1470 MHz, the RTX 3050 6GB delivers tangibly lower compute performance than the fully-fledged RTX 3050 — 6.77 FP32 TFLOPS vs 9.1 FP32 TFLOPS — but these compromises offer an indisputable advantage: a 70W power target.
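Those TFLOPS figures follow directly from the shader counts and clocks: each CUDA core can retire one fused multiply-add (two FLOPs) per cycle, so peak FP32 throughput is cores × clock × 2. A quick sketch, using the full RTX 3050 8GB's published 2560 cores and 1777 MHz boost for comparison:

```python
# Peak FP32 throughput: CUDA cores x boost clock x 2 FLOPs (one FMA) per cycle.
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

print(f"RTX 3050 6GB: {fp32_tflops(2304, 1470):.2f} TFLOPS")  # → 6.77 TFLOPS
print(f"RTX 3050 8GB: {fp32_tflops(2560, 1777):.2f} TFLOPS")  # → 9.10 TFLOPS
```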

Palit is the first company to take advantage of the reduced power consumption of the GeForce RTX 3050 6GB, launching a passively cooled graphics card based on this part, its first such card in four years. The Palit GeForce RTX 3050 KalmX 6GB (NE63050018JE-1170H) uses a custom printed circuit board (PCB) that not only offers modern DisplayPort 1.4a and HDMI 2.1 outputs, but, as we still see in some entry-level cards, a dual-link DVI-D connector (a first for an Ampere-based graphics card).

The dual-slot passive cooling system with two heat pipes is certainly the main selling point of Palit's GeForce RTX 3050 KalmX 6GB. The product is pretty large though — it measures 166.3×137×38.3 mm — and will not fit into tiny desktops. Still, given the fact that fanless systems are usually not the most compact ones, this may not be a significant limitation of the new KalmX device.

Another advantage of Palit's GeForce RTX 3050 KalmX 6GB in particular and NVIDIA's GeForce RTX 3050 6GB in general is that it can be powered entirely via a PCIe slot, which eliminates the need for an auxiliary PCIe power connectors (which are sometimes not present in cheap systems from big OEMs).

Wccftech reports that NVIDIA's GeForce RTX 3050 6GB graphics cards will carry a recommended price tag of $169, and indeed these cards are available for $170 to $180. This looks to be a quite competitive price point, as the product offers higher compute performance than AMD's Radeon RX 6400 ($125) and Radeon RX 6500 XT ($140). Meanwhile, it remains to be seen how much Palit will charge for its uniquely positioned GeForce RTX 3050 KalmX 6GB.

AMD Set to Fix Ryzen 8000G APU STAPM Throttling Issue, Sustained Loads Affected

Earlier this week, we published our review of AMD's latest Zen 4 based APUs, the Ryzen 7 8700G and Ryzen 5 8600G. While we saw much better gaming performance using the integrated graphics compared to the previous Ryzen 5000G series of APUs, including the Ryzen 7 5700G, the team over at Gamers Nexus has since highlighted an issue with Skin Temperature-Aware Power Management, or STAPM, for short. This particular issue is something we have investigated ourselves, and we can confirm that there is a throttling issue within the current firmware (at the time of writing) with AMD's Ryzen 8000G APUs.

First, it's essential to understand what the Skin Temperature-Aware Power Management (STAPM) feature is and what it does. Introduced by AMD back in 2014, STAPM is a key feature within their mobile processors. STAPM extends the on-die power management by considering both the processor's internal temperatures, taken by on-chip thermal diodes, and the laptop's surface temperature (i.e. the skin temperature). The primary goal of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself.

This is where things relate directly to AMD's Ryzen 8000G series APUs. The Ryzen 8000G series of APUs is based on AMD's Phoenix silicon, which is already in use in their Ryzen Mobile 7040/8040 chips. Which means all of AMD's initial engineering for the platform was for mobile devices, and then extended to the Ryzen 8000G desktop platform. Besides the obvious physical differences, the Ryzen 8000G APUs feature a much higher 65 W TDP (88W PPT) to reflect their desktop-focused operation, making these chips the least power constrained version of Phoenix to date.

The issue is that AMD has essentially 'forgotten' to disable these STAPM features within their firmware, causing both the Ryzen 8000G APUs' Zen 4 cores and their RDNA3 integrated graphics to throttle after prolonged periods of sustained load. As we can see from our investigation of the issue, in F1 2023 at 720p High settings, package power dropped by around 22% within 3 minutes of play, which will undoubtedly impact both CPU and integrated graphics performance during prolonged sessions.
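AMD has not published the exact algorithm, but the behavior we measured can be approximated as a simple two-level power limit. The sketch below is purely illustrative, not AMD's implementation: 88 W is the 8700G's PPT, while the ~68 W sustained value (roughly 22% lower) and ~180 s window are rough figures from our F1 23 run.

```python
# Toy sketch of STAPM-style throttling -- NOT AMD's actual algorithm.
# The chip runs at its full "burst" power limit until the skin-temperature
# budget is assumed exhausted, then falls back to a lower sustained limit.
def stapm_power_limit(elapsed_s: float,
                      burst_w: float = 88.0,      # 8700G PPT limit
                      sustained_w: float = 68.0,  # ~22% lower, per our data
                      window_s: float = 180.0) -> float:
    """Allowed package power after `elapsed_s` seconds of sustained load."""
    return burst_w if elapsed_s < window_s else sustained_w

for t in (30, 120, 300):
    print(f"t={t:>3} s -> limit {stapm_power_limit(t):.0f} W")
```

On a desktop chip with no "skin" to protect, the sustained limit serves no purpose, which is why the fix is simply to disable the mechanism in firmware.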

This directly affects the data in our review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs, as the STAPM issues inherently mean that in very prolonged cases, the results may vary. Unfortunately, this issue apparently affects all AM5 motherboards and BIOSes currently available, so there's no way to properly run a Ryzen 8000G chip without STAPM throttling for the time being.

For the moment, we're putting a disclaimer on our Ryzen 8000G review, noting the issue. Once a fix is available from AMD, we'll be going back and re-testing the two chips we have to collect proper results, as well as to better quantify the performance impact of this unnecessary throttling.

Meanwhile, we reached out to AMD to confirm the issue officially, and a few minutes ago the company got back to us with a response.

"It has come to our attention that STAPM limits are being incorrectly applied to 8000 Series processors. This is causing them to drop their PPT limits under sustained load. We are working on a BIOS update to correct this behavior."

The fix that AMD will seemingly apply is through updated AGESA firmware, which, from their standpoint, should be simple in practice. Perhaps the biggest outstanding question is when this fix is coming, though we can't imagine AMD taking too long with this matter.

We must also thank Gamers Nexus for highlighting and providing additional context on the STAPM-related problems from which the Ryzen 8000G APUs suffer. The video review from Gamers Nexus of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs can be found above. Once a firmware fix has been provided, we will update the data set within our review of the Ryzen 7 8700G and Ryzen 5 8600G.

AMD: Zen 5-Based CPUs for Client and Server Applications On-Track for 2024

As part of their quarterly earnings call this week, AMD re-emphasized that its Zen 5-architecture processors for both client and datacenter applications will be available this year. While the company is not making any new disclosures on products or providing a timeline beyond "later this year", the latest statement serves as a reiteration of AMD's plans, and confirmation that those plans are still on schedule.

So far, we have heard about three Zen 5-based products from AMD: the Strix Point accelerated processing units (APUs) for laptops (and perhaps eventually desktops), the Granite Ridge processors for enthusiast-grade desktops, and Turin CPUs for datacenters. During the conference call with analysts and investors, AMD's Lisa Su confirmed plans to launch Turin and Strix this year.

"Looking ahead, customer excitement for our upcoming Turin family of EPYC processors is very strong," said Lisa Su, chief executive officer of AMD, at the company's earnings call this week (via SeekingAlpha). "Turin is a drop-in replacement for existing 4th Generation EPYC platforms that extends our performance, efficiency and TCO leadership with the addition of our next-gen Zen 5 core, new memory expansion capabilities, and higher core counts."

The head of AMD also confirmed that Turin will be drop-in compatible with existing SP5 platforms (i.e., will come in an LGA 6096 package), feature more than 96 cores, and more memory expansion capabilities (i.e., enhanced support for CXL and perhaps support for innovative DIMMs). Meanwhile, the new CPUs will also offer higher per-core performance and higher performance efficiency.


AMD High Performance CPU Core Roadmap. From AMD Financial Analyst Day 2022

As far as Strix Point is concerned, Lisa Su confirmed that this is a Zen 5 part featuring an 'enhanced RDNA 3' graphics core (also known as Navi 3.5), and an updated neural processing unit.

"Strix combines our next-gen Zen 5 core with enhanced RDNA graphics and an updated Ryzen AI engine to significantly increase the performance, energy efficiency, and AI capabilities of PCs," Su said. "Customer momentum for Strix is strong with the first notebooks on track to launch later this year."

It's notable that the head of AMD did not mention Granite Ridge CPUs for enthusiast-grade desktops during the conference call. Though as desktop CPUs tend to have smaller margins than mobile or server parts, they are often AMD's least interesting products to investors. Despite that omission, AMD has always launched their consumer desktop chips ahead of their server chips – in part due to the longer validation times required on the latter – so Turin being confirmed for 2024 is still a positive sign for Granite Ridge.

AMD Ryzen 7 8700G and Ryzen 5 8600G Review: Zen 4 APUs with RDNA3 Graphics

Some of the most desired desktop chips for low-cost systems have been AMD's APUs, or Accelerated Processing Units. The last time we saw AMD launch a series of APUs for desktops was back in 2021, with the release of their Cezanne-based Ryzen 5000G series, which combined Zen 3 cores with Radeon Vega-based integrated graphics. During CES 2024, AMD announced the successor to Cezanne, with new Phoenix-based APUs, aptly named the Ryzen 8000G series.

The latest Ryzen 8000G series is based on their mobile Phoenix architecture and has been refitted for AMD's AM5 desktop platform. Designed to give users and gamers on a budget a pathway to build a capable yet cheaper system without the requirement of a costly discrete graphics card hanging over their head, the Ryzen 8000G series consists of three SKUs, ranging from an entry-level Phoenix 2 based Zen 4 and Zen 4c hybrid chip, all the way to a full Zen 4 8C/16T model with AMD's latest mobile RDNA3 integrated graphics. 

Sitting at the top of the pile is the Ryzen 7 8700G, with 8C/16T, 16 MB of L3 cache, and AMD's Radeon 780M graphics. The other chip we're taking a look at today is the middle-of-the-road AMD Ryzen 5 8600G, which has a 6C/12T configuration with fully-fledged mobile Zen 4 cores. A third option, currently limited to OEMs, has four cores: one full Zen 4 core and three smaller, more efficient Zen 4c cores.

The other notable inclusion of AMD's Ryzen 8000G series is that it brings their Ryzen AI NPU into the desktop market for the first time. It is purposely built for AI inferencing workloads such as generative AI, and is optimized and designed to be more efficient and improve AI performance.

Much of the Ryzen 8000G series' appeal will come down to how much of an improvement the switch to Zen 4 and RDNA3 integrated graphics delivers over the Ryzen 5000G series' Zen 3 and Vega, which are already three years old at this point. The other element is how the mobile-based Phoenix Zen 4 cores compare to the full-fat Raphael Zen 4 cores. In our review and analysis of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs, we aim to find out.

MLCommons To Develop PC Client Version of MLPerf AI Benchmark Suite

MLCommons, the consortium behind the MLPerf family of machine learning benchmarks, is announcing this morning that the organization will be developing a new desktop AI benchmarking suite under the MLPerf banner. Helmed by the body’s newly-formed MLPerf Client working group, the task force will be developing a client AI benchmark suite aimed at traditional desktop PCs, workstations, and laptops. According to the consortium, the first iteration of the MLPerf Client benchmark suite will be based on Meta’s Llama 2 LLM, with an initial focus on assembling a benchmark suite for Windows.

MLPerf is the de facto industry standard benchmark for AI inference and training on servers and HPC systems, and MLCommons has slowly been extending the family of benchmarks to additional devices over the past several years. This has included assembling benchmarks for mobile devices, and even low-power edge devices. Now, the consortium is setting about covering the “missing middle” of their family of benchmarks with an MLPerf suite designed for PCs and workstations. And while this is far from the group’s first benchmark, it is in some respects their most ambitious effort to date.

The aim of the new MLPerf Client working group will be to develop a benchmark suitable for client PCs – which is to say, a benchmark that is not only sized appropriately for the devices, but is a real-world client AI workload in order to provide useful and meaningful results. Given the cooperative, consensus-based nature of the consortium’s development structure, today’s announcement comes fairly early in the process, as the group is just now getting started on developing the MLPerf Client benchmark. As a result, there are still a number of technical details about the final benchmark suite that need to be hammered out over the coming months, but to kick things off the group has already narrowed down some of the technical aspects of their upcoming benchmark suite.

Perhaps most critically, the working group has already settled on basing the initial version of the MLPerf Client benchmark around Meta's Llama 2 large language model, which is already used in other versions of the MLPerf suite. Specifically, the group is eyeing the 7 billion parameter version of that model (Llama-2-7B), as that’s believed to be the most appropriate size and complexity for client PCs (at INT8 precision, the 7B model would require roughly 7GB of RAM). Past that however, the group still needs to determine the specifics of the benchmark, most importantly the tasks which the LLM will be benchmarked executing on.
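The ~7GB figure follows from INT8 using one byte per parameter; a quick weight-only estimate (ignoring activations, KV cache, and runtime overhead):

```python
# Rough weight-only memory footprint for an LLM at a given precision.
# Assumes 1 byte/param for INT8 and 2 bytes/param for FP16; ignores
# activations, KV cache, and runtime overhead.
def weight_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9  # decimal GB

print(f"Llama-2-7B @ INT8: {weight_footprint_gb(7, 1):.1f} GB")  # → 7.0 GB
print(f"Llama-2-7B @ FP16: {weight_footprint_gb(7, 2):.1f} GB")  # → 14.0 GB
```

The FP16 line illustrates why quantization matters for client hardware: at full half precision the same model would not fit comfortably in a typical 16GB machine.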

With the aim of getting it on PCs of all shapes and sizes, from laptops to workstations, the MLPerf Client working group is going straight for mass market adoption by targeting Windows first – a far cry from the *nix-focused benchmarks they’re best known for. To be sure, the group does plan to bring MLPerf Client to additional platforms over time, but their first target is to hit the bulk of the PC market where Windows reigns supreme.

In fact, the focus on client computing is arguably the most ambitious part of the project for a group that already has ample experience with machine learning workloads. Thus far, the other versions of MLPerf have been aimed at device manufacturers, data scientists, and the like – which is to say they've been barebones benchmarks. Even the mobile version of the MLPerf benchmark isn't very accessible to end-users, as it's distributed as a source-code release intended to be compiled on the target system. The MLPerf Client benchmark for PCs, on the other hand, will be a true client benchmark, distributed as a compiled application with a user-friendly front-end. Which means the MLPerf Client working group is tasked with not only figuring out what the most representative ML workloads will be for a client, but then how to tie that together into a useful graphical benchmark.

Meanwhile, although many of the finer technical points of the MLPerf Client benchmark suite remain to be sorted out, talking to MLCommons representatives, it sounds like the group has a clear direction in mind on the APIs and runtimes that they want the benchmark to run on: all of them. With Windows offering its own machine learning APIs (WinML and DirectML), and then most hardware vendors offering their own optimized platforms on top of that (CUDA, OpenVino, etc), there are numerous possible execution backends for MLPerf Client to target. And, keeping in line with the laissez faire nature of the other MLPerf benchmarks, the expectation is that MLPerf Client will support a full gamut of common and vendor-proprietary backends.

In practice, then, this would be very similar to how other desktop client AI benchmarks work today, such as UL's Procyon AI benchmark suite, which allows for plugging in to multiple execution backends. The use of different backends does take away a bit from true apples-to-apples testing (though it would always be possible to force fallback to a common API like DirectML), but it gives the hardware vendors room to optimize the execution of the model to their hardware. MLPerf takes the same approach with their other benchmarks right now, essentially giving hardware vendors free rein to come up with new optimizations – including reduced precision and quantization – so long as they don't lose so much inference accuracy that they fail to meet the benchmark's overall accuracy requirements.
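As a rough illustration of what a multi-backend design implies, the sketch below shows a hypothetical backend selector that prefers vendor-optimized runtimes and falls back to a common API such as DirectML. All names here are illustrative assumptions; MLPerf Client's actual interface has not been published:

```python
def select_backend(available: dict, preference: list, fallback: str = "directml") -> str:
    """Return the first available backend from the vendor-preference list,
    else fall back to a common API (hypothetical logic, for illustration)."""
    for name in preference:
        if available.get(name):
            return name
    return fallback

# On a machine with CUDA available, the vendor-optimized path is chosen...
print(select_backend({"cuda": True, "openvino": False}, ["cuda", "openvino"]))  # cuda
# ...while forcing apples-to-apples testing means offering no vendor
# runtimes, so the common fallback API is used instead
print(select_backend({}, ["cuda", "openvino"]))  # directml
```

The design trade-off is the one described above: vendor backends maximize per-device performance, while the common fallback keeps cross-vendor results directly comparable.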

Even the type of hardware used to execute the benchmark is open to change: while the benchmark is clearly aimed at leveraging the new field of NPUs, vendors are also free to run it on GPUs and CPUs as they see fit. So MLPerf Client will not exclusively be an NPU or GPU benchmark.

Otherwise, keeping everyone on an equal footing, the working group itself is a who's who of hardware and software vendors. The list includes not only Intel, AMD, and NVIDIA, but also Arm, Qualcomm, Microsoft, Dell, and others. So there is buy-in from all of the major industry players (at least in the Windows space), which has been critical for driving the acceptance of MLPerf for servers, and will similarly be needed to drive acceptance of MLPerf Client.

The MLPerf Client benchmark itself is still quite some time from release, but once it’s out, it will be joining the current front-runners of UL’s Procyon AI benchmark and Primate Labs’ Geekbench ML, both of which already offer Windows client AI benchmarks. And while benchmark development is not necessarily a competitive field, MLCommons is hoping that their open, collaborative approach will be something that sets them apart from existing benchmarks. The nature of the consortium means that every member gets a say (and a vote) on matters, which isn’t the case for proprietary benchmarks. But it also means the group needs a complete consensus in order to move forward.

Ultimately, the initial version of the MLPerf Client benchmark is being devised as more of a beginning than an end product in and of itself. Besides expanding the benchmark to additional platforms beyond Windows, the working group will also eventually be looking at additional workloads to add to the suite – and, presumably, adding more models beyond Llama 2. So while the group has a good deal of work ahead of them just to get the initial benchmark out, the plan is for MLPerf Client to be a long-lived, long-supported benchmark, as the other MLPerf benchmarks are today.

The Corsair A115 CPU Cooler Review: Massive Air Cooler Is Effective, But Expensive

With recent high-performance CPUs exhibiting increasingly demanding cooling requirements, we've seen a surge in releases of new dual-tower air cooler designs. Though not new by any means, dual-tower designs have taken on increased importance as air cooler designers work to keep up with the significant thermal loads generated by the latest processors. And even in systems that aren't running the very highest-end or hottest CPUs, designers have been looking for ways to improve on air cooling efficiency, if only to hold the line on noise levels while the average TDP of enthusiast-class processors continues to creep up. All of which has been giving dual-tower coolers a bigger presence within the market.

At this point many major air cooler vendors are offering at least one dual-tower cooler, and, underscoring this broader shift in air cooler design, they're being joined by the liquid-cooling focused Corsair. Best known within the PC cooling space for their expansive lineup of all-in-one (AIO) liquid PC CPU coolers, Corsair has enjoyed a massive amount of success with their AIO coolers. But perhaps as a result of this, the company has exhibited a notable reticence towards venturing into the air cooler segment, and it's been years since the company last introduced a new CPU air cooler. This absence is finally coming to an end, however, with the launch of a new dual-tower air cooler.

Our review today centers on Corsair's latest offering in the high-end CPU air cooler market, the A115. Designed to challenge established models like the Noctua NH-D15, the A115 is Corsair's effort to jump into the high-end air cooling market with both feet and a lot of bravado. The A115 boasts substantial dimensions to maximize its cooling efficiency, aiming not just to meet but to surpass the cooling requirements of the most demanding mainstream CPUs. This review will thoroughly examine the A115's performance characteristics and its competitive standing in the aftermarket cooling market.

AMD Rolls Out Radeon RX 7900 XT Promo Pricing Opposite GeForce RTX 40 Super Launch

In response to the launch of NVIDIA's new GeForce RTX 40 Super video cards, AMD has announced that they are instituting new promotional pricing on a handful of their high-end video cards in order to keep pace with NVIDIA's new pricing.

Kicking off what AMD is terming a special "promotional pricing" program for the quarter, AMD has been working with its retail partners to bring down the price of the Radeon RX 7900 XT to $749 (or lower), roughly $50 below its street price at the start of the month. Better still, AMD's board partners have already reduced prices further than AMD's official program/projections, and we're seeing RX 7900 XTs drop to as low as $710 in the U.S., making for a $90 drop from where prices stood a few weeks ago.

Meanwhile, AMD is also technically bringing down prices on the China and OEM-only Radeon RX 7900 GRE as well. Though as this isn't available for stand-alone purchase on North American shelves, it's mostly only of relevance for OEM pre-builds (and the mark-ups they charge).

Ultimately, the fact that this is "promotional pricing" should be underscored. The new pricing on the RX 7900 XT is not, at least for the moment, a permanent price cut. Meaning that AMD is leaving themselves some formal room to raise prices later on, if they choose to. Though in practice, it would be surprising to see card prices rebound – at least so long as we don't get a new crypto boom or the like.

Finally, to sweeten the pot, AMD is also extending their latest game bundle offer for another few weeks. The company is offering a copy of Avatar: Frontiers of Pandora with all Radeon RX 7000 video cards (and select Ryzen 7000 CPUs) through January 30, 2024.

The Be Quiet! Dark Rock Pro 5 CPU Cooler Review: When Less Is More

Last month we took a look at Be Quiet's Dark Rock Elite, the company's flagship CPU tower air cooler. The RGB LED-equipped cooler proved flashy in more ways than one, but true to its nature as a flagship product, it also carried a $115 price tag to match. Which is certainly not unearned, but it makes the Elite hard to justify when pairing it with more mainstream CPUs, especially as these chips don't throw off the same chart-topping levels of heat as their flagship counterparts.

Recognizing the limited audience for a $100+ cooler, Be Quiet! is also offering what is essentially a downmarket version of that cooler with the Dark Rock Pro 5. Utilizing the same heatsink as the Dark Rock Elite as its base, the Dark Rock Pro 5 cuts back on some of the bells and whistles that are found on the flagship Elite in order to sell at a lower price while still serving as a high-end cooler. Among these changes are getting rid of the RGB lighting, and using simple wire fan mounts in place of the Elite's nifty rails. The end result is that it allows the Dark Rock Pro 5 to hit a notably lower price point of $80, putting it within the budgets of more system builders, and making it a more practical pairing overall with mainstream CPUs.

But perhaps the most important aspect of all is a simple one: cooling performance. What does the Dark Rock Pro 5 give up in cooling performance in order to hit its lower price tag? As we'll see in this review, the answer to that is "surprisingly little," making the Dark Rock Pro 5 a very interesting choice for mid-to-high end CPUs. Particularly for system builders looking for an especially quiet CPU cooler.

EK Reveals All-In-One Liquid Cooler for Delidded CPUs

Historically, delidded CPUs have been the prerogative of die-hard enthusiasts who customized their rigs to the last bit. But with the emergence of specially-designed delidding tools, removing the integrated heat spreader from a CPU has become a whole lot easier, opening the door to delidding for a wider user base. To that end, EK is now offering all-in-one liquid cooling systems tailored specifically for delidded Intel LGA1700 processors.

The key difference with EKWB's new EK-Nucleus AIO CR360 Direct Die D-RGB – 1700 cooler is in the cooling plate on the combined base pump block. While the rest of the cooler is essentially lifted from the company's premium 360-mm closed-loop all-in-one liquid cooling systems, the pump block has been equipped with a unique cooling plate specifically developed for mating with (and cooling) delidded Intel LGA1700 CPUs.

Meanwhile, since delidded CPUs lose the additional structural integrity provided by the IHS, EK is also bundling a contact frame with the cooler that is intended to protect CPUs against warping or bending by maintaining even pressure on the CPU. A protective foam piece is also provided to prevent liquid metal from spilling over onto electrical components surrounding the CPU die.

According to the company, critical components of the new AIO, such as its backplate and die-guard frame, were collaboratively developed by EK and Roman 'Der8auer' Hartung, a renowned German overclocker who has developed multiple tools both for extreme overclockers and enthusiasts. In addition, EK bundles Thermal Grizzly's Conductonaut liquid metal thermal paste (also co-designed with Der8auer) with the cooling system.

And since this is a high-end, high-priced cooler, EKWB has also paid some attention to aesthetics. The cooler comes with two distinct pump block covers: a standard cover featuring a brushed aluminum skull surrounded by a circle of LED lighting, creating a classic yet bold aesthetic, and an alternate, more minimalist cover without the skull.

Traditionally, cooling for delidded CPUs has been primarily handled by custom loop liquid cooling systems. So the EK-Nucleus AIO CR360 Direct Die D-RGB – 1700 stands out in that regard, offering a self-contained and easier-to-install option for delidded CPUs. And with delidding shown to reduce the temperature of Intel's Core i9-14900K by up to 12ºC, it's no coincidence that EKWB is working to make delidding a more interesting and accessible option, particularly right as high-end desktop CPU TDPs are spiking.

Wrapping things up, EKWB has priced the direct die cooler at $170, about $20 more than the EK-Nucleus AIO CR360 Lux D-RGB cooler designed for stock Intel processors. The company is taking pre-orders now, and the finished coolers are expected to start shipping in mid-March 2024.

The Intel CES 2024 Pat Gelsinger Keynote Live Blog (Starts at 5pm PT/01:00 UTC)

This evening is the biggest PC-related keynote of CES 2024, Intel's "prime" keynote with CEO Pat Gelsinger. Part of Intel's "AI everywhere starts with Intel" campaign for the show, Gelsinger is expected to talk about the role AI will play in the future of consumer technology, along with the economic implications.

So come join us at 5pm Pacific/8pm Eastern for a look at the latest from Intel!

NVIDIA Launches RTX 5880 ProViz Card: Compliant with Sanctions, Available Globally

NVIDIA has quietly launched its RTX 5880 Ada Generation graphics card, which is designed for professional graphics applications. The product is designed to be compliant with the latest U.S. export regulations for China and can be shipped to the People's Republic without restrictions. Meanwhile, it is set to be available globally and sit between the expensive flagship RTX 6000 Ada Generation and the considerably less capable RTX 5000 Ada Generation. But there is a major catch with this product.

NVIDIA's RTX 5880 Ada Generation is based on the company's flagship AD102 graphics processor with 14,080 CUDA cores, 110 RT cores, 440 tensor cores, and a 384-bit memory interface to connect 48 GB of GDDR6 memory with ECC. The board comes with four DisplayPort 1.4a connectors and can support either four 4K monitors at 120 Hz, four 5K displays at 60 Hz, or two 8K monitors at 60 Hz. As for power consumption and heat dissipation, it is rated at 285W (with power delivered using one 12VHPWR connector) and comes with a standard dual-slot cooling system with a blower.

As the model number suggests, NVIDIA's RTX 5880 Ada Generation 48 GB should be close to the range-topping RTX 6000 Ada 48 GB. Unfortunately, this is not the case, and here lies the major catch with the RTX 5880 Ada. From a performance point of view, NVIDIA's RTX 5880 Ada offers 69.3 FP32 TFLOPS and 554 FP8 TFLOPS, which is closer to the RTX 5000 Ada 32 GB (65.3 FP32 TFLOPS and 522.1 FP8 TFLOPS) than to the RTX 6000 Ada 48 GB (91.1 FP32 TFLOPS and 729 FP8 TFLOPS).

NVIDIA's RTX 6000 Ada has a whopping FP8 Total Processing Performance (TPP) score of 5,828 (listed processing power multiplied by the bit length of the operation), well above the 4,800-point ceiling the U.S. Department of Commerce allows for unrestricted exports to Chinese entities. The RTX 5880 Ada, by contrast, has a TPP score of 4,432, which fits comfortably within the export requirements of the U.S. government.
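The TPP arithmetic is straightforward to verify from the figures above: TPP is the listed TFLOPS rating multiplied by the bit length of the operation. A quick sketch (function name ours):

```python
def tpp(tflops: float, bit_length: int) -> float:
    """Total Processing Performance per the U.S. export rules:
    listed processing power (TFLOPS) times the operation's bit length."""
    return tflops * bit_length

EXPORT_CEILING = 4800  # TPP threshold for unrestricted export to China

# RTX 5880 Ada: 554 FP8 TFLOPS -> TPP of 4432, under the ceiling
print(tpp(554, 8), tpp(554, 8) < EXPORT_CEILING)  # 4432 True
# RTX 6000 Ada: 729 FP8 TFLOPS -> TPP of ~5832 (5828 listed), over it
print(tpp(729, 8) < EXPORT_CEILING)               # False
```

This is why the RTX 5880 Ada's FP8 throughput was cut to 554 TFLOPS: 554 × 8 lands just below the 4,800 ceiling.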

But this also means that the RTX 5880 Ada is significantly slower than the RTX 6000 Ada, despite the model number. Truth be told, the RTX 5880 Ada looks more like an RTX 5000 Ada on steroids than a slightly downgraded RTX 6000 Ada.

NVIDIA's Ada Generation High-End Professional Graphics Cards
                     RTX 6000 Ada          RTX 5880 Ada          RTX 5000 Ada
GPU                  AD102                 AD102                 AD102
Memory               48 GB GDDR6 w/ ECC    48 GB GDDR6 w/ ECC    32 GB GDDR6 w/ ECC
CUDA Cores           18,176                14,080                12,800
Tensor Cores         568                   440                   400
RT Cores             142                   110                   100
FP32 Performance     91.1 TFLOPS           69.3 TFLOPS           65.3 TFLOPS
RT Performance       210.6 TFLOPS          160.2 TFLOPS          151 TFLOPS
Tensor Performance   729 TFLOPS            554 TFLOPS            522.1 TFLOPS
TPP                  5828                  4432                  4176
Encode/Decode        3 x encode, 3 x decode (+AV1 encode and decode) on all models
Display Connectors   4 x DisplayPort 1.4a on all models
Power                300W                  285W                  250W
MSRP                 $6999                 ?                     $4000

What remains to be seen is the price of the RTX 5880 Ada. NVIDIA's RTX 5000 Ada has a list price of $4,000, whereas the RTX 6000 Ada has an MSRP of $6,999. It is logical to expect the RTX 5880 Ada to be priced closer to the RTX 5000 Ada, though more than just slightly higher, since it does, after all, carry a rather capable 48 GB memory subsystem.

Sources: NVIDIA, RTX 5880 Ada datasheet

CES 2024: Intel Briefly Shows Lunar Lake Chip; Next-Gen Mobile CPU Uses On-Package Memory

As part of the final segment of Intel's "Intel Client Open House Keynote" at CES this afternoon, Intel EVP and GM of the Client Computing Group, Michelle Johnston Holthaus, also offered a brief update on Intel's client chips in the works for the second half of the year. While no demos were run during the relatively short 45-minute keynote, Holthaus did reiterate that both Arrow Lake for desktops and Lunar Lake for mobile were making good progress and were expected to launch later this year.

But in lieu of black box demos we got something more surprising instead: our first look at a finished Lunar Lake chip.

Briefly holding the chip out for viewers to see – while keeping the press at bay lest they get too close – Holthaus showed off a finished Lunar Lake chip.

While details on Lunar Lake still remain very slim – Intel still hasn’t even confirmed what process nodes it’s using – the company has continually been reiterating that they intend to get it out the door in 2024. And having silicon to show off (and shipping to partners, we’re told) is a very effective way to demonstrate Intel’s ongoing progress.

Of note, seeing the chip in person confirms something we’ve all but been expecting from Intel for a few years now: CPUs with on-package memory. The demo chip has two DRAM packages on one of the edges of the chip (presumably LPDDR5X), making this the first time that Intel has placed regular DRAM on a Core chip package. On-package memory is of particular interest to thin & light laptop vendors, as it allows for further space savings and cuts down on the number of critical traces that need to be routed from the CPU and along the motherboard. The technique is most notably (though far from exclusively) used with Apple’s M series of SoCs.

Beyond showing off the physical chip, Holthaus also briefly talked about expected performance and architecture. Lunar Lake is slated to offer "significant" IPC improvements for the CPU core. Meanwhile, the GPU and NPU will each offer three times the AI performance of their predecessors. How Intel will be achieving this remains unclear, but at least on the GPU side, we know that they've yet to offer XMX matrix cores within an integrated GPU.

No doubt this is far from the last time we’ll hear about Lunar Lake ahead of its launch. But for now, it’s a bit of a look into the future while Intel continues to ramp production on Meteor Lake for what is now the latest generation of laptops and other mobile devices.

Intel Announces non-K 14th Gen Core Desktop Processors: Raptor Lake in 65 W to 35 W Flavors

We typically see the flagship K-series chips first whenever Intel launches a new family of desktop processors. These show the maximum potential for improvement in generational performance, such as IPC and core clock speeds, and show off the latest family to the best possible effect. Usually, a few months later, Intel launches their non-K-series SKUs, which have lower TDPs, with minor drops in core clock speeds, but offer the same level of core configurations for a usually lower price.

During CES 2024, Intel has launched the rest of their 14th Generation Core series processors, including models such as the Core i9-14900, as well as non-K F SKUs and the lower power T models. For the first time in this generation, we're also getting a trifecta of Raptor Lake refresh Core i3 processors and two entry-level models, the Intel Processor 300 and the Processor 300T.

Adding multiple models to the 14th Gen Core series desktop families means there's plenty of choice for users regardless of their requirements, with the majority of the new SKUs coming with either a 65 W or 35 W base TDP. Finally, we have the full Raptor Lake refresh desktop offering, and users looking for lower costs but still-solid performance will find plenty to sink their teeth into, with lots of chips to select from.

Intel Intros Core (Series 1) U-Series Mobile Chips: Raptor Lake Refreshed for Thin & Light

Alongside Intel's bevy of 14th Generation Core product announcements for desktop and mobile today, the company is also releasing a collection of new SKUs under their new Core (Series 1) branding, aligning with Intel's recently launched Meteor Lake architecture Core Ultra chips. Based on Intel's existing Raptor Lake silicon, which was first introduced last year, the new Core U-series chips are aimed at the low-power thin & light notebook segment, and will fill out the Core chip lineup with a trio of cheaper parts using last-generation technology.

Keeping things short and sweet, there are just 3 new Core U-series chips, covering the Core 7, Core 5, and Core 3 segments respectively. Under the hood there’s nothing here that we haven’t already seen before with the 13th Generation Core U-series processors, relying on a mix of Raptor Cove performance CPU cores, Gracemont efficiency CPU cores, and an Xe-LP integrated GPU. But these parts are clocked a bit higher than their direct predecessors.

Intel Unveils 14th Gen Core HX Series Processors: Raptor Lake Mobile Refresh with Thunderbolt 5

During CES 2024, Intel unveiled their latest 14th Generation Core HX series processors for mobile. Consisting of five new SKUs, the Intel 14th Gen Core HX family is a refreshed variant of the previous 13th Gen Core HX series, much akin to the desktop refresh (e.g., the Core i9-14900K relative to the Core i9-13900K); Raptor Lake is again refreshed, this time for mobile.

Spanning the Core i9, Core i7, and Core i5 segments, the Intel 14th Gen Core HX series features many of the same specifications as their predecessors, such as core and thread counts and integrated graphics, but with faster Performance (P) and Efficiency (E) core turbos, delivering better performance within the same 55/157 W TDP power envelope as the previous generation.

Ranging from 24 CPU cores (8P+16E)/32T down to 10 CPU cores (6P+4E)/16T, Intel is once again bringing their Raptor Lake architecture, built on the Intel 7 node, back to the forefront for another year, as their desktop-replacement mobile processors lead the charge in mobile compute performance into 2024.

Some of the primary features of the Intel 14th Gen Core HX platform Intel is pushing revolve around the ability to easily add discrete controllers, thanks to the chips' ample PCIe lane count. This includes Intel's Thunderbolt 5, which has up to 160 Gbps of cable bandwidth available, as well as next-generation Wi-Fi 7 support. On top of this, much like the 13th Gen Core HX series, the 14th Gen variants are also overclockable. 

AMD Announces New Desktop Zen 3 Chips With 2 New APUs and the Ryzen 7 5700X3D

One of AMD's most innovative desktop processor designs in recent years has to be the X3D series, with their 3D V-Cache packaging technology. Since AMD launched their Zen 3-based Ryzen 7 5800X3D back in April of 2022, the sky has been the limit for AMD's L3 cache-stacked chips, which have had significant benefits in gaming, especially in titles that can leverage larger amounts of L3 cache. Since the Ryzen 7 5800X3D, AMD launched their latest Ryzen 7000X3D series last year, based on Zen 4, which markedly improved gaming and compute performance. But even more than a year into the release of Zen 4, AMD still isn't done with Zen 3 and the AM4 platform.

For CES 2024, AMD has launched a third Zen 3 X3D SKU in addition to last year's limited 6-core Ryzen 5 5600X3D. The latest AMD Ryzen 7 5700X3D offers the same eight Zen 3 cores and sixteen threads (8C/16T) as the original Ryzen 7 5800X3D, along with the same large 96 MB pool of 3D V-Cache, but it has lower base and turbo frequencies and an even more affordable price.

And AMD's AM4 additions aren't done there. Also announced by AMD at CES 2024 is a new pair of Zen 3-based APUs, which adds to the 5000G(T) line-up. Like the other members of this family, the new SKUs pair up Zen 3 cores with AMD Radeon Vega integrated graphics. The new Ryzen 5 5600GT and Ryzen 5 5500GT both come with a 65 W TDP and faster base and turbo core frequencies compared to the existing Ryzen 5000G series APUs.

Despite being based around the previous Zen 3 microarchitecture, the AM4 desktop platform is still thriving. With new processors to take advantage of the cheaper AM4 boards and DDR4 memory, AMD is looking to leverage the low production costs of their existing (and well amortized) silicon to offer better chips across all budgets.

AMD Ryzen 7 5700X3D: Even Cheaper 8C/16T Zen 3 X3D

With AMD over a year into the Zen 4 architecture, many would expect their focus to be on further Zen 4 chips. And while AMD has a diverse portfolio of processors, architectures, platforms, and products, the continued sales of AM4 hardware and their low costs make it easy to see why AMD has added another Zen 3-based chip with 3D V-Cache to the market.

Enter the AMD Ryzen 7 5700X3D, an alternative to the very popular Ryzen 7 5800X3D, which broke the mold on typical desktop processors through AMD's 3D V-Cache packaging technology. The 5700X3D comes with 96 MB of L3 cache in total (32 MB on-die + 64 MB of stacked V-Cache), all of it accessible from a single 8C/16T CCD of Zen 3 cores, the same as the 5800X3D.

AMD Ryzen 7000/5000 X3D Chips with 3D V-Cache
AnandTech          Cores /     Base     Turbo    Memory      L3       TDP    PPT    Price
                   Threads     Freq     Freq     Support     Cache
Ryzen 9
Ryzen 9 7950X3D    16C / 32T   4.2 GHz  5.7 GHz  DDR5-5200   128 MB   120W   162W   $650
Ryzen 9 7900X3D    12C / 24T   4.4 GHz  5.6 GHz  DDR5-5200   128 MB   120W   162W   $499
Ryzen 7
Ryzen 7 7800X3D    8C / 16T    4.2 GHz  5.0 GHz  DDR5-5200   96 MB    120W   162W   $399
Ryzen 7 5800X3D    8C / 16T    3.4 GHz  4.5 GHz  DDR4-3200   96 MB    105W   142W   $359
Ryzen 7 5700X3D    8C / 16T    3.0 GHz  4.1 GHz  DDR4-3200   96 MB    105W   142W   $249
Ryzen 5
Ryzen 5 5600X3D    6C / 12T    3.3 GHz  4.4 GHz  DDR4-3200   96 MB    105W   142W   $230

Although the general specifications of the latest Ryzen 7 5700X3D are nearly identical to the Ryzen 7 5800X3D, there are a few notable differences to highlight. The Ryzen 7 5700X3D has a 400 MHz lower base core clock speed, which sits at 3.0 GHz. Keeping with core frequencies, the turbo frequency on the Ryzen 7 5700X3D is also 400 MHz lower than the 5800X3D's (4.1 GHz vs. 4.5 GHz), although both chips share the same 105 W base TDP, with a 142 W Package Power Tracking (PPT) rating; this is how much power can be fed through the AM4 CPU socket. All of AMD's Ryzen 5000X3D processors support DDR4-3200 memory and are CPU ratio/frequency locked, meaning users can't overclock them.

What sets the Ryzen 7 5700X3D and Ryzen 7 5800X3D processors apart, aside from core frequencies, is the price, with AMD setting an MSRP of $249 for the 5700X3D. In contrast, the Ryzen 7 5800X3D is currently available to buy at Amazon for $359, so for a reduction of 400 MHz across the board, users can have a similar, albeit slower, chip for around $110 less. Given that the target market for these processors is gamers, any CPU savings can be put towards a discrete graphics card, which is typically a far more effective upgrade for faster and higher average frame rates than the CPU or memory.

AMD Ryzen 5 5600GT & 5500GT: Zen 3 APUs with Faster Core Frequencies

Also announced for AMD's previous AM4 platform is a pair of new APUs, which adds to the already established Ryzen 5000G series of processors. Under the new "GT" moniker, the AMD Ryzen 5 5600GT and 5500GT slot in alongside the existing Ryzen 5 5600G and 5600GE APUs, which are both based on AMD's Cezanne Zen 3 silicon and paved the way for the new Zen 4 APUs which AMD also announced during CES 2024.

AMD Ryzen 5000G/GT Series APUs (Zen 3)
AnandTech         Cores /    Base    Turbo   GPU    GPU     PCIe*     TDP
                  Threads    Freq    Freq    CUs    Freq
Ryzen 7
Ryzen 7 5700G     8 / 16     3800    4600    8      2000    16+4+4    65 W
Ryzen 7 5700GE    8 / 16     3200    4600    8      2000    16+4+4    35 W
Ryzen 5
Ryzen 5 5600G     6 / 12     3900    4400    7      1900    16+4+4    65 W
Ryzen 5 5600GE    6 / 12     3400    4400    7      1900    16+4+4    35 W
Ryzen 5 5600GT    6 / 12     3600    4600    7      ?       16+4+4    65 W
Ryzen 5 5500GT    6 / 12     3600    4400    7      ?       16+4+4    65 W
Ryzen 3
Ryzen 3 5300G     4 / 8      4000    4200    6      1700    16+4+4    65 W
Ryzen 3 5300GE    4 / 8      3600    4200    6      1700    16+4+4    35 W
*PCIe lanes on the SoC are listed as 16x GFX + 4x chipset + 4x NVMe

Looking at what separates the new GT series from the original G line-up, both the Ryzen 5 5600GT and Ryzen 5 5500GT have similar specs to the 5600G/GE chips, with the same Radeon Vega 7 integrated graphics. This is a testament to the Vega GPU architecture's longevity, and also makes for a bit of an awkward moment as AMD is in the process of winding down driver support for it.

With regards to specs, the Ryzen 5 5600GT has a 300 MHz slower base frequency than the Ryzen 5 5600G but a 200 MHz faster turbo core clock speed, topping out at 4.6 GHz. The Ryzen 5 5500GT has the same 3.6 GHz base frequency as the 5600GT, but shares its 4.4 GHz turbo frequency with the 5600G.

At the time of writing, AMD hasn't provided us with the graphics core frequencies of the Radeon Vega 7 integrated graphics, but we would expect them to be similar to the 1.9 GHz of the Ryzen 5 5600G/GE processors. It's also worth noting that the Ryzen 5 5600GT and 5500GT both have a 65 W TDP, with the same 6C/12T of Zen 3 cores.

AMD has provided some basic performance metrics from their in-house testing for the Ryzen 5 5600GT, which compares it directly to the previous Ryzen 5 5600G. With faster turbo clock speeds, AMD shows the Ryzen 5 5600GT performing around 5% better in DOTA 2 and 10% better in PUBG, while compute performance shows gains of 9% in Blender and 11% in WinRAR. AMD also outlines that the Ryzen 5 5500GT performs around 2% better than the 5600G in applications such as Cinebench nT, with marginally better (1%) gaming performance in World of Tanks and Mount & Blade 2.

As they use the same silicon as the other Ryzen 5000 series APUs, both the Ryzen 5 5600GT and 5500GT include 16 x PCIe 3.0 lanes for a discrete graphics card should users wish to upgrade, as well as 4 x PCIe 3.0 lanes linking the chip to the chipset and 4 x PCIe 3.0 lanes designated for an M.2 storage drive. None of which is very fast in 2024, but it is quite cheap. On which note, it's also worth noting that AMD is bundling their Wraith Stealth cooler with both processors, which saves users money, as an additional CPU cooler purchase isn't required.

The AMD Ryzen 7 5700X3D is expected to cost $249, while the Ryzen 5 5600GT and Ryzen 5 5500GT will retail for $140 and $125, respectively. All three of AMD's new Zen 3-based processors will be available to buy from January 31st from all major retailers.

AMD Adds Radeon RX 7600 XT To Product Stack, 1080p Gaming Card Gets 16GB For $329

Kicking things off for the GPU space at this year's CES, AMD is at the show to announce that they're bringing an additional Radeon RX 7000 series card to their lineup: the Radeon RX 7600 XT. Intended as a premium version of their existing 1080p-focused RX 7600, the RX 7600 XT bumps things up with roughly 10% higher clockspeeds, as well as doubling the total amount of VRAM to 16GB of GDDR6. The upgraded Radeon card will hit retail shelves on January 24th for a similarly premium price of $329.

The start of 2024 finds AMD doing some minor shuffling of their product stack both to cover the latest trends in gaming and AI, as well as to cover perceived gaps in the company’s product lineup. While AMD is just in the middle of their product cycle with their actual GPUs – the mid-tier Navi 32 GPU only launched back in August – the company currently has a decently large gap between the $270 Radeon RX 7600 and $450 RX 7700 XT, which they are opting to fill now with the RX 7600 XT. At the same time, video cards with 8GB of VRAM are becoming increasingly unappealing to buyers, especially the VRAM-hungry AI crowd, so this gives AMD the chance to offer a 16GB version of their basic gaming video card.

AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs For Desktop with Ryzen AI

While it's been touted for many months that AMD will release APUs for desktops based on Zen 4, rumors and wishes have finally come to fruition during AMD's presentation at CES 2024 with the announcement of the Ryzen 8000G family. The latest line-up of APUs with Zen 4 cores and upgraded Radeon integrated graphics consists of four new SKUs, with the Ryzen 7 8700G leading the charge with 8 CPU cores and AMD's RDNA3-based Radeon 780M graphics. It offers users a more cost-effective pathway to gaming and content creation without needing a discrete graphics card.

The other models announced include the AMD Ryzen 5 8600G and Ryzen 5 8500G, both of which offer 6 CPU cores and integrated graphics, all with a 65 W TDP. Bringing up the rear will be the AMD Ryzen 3 8300G, a modest, entry-level 4-core offering. AMD will be tapping both their Phoenix (1) and Phoenix 2 silicon for these parts, depending on the SKU, meaning that the higher-tier parts will exclusively use Zen 4 CPU cores, while the lower-tier parts will use a mix of Zen 4 and Zen 4c CPU cores, a similar split to what we see today with AMD's Ryzen Mobile 8000 series.

AMD Unveils XA Versal AI Edge and Ryzen Embedded V2000A For Automotive

With CES 2024 fast approaching, AMD has announced two new products designed to enhance users' experience in the automotive sphere. The first is the XA Versal AI Edge series, their first 7 nm device to be automotive qualified, with a new AI engine and vector processor array designed to improve safety features and enhance LiDAR, radar, and camera processing, all while enabling more AI inferencing within a vehicle.

The second is the AMD Ryzen Embedded V2000A series of processors, designed to deliver a paradigm shift in digital cockpits, enhance driver experience, improve in-car entertainment, and offer higher levels of media-focused performance. Both AMD's XA Versal AI Edge and the Ryzen Embedded V2000A series promise to deliver a much-enhanced user experience, whether for drivers relying on digital and AI-assisted safety features or for passengers enjoying in-car entertainment, as we go into what is set to be a pivotal year for AI.

AMD XA Versal AI Edge: Scalable From Edge to Accelerators

Perhaps one of the most significant talking points within the industry, and outside of it, is AI and how it looks to transform how we do things. While machine learning and generative AI are the two main topics, how we use AI and how companies adopt it into their software and designs are critical as we head into 2024. Within the PC industry, both Intel (Meteor Lake) and AMD (Ryzen AI) are bringing on-chip inferencing to the PC market; with XA Versal AI Edge, AMD is bringing a more robust approach to the automotive industry, specifically within the digital cockpits of new vehicles.

At the core of the AMD XA Versal AI Edge series lies AMD's first 7 nm device to achieve automotive qualification. This milestone underscores AMD's commitment to innovation and ensures that the series meets the stringent standards required by automotive applications. The significance of this comes through the ability of the XA Versal AI Edge series to seamlessly integrate into modern vehicles' ever-growing and complex ecosystem. The XA Versal AI Edge chip comes through AMD's acquisition of Xilinx in 2022, bringing many advancements to AMD's portfolio, including their client-focused Ryzen AI engine block within the latest Ryzen 8040 series of mobile processors.

With XA Versal AI Edge comes a new vector processor array specifically designed to bring more comprehensive automotive safety features to market. By improving the functionality and efficiency of crucial sensors such as LiDAR, radar, and cameras, the XA Versal AI Edge series enhances the precision and responsiveness of vehicle systems. This is particularly vital for AI inferencing within a vehicle, where split-second decisions based on accurate sensor data can be the difference between safety and disaster, such as a crash or collision.

The XA Versal AI Edge series' improved AI inferencing capabilities are designed to boost the vehicle's ability to navigate and interact with its surroundings, paving the way for more advanced autonomous driving features, with faster real-time processing and lower latencies providing benefits while driving. By processing vast amounts of data in real time, the XA Versal AI Edge chips enable a more responsive interaction between the vehicle and the driver. This translates to a faster and more secure ecosystem, which enhances both the driver's and passengers' confidence in the vehicle's capabilities.

One advantage of AMD's XA Versal AI Edge solutions is their scalability. The lead model, the XAVE2602, combines 152 AI Engine tiles with 820K logic cells and 984 digital signal processors (DSPs), with AMD claiming up to 89 TOPS of INT8 performance across the AI engine, the DSPs, and the programmable logic. There are options ranging from 5 TOPS up to 171 TOPS, with seven options for different devices and markets. At the heart of AMD's XA Versal AI Edge chips are a dual-core Arm Cortex-A72 application processor and a dual-core Arm Cortex-R5F real-time processor, with estimated power usage ranging from 6-9 W (XAVE2002) up to a potent 75 W (XAVE2802). It's worth highlighting that all of AMD's TOPS figures for the XA Versal AI Edge chips are estimates from AMD engineers and aren't verified externally.
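Those figures allow a rough efficiency comparison across the range. A minimal sketch, using only the TOPS and power numbers AMD has quoted; note that pairing the lowest TOPS figure with the lowest power figure (and likewise at the top end) is our assumption for illustration, as is taking the midpoint of the 6-9 W range:

```python
# Rough TOPS-per-watt comparison across the XA Versal AI Edge range,
# using AMD's own engineer-estimated figures. The pairing of TOPS and
# power figures per SKU is an assumption for illustration only.
parts = {
    # name:      (INT8 TOPS, estimated power in W, or None if not quoted)
    "XAVE2002": (5,   7.5),   # low end: assumed 5 TOPS at 6-9 W (midpoint)
    "XAVE2602": (89,  None),  # lead model: power not quoted in the source
    "XAVE2802": (171, 75.0),  # top end: assumed 171 TOPS at ~75 W
}

for name, (tops, watts) in parts.items():
    if watts is None:
        print(f"{name}: {tops} TOPS (power not disclosed)")
    else:
        print(f"{name}: {tops} TOPS at ~{watts} W -> {tops / watts:.2f} TOPS/W")
```

Under those assumptions, efficiency sits in the same rough ballpark (roughly 0.7 to 2.3 TOPS/W) across a 10x power spread, which is consistent with AMD's pitch of one design scaled across device classes.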

Having this level of scalability from the same Arm Cortex dual-core pairing allows AMD to accommodate a variety of devices, from LiDAR sensors to radar and regular cameras. It also means AMD only needs one design that scales with the same tooling, software, and general ecosystem, so manufacturers using these Versal AI Edge chips can select the one that suits their performance needs. Another element to consider is that as devices scale into better designs, their iterations stay on the same platform, meaning a new product design can be created using the same overall ecosystem.

Ryzen Embedded V2000A: 7 nm with Zen 2 Cores

Moving to the second of AMD's automotive-related announcements for CES 2024, the Ryzen Embedded V2000A series is a new line of processors designed to drive the emergence of AI within the digital automotive cockpit. Built to meet the evolving needs of both drivers and passengers, the V2000A series delivers higher performance, enhancing multiple aspects of the in-car experience. The Ryzen Embedded V2000A is not a mere incremental upgrade, but represents a shift in how users can interact with and use vehicle entertainment and infotainment systems.

Replacing the previous Zen-based 4C/8T V1000 APU, the new Ryzen Embedded V2000A series is designed to handle the complex requirements of advanced driver interfaces and in-car entertainment systems. This results in improved graphics quality and faster response to user inputs, leading to smoother interaction with the vehicle's systems. For drivers, this means more intuitive control panels and heads-up displays, enhancing access to critical information and controls with minimal distraction. Passengers also benefit, with the series supporting high-definition streaming and gaming experiences, elevating the overall in-car entertainment experience to levels similar to home or mobile platforms.

The Ryzen Embedded V2000A series is built on a 7 nm manufacturing process, combining Zen 2 CPU cores with AMD's Radeon Vega graphics. Featuring up to 6C/12T of Zen 2 cores with 7 CUs of Radeon Vega graphics, the Ryzen Embedded V2000A supports up to four 4K displays, offers dual Gigabit Ethernet, and is AEC-Q100 automotive qualified. Looking ahead to the next generation of automotive silicon, and with AI as a key selling point, the Ryzen Embedded V2000A has a planned availability of 10 years, which ensures that companies and automotive partners adopting the V2000A into their vehicles are guaranteed long-term supply.

Some of AMD's partners onboard with XA Versal AI and Ryzen Embedded V2000A include notable electric car maker Tesla, with Ecarx, Luxoft, BlackBerry/QNX, Xylon, and Cognata also looking to implement these new chips into their automotive products.

AMD states that the first device, the XAVE1752, will be available in early 2024, with the remaining devices coming to market before the end of 2024. AMD also notes that the AXVEK 280 Evaluation Kit is now available to partners and manufacturers. AMD will demonstrate their XA Versal AI Edge at their booth, among other automotive solutions, at CES 2024. For reference, the AMD booth is located at the Las Vegas Convention Center (LVCC) within the West Hall at #W319. 

The Corsair iCUE LINK H150i RGB 360mm AIO Cooler Review: Colorful Connections

When it comes to all-in-one liquid coolers for CPUs, there are a handful of companies whose brands have become synonymous with these titanic coolers. And of those brands, it's Corsair that inevitably sits at the top of any list. One of the key manufacturers responsible for popularizing AIO coolers with the enthusiast PC community, the company has built a very successful and well-renowned business segment out of providing maintenance-free AIO cooler designs – a history that at this point spans over 20 years.

With such a long history, we've seen Corsair update their cooler designs several times now, continually iterating on their designs to improve performance, increase reliability, or even just add RGB lighting to match modern styles. Most recently, Corsair introduced their iCUE LINK family of coolers, which incorporate the titular iCUE LINK system that allows for multiple Corsair peripherals to be connected together and controlled via a central hub. Besides simplifying the process of using multiple Corsair devices together, the iCUE LINK system is also designed to cut down on cable clutter by reducing the overall number of cables down to just one: the iCUE LINK cable going to the next-nearest Corsair device.

To that end, today we're taking a look at the latest generation of Corsair's popular H150i cooler, the iCUE LINK H150i RGB. Succeeding the well-received Elite Capellix models, the newest iCUE LINK H150i RGB stands out with its integration into the iCUE ecosystem, while building and improving upon the already solid foundation of the basic H150i cooler design. While the H150i is not technically Corsair's flagship cooler – that honor goes to the massive 420mm H170i series – most cases cannot accommodate coolers larger than the 360mm H150i, making it the most visible of Corsair's increasingly colorful coolers.

The FSP Hydro Ti Pro 1000W PSU Review: Titanium Shines for FSP's Flagship Power Supply

Over the last year, we've been looking at increasingly intricate 1000W power supplies from prolific PSU maker FSP. These have included their 80Plus Gold-rated Hydro G Pro, as well as their 80Plus Platinum-rated Hydro PTM X Pro. Today we're finally capping things off with a look at the crème de la crème of the Hydro series, the 80Plus Titanium-rated Hydro Ti Pro.

The flagship of the company's ATX PSU lineup, the Hydro Ti Pro is designed to demonstrate the apex of the company's design capabilities, offering ample power capacity while also achieving excellent energy efficiency and reliability. Which for a 1000W PSU means being able to support multiple GPUs and demanding overclocking conditions, all without wavering elsewhere. FSP's 80Plus Titanium certified unit stands out, in this regard, with its cutting-edge design and features tailored for longevity and consistent performance.

As we explore the details of the FSP Hydro Ti Pro 1000W, we will examine every aspect of this PSU to determine if it meets the high expectations associated with FSP's legacy and satisfies the demands of advanced computing environments. As well, we'll be looking at how it compares to its Gold and Platinum-rated compatriots, to see just what buying a higher efficiency brings to the table, both in direct electrical efficiency and secondary attributes, such as component quality and fan noise.

The XPG Core Reactor II 1200W PSU Review: XPG Goes for the Gold

An increasingly common face in the power supply market, XPG has thus far focused the bulk of its work on high-end, high-margin power supplies, such as their 80Plus Platinum-rated Cybercore II. But as the company has become better established in the PSU market on the back of multiple successful products, it is looking to expand its footprint by venturing into the mid-range segment.

Spearheading that effort is the new XPG Core Reactor II series. Looking to maintain their competitive edge with what is, frankly, a cheaper power supply design, XPG needs to walk a very tight rope, where the equilibrium between performance, quality, and cost is crucial. In this category, PSUs must support a range of computing setups while maintaining a focus on value for money. The Core Reactor II series represents XPG's dedication to this segment, illustrating their capability to cater to a broad spectrum of users who seek a blend of reliable performance and economic viability.

As an 80Plus Gold certified unit and without too many bells and whistles, the Core Reactor II stands out for its practical design, tailored to deliver consistent performance without the premium cost. In examining the details of the XPG Core Reactor II series, we will evaluate how well these PSUs align with XPG’s commitment to affordable quality and whether they meet the diverse needs of mid-range computing environments.

Intel Releases Core Ultra H and U-Series Processors: Meteor Lake Brings AI and Arc to Ultra Thin Notebooks

Intel has released their first mobile processors based on their highly anticipated Meteor Lake platform: the Core Ultra H and Core Ultra U series. Available today, the Core Ultra H series has four options, including two 16C/22T (6P+8E+2LP-E) Core Ultra 7 SKUs and two 14C/18T (4P+8E+2LP-E) Core Ultra 5 SKUs, and offers a base TDP of 28 W, with a maximum turbo TDP of up to 115 W. The Core Ultra H series is designed for ultra-portable notebooks, but offers more performance in both compute and graphics within a slimline package.

Also announced is the Intel Core Ultra U-series, which includes four 15/57 W (base/turbo) SKUs, with two Core Ultra 7 and two Core Ultra 5 SKUs, each with varying P-core, E-core, and integrated Arc Xe graphics frequencies. All of Intel's announced Core Ultra U-series processors feature 12C/14T, with two Performance cores, eight Efficiency cores, and two LP-E cores, making them ideal for lower-powered and ultra-thin notebooks.

The launch of Intel's tile-based Meteor Lake SoC marks the first step in a series of power-efficient and AI-focused chips built on Intel 4 for the mobile market, ultimately designed to cater to the growing need for on-chip AI inferencing. Both the Intel Core Ultra H and U families include two new Low Power Island (LP-E) cores for low-intensity workloads, with two Neural Compute Engines within Intel's AI NPU designed to tackle generative AI inferencing.

Zhaoxin Unveils KX-7000 CPUs: Eight x86 Cores at Up to 3.70 GHz

Zhaoxin, a joint venture between Via Technologies and Shanghai Municipal Government, has introduced its Kaixian KX-7000 series of x86 CPUs. Based on the company's Century Avenue microarchitecture, the processor features up to eight general-purpose x86 cores running at 3.70 GHz, while utilizing a chiplet design under the hood. Zhaoxin expects the new CPUs to be used for client and embedded PCs in 2024.

According to details published by Zhaoxin, the company's latest Century Avenue microarchitecture looks to be significantly more advanced than the company's previous x86 microarchitecture. The new design includes improvements to the CPU core front-end as well as the out-of-order execution engines and back-end execution units. The CPU cores themselves are backed by 4 MB of L2 cache, 32 MB of L3 cache, and finally a 128-bit memory subsystem supporting up to two channels of DDR5-4800/DDR4-3200. Furthermore, the new CPUs pack up to eight cores, capable of reaching a maximum clockspeed of 3.70 GHz.
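For context, the quoted memory configuration works out to the following theoretical peak bandwidth (a simple sketch: transfer rate × bus width in bytes):

```python
# Theoretical peak memory bandwidth for the KX-7000's quoted configurations.
# Bandwidth (GB/s) = megatransfers/s * bus width in bytes.
BUS_WIDTH_BITS = 128  # two 64-bit channels

def peak_bandwidth_gbs(mts: int, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s for a DDR interface at `mts` megatransfers/s."""
    return mts * 1e6 * (bus_bits // 8) / 1e9

print(f"DDR5-4800, 128-bit: {peak_bandwidth_gbs(4800):.1f} GB/s")  # 76.8 GB/s
print(f"DDR4-3200, 128-bit: {peak_bandwidth_gbs(3200):.1f} GB/s")  # 51.2 GB/s
```

At 76.8 GB/s peak, the DDR5 configuration puts the KX-7000 in the same general bracket as contemporary dual-channel client platforms, a significant step up from Zhaoxin's DDR4-only predecessors.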

As a result, the new CPUs are said to double computational performance compared to their predecessors, the Kaixian KX-6000 series launched in 2018.

On the graphics side of matters, Zhaoxin's Kaixian KX-7000 CPUs also pack the company's new integrated GPU design, which is reported to be DirectX 12/OpenGL 4.6/OpenCL 1.2-capable and offers four times the performance of its predecessor. Though given the rather low iGPU performance of the DirectX 11.1-generation KX-6000, even a 4x improvement would make for a rather modest iGPU in 2024. Principally, the iGPU is there to drive a screen and provide media encode/decode functionality, with the KX-7000 iGPU capable of decoding and encoding H.265/H.264 video at up to 4K, and able to drive DisplayPort, HDMI, and D-Sub/VGA outputs.

Another interesting detail about Zhaoxin's KX-7000 processors is that the company says they're using a chiplet architecture, which resembles that of AMD's Ryzen processors. Specifically, Zhaoxin is placing their CPU cores and I/O functions into different pieces of silicon – though it's unclear how many chiplets are used altogether.

On the I/O side of matters, the new CPUs provide 24 PCIe 4.0 lanes, two USB4 roots, four USB 3.2 Gen2 roots, two USB 2.0 roots, and three SATA III ports. And, given the target market, they offer acceleration for the Chinese-standard SM2 and SM3 cryptography specifications.

At the moment, Zhaoxin is not disclosing where it plans to produce its KX-7000 processors, nor what node(s) they'll be using. Though given Zhaoxin's previous parts and the limited, regional market for the chips, it is unlikely that they're intending to use a leading-edge fabrication process.

Perhaps the final notable detail about Zhaoxin's Kaixian KX-7000 CPUs is that they are set to come in both BGA and LGA packages, something that does not often happen with Chinese CPUs. An LGA form factor enables an ecosystem of interchangeable and upgradeable chips, which is something that we have not seen from Chinese processors for client PCs in recent years.

Zhaoxin says that major Chinese machine manufacturers, including Lenovo, Tongfang, Unigroup, Ascend, Lianhe Donghai, and others, have developed new desktop systems based on the KX-7000 processors. These systems – which will be available next year – will run operating systems like Tongxin, Kylin, and Zhongke Fonde.
