Today — March 19, 2024 (AnandTech)

NVIDIA's 'cuLitho' Computational Lithography Adopted By TSMC and Synopsys For Production Use

Last year, NVIDIA introduced its cuLitho software library, which promises to speed up photomask development by up to 40 times. Today, NVIDIA announced a partnership with TSMC and Synopsys to implement its computational lithography platform for production use, and use the company's next-generation Blackwell GPUs for AI and HPC applications.

The development of photomasks is a crucial step for every chip ever made, and NVIDIA's cuLitho platform, enhanced with new generative AI algorithms, significantly speeds up this process. NVIDIA says computational lithography consumes tens of billions of hours per year on CPUs. By leveraging GPU-accelerated computational lithography, cuLitho substantially improves over traditional CPU-based methods. For example, 350 NVIDIA H100 systems can now replace 40,000 CPU systems, resulting in faster production times, lower costs, and reduced space and power requirements.

NVIDIA claims its new generative AI algorithms provide an additional 2x speedup on the already accelerated processes enabled through cuLitho. This enhancement is particularly beneficial for the optical proximity correction (OPC) process, allowing the creation of near-perfect inverse masks to account for light diffraction.
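
To put those headline numbers in rough perspective, here is a bit of illustrative arithmetic based only on the figures quoted above; whether the 40x and 2x claims compound exactly like this is our assumption, not NVIDIA's statement.

```python
# Illustrative arithmetic from the figures quoted above; only the headline
# claims come from NVIDIA, the compounding is an assumption.
cpu_systems = 40_000
h100_systems = 350
print(f"~{cpu_systems / h100_systems:.0f} CPU systems replaced per H100 system")

base_speedup = 40   # "up to 40 times" faster photomask development with cuLitho
genai_factor = 2    # additional 2x from the new generative AI algorithms
print(f"Best case, if the two claims compound: up to {base_speedup * genai_factor}x")
```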

TSMC says that integrating cuLitho into its workflow has resulted in a 45x speedup of curvilinear flows and an almost 60x improvement in Manhattan-style flows. Curvilinear flows involve mask shapes represented by curves, while Manhattan mask shapes are restricted to horizontal or vertical orientations.

Synopsys, a leading developer of electronic design automation (EDA) software, says that its Proteus mask synthesis software running on the NVIDIA cuLitho software library has accelerated computational workloads compared to current CPU-based methods. This acceleration is crucial for enabling angstrom-level scaling and reducing turnaround time in chip manufacturing.

The collaboration between NVIDIA, TSMC, and Synopsys represents a significant advancement in semiconductor manufacturing in general and cuLitho adoption in particular. By leveraging accelerated computing and generative AI, the partners are pushing semiconductor scaling possibilities and opening new innovation opportunities in chip designs.

NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data

Already solidly in the driver’s seat of the generative AI accelerator market at this time, NVIDIA has long made it clear that the company isn’t about to slow down and check out the view. Instead, NVIDIA intends to continue iterating along its multi-generational product roadmap for GPUs and accelerators, to leverage its early advantage and stay ahead of its ever-growing coterie of competitors in the accelerator market. So while NVIDIA’s ridiculously popular H100/H200/GH200 series of accelerators are already the hottest ticket in Silicon Valley, it’s already time to talk about the next generation accelerator architecture to feed NVIDIA’s AI ambitions: Blackwell.

Yesterday — March 18, 2024 (AnandTech)

The NVIDIA GTC 2024 Keynote Live Blog (Starts at 1:00pm PT/20:00 UTC)

We're here in sunny San Jose, California for the return of an event that's been a long time coming: NVIDIA's in-person GTC. The Spring 2024 event, NVIDIA's marquee event for the year, promises to be a big one for NVIDIA, as the company is due to deliver updates on its all-important datacenter accelerator products – the successor to the GH100 GPU and its Hopper architecture – along with NVIDIA's other professional/enterprise hardware, networking gear, and, of course, a slew of software stack updates.

In the 5 years since NVIDIA was last able to hold a Spring GTC in person, a great deal has changed for the company. They're now the third biggest company in the world, thanks to explosive sales growth (and even further growth expectations) due in large part to the combination of GPT-3/4 and other transformer models, and NVIDIA's transformer-optimized H100 accelerator. As a result, NVIDIA is riding high in Silicon Valley, but to keep doing so they also will need to deliver the next big thing to push the envelope on performance, and keep a number of hungry competitors off their turf.

Headlining today's keynote is, of course, NVIDIA CEO Jensen Huang, whose kick-off address has finally outgrown the San Jose Convention Center. As a result, Huang is filling up the local SAP Center arena instead. Suffice it to say, it's a bigger venue for a bigger audience for a much bigger company.

So come join the AnandTech crew for our live blog coverage of NVIDIA's biggest enterprise keynote in years. The presentation kicks off at 1pm Pacific, 4pm Eastern, 20:00 UTC.

StarTech Unveils 15-in-1 Thunderbolt 4/USB4 Dock with Quad Display Support

StarTech.com has introduced its latest Thunderbolt 4/USB4 docking station, which has a plethora of ports and supports four display outputs. This makes it suitable for 4Kp60 quad-monitor setups often used for professional applications. The Thunderbolt 4 Quad Display Docking Station can also deliver up to 98W of power to the host, which is enough to feed a high-end laptop, such as Apple's MacBook Pro 16.

StarTech's 15-in-1 docking station (132N-TB4USB4DOCK) has pretty much everything one comes to expect from a dock engineered explicitly for demanding professionals, such as those involved in photography, content creation, video production, and computer-aided design. The unit comes with one Thunderbolt 4/USB4 port with a 98W power delivery capability to connect to the host, a 2.5 GbE adapter, six USB Type-A ports (three supporting 10 Gbps, two supporting 5 Gbps, and one USB 2.0 port offering up to 7.5W charging), one USB Type-C connector (at 10 Gbps), four display outputs (two DP 1.4, two HDMI 2.1), a UHS-II SD card reader, a UHS-II microSD card reader, and a 3.5-mm audio jack.

The dock's main selling feature is its support for up to four displays. Of course, this is a valuable capability, but it comes with a couple of catches. The device can support four 4Kp60 displays when connected to a laptop featuring Intel's 12th or 14th Generation Core processor using a Thunderbolt 4 or USB4 connector and with DSC enabled. With AMD Ryzen 6000 and Intel's 11th Gen Core-based systems, only three 4Kp60 displays are supported. Meanwhile, with MacBooks, users must make do with two 5Kp60 displays or one 6Kp60 display. The good news is that the Thunderbolt 4 Quad Display Docking Station requires no drivers and works seamlessly with macOS, Windows, and ChromeOS.
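
The display-support rules in the paragraph above boil down to a simple host-platform lookup; the sketch below merely restates them, with the platform labels shortened for illustration.

```python
# Maximum display configurations supported by the dock, keyed by host platform
# (a restatement of the paragraph above; labels shortened for illustration).
MAX_DISPLAYS = {
    "Intel 12th or 14th Gen Core (TB4/USB4, DSC enabled)": "4x 4Kp60",
    "Intel 11th Gen Core or AMD Ryzen 6000":               "3x 4Kp60",
    "Apple MacBook (Thunderbolt)":                         "2x 5Kp60 or 1x 6Kp60",
}

for host, displays in MAX_DISPLAYS.items():
    print(f"{host}: {displays}")
```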

The docking station has a 180W power supply, so it can simultaneously charge a laptop and power on all the remaining ports.

Thunderbolt 4 and USB4 docks with rich capabilities are not cheap, as they have to pack loads of quite expensive controllers, and StarTech's 15-in-1 docking station is no exception: it costs $330.99.

The StarTech.com Thunderbolt 4 Quad Display Docking Station is available for purchase directly from the company and through various IT resellers and distributors such as CDW, Amazon, Ingram Micro, TD SYNNEX, and D&H. 

Qualcomm Announces Snapdragon 8s Gen 3: A Cheaper Chip For Premium Phones

With the launch of their flagship Snapdragon 8 SoC firmly behind them now, Qualcomm this morning is turning their collective head towards the premium market with the launch of another new Snapdragon 8 family SoC, the Snapdragon 8s Gen 3. The first of Qualcomm’s ‘s’-subseries of down-market parts to be released under the Snapdragon 8 banner, the Snapdragon 8s Gen 3 (8sG3) is intended to be a bridge part between the last-gen flagship 8 Gen 2 and current-gen flagship 8 Gen 3, offering a not-quite-flagship experience at a lower price point than Qualcomm’s top SoC. The new SoC is set to be available globally, with the first phones announced this month, though as is often the case for Qualcomm’s “premium” market SoCs, it looks like only Chinese handset OEMs will be picking up the chip, at least initially.

Although Qualcomm prefers to draw comparisons to their current-gen flagship Snapdragon 8 Gen 3, the Snapdragon 8s Gen 3 is by and large an enhanced version of the Snapdragon 8 Gen 2. Many of the hardware blocks of the 8G2 have been carried over to the new chip – either in whole or in terms of functionality – a process that is made very easy thanks to the fact that Qualcomm is building the chip on the same TSMC 4nm node as the 8G2 and 8G3. Compared to the 8G2 then, there are two key differentiators for the 8sG3: a newer CPU complex lifted from the 8G3, and official on-device generative AI support.

Qualcomm Snapdragon 8 SoCs

SoC | Snapdragon 8 Gen 3 (SM8650) | Snapdragon 8s Gen 3 (SM8635) | Snapdragon 8 Gen 2 (SM8550)
CPU | 1x Cortex-X4 @ 3.3GHz, 3x Cortex-A720 @ 3.2GHz, 2x Cortex-A720 @ 3.0GHz, 2x Cortex-A520 @ 2.3GHz; 12MB sL3 | 1x Cortex-X4 @ 3.0GHz, 4x Cortex-A720 @ 2.8GHz, 3x Cortex-A520 @ 2.0GHz | 1x Cortex-X3 @ 3.2GHz, 2x Cortex-A715 @ 2.8GHz, 2x Cortex-A710 @ 2.8GHz, 4x Cortex-A510 @ 2.0GHz; 8MB sL3
GPU | Adreno (Hardware RT & Global Illum.) | Adreno (Hardware RT) | Adreno (Hardware RT)
DSP / NPU | Hexagon | Hexagon | Hexagon
Memory Controller | 4x 16-bit CH @ 4800MHz LPDDR5X / 76.8GB/s | 4x 16-bit CH @ 4200MHz LPDDR5X / 67.2GB/s | 4x 16-bit CH @ 4200MHz LPDDR5X / 67.2GB/s
ISP/Camera | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, 64+36MP with ZSL, or 3x 36MP with ZSL; 8K HDR video & 64MP burst capture | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, 64+36MP with ZSL, or 3x 36MP with ZSL; 4K HDR video & 64MP burst capture | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, 64+36MP with ZSL, or 3x 36MP with ZSL; 8K HDR video & 64MP burst capture
Encode/Decode | 8K30 / 4K120 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 720p960 SlowMo | 4K60 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 1080p240 SlowMo | 8K30 / 4K120 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 720p960 SlowMo
Integrated Radio | FastConnect 7800, Wi-Fi 7 + BT 5.4, 2x2 MIMO | FastConnect 7800, Wi-Fi 7 + BT 5.4, 2x2 MIMO | FastConnect 7800, Wi-Fi 7 + BT 5.3, 2x2 MIMO
Integrated Modem | X75 integrated, 3GPP Rel 18 (5G NR Sub-6 + mmWave); DL = 10000 Mbps, UL = 3500 Mbps | X70 integrated, 3GPP Rel 17 (5G NR Sub-6 + mmWave); DL = 5000 Mbps, UL = 3500 Mbps | X70 integrated, 3GPP Rel 17 (5G NR Sub-6 + mmWave); DL = 10000 Mbps, UL = 3500 Mbps
Mfc. Process | TSMC 4nm | TSMC 4nm | TSMC 4nm

Starting with the CPU complex, Qualcomm is implementing Arm’s latest generation of Armv9 CPU cores here, meaning a mix of the Cortex-X4, Cortex-A720, and Cortex-A520. Relative to the flagship 8G3, the 8sG3 gives up one of its performance cores for another efficiency core, shifting the design from a 1/5/2 configuration to a 1/4/3 configuration – the same as the 8G2. The 8sG3 also loses some frequency headroom in the process, with the X4 prime core dropping from 3.3GHz to 3.0GHz, and the other CPU cores following similarly along.

Still, the 8sG3 should outperform the 8G2 in CPU tasks, which is the primary reason for replacing the CPU complex at all. Qualcomm is basically looking to offer an 8G2 with better CPU performance and energy efficiency, and using Arm’s latest CPU cores will be how they deliver on that.

Outside of the CPU complex, however, most of the rest of the functional blocks are either lifted from the 8G2 or offer the same generation of features. This means the 8sG3's integrated GPU offers hardware ray tracing, for example, but not the global illumination support that was introduced for the flagship 8G3. The memory controller is likewise identical to the 8G2's, with the SoC supporting a maximum of 24GB of LPDDR5X-8400.
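
As a sanity check on the bandwidth figures in the table above, peak LPDDR5X bandwidth falls straight out of the bus width and data rate (the 4200/4800 MHz clocks correspond to 8400/9600 MT/s); a minimal sketch:

```python
# Peak LPDDR5X bandwidth = data rate (MT/s) x bus width (bytes).
def lpddr5x_bandwidth_gbps(channels: int, bits_per_channel: int, mt_per_s: int) -> float:
    bus_bytes = channels * bits_per_channel / 8   # 4 x 16-bit channels = 8 bytes
    return mt_per_s * bus_bytes / 1000

print(lpddr5x_bandwidth_gbps(4, 16, 8400))  # 8s Gen 3 / 8 Gen 2: 67.2 GB/s
print(lpddr5x_bandwidth_gbps(4, 16, 9600))  # 8 Gen 3: 76.8 GB/s
```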

The video recording and decoding capabilities of the 8sG3 are a distinct downgrade from the other Snapdragon 8 SoCs, however. Qualcomm has retained their trio of 18-bit Spectra ISPs – so the SoC can support up to 3 cameras – but all 8K support has been excised entirely. Instead, the 8sG3 can only record video at up to 4K, and even then only at 60fps, half the framerate of the 8G3/8G2. Slow-motion video capture has been altered as well; Qualcomm lists 1080p240 for this mode rather than 720p960. The higher resolution will no doubt be appreciated, but less so if it means it's not possible to record above 240fps.

The lack of 8K video support also applies to the SoC’s video decode block, which can only decode videos up to 4K in resolution. Qualcomm has otherwise kept all of the underlying features of the video decode block at parity, however, so the 8sG3 gets support for AV1 decoding, along with Dolby Vision HDR.

Meanwhile, the DSP/NPU situation on the 8sG3 is a mixed bag. Officially, this SoC supports generative AI models (up to 10B parameters in size), something the 8G2 and its NPU were not capable of, and is otherwise only available on the 8G3. However, according to Qualcomm this is not the same generation of NPU IP as on the 8G3, and among other things it lacks support for speculative decoding (and I don’t see any mention of the newer NPU’s micro-tile inferencing improvements). So by all appearances, this is just the 8G2 NPU. Still, Qualcomm has at least rolled out some software/firmware updates to improve its functionality, giving it additional AI functionality right as exuberance for that is through the roof.

Finally, the comms side of the 8sG3 is essentially a slower version of the 8G2. Qualcomm is once again using their Snapdragon X70 integrated modem here, a 5G Release 17-generation design that offers 2x2 MIMO on mmWave and 4x4 MIMO on sub-6GHz. Max upload speeds are unchanged at 3.5Gbps; however, max download speeds for the 8sG3 are 5Gbps, half that of the 8G2 (and 8G3). Paired with the X70 modem is Qualcomm's FastConnect 7800 system, which offers Wi-Fi 7 support with 2x2 MIMO, as well as Bluetooth 5.4. The dual BT antenna feature from the other Snapdragon 8 chips has also made it over for this part.

Overall, the Snapdragon 8s Gen 3 is intended to occupy a very specific niche within Qualcomm's SoC lineup, offering a cheaper alternative to their flagship SoC without giving up too many features. The marketing messaging behind the chip is made somewhat complicated by the fact that last year at this time Qualcomm launched the Snapdragon 7+ Gen 2 for the premium market, which at least partially overlaps with what they're trying to do with the 8sG3. Nonetheless, Qualcomm insists there's a market for chips between the Snapdragon 7 series and the flagship Snapdragon 8 SoC, and so here we are.

Absent another 7+ chip this year, it’s hard to see the 8sG3 as anything other than the 7+’s successor. Still, where the 7+ was a souped-up 7, the 8sG3 is clearly a down-market 8, so it has some significant hardware advantages, particularly when it comes to memory bandwidth. It may just be that Qualcomm aimed a bit too low for the premium market with the specs for the 7+, so this is an attempt to aim a bit higher.

In any case, expect to see the Snapdragon 8s Gen 3 picked up by many of the usual Chinese handset OEMs, including Honor, iQOO, realme, Redmi and Xiaomi. The first phones are expected to be announced this month.

From the day before yesterday (AnandTech)

BIOSTAR Debuts Barebones A620MS mATX Motherboard For Ryzen 7000 Processors

BIOSTAR has launched its AM5-based A620MS motherboard today, bringing a new low-end option for PC users on a budget. Though BIOSTAR has not disclosed what MSRP the A620MS motherboard will carry, the specifications of the board make it clear that it targets the lowest-end segment of the market, though it makes use of the regular A620 chipset instead of the even less expensive A620A chipset.

The A620MS sports some features typical for mATX A620 boards (which make up the vast majority of current models): two DDR5 DIMM slots that support up to two 48GB sticks, an M.2 PCIe 4.0 slot for SSDs, four SATA III ports, and a PCIe Gen4 x16 slot. The motherboard also has four debug LEDs for diagnosing CPU, RAM, GPU, and booting errors.

Meanwhile, the rear I/O features a gigabit Ethernet port, four USB 3.2 ports, analog audio jacks, two USB 2.0 ports, an HDMI 1.4 port, and a DisplayPort 1.2 output. There are some more fully-featured A620 motherboards available with more and faster ports, but this rear I/O is more or less par for the course for A620.

However, there are other things about BIOSTAR's A620MS that imply it will be quite low-end for an A620 motherboard. It has just eight total voltage regulator modules (VRMs), which appear to be in a 6+2 or 6+1+1 phase configuration. This isn't as low-end as BIOSTAR could have gone (ASRock offers a 4+1+1 stage board), but it is still very sparing in VRM stages compared to most other A620 motherboards. These VRMs are also not covered by a heatsink, which is likewise typical for boards in this segment, as they're normally paired with equally cheap 65W(ish) chips.

BIOSTAR doesn’t list any official CPU restrictions in either its press release or its specification sheet; instead, the company simply lists the motherboard as compatible with Ryzen 7000 and future Ryzen 8000 processors.

While the market for AM5 motherboards includes plenty of B650(E) and X670(E) models, there's only a handful of A620 boards in total. On Newegg, there are 14 different motherboards available, and many differ only slightly in respects like form factor. The cheapest of these run $75 to $100, and while BIOSTAR hasn't revealed pricing for its A620MS board, given its specifications we expect it to land in that same $75 to $100 region.

First DNA Data Storage Specification Released: First Step Towards Commercialization

The DNA Data Storage Alliance introduced its inaugural specifications for DNA-based data storage this week. This specification outlines a method for encoding essential information within a DNA data archive, crucial for developing and commercializing an interoperable storage ecosystem.

DNA data storage uses short strings of deoxyribonucleic acid (DNA) called oligonucleotides (oligos) mixed together without a specific physical ordering scheme. This storage media lacks a dedicated controller and an organizational means to understand the proximity of one media subcomponent to another. DNA storage differs significantly from traditional media like tape, HDD, and SSD, which have fixed structures and controllers that can read and write data from the structured media. DNA's lack of physical structure requires a unique approach to initiate data retrieval, which brings its peculiarities regarding standardization. 

To address this, the SNIA DNA Archive Rosetta Stone (DARS) working group, part of the DNA Data Storage Alliance, has developed two specifications, Sector Zero and Sector One, to facilitate the process of starting a DNA archive. 

Sector Zero serves as the starting point, providing minimal details necessary for the archive reader to identify the entity responsible for synthesizing the DNA (e.g., Dell, Microsoft, Twist Bioscience) and the CODEC used for encoding Sector One (e.g., Super Codec, Hyper Codec, Jimbob's Codec). Sector Zero consists of 70 bases: the first 35 bases identify the vendor, and the second 35 bases identify the codec. The information in Sector Zero enables access and decoding of data stored in Sector One. The amount of data stored in SZ is small and fits into a single oligonucleotide.

Sector One expands on this by including a description of the contents, a file table, and parameters required for transferring data to a sequencer. This specification ensures that the main body of the archive is accessible and readable, paving the way for data retrieval. Sector One contains exactly 150 bases and will span multiple oligonucleotides. 
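
As a rough illustration of how a reader might consume the Sector Zero layout described above (70 bases, split 35/35 between vendor and codec identifiers), here is a minimal sketch; the vendor and codec tables are hypothetical placeholders and not part of the SNIA specification.

```python
# Hypothetical Sector Zero parser: 70 bases, first 35 = vendor ID, last 35 = codec ID.
# The lookup tables below are made-up placeholders, not SNIA-defined values.
HYPOTHETICAL_VENDORS = {("ACGT" * 8 + "ACG"): "Example Vendor"}  # 35-base key
HYPOTHETICAL_CODECS = {("TGCA" * 8 + "TGC"): "Example Codec"}    # 35-base key

def parse_sector_zero(oligo: str) -> tuple[str, str]:
    """Split a Sector Zero oligo into its vendor and codec identifiers."""
    if len(oligo) != 70 or set(oligo) - set("ACGT"):
        raise ValueError("Sector Zero must be exactly 70 bases of A/C/G/T")
    vendor_id, codec_id = oligo[:35], oligo[35:]
    return (HYPOTHETICAL_VENDORS.get(vendor_id, "unknown vendor"),
            HYPOTHETICAL_CODECS.get(codec_id, "unknown codec"))

example_oligo = "ACGT" * 8 + "ACG" + "TGCA" * 8 + "TGC"   # 70 bases total
print(parse_sector_zero(example_oligo))
```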

"A key goal of the DNA Data Storage Alliance is to set and publish specifications and standards that allow an interoperable DNA data storage ecosystem to grow," said Dave Landsman, of the DNA Data Storage Alliance Board of Directors. "With the publishing of the Alliance's first specifications, we take an important step in achieving that goal. Sector Zero and Sector One are now publicly available, allowing companies working in the space to adopt and implement."

The DNA Data Storage Alliance is led by Catalog Technologies, Inc., Quantum Corporation, Twist Bioscience Corporation, and Western Digital (though we are unsure whether Western Digital's NAND or HDD division is responsible for developing the specification). Meanwhile, numerous industry giants, including Microsoft, support the DNA Data Storage Alliance.

Source: SNIA

Asus Launches Low-Profile GeForce RTX 3050 6GB: A Tiny Graphics Card for All PCs

Asus this week has become the latest PC video card manufacturer to announce a sub-75W video card based on NVIDIA's recently-released low-power GeForce RTX 3050 6GB design. And going one step further for small form factor PC owners, Asus has used NVIDIA's low-power GPU configuration to produce a half-height video card that can fit into low-profile systems.

As Asus puts it, the GeForce RTX 3050 LP BRK 6GB GDDR6 is 'big productivity in a small package,' and for a low-profile dual-slot graphics board, it indeed is. The unit has three display outputs – DVI-D, HDMI 2.1, and DisplayPort 1.4a with HDCP 2.3 support – which makes the graphics card a viable option both for a dual-display desktop and a home theater PC (NVIDIA's GA107 graphics processor supports all popular codecs, though it lacks AV1 encoding). Furthermore, the DVI-D output enables the card to drive outdated displays, which, even more than half a decade after DVI was retired, still hang around as spare parts. Meanwhile, because the card only consumes around 70W, it does not require any auxiliary PCIe power connectors, which are at times not available in cheap systems from big PC makers.

Underlying this card is the aforementioned GeForce RTX 3050 6GB, which uses the GA107 GPU with 2304 CUDA cores, and it comes with 6GB of GDDR6 memory connected to a narrower 96-bit memory bus (down from 128 bits for the full 8GB version). With a lower boost clock of 1470 MHz (1500 MHz in OC mode), the RTX 3050 6GB has reduced compute performance, delivering 6.77 FP32 TFLOPS versus the 9.1 FP32 TFLOPS of the full-fledged RTX 3050.
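
For those wondering where those TFLOPS figures come from, FP32 throughput on these parts is simply CUDA cores x 2 FLOPs per clock (one FMA) x boost clock; the 8GB card's 2560 cores and 1777 MHz boost used below are commonly published retail specs rather than figures from this article.

```python
# FP32 TFLOPS = CUDA cores x 2 FLOPs/clock (FMA) x boost clock (GHz) / 1000.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000

print(round(fp32_tflops(2304, 1.47), 2))   # RTX 3050 6GB:  ~6.77 TFLOPS
print(round(fp32_tflops(2560, 1.777), 2))  # RTX 3050 8GB:  ~9.1 TFLOPS (assumed specs)
```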

As a result, the low-profile GeForce RTX 3050 6 GB is very much an entry-level card, though the low power requirements for such a card are also what make it special. This should be plenty for low-end gaming – beating out integrated GPUs – though suffice it to say, it's not going to compete with high-end, power-hungry cards either.

With its diminutive size, the Asus GeForce RTX 3050 LP BRK 6 GB GDDR6 looks to be a nice candidate for upgrading cheap systems from OEMs as well as fixing outdated PCs. What remains to be seen is how price competitive it is going to be. The graphics board already has one low-profile rival from MSI — which costs $185 — so Asus is not the only vendor competing here.

Asus Adds Support for 64GB Memory Modules to Intel 600/700 Motherboards

Asus on Thursday said it has released new versions of UEFI BIOS for DDR5-supporting Intel 600/700-series motherboards that enable support for 64 GB DIMMs. As a result, Asus's latest platforms for Intel's 12th, 13th, and 14th Generation Core processors with four DIMM slots can now work with up to 256 GB of DDR5 memory, and motherboards with two DIMM slots can now support up to 128 GB of memory.

To gain support for 256 GB of DDR5 memory using 64 GB unbuffered DIMMs, one needs to download the latest version of UEFI BIOS for one of the Intel 600/700-series motherboards listed at the Asus website.

The list of Asus motherboards with an LGA1700 socket supporting 256 GB of DDR5 memory includes 75 boards based on a variety of Intel's 600 and 700-series chipsets, including Intel Z790, H770, B760, Z690, W680, and Q670. Taking stock of Asus's larger motherboard offerings, though, this is still a bit shy of covering all of the company's LGA1700 motherboards, which number nearly 200 models in total. So 64 GB DIMM support has only come to a fraction of their boards, at least thus far.

Otherwise, it is noteworthy that cutting-edge high-capacity DIMMs, such as 32 GB, 48GB, and 64 GB, are typically not available with the same blistering XMP clockspeeds as some of their lower-capacity counterparts, so equipping an Intel system with 256 GB of memory will come at a cost of peak memory bandwidth, on top of the typical DDR5 2 DIMM Per Channel (2DPC) frequency penalty. In fact, the fastest 48 GB modules currently offered by Corsair and G.Skill (which could be used to build systems with 192 GB of memory) top out at 6600 MT/s and 6800 MT/s, respectively. Meanwhile, for now, there are no Intel XMP 3.0-compatible 64 GB DDR5 modules from these two renowned makers.

Ultimately, the prime market for high-capacity UDIMMs at this time is going to be content creators, data scientists, and other workstation-light workloads that need a quarter-terabyte of RAM, and can justify the cost for the leading-edge DIMMs. Otherwise 16 GB and 32 GB DIMMs are likely to remain the sweet spot for the LGA1700 platform for the rest of its lifecycle.

Finally, it should be noted that Asus is also announcing (or rather, reiterating) support for 64 GB DIMMs on their AM5 motherboards. That said, this support is already baked into that platform and BIOSes, and unlike the Intel boards, a BIOS update is not needed.

Western Digital Launches PC SN5000S SSD: Low-Cost Meets High Performance

Western Digital has introduced its new series of SSDs aimed at mainstream PCs, which combine high performance and low cost. The Western Digital PC SN5000S family of DRAM-less drives uses the company's 3D QLC NAND memory and an in-house-developed platform, so the SSDs promise to be relatively inexpensive. Meanwhile, their sequential read performance reaches 6,000 MB/s.

Western Digital's PC SN5000S drives are based on the company's latest in-house controller, which supports a PCIe 4.0 x4 host interface and BICS6 3D QLC NAND memory. The controller fully supports Western Digital's nCache 4.0 HybridSLC technology with endurance monitoring to ensure decent performance, RSA-3K and SHA-384 encryption, and TCG Opal 2.02 and Pyrite security capabilities.

On the capacity side, Western Digital's PC SN5000S drives will be available in 512 GB, 1 TB, and 2 TB configurations. As for performance, the 2TB PC SN5000S is rated for up to 6,000 MB/s sequential read speed, up to 5,600 MB/s sequential write speed, up to 750,000 random read IOPS, and up to 900,000 random write IOPS. The SSDs will be available in M.2-2230 and M.2-2280 form factors.

Western Digital SN5000S SSD Specifications

Capacity | 512 GB | 1 TB | 2 TB
Controller | Western Digital proprietary controller
NAND Flash | Western Digital / Kioxia BiCS 6 176L 3D QLC NAND
Form-Factor, Interface | Single-Sided M.2-2280, PCIe 4.0 x4, NVMe 2.0
Sequential Read | 6000 MB/s | 6000 MB/s | 6000 MB/s
Sequential Write | 4200 MB/s | 5400 MB/s | 5600 MB/s
Random Read IOPS | 500K | 750K | 750K
Random Write IOPS | 850K | 900K | 900K
Peak Power | 6.1 W | 6.5 W | 6.9 W
SLC Caching | Yes
Security Capabilities | TCG Opal 2.02 and Pyrite
Warranty | 5 years
Write Endurance | 150 TBW | 300 TBW | 600 TBW

When it comes to endurance, Western Digital rates the 2TB PC SN5000S at 600 terabytes written, the 1TB version at 300 TBW, and the 512GB version at 150 TBW, which is significantly lower than entry-level SSDs of similar capacities (yet higher than WD Green-branded drives).

While the performance of Western Digital's PC SN5000S will hardly impress our avid readers, who tend to look at the highest-end SSDs, the 1TB and 2TB versions offer considerably higher performance than most entry-level drives on the market today. What disappoints is the relatively low endurance of Western Digital's new SSDs compared to entry-level drives from other makers.

Western Digital primarily markets its PC SN5000S solid-state drives for OEMs, where they succeed the company's SN740-series. For PC makers, the drives are fast enough, and perhaps more importantly, they support advanced encryption technologies as well as TCG Opal 2.02 and Pyrite security capabilities, which is crucial for desktops and laptops sold to various U.S. government agencies.

Source: Western Digital

Intel Announces Core i9-14900KS: Raptor Lake-R Hits Up To 6.2 GHz

For the last several generations of desktop processors from Intel, the company has released a higher-clocked, special-edition SKU under the KS moniker, which the company positions as its no-holds-barred performance part for that generation. For the 14th Generation Core family, Intel is keeping that tradition alive and well with the announcement of the Core i9-14900KS, which has been eagerly anticipated for months and is finally being unveiled today. The Intel Core i9-14900KS is a special edition processor with P-core turbo clock speeds of up to 6.2 GHz, which makes it the fastest desktop processor in the world... at least in terms of advertised frequencies.

With their latest KS processor, Intel is looking to further push the envelope on what can be achieved with the company's now venerable Raptor Lake 8+16 silicon. With a further 200 MHz increase in clockspeeds at the top end, Intel is looking to deliver unrivaled desktop performance for enthusiasts. At the same time, as this is the 4th iteration of the "flagship" configuration of the RPL 8+16 die, Intel is looking to squeeze out one more speed boost from the Alder/Raptor family in order to go out on a high note before the entire architecture starts to ride off into the sunset later this year. To get there, Intel will need quite a bit of electricity, and $689 of your savings.

NETGEAR Introduces WBE750: First Insight-Manageable Wi-Fi 7 Access Point Targets Congested Deployments

Wi-Fi 7 products have slowly started gaining market traction, particularly in the residential market (home consumer segment). The SMB / SME / enterprise market is traditionally a few quarters behind this, given the longer validation cycles. Earlier this year, Ubiquiti Networks introduced their first Wi-Fi 7 access point - the U7 Pro. Today, NETGEAR Business is launching the WBE750 Wi-Fi 7 Access Point (AP) in the Pro Wi-Fi lineup for businesses with heavy wireless Internet use.

The benefits of Wi-Fi 7 (802.11be) have been covered in multiple pieces earlier. The presence of a relatively interference-free 6 GHz band, wider channels (up to 320 MHz wide), and technical improvements to address interference and latency make the new standard an attractive upgrade for Wi-Fi users.

Unlike the Ubiquiti UniFi U7 Pro's entry-level focus (with 2x2 configurations in the 2.4 GHz, 5 GHz, and 6 GHz bands), the WBE750 opts for 4x4 configurations in each of the three bands. Correspondingly, the AP is able to support more concurrent connected clients (600 vs. 300), and obviously provide more bandwidth (18.4 Gbps vs. 9.2 Gbps theoretical). The pricing is also correspondingly higher ($700 vs. $190). The WBE750 also incorporates a NBASE-T (10 GbE / 5 GbE / 2.5 GbE / 1 GbE) RJ-45 uplink port with PoE++ support. Eight SSIDs are supported per channel.

Similar to other networking equipment vendors in this space, NETGEAR is also pushing for recurring subscription-based revenue with the product. A 1-year subscription to the single-pane cloud-based management interface (NETGEAR Insight) is included in the $700 purchase price.

NETGEAR Insight is particularly useful for professional installers who can manage multiple sites on the go, even from a mobile device.

The WBE750 joins a number of other Wi-Fi 6 / 6E APs in the Pro Wi-Fi line serving a wide range of deployment requirements. NETGEAR has been introducing multiple products in their Insight-manageable line over the last few quarters. As a result, they are able to offer a total network solution (gateways / routers / switches / APs) that can be managed from a single pane. As part of its Pro focus, NETGEAR is offering free site design services to installers along with expert technical support. NETGEAR's Pro WiFi Design Services (network design, product selection guide, troubleshooting, and training support) aims to be a key differentiation aspect compared to other similar offerings in the SMB / SME market.

The WBE750 is powered by a Qualcomm solution - the Waikiki Wi-Fi 7 chipset incorporated in the Networking Pro 1220 platform.

The new WBE750 AP is available for purchase today for $700.

Corsair Launches New XH405i Custom Water Cooling Kits And XG7 RTX 4080-Compatible Water Blocks

Corsair has launched its latest Hydro X series iCUE LINK XH405i RGB custom open-loop water cooling kits, replacing the older XH305i kits from 2020. The new kits feature Corsair’s latest XD5 RGB ELITE pump and reservoir, the XC7 RGB ELITE CPU waterblock, three QX120 RGB fans, and a 360mm radiator. The pump, waterblock, and fans all have the namesake iCUE LINK integration, which Corsair has been pushing throughout its entire recent generation of products.

The biggest hardware-related difference between XH405i kits and previous generation XH305i kits is undoubtedly the inclusion of iCUE LINK hardware, which Corsair recently debuted with its iCUE LINK H150i RGB AIO cooler. iCUE LINK allows individual Corsair cooling components within a system to be directly connected, primarily cutting down on cable clutter, but also offering the promise of more and finer-grained control over individual components via the iCUE LINK Hub at the center of a system. For instance, each individual iCUE LINK-compatible fan connected to the iCUE LINK hub can be set to its own speed, rather than either requiring each fan to be connected to its own fan header on the motherboard or setting a common speed for all fans via a multi-headed cable.

The XH405i is offered in two themes: stealth gray and white. Outside of cosmetics, the two variants are the same and come with a combined pump and reservoir, a CPU waterblock compatible with the AM5 and LGA 1700 sockets, three 120mm fans, a 360mm radiator, and a central iCUE LINK Hub. The kits also come with all the accessory components needed to build a custom loop: hardline tubing, a bending kit, fittings, and XL8 clear-colored coolant. These kinds of kits are usually geared towards newcomers to custom liquid cooling and users who need a brand-new loop but don’t want to spend much time scouring for individual components.

Separately, Corsair has also launched the iCUE LINK XG7 RGB GPU waterblock for GeForce RTX 4090 and 4080 Super graphics cards. As is typically the case for full-coverage GPU waterblocks, the XG7 has specific hardware compatibility requirements, and as a result Corsair is making four versions of the waterblock. The company is targeting ASUS's ROG STRIX and TUF cards, as well as MSI's SUPRIM and GAMING TRIO lineups, offering RTX 4090 and RTX 4080 blocks for each of those card families. Just like the other components in the XH405i kit, the GPU waterblock is also iCUE LINK-equipped.

Aimed at a premium market, the full XH405i kit doesn't come cheap: Corsair has set the MSRP at $700 for the complete cooling collection. Meanwhile, the XG7 GPU waterblock is priced at $230 for all four models.

The iCUE LINK XH405i kit is available now at Newegg and Amazon, as well as through Corsair’s own website.

The Arctic Liquid Freezer III 280 A-RGB White AIO Review: Refined Design Brings Stand-Out Cooler

ARCTIC GmbH, originally known as Arctic Cooling, first burst onto the PC cooling scene in 2001 and has since maintained its stature as a leader in cooling technologies. The company made its mark with top-notch thermal compounds and has since kept its focus on cooling solutions while also expanding into other tech accessories, including advanced monitor mounts and audio products.

With the introduction of the Liquid Freezer III series, ARCTIC has taken another significant step forward in the cooling market. This new lineup builds upon the success of the previous Liquid Freezer II series, whose great price-to-performance ratio made it a highly popular product. Today, we're delving into ARCTIC's latest offerings with the Liquid Freezer III series and, specifically, the 280 A-RGB White model. We'll assess the features, quality, and thermal performance of this AIO (all-in-one) cooler, part of the series with which ARCTIC hopes to dominate the bulk of the mainstream market.

ASML Delivers First 2nm-Generation Low-NA EUV Tool, the Twinscan NXE:3800E

Our avid readers tend to look at microelectronics made using leading-edge process technologies, which in Intel's case means the use of High-NA extreme ultraviolet (EUV) lithography a couple of years down the road. But the vast majority of chips that we are going to use in the next couple of years will be made using Low-NA EUV litho tools. This is why the latest announcement from ASML is particularly notable.

As spotted by Computerbase, ASML this week has delivered its first updated Twinscan NXE:3800E lithography machine for fab installation. The latest iteration of the company's line of 0.33 numerical aperture (Low-NA) lithography scanners, the NXE:3800E is aimed at making chips on 2nm and 3nm-class technologies.

Chipmakers have a need for speed! The first TWINSCAN NXE:3800E is now being installed in a chip fab. 🔧

With its new wafer stages, the system will deliver leading edge productivity for printing advanced chips. We're pushing lithography to new limits. 💪 pic.twitter.com/y5hJg5Tdot

— ASML (@ASMLcompany) March 12, 2024

ASML has not published the full details on the capabilities of the machine, but previous roadmaps from the company have indicated that the updated 3800E would offer both improved wafer throughput and increased wafer alignment precision – what ASML refers to as "matched-machine overlay". Based on that roadmap, ASML is expecting to crack 200 wafers per hour with their fifth-generation low-NA EUV scanner, which would mark a significant milestone for the technology, as one of the drawbacks of EUV lithography since the beginning has been its lower throughput rate compared to today's extremely well-researched and tuned deep UV (DUV) machines.

For ASML's logic and memory fab customers – a list these days that is only around half a dozen companies in total – the updated scanner will help these foundries continue to improve and expand their production of leading-edge chips. Even with major fabs in the midst of scaling-up their operations with additional facilities, improving throughput at existing facilities remains an important factor in meeting capacity demands, as well as bringing down production costs (or at least, keeping them in check).

EUV scanners don't come cheap – a typical scanner costs some $180 million and the Twinscan NXE:3800E will likely cost more – so it'll take a while to fully amortize these machines. In the meantime, shipping a faster generation of EUV scanners will have significant financial implications for ASML, which already enjoys the status (and criticism) that comes from being the sole supplier of such a critical tool.
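
As a hedged back-of-envelope on that amortization point, using the roughly $180 million figure and the ~200 wafers-per-hour roadmap target mentioned above, and assuming a utilization rate and service life that are purely our own guesses:

```python
# Back-of-envelope amortization; utilization and lifetime are assumptions,
# not ASML figures.
scanner_cost_usd = 180e6       # "a typical scanner costs some $180 million"
wafers_per_hour = 200          # ASML's roadmap target cited above
assumed_utilization = 0.75     # assumption
assumed_lifetime_years = 7     # assumption

exposures = wafers_per_hour * 24 * 365 * assumed_utilization * assumed_lifetime_years
print(f"~{exposures / 1e6:.1f}M wafer exposures over the assumed lifetime")
print(f"~${scanner_cost_usd / exposures:.2f} of tool cost per wafer exposure")
```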

Following the 3800E, ASML has at least one more generation of low-NA EUV scanners in the works, with the development of the Twinscan NXE:4000F. That's expected to be released around 2026.

Source: ASML (via Computerbase)

The be quiet! Pure Power 12 M 650W PSU Review: Solid Gold

Be quiet! is renowned for its dedication to excellence in the realm of PC components, specializing in products that emphasize silence and performance. The brand's product lineup is extensive, encompassing high-quality power supply units (PSUs), cases, and cooling solutions, including air and liquid coolers. Be quiet! is particularly known for striving to achieve whisper-quiet operation across all its products, making it a favorite among PC enthusiasts who prioritize a noiseless computing environment. The brand's portfolio reflects a commitment to meeting the diverse needs of tech aficionados and professionals, with an array of products that emphasize noise reduction and efficiency.

This review shines a spotlight on the Be quiet! Pure Power 12 M 650W PSU, a standout product in Be quiet!'s PSU collection that illustrates the company's attitude towards product design. The Pure Power 12 M series is designed to provide dependable performance and quiet operation, catering to users who demand a good balance of power efficiency and acoustics with reliability and value. This model, in particular, strives to offer a compelling blend of performance and quality, making it an attractive option for individuals seeking a PSU that aligns with the requirements of both entry-level and advanced PC builds.

SiPearl's Rhea-2 CPU Added to Roadmap: Second-Gen European CPU for HPC

SiPearl, a processor designer supported by the European Processor Initiative, is about to start shipments of its very first Rhea processor for high-performance computing workloads. But the company is already working on its successor, currently known as Rhea-2, which is set to arrive sometime in 2026 in exascale supercomputers.

SiPearl's Rhea-1 datacenter-grade system-on-chip packs 72 off-the-shelf Arm Neoverse V1 cores designed for HPC and connected using a mesh network. The CPU has a hybrid memory subsystem that supports both HBM2E and DDR5 memory, providing both high memory bandwidth and decent memory capacity, and it supports PCIe interconnects with the CXL protocol on top. The CPU was designed by a contract chip designer and is made by TSMC on its N6 (6 nm-class) process technology.

The original Rhea is to a large degree a product aimed at proving that SiPearl, a European company, can deliver a datacenter-grade processor. This CPU now powers Jupiter, Europe's first exascale system, which uses nodes powered by four Rhea CPUs and NVIDIA's H200 AI and HPC GPUs. Given that Rhea is SiPearl's first processor, the project can be considered fruitful.

With its 2nd generation Rhea processors, SiPearl will have to develop something that is considerably more competitive. This is perhaps why Rhea-2 will use a dual-chiplet implementation. Such a design will enable SiPearl to pack more processing cores and therefore offer higher performance. Of course, it remains to be seen how many cores SiPearl plans to integrate into Rhea 2, but at least the CPU company is set to adopt the same design methodologies as AMD and Intel.

Given the timing for SiPearl's Rhea 2 and the company's natural wish to preserve software compatibility with Rhea 1, it is reasonable to expect the company to adopt Arm's Neoverse V3 cores for its second processor. Arm's Neoverse V3 offers quite a significant uplift compared to Neoverse V2 (and V1) and can scale to up to 128 cores per socket, which should be quite decent for HPC applications in 2025 – 2026.

While SiPearl will continue developing CPUs, it remains to be seen whether EPI will manage to deliver AI and HPC accelerators that are competitive against those from NVIDIA, AMD, and Intel.

Marvell's 2nm IP Platform Enables Custom Silicon for Datacenters

Marvell this week introduced its new IP technology platform specifically tailored for custom chips for accelerated infrastructure made on TSMC's 2nm-class process technologies (possibly including N2 and N2P). The platform includes technologies essential for developing cloud-optimized accelerators, Ethernet switches, and digital signal processors.

"The 2nm platform will enable Marvell to deliver highly differentiated analog, mixed-signal, and foundational IP to build accelerated infrastructure," said Sandeep Bharathi, chief development officer at Marvell. "Our partnership with TSMC on our 5nm, 3nm and now 2nm platforms has been instrumental in helping Marvell expand the boundaries of what can be achieved in silicon."

The 2nm platform is built on Marvell's extensive IP portfolio, which includes advanced SerDes capable of speeds beyond 200 Gbps, processor subsystems, encryption engines, SoC fabrics, and high-bandwidth physical layer interfaces. These IPs are crucial for developing and producing a range of devices, such as custom compute accelerators and optical interconnect digital signal processors. These are becoming common building blocks for AI clusters, cloud data centers, and other infrastructures supporting machines used for AI and HPC workloads.

While these IPs are vital for a variety of processors, DSPs, and networking gear, developing them from scratch—especially for TSMC's 2nm-class process technologies that rely on gate-all-around Nanosheet transistors—is hard, time-consuming, and sometimes inefficient, both from a die space and economics point of view. This is where Marvell's IP portfolio promises to be very useful.

Marvell does not outright say that its TSMC 2nm-certified platform is silicon-proven, but given the fact that TSMC has been working with IP providers over N2-compatible IPs for quite some time, it is reasonable to expect that at least some of Marvell's popular IPs are.

"We take a modular approach to semiconductor design R&D, focusing first on qualifying foundational analog, mixed-signal IP and advanced packaging that can be used across a broad spectrum of devices," Bharathi said. "This allows us to bring innovations such as process manufacturing advances faster to market."

Meanwhile, Marvell is not part of TSMC's Open Innovation Platform and OIP's IP Alliance, so it is unclear whether the company's N2-compatible IPs will be part of TSMC's TSMC9000 IP program, which greatly simplifies IP choices for chip designers.

"TSMC is pleased to collaborate with Marvell in pioneering a platform for advancing accelerated infrastructure on our 2nm process technology," said Kevin Zhang, senior vice president of business development at TSMC. "We are looking forward to our continued collaboration with Marvell in the development of leading-edge connectivity and compute products utilizing TSMC's best-in-class process and packaging technologies."

Source: Marvell

Intel CEO Pat Gelsinger to Deliver Computex Keynote, Showcasing Next-Gen Products

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, has announced that Pat Gelsinger, chief executive of Intel, will deliver a keynote at Computex 2024 on June 4, 2024. Focusing on the trade show's theme of artificial intelligence, he will showcase Intel's next-generation AI-enhanced products for client and datacenter computers.

According to TAITRA's press release, Pat Gelsinger will discuss how Intel's product lineup, including the AI-accelerated Intel Xeon, Intel Gaudi, and Intel Core Ultra processor families, opens up new opportunities for client PCs, cloud computing, datacenters, and network and edge applications. He will also discuss superior performance-per-watt and lower cost of ownership of Intel's Xeon processors, which enhance server capacity for AI workloads.

The most intriguing part of Intel's Computex keynote will of course be the company's next-generation AI-enhanced products for client and datacenter computers. At this point Intel is prepping numerous products of considerable interest, including the following:

  • Arrow Lake and Lunar Lake processors made on next-generation process technologies for desktop and mobile PCs and featuring all-new microarchitectures;
  • Granite Rapids CPUs for datacenters based on a high-performance microarchitecture;
  • Sierra Forest processors with up to 288 cores for cloud workloads based on energy-efficient cores codenamed Crestmont;
  • Gaudi 3 processors for AI workloads that promise to quadruple BF16 performance compared to Gaudi 2;
  • Battlemage graphics processing units.

All of these products are due to be released in 2024-2025, so Intel could well demonstrate them and showcase their performance advantages, or even formally launch some of them, at Computex. What remains to be seen is whether Intel will also give a glimpse at products that are further away, such as Clearwater Forest and Falcon Shores.

Variable Refresh Rate Support Comes to NVIDIA’s GeForce Now Cloud Streaming Service

Today NVIDIA has brought variable refresh rate support to its GeForce Now cloud gaming service. The company initially promised variable refresh support on GeForce Now back in early January during CES, and has seemingly waited so that it could launch alongside GeForce Now Day Passes, which are also now available.

Variable refresh rate (VRR) technologies, including NVIDIA's own G-Sync, have been around for roughly a decade now, and allow a monitor to synchronize its refresh rate to the instantaneous framerate of a game. This synchronization prevents screen tearing, which occurs when two or more frames are present on a display at the same time. Without a VRR technology, gamers either have to tolerate the visual incongruity of screen tearing or enable V-Sync, which solves screen tearing by locking the framerate to the refresh rate (or a fraction thereof). VRR became popular because V-Sync adds latency and can depress framerates, as it effectively acts as a framerate limiter.
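
A toy example of the latency point, under assumed numbers: with V-Sync a finished frame waits for the next fixed refresh tick, while with VRR the panel refreshes as soon as the frame is ready.

```python
# Toy illustration of V-Sync wait time on a fixed-refresh display; the numbers
# are arbitrary examples, not measurements.
def vsync_wait_ms(frame_ready_ms: float, refresh_hz: float) -> float:
    """Time a finished frame sits waiting for the next refresh tick."""
    interval = 1000 / refresh_hz
    next_tick = (int(frame_ready_ms // interval) + 1) * interval
    return next_tick - frame_ready_ms

# A frame that completes 3 ms after the previous 60 Hz refresh tick:
print(f"V-Sync wait: ~{vsync_wait_ms(3.0, 60):.1f} ms")   # ~13.7 ms
print("VRR wait: ~0 ms, since the display refreshes when the frame is done")
```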

Dubbed "Cloud G-Sync", NVIDIA touts not only a screen tearing-free experience for GeForce Now thanks to variable refresh rate support, but also lower latency thanks to “varying the stream rate to the client, driving down total latency on Reflex-enabled games.” Prior to VRR’s debut on GeForce Now, users either had to enable V-Sync in-game, enable a stream-level V-Sync setting that had the benefit of not locking the game framerate, or accept screen tearing. GeForce Now Ultimate members will also be able to pair VRR with Reflex-powered 60 FPS and 120 FPS streaming modes.

According to NVIDIA’s technical documentation, variable refresh rate support on GeForce Now can work with both Mac and Windows PCs hooked up to a VRR-capable monitor. This includes G-Sync monitors on Windows, as well as VESA AdaptiveSync/FreeSync monitors, HDMI 2.1 VRR displays, and even Apple ProMotion displays, such as the panels built into their recent MacBook Pro laptops. The biggest compatibility hurdle at this time is actually on the GPU side of matters; Windows machines need an NVIDIA GPU to use VRR with GeForce Now. Intel and AMD GPUs are "not supported at this time."

Although G-SYNC originally came out in 2013 and GeForce Now has been available since 2015, the two never intersected until now. It’s not clear why NVIDIA waited so long to bring G-Sync to GeForce Now; the company’s original announcement merely states “newly improved cloud G-SYNC technology goes even further,” implying that it wasn’t possible before but doesn’t exactly explain why.

V-Color Has New RDIMM Octo-Kits For Threadripper 7000 CPUs: 768 GB Kits Starting at $4,840

V-Color has launched several EXPO-certified DDR5 RDIMM memory kits for AMD's Ryzen Threadripper 7000 Pro and non-Pro platforms. The new RDIMM memory kits, which only come in an eight-DIMM configuration, will enable workstation users to push the limits on the WRX90 platform with frequencies up to DDR5-7200 and memory kit capacities up to a staggering 768 GB (8 x 96 GB).

These are your typical run-of-the-mill modules without the fancy heatsinks and flashy RGB lighting. The recipe for the RDIMMs revolves around a 10-layer PCB paired with SK hynix's DRAM chips. And as the Threadripper platforms are all one DIMM per channel (1DPC) designs, V-Color's octo-kits are intended to populate all the memory slots on the WRX90 motherboard in one go.

V-Color is offering their RDIMM kits in several capacities and frequencies, with kit capacities ranging from 128 GB (8 x 16 GB) up to 768 GB (8 x 96 GB), while clockspeeds start at DDR5-5600 and top out at DDR5-7200.

Typical for RDIMM kits, the maximum frequency will vary depending on the memory kit capacity. There are two factors at play: binning costs, and the fact that achieving stability at faster frequencies with higher-capacity modules is more challenging for the processor. Ryzen Threadripper 7000 Pro and non-Pro chips officially support DDR5-5200 memory modules. Anything higher is overclocking, and stability depends on the quality of the processor's integrated memory controller (IMC). DDR5-7200 is only available on V-Color's 128 GB, 192 GB, and 256 GB memory kits. Meanwhile, the 512 GB and 768 GB memory kits top out at DDR5-6000.

V-Color DDR5 RDIMM Octo-Kit Specifications

Memory Kit Capacity | Configuration | Frequency | CAS Latency | Voltage | Price
768 GB | 8 x 96 GB | DDR5-5600 - DDR5-6000 | CL 36 | 1.25 V | $4,839.99 - $4,919.99
512 GB | 8 x 64 GB | DDR5-5600 - DDR5-6000 | CL 36 | 1.25 V | $3,429.99 - $3,509.99
384 GB | 8 x 48 GB | DDR5-6400 - DDR5-6800 | CL 32 - CL 34 | 1.40 V | $3,339.99 - $3,559.99
256 GB | 8 x 32 GB | DDR5-5600 - DDR5-7200 | CL 32 - CL 36 | 1.25 V - 1.40 V | $2,139.99 - $3,479.99
192 GB | 8 x 24 GB | DDR5-5600 - DDR5-7200 | CL 32 - CL 36 | 1.25 V - 1.40 V | $1,579.99 - $2,199.99
128 GB | 8 x 16 GB | DDR5-5600 - DDR5-7200 | CL 32 - CL 36 | 1.25 V - 1.40 V | $1,049.99 - $1,669.99

The DDR5-5600 and DDR5-6000 memory kits are the only ones rated to run at a relatively modest 1.25 V. The higher-end ones require 1.40 V due to the higher frequency and tighter memory timings. The memory timings on V-Color's RDIMM memory kits are decent, though they're far from rivaling premium mainstream DDR5 memory kits. The DDR5-5600 memory kit has 36-38-38-38-80 timings, whereas the DDR5-6000 and DDR5-6400 memory kits flaunt 32-39-39-102 timings. At the same time, V-Color binned the DDR5-6600 and DDR5-6800 memory kits for 34-46-46-92, and the DDR5-7000 and DDR5-7200 memory kits for 34-43-43-102 and 36-46-46-112, respectively.
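
For context on those numbers, converting a CAS latency into absolute time (CL x 2000 / data rate, in nanoseconds) shows why a higher CL at a higher data rate is not necessarily slower in practice; a quick sketch using the rated figures above:

```python
# Absolute CAS latency in nanoseconds: CL x 2000 / data rate (MT/s),
# since one memory clock covers two transfers on DDR memory.
def cas_latency_ns(cl: int, data_rate_mt_s: int) -> float:
    return cl * 2000 / data_rate_mt_s

print(round(cas_latency_ns(36, 5600), 2))  # DDR5-5600 CL36 -> ~12.86 ns
print(round(cas_latency_ns(32, 6400), 2))  # DDR5-6400 CL32 -> 10.0 ns
print(round(cas_latency_ns(36, 7200), 2))  # DDR5-7200 CL36 -> 10.0 ns
```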

V-Color's RDIMM products are overclocked memory kits with a limited lifetime warranty. They come with AMD EXPO support to facilitate one-click memory overclocking. The memory kits are built specifically for the WRX90 platform but should work on Intel platforms (your mileage will vary, of course). Regarding the QVL, V-Color has validated the brand's overclocked RDIMMs on the Asus Pro WS WRX90E-Sage SE and the ASRock WRX90 WS Evo, motherboards that cost over $1,000.

The 128 GB DDR5-5600 memory kit is the most affordable of the lot, with an MSRP of $1,049.99, whereas the 192 GB counterpart sells for $1,579.99. At the other end of the spectrum, the flagship 768 GB DDR5-6000 memory kit carries a hefty $4,919.99 price tag. V-Color's RDIMM memory kits are up for pre-order on the company's online store, and the vendor will ship orders on March 15. The memory kits will be available worldwide through official distribution partners on the same date.

Intel to Hold Webinar to Discuss Long-Term Vision for Foundry, Separating Fab and Design Reporting

As Intel prepares to move its fabs into its new Intel Foundry business, it will change the way it reports results in the coming months. To discuss the company's long-term vision and give investors a better understanding of how Intel's business will move forward with Intel Foundry and Intel Products groups, Intel plans to host a webinar on segment reporting on April 2, 2024.

"The webinar will discuss the longer-term vision for the foundry business and the importance of establishing a foundry-like relationship between Intel Foundry, Intel's manufacturing organization, and Intel Products, its product business units, to drive greater transparency and accountability," the description of the event reads.

The company plans to submit an 8-K form, revising its past financial reports to align with a new reporting framework, before the upcoming investor webinar. Starting with Q1 FY2024, Intel will disclose its financial outcomes using this new reporting structure.

One of the things that Intel will touch upon at the webinar is the financial and market performance of the Intel Foundry division. These things may not impress. It is likely that initially, Intel Foundry's business will have high costs, and the majority of orders will come from Intel itself (i.e., a significant but still relatively low market share). Meanwhile, Intel Foundry has to invest in advanced fab tools to prep its fabs for 20A and 18A, which drives its costs up, and this likely means losses.

Yet, it will take some time before Intel Foundry obtains revenue streams from major customers, such as Microsoft or the U.S. military. IF's financial numbers and market share may still not impress immediately, but this is normal at this stage. This is perhaps what Intel will communicate and discuss at its upcoming webinar.

In fact, although Intel fully expects its 18A (1.8nm-class) fabrication process to be ahead of its rivals in terms of power, performance, and area (PPA), the company's chief financial officer reiterated at a conference this week that it does not expect to win the bulk of any large customer's chip orders with this technology.

"We probably will not win anybody's major volume [with] 18A," said David Zisner, CFO of Intel, at the Morgan Stanley Technology, Media, and Telecom Conference (via SeekingAlpha). "We will win some smaller SKUs, and that is all we need, to be honest with you. That will be very significant to us, even though it seems maybe marginal in the marketplace, particularly if we can collect enough of these customers [developing high-performance compute chips]."

Intel's 18A fabrication process builds upon the company's 20A manufacturing technology (a 2nm-class node), which introduces RibbonFET gate-all-around transistors and the PowerVia backside power delivery network. In GAA transistors, horizontal channels are fully encased by gates. These channels are built using epitaxial growth and selective removal, enabling adjustments in width for enhanced performance or lower power use. As for the backside power delivery network (BS PDN), the technique moves power lines to the back of the wafer, separating them from the I/O wiring; this allows the power vias to be made thicker, reducing their resistance, which both increases transistor performance and lowers power consumption.

Both GAA transistors and BS PDN promise significant performance and power efficiency enhancements, which is good news for AI, HPC, and smartphone SoCs. Meanwhile, 18A promises a further 10% performance-per-watt improvement on top of 20A and its GAA innovations, so it should be quite competitive against TSMC's N3B and N3P.

"When it comes to the high-performance compute part of the market, that is really where we are starting to see a lot of our uptake," said Zisner. "The particular aspects of 18A with PowerVia and RibbonFET, combined with our just legacy of experience on high-performance compute, I think, makes us a really compelling partner for customers that are in that space and want to develop products."

Intel's 18A was designed to be a major foundry node, and consequently, its process design kit (PDK) is now available and the production technology is compatible with third-party electronic design automation (EDA) and simulation tools. However, Intel itself does not expect this process to be used for high-volume products of third parties. Even Microsoft is currently only slated to produce one chip on Intel's 18A.

Intel Foundry is a newcomer to the contract chipmaking market. As a result, Intel is planning for a multi-generational effort to break into the market, as the company will need to earn the confidence and business of third-party customers. As this happens, Intel Foundry expects to gain market share and reach profitability.

Sources: Intel, SeekingAlpha

SanDisk Professional PRO-BLADE Portable SSD Ecosystem Review

Western Digital unveiled the SanDisk Professional PRO-BLADE modular SSD ecosystem in mid-2022 to serve the needs of the professional market. Compact and sturdy NVMe drives (PRO-BLADE SSD Mag) that can be swapped across discrete bus-powered enclosures (PRO-BLADE TRANSPORT), and that are also compatible with a multi-bay reader (PRO-BLADE STATION), fit the requirements of multi-user / multi-site workflows in the content capture industry perfectly. Read on for a detailed look at the first-generation PRO-BLADE SSD Mags and the PRO-BLADE TRANSPORT enclosure. In addition to an evaluation of performance consistency, power consumption, and thermal profile, an analysis of the internals is also included.

Western Digital Issues Update on Company Split: CEOs for Post-Split Entities Announced

Now well in the midst of executing its plan to divide itself into separate hard drive and NAND businesses, Western Digital today offered a fresh update on the state of that split, and what the next steps are for the company. With the eventual goal of dividing the company into two independent, publicly traded companies, Western Digital is reporting that they have made significant progress in key transactional projects, and they are also announcing their initial leadership appointments for the post-separation businesses.

Western Digital's separation, announced on October 30, 2023, aims to create two focused companies: one for hard drives, and one for NAND flash memory and NAND-based products. This move is expected to speed up innovation and open new growth opportunities, according to Western Digital. Meanwhile, with separate capital structures, the operational efficiency of the two entities will be higher than that of the combined company, Western Digital's management claims.

Western Digital led the storage industry's consolidation by acquiring HGST and various SSD and flash companies in the early 2010s, followed by SanDisk in 2016 for NAND flash production. As a result, by the late 2010s the company had become a media-agnostic, vertically integrated storage technology company. However, it struggled to grow its revenue. The 3D NAND and SSD markets are highly competitive commodity markets, and as a result they tend to fluctuate with supply and demand. Meanwhile, demand for HDDs is declining, and offsetting decreasing unit sales with 3D NAND-based products and nearline hard drives proved challenging. Furthermore, to avoid competition with larger storage solutions providers like Dell, HPE, and IBM (which purchase Western Digital's HDDs, SSDs, and NAND memory), Western Digital had to divest its storage solutions business, which presented additional challenges.

As a result, Western Digital's HDD and NAND businesses have acted largely independently since late 2020, when it became apparent that the combined company had failed to become greater than the sum of its Western Digital and SanDisk parts. Considerable progress has been made in preparing for the separation, including establishing legal entities in 18 countries, preparing independent financial models, and finalizing preparations for regulatory filings. As a result, the company remains on track to finish the split in the second half of this year, according to Western Digital.

With regards to post-split leadership, David Goeckeler has been appointed as the chief executive designate for the NAND flash memory spinoff company. He expressed enthusiasm for the NAND business's potential in market growth and the development of new memory technologies.

"Today's announcement highlights the important steps we are making towards the completion of an extremely complex transaction that incorporates over a dozen countries and spans data storage technology brands for consumers to professional content creators to the world’s leading device OEMs and the largest cloud providers," said David Goeckeler, CEO of Western Digital. "I am pleased with the exceptional work the separation teams have done so far in creating a spin-ready foundation that will ensure a successful transition to independent, market-leading companies for our Flash and HDD businesses."

Meanwhile, Irving Tan, currently executive vice president of global operations, will assume the CEO role for the standalone HDD company, which will continue to operate under the Western Digital brand. It is unclear where Ashley Gorakhpurwalla, currently the head of WDC's HDD business unit, will end up, or if he'll even remain with the company at all.

"While both Western Digital's businesses will have the strategic focus and resources to pursue exciting opportunities in their respective markets once the separation is complete, the Flash business offers exciting possibilities with market growth potential and the emerging development of disruptive, new memory technologies," added Goeckeler. "I am definitely looking forward to what's next for the spinoff team."

JEDEC Publishes GDDR7 Memory Spec: Next-Gen Graphics Memory Adds Faster PAM3 Signaling & On-Die ECC

JEDEC on Tuesday published the official specifications for GDDR7 DRAM, the latest iteration of the long-standing memory standard for graphics cards and other GPU-powered devices. The newest generation of GDDR brings a combination of memory capacity and memory bandwidth gains, with the latter being driven primarily by the switch to PAM3 signaling on the memory bus. The latest graphics RAM standard also boosts the number of channels per DRAM chip, adds new interface training patterns, and brings in on-die ECC to maintain the effective reliability of the memory.

“JESD239 GDDR7 marks a substantial advancement in high-speed memory design,” said Mian Quddus, JEDEC Board of Directors Chairman. “With the shift to PAM3 signaling, the memory industry has a new path to extend the performance of GDDR devices and drive the ongoing evolution of graphics and various high-performance applications.”

GDDR7 has been in development for a few years now, with JEDEC members making the first disclosures around the memory technology about a year ago, when Cadence revealed the use of PAM3 encoding as part of their validation tools. Since then we've heard from multiple memory manufacturers that we should expect the final version of the memory to launch in 2024, with JEDEC's announcement essentially coming right on schedule.

As previously revealed, the biggest technical change with GDDR7 comes with the switch from binary non-return-to-zero (NRZ) encoding on the memory bus to three-level pulse amplitude modulation (PAM3) encoding. This change allows GDDR7 to transmit 3 bits over two cycles, 50% more data than GDDR6 can move at an identical clockspeed. As a result, GDDR7 can support higher overall data transfer rates, the critical component to making each generation of GDDR successively faster than its predecessor.
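As a back-of-the-envelope illustration of what that signaling change buys (a sketch of the arithmetic above, not of the JESD239 encoding itself):

```python
# Bits conveyed over two signaling cycles, per the encoding schemes described above.
NRZ_BITS_PER_2_CYCLES = 2    # binary NRZ: 1 bit per cycle
PAM3_BITS_PER_2_CYCLES = 3   # GDDR7 PAM3: 3 bits per 2 cycles

gain = PAM3_BITS_PER_2_CYCLES / NRZ_BITS_PER_2_CYCLES - 1
print(f"PAM3 carries {gain:.0%} more data than NRZ at the same clock")  # -> 50%
```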

GDDR Generations
                        | GDDR7                               | GDDR6X (Non-JEDEC) | GDDR6
B/W Per Pin             | 32 Gbps (Gen 1), 48 Gbps (Spec Max) | 24 Gbps (Shipping) | 24 Gbps (Sampling)
Chip Density            | 2 GB (16 Gb)                        | 2 GB (16 Gb)       | 2 GB (16 Gb)
Total B/W (256-bit bus) | 1024 GB/sec                         | 768 GB/sec         | 768 GB/sec
DRAM Voltage            | 1.2 V                               | 1.35 V             | 1.35 V
Data Rate               | QDR                                 | QDR                | QDR
Signaling               | PAM-3                               | PAM-4              | NRZ (Binary)
Maximum Density         | 64 Gb                               | 32 Gb              | 32 Gb
Packaging               | 266 FBGA                            | 180 FBGA           | 180 FBGA

The first generation of GDDR7 is expected to run at data rates around 32 Gbps per pin, and memory manufacturers have previously talked about rates up to 36 Gbps/pin as being easily attainable. However the GDDR7 standard itself leaves room for even higher data rates – up to 48 Gbps/pin – with JEDEC going so far as touting GDDR7 memory chips "reaching up to 192 GB/s [32b @ 48Gbps] per device" in their press release. Notably, this is a significantly higher increase in bandwidth than what PAM3 signaling brings on its own, which means there are multiple levels of enhancements within GDDR7's design.
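The bandwidth figures JEDEC quotes, and those in the table above, all fall out of the same simple math. The sketch below assumes the standard 32-bit device interface and a 256-bit card-level bus:

```python
def bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given per-pin data rate and bus width."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Single 32-bit GDDR7 device
print(bandwidth_gbs(32, 32))   # 128.0 GB/s at the initial 32 Gbps/pin
print(bandwidth_gbs(48, 32))   # 192.0 GB/s at the 48 Gbps/pin spec ceiling

# A hypothetical card with a 256-bit memory bus (8 devices)
print(bandwidth_gbs(32, 256))  # 1024.0 GB/s, matching the table above
```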

Digging deeper into the specification, JEDEC has also once again subdivided a single 32-bit GDDR memory chip into a larger number of channels. Whereas GDDR6 offered two 16-bit channels, GDDR7 expands this to four 8-bit channels. The distinction is somewhat arbitrary from an end-user's point of view – it's still a 32-bit chip operating at 32Gbps/pin regardless – but it has a great deal of impact on how the chip works internally. Especially as JEDEC has kept the 256-bit per channel prefetch of GDDR5 and GDDR6, making GDDR7 a 32n prefetch design.


GDDR Channel Architecture. Original GDDR6-era Diagram Courtesy Micron

The net impact of all of this is that, by halving the channel width but keeping the prefetch size the same, JEDEC has effectively doubled the amount of data that is prefetched per cycle of the DRAM cells. This is a pretty standard trick to extend the bandwidth of DRAM memory, and is essentially the same thing JEDEC did with GDDR6 in 2018. But it serves as a reminder that DRAM cells are still very slow (on the order of hundreds of MHz) and aren't getting any faster. So the only way to feed faster memory buses is by fetching ever-larger amounts of data in a single go.

The change in the number of channels per memory chip also has a minor impact on how multi-channel "clamshell" mode works for higher capacity memory configurations. Whereas GDDR6 accessed a single memory channel from each chip in a clamshell configuration, GDDR7 will access two channels – what JEDEC is calling two-channel mode. Specifically, this mode reads channels A and C from each chip. It is effectively identical to how clamshell mode behaved with GDDR6, and it means that while clamshell configurations remain supported in this latest generation of memory, there aren't any other tricks being employed to improve memory capacity beyond ever-increasing memory chip densities.

On that note, the GDDR7 standard officially adds support for 64Gbit DRAM devices, twice the 32Gbit max capacity of GDDR6/GDDR6X. Non-power-of-two capacities continue to be supported as well, allowing for 24Gbit and 48Gbit chips. Support for larger memory chips further pushes the maximum memory capacity of a theoretical high-end video card with a 384-bit memory bus to as high as 192GB of memory – a development that would no doubt be welcomed by datacenter operators in the era of large language AI models. With that said, however, we're still regularly seeing 16Gbit memory chips used on today's video cards, even though GDDR6 supports 32Gbit chips. Coupled with the fact that Samsung and Micron have already disclosed that their first generation of GDDR7 chips will also top out at 16Gbit/24Gbit respectively, it's safe to say that 64Gbit chips are pretty far off in the future right now (so don't sell off your 48GB cards quite yet).
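That 192 GB ceiling is simply the new maximum chip density multiplied out across a wide bus. A quick sketch, assuming a 384-bit bus populated in clamshell mode:

```python
BUS_WIDTH_BITS = 384
BITS_PER_CHIP_INTERFACE = 32
MAX_CHIP_DENSITY_GBIT = 64    # new GDDR7 maximum
CLAMSHELL_FACTOR = 2          # two chips share each 32-bit slice of the bus

chips = BUS_WIDTH_BITS // BITS_PER_CHIP_INTERFACE * CLAMSHELL_FACTOR  # 24 chips
capacity_gb = chips * MAX_CHIP_DENSITY_GBIT / 8
print(f"{chips} chips -> {capacity_gb:.0f} GB")   # 24 chips -> 192 GB
```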

For their latest generation of memory technology, JEDEC is also including several new-to-GDDR memory reliability features. Most notably, on-die ECC capabilities, similar to what we saw with the introduction of DDR5. And while we haven't been able to get an official comment from JEDEC on why they've opted to include ECC support now, its inclusion is not surprising given the reliability requirements for DDR5. In short, as memory chip densities have increased, it has become increasingly hard to yield a "perfect" die with no flaws; so adding on-chip ECC allows memory manufacturers to keep their chips operating reliably in the face of unavoidable errors.


This figure is reproduced, with permission, from JEDEC document JESD239, figure 124

Internally, the GDDR7 spec requires a minimum of 16 bits of parity data per 256 bits of user data (6.25%), with JEDEC giving an example implementation of a 9-bit single error correcting code (SEC) plus a 7-bit cyclic redundancy check (CRC). Overall, GDDR7 on-die ECC should be able to correct 100% of 1-bit errors, and detect 100% of 2-bit errors – falling to 99.3% in the rare case of 3-bit errors. Information about memory errors is also made available to the memory controller, via what JEDEC terms their on-die ECC transparency protocol. And while technically separate from ECC itself, GDDR7 also throws in another memory reliability feature with command address parity with command blocking (CAPARBLK), which is intended to improve the integrity of the command address bus.
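To make the overhead concrete, the parity budget described above can be checked with a few lines of arithmetic (a sketch of the numbers, not of the actual SEC/CRC encoder):

```python
DATA_BITS = 256
SEC_BITS = 9       # single error correcting code, per JEDEC's example implementation
CRC_BITS = 7       # cyclic redundancy check, per JEDEC's example implementation

parity_bits = SEC_BITS + CRC_BITS
overhead = parity_bits / DATA_BITS
print(f"{parity_bits} parity bits per {DATA_BITS} data bits = {overhead:.2%}")
# -> 16 parity bits per 256 data bits = 6.25%
```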

Otherwise, while the inclusion of on-die ECC isn't likely to have any more of an impact on consumer video cards than its inclusion had for DDR5 memory and consumer platforms there, it remains to be seen what this will mean for workstation and server video cards. The vendors there have used soft ECC on top of unprotected memory for several generations now; presumably this will remain the case for GDDR7 cards as well, but the regular use of soft ECC makes things a lot more flexible than in the CPU space.


This figure is reproduced, with permission, from JEDEC document JESD239, figure 152

Finally, GDDR7 is also introducing a suite of other reliability-related features, primarily related to helping PAM3 operation. This includes core independent LFSR (linear-feedback shift register) training patterns with eye masking and error counters. LFSR training patterns are used to test and adjust the interface (to ensure efficiency), eye masking evaluates signal quality, and error counters track the number of errors during training.
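For readers unfamiliar with the term, an LFSR is just a shift register whose next state is a linear function of its current state, producing a deterministic pseudo-random bit stream that both ends of the link can reproduce and compare during training. Below is a minimal Galois-style LFSR sketch in Python; the polynomial is a common textbook choice, not the one JESD239 mandates:

```python
def lfsr(seed, taps, width=16):
    """Galois-style linear-feedback shift register, yielding one bit per step.

    The tap positions below are illustrative; JESD239 defines its own
    training-pattern polynomials, which are not reproduced here.
    """
    state = seed
    mask = (1 << width) - 1
    while True:
        out = state & 1
        state >>= 1
        if out:
            for t in taps:
                state ^= 1 << (t - 1)
        state &= mask
        yield out

gen = lfsr(seed=0xACE1, taps=(16, 14, 13, 11))   # x^16 + x^14 + x^13 + x^11 + 1
pattern = [next(gen) for _ in range(16)]
print(pattern)   # pseudo-random but fully repeatable bit sequence
```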

Technical matters aside, this week's announcement includes statements of support from all of the usual players on both sides of the aisle, including AMD and NVIDIA, and the Micron/Samsung/SK hynix trifecta. It goes without saying that all parties are keen to use or sell GDDR7, given the memory capacity and bandwidth improvements it will bring – and especially in this era where anything aimed at the AI market is selling like hotcakes.

No specific products are being announced at this time, but with Samsung and Micron having previously announced their intentions to ship GDDR7 memory this year, we should see new memory (and new GPUs to pair it with) later this year.

JEDEC standards and publications are copyrighted by the JEDEC Solid State Technology Association.  All rights reserved.

Apple Launches M3-Based MacBook Air 13 and 15: 3nm CPU for the Masses

Apple on Monday introduced its new generation MacBook Air laptops based on the company's most-recent M3 system-on-chip (SoC). The new MacBook Air notebooks come in the same sizes as the previous models – 13.6 inches and 15.3 inches – with prices starting from $1,099 and $1,299 respectively.

The key improvement in Apple's 2024 MacBook Air laptops is of course the M3 processor. Fabbed on TSMC's N3B process, Apple's latest mainstream SoC was first launched late last year as part of the 2023 MacBook Pro lineup, and is now being brought down to the MacBook Air family. The vanilla M3 features four high-performance cores operating at up to 4.05 GHz, four energy-efficient cores, a 10-core GPU based on the latest graphics architecture (with dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading), and a new media engine with hardware-accelerated AV1 decoding.

MacBook Air Specifications
Model            | MBA 15 (2024) | MBA 13 (2024) | MBA 15 (2023) | MBA 13 (2022) | MBA 13 (2020)
CPU              | Apple M3: 4C/4T High-Perf + 4C/4T High-Eff | Apple M3: 4C/4T High-Perf + 4C/4T High-Eff | Apple M2: 4C/4T High-Perf + 4C/4T High-Eff | Apple M2: 4C/4T High-Perf + 4C/4T High-Eff | Apple M1: 4C/4T High-Perf + 4C/4T High-Eff
GPU              | Apple M3 Integrated (8 or 10 Cores) | Apple M3 Integrated (8 or 10 Cores) | Apple M2 Integrated (8 or 10 Cores) | Apple M2 Integrated (8 or 10 Cores) | Apple M1 Integrated (7 or 8 Cores)
Memory           | 8 - 24 GB LPDDR5-6400 | 8 - 24 GB LPDDR5-6400 | 8 - 24 GB LPDDR5-6400 | 8 - 24 GB LPDDR5-6400 | 8 - 16 GB LPDDR4X-4266
SSD              | 256 GB - 2 TB | 256 GB - 2 TB | 256 GB - 2 TB | 256 GB - 2 TB | 256 GB - 2 TB
I/O              | 2x USB4 Type-C w/Thunderbolt 3, 1x MagSafe 3, 3.5mm Audio, Touch ID | 2x USB4 Type-C w/Thunderbolt 3, 1x MagSafe 3, 3.5mm Audio, Touch ID | 2x USB4 Type-C w/Thunderbolt 3, 1x MagSafe 3, 3.5mm Audio, Touch ID | 2x USB4 Type-C w/Thunderbolt 3, 1x MagSafe 3, 3.5mm Audio, Touch ID | 2x USB4 Type-C w/Thunderbolt 3, 3.5mm Audio, Touch ID
Display          | 15.3-inch 2880x1864 IPS LCD, P3 with True Tone | 13.6-inch 2560x1664 IPS LCD, P3 with True Tone | 15.3-inch 2880x1864 IPS LCD, P3 with True Tone | 13.6-inch 2560x1664 IPS LCD, P3 with True Tone | 13.3-inch 2560x1600 IPS LCD, P3 with True Tone
Width            | 34.0 cm | 30.4 cm | 34.0 cm | 30.4 cm | 30.4 cm
Depth            | 23.7 cm | 21.5 cm | 23.7 cm | 21.5 cm | 21.2 cm
Height           | 1.1 cm | 1.1 cm | 1.1 cm | 1.1 cm | 0.41 - 1.61 cm
Weight           | 3.3 lbs (1.5 kg) | 2.7 lbs (1.22 kg) | 3.3 lbs (1.5 kg) | 2.7 lbs (1.22 kg) | 2.8 lbs (1.29 kg)
Battery Capacity | 66.5 Wh | 52.6 Wh | 66.5 Wh | 52.6 Wh | 49.9 Wh
Battery Life     | 15 - 18 Hours | 15 - 18 Hours | 15 - 18 Hours | 15 - 18 Hours | 15 - 18 Hours
Price            | $1299 | $1099 | $1299 | $1199 | $999

Like prior vanilla M-series SoCs, the M3 offers two display engines, allowing it to drive up to two displays. Normally this has meant one internal and one external display, but new to the M3-based 2024 MacBook Airs, the laptop can also drive two external 5K displays when the internal display is disabled (e.g., when the lid is closed).

With regards to performance, Apple is opting to compare the new Airs to the 2020 models with Apple's M1 SoC. The CPU is said to be up to 35% – 60% faster than the original M1 chip depending on the workload, but such comparisons should be taken with a grain of salt, as companies tend to overhype their biggest advantages. One thing to keep in mind is that since MacBook Airs come without active cooling, their sustained performance is typically lower than MacBook Pros running the same processor.

The SoC supports up to 24 GB of LPDDR5-6400 memory (featuring bandwidth of 100 GB/s), though entry-level MacBook Air models still feature only a diminutive 8 GB of RAM and a 256 GB SSD. More advanced (and usable) configurations offer 16 GB or 24 GB of memory and up to 2 TB of solid-state storage.
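That 100 GB/s figure is consistent with LPDDR5-6400 on a 128-bit memory interface; the bus width is an assumption here rather than something Apple quotes in the MacBook Air specifications, so treat the following as a back-of-the-envelope sketch:

```python
DATA_RATE_MTS = 6400    # LPDDR5-6400
BUS_WIDTH_BITS = 128    # assumed memory interface width of the base M-series SoCs

bandwidth_gbs = DATA_RATE_MTS * BUS_WIDTH_BITS / 8 / 1000
print(f"~{bandwidth_gbs:.1f} GB/s")   # ~102.4 GB/s, which Apple rounds to 100 GB/s
```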

Other improvements of Apple's 2024 MacBook Air laptops compared to their predecessors include Wi-Fi 6E support and an improved three-microphone array with enhanced voice clarity, voice isolation, and wide spectrum modes.

As for input/output capabilities, the new MacBook Air notebooks feature two Thunderbolt 3/USB4 Type-C ports, a MagSafe port for charging, a 3.5-mm jack for headsets, and a 1080p FaceTime HD camera.

The 2024 Apple MacBook Air comes in midnight, starlight, silver, and space gray colors. The 13.6-inch machine is equipped with a 52.6 Wh battery (the 15.3-inch model gets a 66.5 Wh unit) rated for up to 18 hours of video playback. The 13.6-inch machine is 0.44 inches (1.13 cm) thick and weighs 2.7 pounds (1.24 kilograms), whereas the 15.3-inch laptop is 0.45 inches (1.15 cm) thick and weighs 3.3 pounds (1.51 kilograms).

With the launch of its M3-based MacBook Airs, Apple will discontinue the M2-based MacBook Air 15, but will retain the M2-based MacBook Air 13 as its entry-level option, with prices now starting at $999.

Silicon Power PX10 Portable SSD Review: One Step Forward, Two Steps Back

Silicon Power announced the MS70 and PX10 Portable SSDs in late 2023. The company is well known for offering entry- and mid-range products at compelling price points, but the two products came with plenty of promises in the 1GBps-class category. The MS70 promised high storage density (up to 2TB in a compact thumb drive), while the PX10 targeted power users and professionals with performance consistency as the focus. Read on for a detailed look at the Silicon Power PX10 including an analysis of its internals, value proposition, and evaluation of its performance consistency, power consumption, and thermal profile.

SK Hynix Mulls 'Differentiated' HBM Memory Amid AI Frenzy

SK Hynix and AMD were at the forefront of the memory industry with the first generation of high bandwidth memory (HBM) back in 2013 – 2015, and SK Hynix is still leading this market in terms of share. In a bid to maintain and grow its position, SK Hynix has to adapt to the requirements of its customers, particularly in the AI space, and to do so it's mulling over how to make 'differentiated' HBM products for large customers.

"Developing customer-specific AI memory requires a new approach as the flexibility and scalability of the technology becomes critical," said Hoyoung Son, the head of Advanced Package Development at SK Hynix in the status of a vice president

When it comes to performance, HBM memory with its 1024-bit interface has evolved fairly fast: it started with a data transfer rate of 1 GT/s in 2014 – 2015 and has reached upwards of 9.2 GT/s – 10 GT/s with the recently introduced HBM3E memory devices. With HBM4, the memory is set to transition to a 2048-bit interface, which will ensure steady bandwidth improvements over HBM3E.
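The appeal of the wider HBM4 interface is easy to see with a quick bandwidth sketch: per-stack bandwidth is simply bus width times per-pin data rate, so doubling the interface doubles bandwidth even before pin speeds move. The HBM4 pin rate below is a placeholder for illustration, not a published figure:

```python
def stack_bandwidth_tbs(bus_width_bits: int, data_rate_gts: float) -> float:
    """Peak per-stack bandwidth in TB/s: bus width (bits) x data rate (GT/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gts / 8 / 1000

print(f"HBM3E, 1024-bit @ 9.6 GT/s: {stack_bandwidth_tbs(1024, 9.6):.2f} TB/s")  # ~1.23 TB/s
print(f"HBM4,  2048-bit @ 9.6 GT/s: {stack_bandwidth_tbs(2048, 9.6):.2f} TB/s")  # ~2.46 TB/s (hypothetical pin rate)
```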

But there are customers which may benefit from differentiated (or semi-custom) HBM-based solutions, according to the vice president.

"For implementing diverse AI, the characteristics of AI memory also need to become more varied," Hoyoung Son said in an interview with BusinessKorea. "Our goal is to have a variety of advanced packaging technologies capable of responding to these changes. We plan to provide differentiated solutions that can meet any customer needs."

With a 2048-bit interface, many (if not the vast majority) of HBM4 solutions will likely be custom or at least semi-custom, based on what we know from official and unofficial information about the upcoming standard. Some customers might want to keep using interposers (though this time they are going to get very expensive), while others will prefer to mount HBM4 modules directly on logic dies using direct bonding techniques, which are also expensive.

Making differentiated HBM offerings requires sophisticated packaging techniques, including (but certainly not limited to) SK Hynix's Advanced Mass Reflow Molded Underfill (MR-MUF) technology. Given the company's vast experience with HBM, it may well come up with something else, especially for differentiated offerings.

"For different types of AI to be realized, the characteristics of AI memory also need to be more diverse," the VP said. "Our goal is to have a range of advanced packaging technologies to respond to the shifting technological landscape. Looking ahead, we plan to provide differentiated solutions to meet all customer needs."

Sources: BusinessKorea, SK Hynix

The Cooler Master MWE V2 Gold 750W PSU Review: Effective, But Limited By Aging Platform

Cooler Master, renowned for its pioneering role in cooling technologies, has evolved into a key player in the PC components industry, extending its expertise to include cases and power supply units (PSUs). The company's current catalog is a testament to its commitment to diversity, featuring over 75 PC cases, 90 coolers, and 120 PSUs, all designed to cater to the evolving demands of tech enthusiasts and professionals alike.

This review focuses on the Cooler Master MWE Gold V2 750W PSU, a key offering in Cooler Master's power supply lineup that embodies the brand's vision of combining quality and value. The MWE Gold V2 series is engineered to offer solid performance and reliability at a price point that appeals to system builders and gamers looking for an entry-level to mid-range solution. As a result, the MWE Gold V2 750W has been a consistently popular offering within Cooler Master's catalog, often cycling in and out of stock depending on what sales are going on. This makes the PSU a bit harder to track down in North America than it is in Europe, and quick to vanish when it does show up.

Tenstorrent Licenses RISC-V CPU IP to Build 2nm AI Accelerator for Edge

Tenstorrent this week announced that it had signed a deal to license out its RISC-V CPU and AI processor IP to Japan's Leading-edge Semiconductor Technology Center (LSTC), which will use the technology to build its edge-focused AI accelerator. The most curious part of the announcement is that this accelerator will rely on a multi-chiplet design and the chiplets will be made by Japan's Rapidus on its 2nm fabrication process, and then will be packaged by the same company.

Under the terms of the agreement, Tenstorrent will license its datacenter-grade Ascalon general-purpose processor IP to LSTC and will help to implement the chiplet using Rapidus's 2nm fabrication process. Tenstorrent's Ascalon is a high-performance, out-of-order RISC-V CPU design with an eight-wide decode. The Ascalon core packs six ALUs, two FPUs, and two 256-bit vector units, and when combined with a 2nm-class process technology it promises to offer quite formidable performance.

Ascalon was developed by a team led by legendary CPU designer Jim Keller, the current chief executive of Tenstorrent, who previously worked on successful projects at AMD, Apple, Intel, and Tesla.

In addition to general-purpose CPU IP licensing, Tenstorrent will co-design 'the chip that will redefine AI performance in Japan.' This apparently means that Tenstorrent does not plan to license its proprietary Tensix cores (tailored for neural network inference and training) to LSTC, but will instead help design a bespoke AI accelerator aimed primarily at inference workloads.

"The joint effort by Tenstorrent and LSTC to create a chiplet-based edge AI accelerator represents a groundbreaking venture into the first cross-organizational chiplet development in semiconductor industry," said Wei-Han Lien, Chief Architect of Tenstorrent's RISC-V products. "The edge AI accelerator will incorporate LSTC's AI chiplet along with Tenstorrent's RISC-V and peripheral chiplet technology. This pioneering strategy harnesses the collective capabilities of both organizations to use the adaptable and efficient nature of chiplet technology to meet the increasing needs of AI applications at the edge."

Rapidus aims to start production of chips on its 2nm fabrication process, which is currently under development, sometime in 2027, at least a year behind TSMC and a couple of years behind Intel. Yet, if it does begin high-volume 2nm manufacturing in 2027, it will be a major breakthrough for Japan, which is trying hard to rejoin the ranks of the global semiconductor leaders.

Building an edge AI accelerator based on Tenstorrent's IP and Rapidus's 2nm-class production node is a big deal for LSTC, Tenstorrent, and Rapidus alike, as it is a testament to the technologies developed by all three companies.

"I am very pleased that this collaboration started as an actual project from the MOC conclusion with Tenstorrent last November," said Atsuyoshi Koike, president and CEO of Rapidus Corporation. "We will cooperate not only in the front-end process but also in the chiplet (back-end process), and work on as a leading example of our business model that realizes everything from design to back-end process in a shorter period of time ever."

Intel Brings vPro to 14th Gen Desktop and Core Ultra Mobile Platforms for Enterprise

As part of this week's MWC 2024 conference, Intel is announcing that it is adding support for its vPro security technologies to select 14th Generation Core series processors (Raptor Lake-R) and its latest Meteor Lake-based Core Ultra H and U series mobile processors. As we've seen from more platform launches than we care to count, Intel typically rolls out its vPro platforms sometime after releasing the full processor stack, including overclockable K series SKUs and lower-powered T series SKUs, and this year is no exception. Altogether, Intel is announcing vPro Essentials and vPro Enterprise support for several 14th Gen Core series SKUs and Intel Core Ultra mobile SKUs.

Intel's vPro security features are something we've covered previously. New this time around is Intel's Silicon Security Engine, which gives the chips the ability to authenticate the system's firmware. Intel also states that Intel Threat Detection within vPro has been enhanced with an additional layer for the NPU, using an xPU model (CPU/GPU/NPU) to help detect a variety of attacks and to allow 3rd party security software to run faster; Intel claims this is the only AI-based security deployment within a Windows PC to date. Both the full vPro Enterprise feature set and the cut-down vPro Essentials tier bring hardware-level security to select 14th Gen Core series processors, as well as to Intel's latest mobile-focused Meteor Lake processors with Arc graphics, which launched last year.

Intel 14th Gen vPro: Raptor Lake-R Gets Secured

As we've seen over the last few years with the global shift towards remote work due to the Coronavirus pandemic, the need for up-to-date security in businesses both small and large is as critical as it has ever been. Remote and in-office employees alike must have access to the latest software and hardware frameworks to ensure the security of vital data, and that's where Intel vPro comes in.

To quickly recap the current state of affairs, let's take a look at the two tiers of Intel vPro security available, vPro Essentials and vPro Enterprise, and how they differ.

Intel's vPro Essentials was first launched back in 2022 and is a subset of Intel's complete vPro package, which is now commonly known as vPro Enterprise. The vPro Essentials security package is, as per the name, tailored for small businesses, providing a solid foundation in security without penalizing performance. It integrates hardware-enhanced security features, ensuring hardware-level protection against emerging threats right from installation. It also utilizes real-time intelligence for workload optimization along with Intel's Threat Detection Technology, which adds a layer below the operating system that uses AI-based threat detection to mitigate OS-level threats and attacks.

Pivoting to Intel's vPro Enterprise security features, this tier is designed to meet the demands of large-scale business environments. It offers advanced security features and remote management capabilities, which are crucial for businesses operating with sensitive data and requiring high levels of cybersecurity. Additionally, the platform provides enhanced performance and reliability, making it suitable for intensive workloads and multitasking in a professional setting. Integrating these features from the vPro Enterprise platform ensures that large enterprises can maintain high productivity levels while ensuring data security and efficient IT management with the latest generations of processors, such as the Intel Core 14th Gen family.

Much like we saw when Intel announced their vPro for the 13th Gen Core series, it's worth noting that both the 14th and 13th Gen Core series are based on the same Raptor Lake architecture and, as such, are identical in every aspect bar base and turbo core frequencies.

Intel 14th Gen Core with vPro for Desktop (Raptor Lake-R)
AnandTech  | Cores (P+E/T) | P-Core Base/Turbo (MHz) | E-Core Base/Turbo (MHz) | L3 Cache (MB) | Base W | Turbo W | vPro Support (Ent/Ess) | Price ($)
i9-14900K  | 8+16/32 | 3200 / 6000 | 2400 / 4400 | 36 | 125 | 253 | Enterprise | $589
i9-14900   | 8+16/32 | 2000 / 5600 | 1500 / 4300 | 36 | 65  | 219 | Both       | $549
i9-14900T  | 8+16/32 | 1100 / 5500 | 800 / 4000  | 36 | 35  | 106 | Both       | $549
i7-14700K  | 8+12/28 | 3400 / 5600 | 2500 / 4300 | 33 | 125 | 253 | Enterprise | $409
i7-14700   | 8+12/28 | 2100 / 5400 | 1500 / 4200 | 33 | 65  | 219 | Both       | $384
i7-14700T  | 8+12/28 | 1300 / 5000 | 900 / 3700  | 33 | 35  | 106 | Both       | $384
i5-14600K  | 6+8/20  | 3500 / 5300 | 2600 / 4000 | 24 | 125 | 181 | Enterprise | $319
i5-14600   | 6+8/20  | 2700 / 5200 | 2000 / 3900 | 24 | 65  | 154 | Both       | $255
i5-14500   | 6+8/20  | 2600 / 5000 | 1900 / 3700 | 24 | 65  | 154 | Both       | $232
i5-14600T  | 6+8/20  | 1800 / 5100 | 1200 / 3600 | 24 | 35  | 92  | Both       | $255
i5-14500T  | 6+8/20  | 1700 / 4800 | 1200 / 3400 | 24 | 35  | 92  | Both       | $232

While Intel isn't technically launching any new chip SKUs (either desktop or mobile) with vPro support, the vPro desktop platform features are enabled through the use of specific motherboard chipsets, with the Q670 and W680 chipsets being the only ones to support vPro on 14th Gen. Unless one of the specific chips listed above is installed in a Q670 or W680 motherboard, neither vPro Essentials nor vPro Enterprise will be enabled.

As with the previous 13th Gen Core series family (Raptor Lake), the 14th Gen, which is a direct refresh of those chips, follows a similar pattern. Certain SKUs from the 14th Gen family support only the full-fledged vPro Enterprise, namely the Core i5-14600K, the Core i7-14700K, and the flagship Core i9-14900K. Intel's vPro Enterprise security features are supported on both Q670 and W680 motherboards, giving users more choice in which board they opt for.

The rest of the above Intel 14th Gen Core series stack, including the chips without a letter suffix, e.g., the Core i5-14600, as well as the T series parts, which are optimized for efficient workloads with a lower TDP than the rest of the stack, supports both vPro Enterprise and vPro Essentials. This includes two processors from the Core i9 family, the Core i9-14900 and Core i9-14900T; two from the i7 series, the Core i7-14700 and Core i7-14700T; and four from the i5 series, the Core i5-14600, Core i5-14500, Core i5-14600T, and Core i5-14500T.


The ASRock Industrial IMB-X1231 W680 mini-ITX motherboard supports vPro Enterprise and Essentials

For the non-K processors mentioned above, the level of vPro support on offer depends on the motherboard chipset: a Q670 board allows users to opt for Intel's cut-down vPro Essentials feature set, while Intel states that either a Q670 or a W680 board enables the full vPro Enterprise feature set on the Enterprise-capable chips, including the Core i9-14900K, the Core i7-14700K, and the Core i5-14600K. Outside of this, none of the 14th Gen SKUs with the KF (unlocked, no iGPU) or F (no iGPU) monikers are listed with support for vPro.

Intel Meteor Lake with vPro: Core Ultra H and U Series get Varied vPro Support

Further to the Intel 14th Gen Core series for desktops, Intel has also enabled vPro support for their latest Meteor Lake-based Core Ultra H and U series mobile processors. Unlike the desktop platform for vPro, things are a little different in the mobile space, as Intel offers vPro on their mobile SKUs, either with vPro Enterprise or vPro Essentials, not both.

Intel Core Ultra H and U-Series Processors with vPro (Meteor Lake)
AnandTech          | Cores (P+E+LP/T) | P-Core Turbo Freq | E-Core Turbo Freq | GPU        | GPU Freq | L3 Cache (MB) | vPro Support (Ent/Ess) | Base TDP | Turbo TDP
Core Ultra 9 185H  | 6+8+2/22 | 5100 | 3800 | Arc Xe (8) | 2350 | 24 | Enterprise | 45 W | 115 W
Core Ultra 7 165H  | 6+8+2/22 | 5000 | 3800 | Arc Xe (8) | 2300 | 24 | Enterprise | 28 W | 64/115 W
Core Ultra 7 155H  | 6+8+2/22 | 4800 | 3800 | Arc Xe (8) | 2250 | 24 | Essentials | 28 W | 64/115 W
Core Ultra 7 165U  | 2+8+2/14 | 4900 | 3800 | Arc Xe (4) | 2000 | 12 | Enterprise | 15 W | 57 W
Core Ultra 7 164U  | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1800 | 12 | Enterprise | 9 W  | 30 W
Core Ultra 7 155U  | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1950 | 12 | Essentials | 15 W | 57 W
Core Ultra 5 135H  | 4+8+2/18 | 4600 | 3600 | Arc Xe (7) | 2200 | 18 | Enterprise | 28 W | 64/115 W
Core Ultra 5 125H  | 4+8+2/18 | 4500 | 3600 | Arc Xe (7) | 2200 | 18 | Essentials | 28 W | 64/115 W
Core Ultra 5 135U  | 2+8+2/14 | 4400 | 3600 | Arc Xe (4) | 1900 | 12 | Enterprise | 15 W | 57 W
Core Ultra 5 134U  | 2+8+2/14 | 4400 | 3800 | Arc Xe (4) | 1750 | 12 | Enterprise | 9 W  | 30 W
Core Ultra 5 125U  | 2+8+2/14 | 4300 | 3600 | Arc Xe (4) | 1850 | 12 | Essentials | 15 W | 57 W

The above table highlights not just the specifications of each Core Ultra 9, 7, and 5 SKU, but also denotes which model gets what level of vPro support. Starting at the top, the Core Ultra 9 185H, the current Meteor Lake mobile flagship, supports vPro Enterprise. Along with the top-tier SKUs from the Core Ultra 7 and 5 families, the Core Ultra 7 165H and the Core Ultra 5 135H, other chips with vPro Enterprise support include the Core Ultra 7 165U and Core Ultra 7 164U, as well as the Core Ultra 5 135U and Core Ultra 5 134U.

Intel's other Meteor Lake chips, including the Core Ultra 7 155H, the Core Ultra 7 155U, the Core Ultra 5 125H, and the Core Ultra 5 125U, only come with support for Intel's vPro Essentials features and not vPro Enterprise. This represents a slight 'dropping of the ball' from Intel, something we highlighted in our Intel 13th Gen Core gets vPro piece last year.

Intel vPro Support Announcement With No New Hardware, Why Announce Later?

It is worth noting that Intel's announcement of vPro support for its first wave of Meteor Lake Core Ultra SKUs isn't entirely new; Intel did highlight that Meteor Lake would support vPro in its Series 1 Product Brief dated 12/20/2023. Intel's formal announcement is more about which SKU gets which level of support, and we feel this could pose problems for users who have already purchased Core Ultra series notebooks for business and enterprise use. Multiple retail listings, including those at Newegg and directly from HP, make little to no mention of vPro whatsoever.

This could mean that a user has purchased a notebook with, say, a Core Ultra 5 125H (vPro Essentials) for use within an SME, or that an SME has bought such machines in bulk, without being aware that the chip doesn't support vPro Enterprise, whose additional security features they could benefit from both personally and from a business standpoint. We reached out to Intel, and the company sent us the following statement.

"Since we are launching vPro powered by Intel Core Ultra & Intel Core 14th Gen this week, prospective buyers will begin seeing the relevant system information on OEM and enterprise retail partner (eg. CDW) websites in the weeks ahead. This will include information on whether a system is equipped with vPro Enterprise or Essentials so that they can purchase the right system for their compute needs."

Samsung Launches 12-Hi 36GB HBM3E Memory Stacks with 10 GT/s Speed

Samsung announced late on Monday the completion of the development of its 12-Hi 36 GB HBM3E memory stacks, just hours after Micron said it had kicked off mass production of its 8-Hi 24 GB HBM3E memory products. The new memory packages, codenamed Shinebolt, increase peak bandwidth and capacity compared to their predecessors, codenamed Icebolt, by over 50% and are currently the world's fastest memory devices.

As the description suggests, Samsung's Shinebolt 12-Hi 36 GB HBM3E stacks pack twelve 24 Gb memory devices on top of a logic die featuring a 1024-bit interface. The new 36 GB HBM3E memory modules feature a data transfer rate of 10 GT/s and thus offer a peak bandwidth of 1.28 TB/s per stack, the industry's highest per-device (or rather per-module) memory bandwidth.
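Both headline numbers follow directly from the stack configuration described above (a quick sanity check of the arithmetic, not Samsung's own math):

```python
DIES_PER_STACK = 12
DIE_DENSITY_GBIT = 24
INTERFACE_BITS = 1024
DATA_RATE_GTS = 10

capacity_gb = DIES_PER_STACK * DIE_DENSITY_GBIT / 8
bandwidth_tbs = INTERFACE_BITS * DATA_RATE_GTS / 8 / 1000
print(f"{capacity_gb:.0f} GB per stack, {bandwidth_tbs:.2f} TB/s per stack")
# -> 36 GB per stack, 1.28 TB/s per stack
```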

Meanwhile, keep in mind that developers of HBM-supporting processors tend to be cautious, so they will likely run Samsung's HBM3E at somewhat lower data transfer rates, partly because of power consumption and partly to ensure ultimate stability for artificial intelligence (AI) and high-performance computing (HPC) applications.

Samsung HBM Memory Generations
                             | HBM3E (Shinebolt) | HBM3 (Icebolt) | HBM2E (Flashbolt) | HBM2 (Aquabolt)
Max Capacity                 | 36 GB      | 24 GB      | 16 GB      | 8 GB
Max Bandwidth Per Pin        | 9.8 Gb/s   | 6.4 Gb/s   | 3.6 Gb/s   | 2.0 Gb/s
Number of DRAM ICs per Stack | 12         | 12         | 8          | 8
Effective Bus Width          | 1024-bit   | 1024-bit   | 1024-bit   | 1024-bit
Voltage                      | ?          | 1.1 V      | 1.2 V      | 1.2 V
Bandwidth per Stack          | 1.225 TB/s | 819.2 GB/s | 460.8 GB/s | 256 GB/s

To make its Shinebolt 12-Hi 36 GB HBM3E memory stacks, Samsung had to use several advanced technologies. First, the 36 GB HBM3E memory products are based on memory devices made on Samsung's 4th generation 10nm-class (14nm) DRAM fabrication technology, which uses extreme ultraviolet (EUV) lithography.

Secondly, to ensure that its 12-Hi HBM3E stacks have the same z-height as 8-Hi HBM3 products, Samsung used its advanced thermal compression non-conductive film (TC NCF), which allowed it to achieve the industry's smallest gap between memory devices at seven micrometers (7 µm). By shrinking the gaps between DRAM dies, Samsung increases vertical density and mitigates die warping. Furthermore, Samsung uses bumps of various sizes between the DRAM ICs: smaller bumps are used in areas for signaling, while larger ones are placed in spots that require heat dissipation, which improves thermal management.

Samsung estimates that its 12-Hi HBM3E 36 GB modules can increase the average speed of AI training by 34% and expand the number of simultaneous users of inference services by more than 11.5 times, though the company has not elaborated on the size of the LLM used for these estimates.

Samsung has already begun providing samples of the HBM3E 12H to customers, with mass production scheduled to commence in the first half of this year.

Source: Samsung

Micron Kicks Off Production of HBM3E Memory

Micron Technology on Monday said that it had initiated volume production of its HBM3E memory. The company's HBM3E known good stack dies (KGSDs) will be used for Nvidia's H200 compute GPU for artificial intelligence (AI) and high-performance computing (HPC) applications, which will ship in the second quarter of 2024.

Micron has announced it is mass-producing 24 GB 8-Hi HBM3E devices with a data transfer rate of 9.2 GT/s and a peak memory bandwidth of over 1.2 TB/s per device. Compared to HBM3, HBM3E increases data transfer rate and peak memory bandwidth by a whopping 44%, which is particularly important for bandwidth-hungry processors like Nvidia's H200.

Nvidia's H200 product relies on the Hopper architecture and offers the same computing performance as the H100. Meanwhile, it is equipped with 141 GB of HBM3E memory featuring bandwidth of up to 4.8 TB/s, a significant upgrade from 80 GB of HBM3 and up to 3.35 TB/s bandwidth in the case of the H100.
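Working backwards from the H200 figures illustrates the point made in the Samsung announcement above about processors running HBM below its rated speed. The six-stack configuration is an assumption for this sketch; Micron's announcement does not state it:

```python
TOTAL_BANDWIDTH_TBS = 4.8
STACKS = 6                      # assumed HBM3E stack count for the H200
INTERFACE_BITS = 1024

per_stack_gbs = TOTAL_BANDWIDTH_TBS * 1000 / STACKS          # 800 GB/s per stack
implied_pin_rate_gts = per_stack_gbs * 8 / INTERFACE_BITS    # ~6.25 GT/s per pin
print(f"~{per_stack_gbs:.0f} GB/s per stack, ~{implied_pin_rate_gts:.2f} GT/s per pin")
# Well below the 9.2 GT/s the devices are rated for.
```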

Micron's memory roadmap for AI is further solidified with the upcoming release of a 36 GB 12-Hi HBM3E product in March 2024. Meanwhile, it remains to be seen where those devices will be used.

Micron uses its 1β (1-beta) process technology to produce its HBM3E, a significant milestone for the company as it is deploying its latest production node for data center-grade products, a testament to the maturity of the manufacturing technology.

Starting mass production of HBM3E memory ahead of competitors SK Hynix and Samsung is a significant achievement for Micron, which currently holds a 10% market share in the HBM sector. This move is crucial for the company, as it allows Micron to introduce a premium product earlier than its rivals, potentially increasing its revenue and profit margins while gaining a larger market share.

"Micron is delivering a trifecta with this HBM3E milestone: time-to-market leadership, best-in-class industry performance, and a differentiated power efficiency profile," said Sumit Sadana, executive vice president and chief business officer at Micron Technology. "AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well-positioned to support the significant AI growth ahead through our industry-leading HBM3E and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications."

Source: Micron

Intel Previews Sierra Forest with 288 E-Cores, Announces Granite Rapids-D for 2025 Launch at MWC 2024

At MWC 2024, Intel confirmed that Granite Rapids-D, the successor to its Ice Lake-D processors, will come to market sometime in 2025. Furthermore, Intel also provided an update on the 6th Gen Xeon family, codenamed Sierra Forest, which is set to launch later this year with up to 288 cores and is designed to give vRAN network operators improved per-rack performance for 5G workloads.

These chips are designed for handling infrastructure, applications, and AI workloads and aim to capitalize on current and future AI and automation opportunities, enhancing operational efficiency and ownership costs in next-gen applications and reflecting Intel's vision of integrating 'AI Everywhere' across various infrastructures.

Intel Sierra Forest: Up to 288 Efficiency Cores, Set for 2H 2024

The first of Intel's announcements at MWC 2024 focuses on its upcoming Sierra Forest platform, which is scheduled for the first half of 2024. Initially announced in February 2022 during Intel's Investor Meeting, Sierra Forest is part of Intel's decision to split its server roadmap into solutions featuring only performance (P) cores or only efficiency (E) cores. We already know that Sierra Forest features a full E-core architecture designed for maximum efficiency in scale-out, cloud-native, and containerized environments.

These chips utilize CPU chiplets built on the Intel 3 process alongside twin I/O chiplets based on the Intel 7 node. This combination allows for a scalable architecture, which can accommodate increasing core counts by adding more chiplets, optimizing performance for complex computing environments.

Sierra Forest, Intel's all-E-core Xeon processor family, is anticipated to significantly enhance power efficiency with up to 288 E-cores per socket. Intel also claims that Sierra Forest is expected to deliver 2.7 times the performance-per-rack of an unspecified platform from 2021; this could be either Ice Lake or Cascade Lake, but Intel didn't say which.

Additionally, Intel is promising savings of up to 30% in Infrastructure Power Management with Sierra Forest as their Infrastructure Power Manager (IPM) application is now available commercially for 5G cores. Power manageability and efficiency are growing challenges for network operators, so IPM is designed to allow network operators to optimize energy efficiency and TCO savings.

Intel also addressed vRAN, which is vital for modern mobile networks, as many operators are forgoing dedicated hardware and instead leaning towards virtualized radio access networks (vRANs). Using vRAN Boost, an integrated accelerator within its Xeon processors, Intel states that 4th Gen Xeon should be able to reduce power consumption by around 20% while doubling the available network capacity.

Intel's push for 'AI Everywhere' is also a constant focus here, with AI's role in vRAN management becoming more crucial. Intel has announced the vRAN AI Developer Kit, which is available to select partners. This allows partners and 5G network providers to develop AI models to optimize for vRAN applications, tailor their vRAN-based functions to more use cases, and adapt to changes within those scenarios.

Intel Granite Rapids-D: Coming in 2025 For Edge Solutions

Intel's Granite Rapids-D, designed for edge solutions, is set to bolster Intel's role in virtualized radio access network (vRAN) workloads in 2025. Intel also promises marked efficiency enhancements and vRAN Boost optimizations similar to those expected on Sierra Forest. Set to follow on from the current Ice Lake-D parts for the edge, Granite Rapids-D is expected to use the performance (P) cores from the Granite Rapids server parts with a V/F curve optimized for the lower-powered edge platform. As outlined by Intel, the previous 4th Gen Xeon platform effectively doubled vRAN capacity, enhancing network capabilities while reducing power consumption by up to 20%.

Granite Rapids-D aims to further these advancements, utilizing Intel AVX for vRAN and integrated Intel vRAN Boost acceleration, thereby offering substantial cost and performance benefits on a global scale. While Intel hasn't provided a specific date (or month) of when we can expect to see Granite Rapids-D in 2025, Intel is currently in the process of sampling these next-gen Xeon-D processors with partners, aiming to ensure a market-ready platform at launch.


AMD Fixed the STAPM Throttling Issue, So We Retested The Ryzen 7 8700G and Ryzen 5 8600G

When we initially reviewed AMD's latest Ryzen 8000G APUs last month, the Ryzen 7 8700G and Ryzen 5 8600G, we became aware of an issue that caused the APUs to throttle after a few minutes of load. This posed a problem for a couple of reasons: first, it compromised our data, which no longer reflected the true capabilities of the processors; and second, it highlighted a mobile feature that AMD forgot to disable when carrying its Phoenix silicon (Ryzen 7040 series) over to the desktop.

We updated the data in our review of the Ryzen 7 8700G and Ryzen 5 8600G to reflect performance with STAPM on the initial firmware and with STAPM removed with the latest firmware. Our updated and full review can be accessed by clicking the link below:

As we highlighted in our Ryzen 8000G APU STAPM Throttling article, AMD, through AM5 motherboard vendors such as ASUS, has rolled out updated firmware that removes the STAPM limitation. To quickly recap, Skin Temperature-Aware Power Management (STAPM) is a feature AMD introduced in 2014 for its mobile processors. It extends on-die power management by considering both the processor's internal temperatures, taken by on-chip thermal diodes, and the laptop's surface temperature (i.e., the skin temperature).

The aim of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself. The fundamental issue with STAPM in the case of the Ryzen 8000G APUs, including the Ryzen 7 8700G and Ryzen 5 8600G, is that these are mobile processors packaged into a format for use with the AM5 desktop platform. As a desktop platform is built into a chassis that isn't placed on a user's lap, the STAPM feature becomes irrelevant.

As we saw when we ran a gaming load over a prolonged period on the Ryzen 7 8700G with the launch firmware, we hit power throttling (STAPM) after around 3 minutes. As the above chart shows, power dropped from a sustained 83-84 W down to around 65 W, a drop of roughly 22%. While we know Zen 4 is a very efficient architecture at lower power levels, overall performance will drop once this limit is hit. Unfortunately, AMD forgot to remove the STAPM limits when transitioning Phoenix to the AM5 platform.
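For reference, the throttling figure quoted above works out as follows (simple arithmetic on the package-power numbers, nothing more):

```python
stapm_w = 65                    # post-throttle package power, W
for sustained_w in (83, 84):    # pre-throttle package power range, W
    drop = (sustained_w - stapm_w) / sustained_w
    print(f"{sustained_w} W -> {stapm_w} W is a ~{drop:.0%} drop")
# 83 W -> 65 W is a ~22% drop; 84 W -> 65 W is a ~23% drop
```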

Retesting the same game (F1 2023) at the same settings (720p High) with the firmware highlighting that STAPM had been removed, we can see that we aren't experiencing any of the power throttling we initially saw. We can see power is sustained for over 10 minutes of testing (we did test for double this), and we saw no drops in package power, at least not from anything related to STAPM. This means for users on the latest firmware on whatever AM5 motherboard is being used, power and, ultimately, performance remain consistent with what the Ryzen 7 8700G should have been getting at launch.

The key question is, does removing the STAPM impact our initial results in our review of the Ryzen 7 8700G and Ryzen 5 8600G? And if so, by how much, or if at all? We added the new data to our review of the Ryzen 7 8700G and Ryzen 5 8600G but kept the initial results so that users can see if there are any differences in performance. Ultimately, benchmark runs are limited to the time it takes to run them, but in real-world scenarios, tasks such as video rendering and longer sustained loads are more likely to show gains in performance. After all, a drop of 22% in power is considerable, especially over a task that could take an hour.

(4-1d) Blender 3.6: Pabellon Barcelona (CPU Only)

Using one of our longer benchmarks, Blender 3.6, to highlight where performance gains are notable with the STAPM limitation removed, we saw an increase in performance of around 7.5% on the Ryzen 7 8700G on the latest firmware. In the same benchmark, we saw an increase of around 4% on the Ryzen 5 8600G.

Over all of the Blender 3.6 tests in the rendering section of our CPU performance suite, performance gains hovered between 2% and 4.4% on the Ryzen 5 8600G, and between 5% and 7.5% on the Ryzen 7 8700G. This isn't really free performance; it's the performance that should have been there to begin with at launch.

IGP World of Tanks - 768p Min - Average FPS

Looking at how STAPM affected our initial data, we can see that the difference in World of Tanks at 768p Minimum settings was marginal at best, around 1%. Given how CPU-intensive World of Tanks is, and combining this with integrated graphics, the AMD Ryzen APUs (5000G and 8000G) both shine compared to Intel's integrated UHD Graphics in gaming. Since gaming benchmarks are typically time-limited runs, it's harder to identify performance gains there. The key takeaway is that with the STAPM limitation removed, performance shouldn't drop over sustained periods, so our figures above and our updated review data aren't compromised.

(i-3) Total War Warhammer 3 - 1440p Ultra - Average FPS

Regarding gaming with a discrete graphics card, we saw no drastic changes in performance, as highlighted by our Total War Warhammer 3 at 1440p Ultra benchmark. Across the board, in our discrete graphics results with both the Ryzen 7 8700G and the Ryzen 5 8600G, we saw nothing but marginal differences in performance (less than 1%). As we've mentioned, removing the STAPM limitations doesn't necessarily improve performance. Still, it allows the APUs to keep the same performance level for sustained periods, which is how it should have been at launch. With STAPM applied as with the initial firmware at launch on AM5 motherboards, power would drop by around 22%, limiting the full performance capability over prolonged periods.

As we've mentioned, we have updated our full review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs to reflect the latest data gathered from testing on the latest firmware. In any case, we can fully confirm that the STAPM issue has been fixed and that performance is as it should be on both chips.

You can access all of our updated data in our review of the Ryzen 7 8700G and Ryzen 5 8600G by clicking the link below.

AMD CEO Dr. Lisa Su to Deliver Opening Keynote at Computex 2024

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, announced today that Dr. Lisa Su, AMD's chief executive officer, will give the trade show's Opening Keynote. Su's speech is set for the morning of June 3, 2024, shortly before the formal start of the show. According to AMD, the keynote talk will be "highlighting the next generation of AMD products enabling new experiences and breakthrough AI capabilities from the cloud to the edge, PCs and intelligent end devices."

This year's Computex is focused on six key areas: AI computing, Advanced Connectivity, Future Mobility, Immersive Reality, Sustainability, and Innovations. Being a leading developer of CPUs, AI and HPC GPUs, consumer GPUs, and DPUs, AMD is well positioned to address most of these topics.

As AMD is already mid-cycle on most of their product architectures, the company's most recent public roadmaps have them set to deliver major new CPU and GPU architectures before the end of 2024 with Zen 5 CPUs and RDNA 4 GPUs, respectively. AMD has not previously given any finer guidance on when in the year to expect this hardware, though AMD's overall plans for 2024 are notably more aggressive than the start of their last architecture cycle in 2022. Of note, the company has previously indicated that it intends to launch all 3 flavors of the Zen 5 architecture this year – not just the basic core, but also Zen 5c and Zen 5 with V-Cache – as well as a new mobile SoC (Strix Point). By comparison, it took AMD well into 2023 to do the same with Zen 4 after starting with a fall 2022 launch for those first products.


AMD 2022 Financial Analyst Day CPU Core Roadmap

This upcoming keynote will be Lisa Su's third Computex keynote after her speeches at Computex 2019 and Computex 2022. In both cases she also announced upcoming AMD products.

In 2019, she showcased performance improvements of then upcoming 3rd Generation Ryzen desktop processors and 7nm EPYC datacenter processors. Lisa Su also highlighted AMD's advancements in 7nm process technology, showcasing the world's first 7nm gaming GPU, the Radeon VII, and the first 7nm datacenter GPU, the Radeon Instinct MI60.

In 2022, the head of AMD offered a sneak peek at the then-upcoming Ryzen 7000-series desktop processors based on the Zen 4 architecture, promising significant performance improvements. She also teased the next generation of Radeon RX 7000-series GPUs with the RDNA 3 architecture.

Arm and Samsung to Co-Develop 2nm GAA-Optimized Cortex Cores

Arm and Samsung this week announced their joint design-technology co-optimization (DTCO) program for Arm's next-generation Cortex general-purpose CPU cores as well as Samsung's next-generation process technology featuring gate-all-around (GAA) multi-bridge-channel field-effect transistors (MBCFETs). 

"Optimizing Cortex-X and Cortex-A processors on the latest Samsung process node underscores our shared vision to redefine what’s possible in mobile computing, and we look forward to continuing to push boundaries to meet the relentless performance and efficiency demands of the AI era," said Chris Bergey, SVP and GM, Client Business at Arm.

Under the program, the companies aim to deliver tailored versions of Cortex-A and Cortex-X cores made on Samsung's 2 nm-class process technology for various applications, including smartphones, datacenters, infrastructure, and customized system-on-chips. For now, the companies do not say whether they aim to co-optimize Arm's Cortex cores only for Samsung's first-generation 2 nm production node, SF2 (due in 2025), or for all SF2-series technologies, including SF2 and SF2P.

GAA nanosheet transistors with channels that are surrounded by gates on all four sides have a lot of options for optimization. For example, nanosheet channels can be widened to increase drive current and boost performance or shrunken to reduce power consumption and cost. Depending on the application, Arm and Samsung will have plenty of design choices.
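To put rough numbers on that trade-off, a common first-order approximation is that drive current scales with the transistor's effective channel width, which for a stacked nanosheet device is roughly the sheet perimeter (twice the sum of sheet width and thickness) multiplied by the number of stacked sheets. The Python sketch below only illustrates that scaling relationship; the sheet dimensions are invented for the example and are not Samsung SF2 parameters:

def effective_width_nm(sheet_width_nm, sheet_thickness_nm, num_sheets):
    # First-order effective channel width of a stacked-nanosheet (GAA) device:
    # the gate wraps all four sides, so each sheet contributes its full perimeter.
    return num_sheets * 2.0 * (sheet_width_nm + sheet_thickness_nm)

# Invented example: widening 5 nm-thick sheets from 30 nm to 45 nm
# (three sheets per stack) raises W_eff, and to first order drive current, by ~43%.
narrow = effective_width_nm(30, 5, 3)   # 210 nm
wide = effective_width_nm(45, 5, 3)     # 300 nm
print(f"~{(wide / narrow - 1) * 100:.0f}% more drive current, to first order")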

Keeping in mind that we are talking about Cortex-A cores aimed at a wide variety of applications as well as Cortex-X cores designed specifically to deliver maximum performance, the results of the collaborative work promise to be quite diverse. In particular, we are looking forward to Cortex-X cores tuned for maximum performance, Cortex-A cores balancing performance and power consumption, and Cortex-A cores tuned for reduced power consumption.

Nowadays collaboration between IP (intellectual property) developers, such as Arm, and foundries, such as Samsung Foundry, is essential to maximize performance, reduce power consumption, and optimize transistor density. The joint work with Arm will ensure that Samsung's foundry partners will have access to processor cores that can deliver exactly what they need.

IFS Reborn as Intel Foundry: Expanded Foundry Business Adds 14A Process To Roadmap

5 nodes in 4 years. This is what Intel CEO Pat Gelsinger promised Intel’s customers, investors, and the world at large back in 2021, when he laid out Intel’s ambitious plan to regain leadership in the foundry space. After losing Intel’s long-held spot as the top fab in the world thanks to compounding delays in the 2010s, the then-new Intel CEO bucked calls from investors to sell off Intel’s fabs, instead going all-in on fabs like Intel never has before, with the aim of becoming a top-to-bottom foundry service for the entire world to use.

Now, a bit over two years later, Intel is just starting to see the first fruits of that aggressive roadmap, both in terms of technologies and customers. Products based on Intel’s first EUV-based node, Intel 4, are available in the market today, and its high-volume counterpart, Intel 3, is ready as well. Meanwhile, Intel is putting the final touches on its first gate-all-around (GAAFET)/RibbonFET nodes for 2024 and 2025. It’s a heady time for the company, but it’s also a critical one. Intel has reached the point where they need to deliver on those promises – and they need to do so in a very visible way.

To that end, today Intel’s Foundry group – the artist formerly known as Intel Foundry Services – is holding its first conference, Direct Connect. And even more than being a showcase for customers and press, this is Intel’s coming-out party for the fab industry as a whole, where Intel’s foundry (and only Intel’s foundry) gets the spotlight, a rarity in the massive business that is Intel.

Arm Announces Neoverse V3 and N3 CPU Cores: Building Bigger and Moving Faster with CSS

A bit over 5 years ago, Arm announced their Neoverse initiative for server, cloud, and infrastructure CPU cores. Doubling-down on their efforts to break into the infrastructure CPU market in a big way, the company set about an ambitious multi-year plan to develop what would become a trio of CPU core lineups to address different segments of the market – ranging  from the powerful V series to the petite E series core. And while things have gone a little differently than Arm initially projected, they’re hardly in a position to complain, as the Neoverse line of CPU cores has never been as successful as it is now. Custom CPU designs based on Neoverse cores are all the rage with cloud providers, and the broader infrastructure market has seen its own surge.

Now, as the company and its customers turn towards 2024 and a compute market that is in the throes of another transformative change due to insatiable demand for AI hardware, Arm is preparing to release its next generation of Neoverse CPU core designs to its customers. And in the process, the company is reaching the culmination of the original Neoverse roadmap.

This morning the company is taking the wraps off of the V3 CPU architecture (codename Poseidon) for high-performance systems, as well as the N3 CPU architecture (codename Hermes) for balanced systems. These designs are now ready for customers to begin integrating into their own chip designs, with both the individual CPU core designs as well as the larger Neoverse Compute Subsystems (CSS) available. Between the various combinations of IP configurations, Arm is looking to offer something for everyone, and especially chip designers who are looking to integrate ready-made IP for a quick turnaround in developing their own chips.

With that said, it should be noted that today’s announcement is also a lighter one than what we’ve come to expect from previous Neoverse announcements. Arm isn’t releasing any of the deep architectural details on the new Neoverse platforms today, so while we have the high-level details on the hardware and some basic performance estimates, the underlying details on the CPU cores and their related plumbing are something Arm is keeping to themselves until a later time.

The Intel IFS Direct Connect 2024 Keynote (Starts at 8:30am PT/16:30 UTC)

This morning, Intel is set to provide updates on its foundry business (IFS) and process roadmap at its IFS Direct Connect event in Santa Clara. Intel is expected to unveil its plans for transforming the foundry industry and for becoming the world's first and only fully integrated systems foundry for the AI era. Intel CEO Pat Gelsinger and Stuart Pann, Senior Vice President and General Manager of Intel Foundry Services, will deliver the event's opening keynote. Expected guests throughout the keynote include Sam Altman, co-founder and CEO of OpenAI; Gina Raimondo, the US Secretary of Commerce; and Satya Nadella, Chairman and CEO of Microsoft.

Join us at 8:30 am PT/16:30 UTC.

Crucial T705 Gen5 NVMe SSD: A 14.5 GBps Consumer Flagship with 2400 MT/s 232L NAND

Crucial is unveiling the latest addition to its Gen5 consumer NVMe SSD lineup today - the T705 PCIe 5.0 M.2 2280 NVMe SSD. It takes over flagship duties from the Crucial T700 released last year. The company has been focusing on the high-end consumer SSD segment over the last few quarters. The T700 was one of the first drives to offer more than 12 GBps read speeds, and the T705 launching today is one of the first drives available for purchase in the 14+ GBps read speed category.

The Crucial T705 utilizes the same platform as the T700 from last year - Phison's E26 controller with Micron's B58R 232L 3D TLC NAND. The key difference is the B58R NAND operating at 2400 MT/s in the new T705 (compared to the 2000 MT/s in the T700). Micron's 232L NAND process has now matured enough for the company to put out 2400 MT/s versions with enough margins. Similar to the T700, this drive is targeted towards gamers, content creators, and professional users as well as data-heavy AI use-cases.

The move to 2400 MT/s NAND has allowed Crucial to claim performance increases in all four corners, headlined by up to 20% faster random writes and 18% higher sequential reads. Crucial also claims more bandwidth within a similar power window for the new drive.

The T705 is launching in three capacities - 1TB, 2TB, and 4TB. Both heatsink and non-heatsink versions are available. Crucial is also offering a white heatsink limited edition for the 2TB version. This caters to users with white-themed motherboards that are increasingly gaining market presence.

Phison has been pushing DirectStorage optimizations in its high-end controllers, and it is no surprise that the T705 advertises the use of Phison's 'I/O+ Technology' to appeal to gamers. Given its high-performance nature, it is no surprise that the E26 controller needs to be equipped with DRAM for managing the flash translation layer (FTL). Crucial is using Micron LPDDR4 DRAM (1GB / TB of flash) in the T705 for this purpose.

Crucial T705 Gen5 NVMe SSD Specifications

Common to all capacities: Phison PS5026-E26 controller; Micron B58R 232L 3D TLC NAND at 2400 MT/s; double-sided M.2-2280 form factor, PCIe 5.0 x4, NVMe 2.0; dynamic SLC caching (up to 11% of user capacity); TCG Opal encryption; 5-year warranty; 0.33 DWPD write endurance.

1 TB (CT1000T705SSD3 non-heatsink, CT1000T705SSD5 heatsink): 13,600 MB/s sequential read, 10,200 MB/s sequential write, 1.4M/1.75M random read/write IOPS, 600 TBW endurance. MSRP: $240 (24¢/GB) non-heatsink, $260 (26¢/GB) heatsink.

2 TB (CT2000T705SSD3 non-heatsink, CT2000T705SSD5 black heatsink, CT2000T705SSD5A white heatsink): 14,500 MB/s sequential read, 12,700 MB/s sequential write, 1.55M/1.8M random read/write IOPS, 1200 TBW endurance. MSRP: $400 (20¢/GB) non-heatsink, $440 (22¢/GB) black heatsink, $484 (24.2¢/GB) white heatsink.

4 TB (CT4000T705SSD3 non-heatsink, CT4000T705SSD5 heatsink): 14,100 MB/s sequential read, 12,600 MB/s sequential write, 1.5M/1.8M random read/write IOPS, 2400 TBW endurance. MSRP: $714 (17.85¢/GB) non-heatsink, $730 (18.25¢/GB) heatsink.

Crucial is confident that the supplied passive heatsink is enough to keep the T705 from heavy throttling under extended use. The firmware throttling kicks in at 81C and protective shutdown at 90C. Flash pricing is not quite as low as it was last year, and the 2400 MT/s flash allows Micron / Crucial to place a premium on the product. At the 4TB capacity point, the drive can be purchased for as low as 18¢/GB, but the traditional 1TB and 2TB ones go for 20 - 26 ¢/GB depending on the heatsink option.
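The endurance and pricing figures above are straightforward to cross-check: drive writes per day (DWPD) is the rated TBW spread over the capacity and the warranty period, and cost per gigabyte is just price divided by capacity. A quick Python sketch using the 1 TB figures from the specifications above:

def dwpd(tbw, capacity_tb, warranty_years=5):
    # Drive writes per day: rated terabytes written divided by
    # capacity and the number of days in the warranty period.
    return tbw / (capacity_tb * warranty_years * 365)

def cents_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb * 100

# 1 TB T705: 600 TBW over a 5-year warranty, $240 for the non-heatsink SKU.
print(f"{dwpd(600, 1):.2f} DWPD")                 # ~0.33, matching Crucial's rating
print(f"{cents_per_gb(240, 1000):.0f} cents/GB")  # 24 cents/GB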

There are a number of Gen5 consumer SSDs slated to appear in the market over the next few months using the same 2400 MT/s B58R 3D TLC NAND and Phison's E26 controller (Sabrent's Rocket 5 is one such drive). The Crucial / Micron vertical integration on the NAND front may offer some advantage for the T705 when it comes to the pricing aspect against such SSDs. That said, the Gen5 consumer SSD market is still in its infancy with only one mass market (Phison E26) controller in the picture. The rise in consumer demand for these high-performance SSDs may coincide with other vendors such as Innogrit (with their IG5666) and Silicon Motion (with their SM2508) gaining traction. Currently, Crucial / Micron (with their Phison partnership) is the only Tier-1 vendor with a high-performance consumer Gen5 SSD portfolio, and the T705 cements their leadership position in the category further.

Capsule Review: AlphaCool Apex Stealth Metal 120mm Fan

Alphacool, a renowned name in the realm of PC cooling solutions, recently launched their Apex Stealth Metal series of cooling fans. Prior to their launch, the new fans had amassed a significant amount of hype in the PC community, in part because of the unfortunate misconception that the entire fan would be made out of metal.

Regardless of whether they're made entirely out of metal or not, however, these fans are notable for their unique construction, combining a metallic frame with plastic parts that are decoupled from the metal. This design choice not only contributes to the fan's aesthetic appeal but also plays a role in its operational efficiency.

The series includes two distinct models, the Apex Stealth Metal 120 mm and the Apex Stealth Metal Power 120 mm, distinguished primarily by their maximum rotational speeds. The former reaches up to 2000 RPM, while the latter, designed for more demanding applications, can achieve a remarkable 3000 RPM. Available in four color options – White, Matte Black, Chrome, and Gold – these fans offer a blend of style and functionality, making them a versatile choice for various PC builds.

GlobalFoundries to Receive $1.5 Billion In Funding from U.S. CHIPS Act

The United States Department of Commerce and GlobalFoundries announced on Monday that the US will be awarding GlobalFoundries $1.5 billion in funding under the CHIPS and Science Act. The latest domestic chip fab operator to receive money under the act, GlobalFoundries will spend the funding on upgrading the company's New York and Vermont fabs as well as building a brand-new fab module. In addition, GlobalFoundries is set to get over $600 million in funding from the state of New York to support its expansion and modernization efforts over the next 10 years.

"These proposed investments, along with the investment tax credit (ITC) for semiconductor manufacturing, are central to the next chapter of the GlobalFoundries story and our industry," said Dr. Thomas Caulfield, president and CEO of GlobalFoundries. "They will also play an important role in making the U.S. semiconductor ecosystem more globally competitive and resilient and cements the New York Capital Region as a global semiconductor hub. With new onshore capacity and technology on the horizon, as an industry we now need to turn our attention to increasing the demand for U.S.-made chips, and to growing our talented U.S. semiconductor workforce."

There are three projects that GlobalFoundries is set to fund using the direct subsidies in the coming quarters.

First up, the company plans to expand its Fab 8 in Malta, NY, and enable it to build chips for the automotive industry on technologies already adopted by its sites in Germany and Singapore. This expansion is crucial for meeting the increasing chip demand of the transforming automotive industry. Furthermore, the project will diversify GF's flagship Malta fab into different technologies and end markets, which will help ensure its utilization going forward.

In addition to the Malta expansion, GlobalFoundries plans to construct a new state-of-the-art fab (or rather a module) on the same campus. This new facility aims to meet the anticipated demand for U.S.-made essential chips across a wide range of markets, including automotive, aerospace, defense, and AI. The construction of this new fab, along with the expansion of the existing production facility, is expected to triple Malta's current capacity over the next decade, with a projected increase in wafer production to one million per year.

Finally, GlobalFoundries' planned modernization of its Essex Junction, Vermont facility focuses on upgrading existing infrastructure and expanding capacity. This project will also establish the first U.S. facility capable of high-volume manufacturing of next-generation gallium nitride (GaN) semiconductors. These chips are vital for various applications, including electric vehicles, datacenters, power grids, and communication technologies.

In general, GlobalFoundries's investment plan exceeds $12 billion across its two U.S. sites over the next decade, supported by public-private partnerships with federal and state governments and strategic ecosystem partners. According to the company, this investment is expected to generate over 1,500 manufacturing jobs and approximately 9,000 construction jobs, which the company is promoting as contributing significantly to the local economy.

The funding and expansion efforts by GlobalFoundries, in collaboration with the U.S. Department of Commerce and New York State, are aimed at enhancing the competitiveness and resilience of the U.S. semiconductor ecosystem. These initiatives also underscore the contract chipmaker's commitment to sustainable operations and workforce development, aligning with the company's strategic goals to strengthen the semiconductor talent pipeline and support the growing demand for U.S.-made chips.

AMD, Qualcomm, General Motors, and Lockheed Martin welcomed the grants and highlighted the importance of the U.S. semiconductor supply chain for emerging applications like software-defined vehicles and autonomous vehicles, as well as global trends like 5G, AI, HPC, and edge computing.

Sources: U.S. Department of Commerce, GlobalFoundries

GlobalFoundries: Clients Are Migrating to Sub-10nm Faster Than Expected

When GlobalFoundries abandoned development of its 7 nm-class process technology in 2018 and refocused on specialty process technologies, it ceased pathfinding, research, and development of all technologies related to bleeding-edge sub-10nm nodes. At the time, this was the correct (and arguably only) move for the company, which was bleeding money and trailing behind both TSMC and Samsung in the bleeding-edge node race. But in the competitive fab market, that trade-off for reduced investment was going to eventually have consequences further down the road, and it looks like those consequences are finally starting to impact the company. In a recent earnings call, GlobalFoundries disclosed that some of the company's clients are leaving for other foundries, as they adopt sub-10nm technologies faster than GlobalFoundries expected.

"Our communications infrastructure and data center segment continued to show weakness through 2023, partly due to the prolonged channel digestion of wireless and wired infrastructure inventory levels across our customers, as well as the accelerated node migration of data center, and digital-centric customers to single-digit nanometers," said Tom Caulfield, chief executive of GlobalFoundries, at the company's earnings call with financial analysts and investors (via SeekingAlpha).

There are four key reasons why companies migrate to 'single-digit nanometers' (e.g., 5 nm, 7 nm): they want to get higher performance, they want to get lower power, they want to reduce their costs by reducing die size, and most often, they want a combination of all three factors. There could be other reasons too, such as support for lower voltages or necessity to reduce form-factor. For now, the best node that GlobalFoundries has to offer is its 12LP+ fabrication process which is substantially better than its 12LP and 14LPP process technologies and should be comparable to 10nm-class nodes of other foundries.

Meanwhile, based on the characteristics of 12LP+ demonstrated by GlobalFoundries, it cannot really compete against 7nm-class process technologies in terms of transistor density, performance, and power. Assuming that TSMC or Samsung Foundry offer competitive prices for their 7 nm-class nodes, at least some 12LP+ customers are probably inclined to use 7 nm fabrication technologies instead, which is what GlobalFoundries confirms.

"We are actively [watching] these industry trends and executing opportunities to remake some of our excess capacity to serve this demand in more durable and growing segments such as automotive, and smart mobile devices," Caulfield said.

Back in 2022, communication infrastructure and datacenter revenue accounted for 18% of the company's earnings, but in 2023, that share dropped to 12%. Shares of PC and smart mobile devices declined from 4% and 46% in 2022 to 3% and 41%, respectively. Meanwhile, the share of automotive-related revenue increased from 5% in 2022 to 14% in 2023, which is a reason for optimism, as GlobalFoundries expects automotive growth to offset declines in other applications that transition from 12LP+ to newer nodes.

"[Automotive] products span the breadth of our portfolio from 12 LP+, our FinFET platform, all the way through our expanded voltage handling capabilities at a 130 nm and a 180 nm technologies," said Caulfield. "Through these offerings, we believe that GF will play a key role in the long-term transition of the automotive industry, and our customer partnerships are central to that.

GlobalFoundries revenue topped $7.392 billion for the whole year 2023, down from $8.108 billion in 2022 due to inventory adjustments by some customers and migration of others to different foundries and nodes. Meanwhile, the company remained profitable and earned $1.018 billion, down from $1.446 billion a year before.

ASML to Ship Multiple High-NA Tools in 2025, Expands Production Capacities

ASML began to ship its first High-NA lithography tool to Intel late last year, and the machine will be fully assembled in Oregon in the coming months. Shipping only a single extreme ultraviolet (EUV) system with a 0.55 numerical aperture lens may not seem too impressive, but the company aims to ship a much larger number of such machines this year, with further production increases planned in the coming years.

ASML did not disclose how many High-NA EUV litho tools it plans to ship this year, but the company has already announced that it had obtained orders for these machines from all leading makers of logic chips (Intel, Samsung Foundry, TSMC) and memory (Micron, Samsung, SK Hynix), and that the total number currently stands between 10 and 20 systems. Essentially, this means that High-NA EUV will be widely used. But the question is when.

ASML's High-NA EUV Twinscan EXE lithography systems are the company's next-generation flagship production tools that will enable chipmakers to decrease critical dimensions of chips to 8nm in a single exposure, a substantial improvement over the 13nm offered by today's Low-NA EUV Twinscan NXE machines. But that improvement comes at a cost. Each Twinscan EXE costs €350 million ($380 million), more than twice the price of a Twinscan NXE (€170 million, $183 million).
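The resolution gap between the two tool classes follows from the textbook Rayleigh criterion, CD = k1 x λ / NA, where λ is the 13.5 nm EUV wavelength and k1 is a process-dependent factor. The Python sketch below assumes k1 ≈ 0.32 purely for illustration (it is not a figure from ASML); with that assumption, the 0.33 NA and 0.55 NA optics land close to the quoted 13nm and 8nm figures:

EUV_WAVELENGTH_NM = 13.5  # EUV source wavelength

def critical_dimension_nm(numerical_aperture, k1=0.32):
    # Rayleigh criterion for the smallest feature resolvable in a single
    # exposure; k1 = 0.32 is an assumed process factor for illustration only.
    return k1 * EUV_WAVELENGTH_NM / numerical_aperture

print(f"Low-NA (0.33):  ~{critical_dimension_nm(0.33):.1f} nm")   # ~13 nm
print(f"High-NA (0.55): ~{critical_dimension_nm(0.55):.1f} nm")   # ~8 nm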

The steep price tag of the new tools has led to debate over their immediate economic feasibility, as it is still possible to print 8nm features using Low-NA tools, albeit with double patterning, a more expensive and yield-impacting technique. For example, Intel is expected to insert High-NA EUV lithography into its production flow with its post-18A fabrication process (14A) sometime in 2026 – 2027, whereas analysts from China Renaissance believe that TSMC only intends to start using these tools for its 1 nm-class production node sometime in 2030. Other industry analysts, like Jeff Koch of SemiAnalysis, also believe that the broader adoption of these high-cost machines might not occur until it becomes economically sensible, anticipated around 2030-2031.

Nevertheless, ASML executives, including chief executive Peter Wennink, argue that elimination of double patterning by High-NA EUV machines will provide enough advantages — such as process simplification and potentially shorter production cycle — to deploy them sooner than analysts predict, around 2026-2027.

Having secured between 10 and 20 orders for its High-NA EUV machines, ASML is preparing to increase its production capacity to meet demand for 20 units annually by 2028. That said, the uncertainty around other chipmakers' plans to use High-NA tools in the next two or three years raises concerns about potential overcapacity in the near term as ASML ramps up production.

Sources: Bloomberg, Reuters

The Enermax LiqMaxFlo 360mm AIO Cooler Review: A Bit Bigger, A Bit Better

For established PC peripheral vendors, the biggest challenge in participating in the highly commoditized market is setting themselves apart from their numerous competitors. As designs for coolers and other peripherals have converged over the years into a handful of basic, highly-optimized designs, developing novel hardware for what is essentially a "solved" physics problem becomes harder and harder. So often then, we see vendors focus on adding non-core features to their hardware, such as RGB lighting and other aesthetics. But every now and then, we see a vendor go a little farther off of the beaten path with the physical design of their coolers.

Underscoring this point – and the subject of today's review – is Enermax's latest all-in-one (AIO) CPU cooler, the LiqMaxFlo 360mm. Designed to compete in the top-tier segment of the cooling market, Enermax has opted to play with the physics of their 360mm cooler a bit by making it 38mm thick, about 40% thicker than the industry average of 27mm. And while Enermax is hardly the first vendor to release a thick AIO cooler, they are in much more limited company here due to the design and compatibility trade-offs that come with using a thicker cooler – trade-offs that most other vendors opt to avoid.

The net result is that the LiqMaxFlo 360mm gets to immediately start off as differentiated from so many of the other 360mm coolers on the market, employing a design that can give Enermax an edge in cooling performance, at least so long as the cooler fits in a system. Otherwise, not resting on just building a bigger cooler, Enermax has also equipped the LiqMaxFlo 360mm with customizable RGB lighting, allowing it to also cater to the aesthetic preferences of modern advanced PC builders. All together, there's a little something for everyone with the LiqMaxFlo 360mm – and a lot of radiator to cram into a case. So let's get started.

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers as well as emerging devices like software defined vehicles (SDVs) there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney’s LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of her alleged business division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now runs on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it would certainly like to expand this business. The console business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles – right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

Global Semiconductor Sales Hit $526.8 Billion in 2023

The global semiconductor industry saw its sales drop by around $47 billion to nearly $527 billion in 2023, according to estimates by the Semiconductor Industry Association (SIA). This was a sharp downturn from the record set in 2022, but the good news is that sales picked up significantly in the second half of the year, showing signs of a strong recovery and setting positive expectations for the future.

The semiconductor industry supplied chips worth $526.8 billion in 2023, an 8.2% decrease from 2022's all-time high of $574.1 billion. Slow chip sales in the first half of the year were attributed to inventory corrections in the client PC, consumer electronics, and server sectors. Meanwhile, chip sales in Q4 2023 jumped to $146 billion, up 11.6% compared to Q4 2022 and 8.4% higher than in Q3 2023. December also ended on a high note, with sales reaching $48.6 billion, a 1.5% increase from November, according to the SIA.
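Those headline figures are internally consistent and can be reproduced from the numbers above with a few lines of Python; the Q4 2022 and Q3 2023 baselines shown are simply back-calculated from the stated growth rates:

sales_2022_bn = 574.1
sales_2023_bn = 526.8

decline_bn = sales_2022_bn - sales_2023_bn
print(f"Drop: ${decline_bn:.1f}B ({decline_bn / sales_2022_bn * 100:.1f}%)")  # ~$47B, ~8.2%

# Baselines implied by Q4 2023's $146B, +11.6% YoY and +8.4% QoQ.
q4_2023_bn = 146.0
print(f"Implied Q4 2022: ~${q4_2023_bn / 1.116:.1f}B")  # ~$130.8B
print(f"Implied Q3 2023: ~${q4_2023_bn / 1.084:.1f}B")  # ~$134.7B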

In terms of product categories, logic products — CPUs, GPUs, FPGAs, and similar devices that process data — led the charge with $178.5 billion in sales, making logic the industry's largest segment, outselling the other three categories covered here combined. Memory followed with revenue of $92.3 billion, a result of declining 3D NAND and DRAM prices in the first half of the year. In both cases, sales were down year over year.

By contrast, sales of microcontroller units (MCUs) and automotive integrated circuits (ICs) saw impressive growth of 11.4% and 23.7% year-over-year, respectively, with MCU revenue reaching $27.9 billion and automotive ICs hitting a new high of $42.2 billion. Strong shipments of MCUs and automotive ICs indicate rapidly growing chip demand from makers of cars and various smart devices, as these industries now use more semiconductors than ever.

"Global semiconductor sales were sluggish early in 2023 but rebounded strongly during the second half of the year, and double-digit market growth is projected for 2024," said John Neuffer, SIA president and CEO. "With chips playing a larger and more important role in countless products the world depends on, the long-term outlook for the semiconductor market is extremely strong."

As far as sales of chips across different parts of the world are concerned, Europe was the only region that saw an increase in sales, growing by 4%. Other regions did not perform this well: sales of chips in the Americas declined by 5.2%, Japan declined by 3.1%, and China experienced the biggest drop at 14%, according to the SIA.

"Advancing government policies that invest in R&D, strengthen the semiconductor workforce, and reduce barriers to trade will help the industry continue to grow and innovate for many years to come," Neuffer said.

Graphs generated by DALL-E/OpenAI based on data from the SIA

Recall of CableMod's 12VHPWR Adapters Estimates Failure Rate of 1.07%

A recall on 12VHPWR angled adapters from CableMod has reached its next stage this week, with the publication of a warning document from the U.S. Consumer Product Safety Commission. Referencing the original recall for CableMod's V1.0 and V1.1 adapters, which kicked off back in December, the CPSC notice marks the first involvement of government regulators. And with it comes a bit more detail on just how big the recall is overall, along with an estimated failure rate for the adapters of a hair over 1%.

According to the CPSC notice, CableMod is recalling 25,300 adapters, which were sold between February, 2023, and December, 2023. Of those, at least 272 adapters failed, as per reports and repair claims made to CableMod. That puts the failure rate for the angled adapters at 1.07% – if not a bit higher due to the underreporting that can happen with self-reported statistics. All told, the manufacturer has received at least $74,500 in property damage claims in the United States, accounting for the failed adapters themselves, as well as the video card and anything else damaged in the process.
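The headline failure rate follows directly from the two figures in the CPSC notice; a quick check in Python:

recalled_units = 25_300
reported_failures = 272

failure_rate = reported_failures / recalled_units * 100
print(f"Reported failure rate: ~{failure_rate:.3f}%")  # ~1.075%, i.e. a hair over 1%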

As part of the recall, CableMod has asked owners of its angled 12VHPWR adapters V1.0 and V1.1 to stop using them immediately, and to destroy them to prevent future use. Buyers can opt for a full refund of $40, or a $60 store credit.

It is noteworthy that, despite the teething issues with the initial design of the 12VHPWR connector – culminating with the PCI-SIG replacing it with the upgraded 12V-2x6 standard – the issue with the CableMod adapters is seemingly distinct from those larger design flaws. Specifically, CableMod's recall cites issues with the male portion of their adapters, which was not altered in the 12V-2x6 update. Compared to 12VHPWR, 12V-2x6 only alters female plugs (such as those found on video cards themselves), calling for shorter sensing pins and longer conductor terminals. Male plugs, on the other hand, remain unchanged, which is why existing PSU cables made for the 12VHPWR remain compatible (and normally safe) with 12V-2x6 video cards. Though as cable mating is a two-way dance, it's unlikely having to plug into inadequate 12VHPWR female connectors did CableMod any favors here.

Sources: Consumer Product Safety Commission, HotHardware, CableMod

Minisforum Unveils V3: A 3-in-1 Tablet with Ryzen 7 8840U and Windows 11 Pro

Minisforum has formally announced its V3, one of the industry's first AMD Ryzen 7 8840U-based hybrid PCs that can serve as a tablet, a laptop, and an external display, which is why the company positions it as a '3-in-1' system. As the machine packs an eight-core CPU, it may offer the performance of some mid-range laptops. Meanwhile, despite being a 'tablet,' it has two USB4 ports and an SD card reader, a rare feature for this class of devices.

In terms of size, the Minisforum V3 is closer to a classic laptop than a typical tablet, which is common for this class of device. The system features a 14-inch multitouch display with a 2560×1600 resolution, 500 nits of brightness, a 165 Hz refresh rate (which gamers will undoubtedly appreciate), and stylus support; the tablet itself measures 323.26×219×9.8 mm and weighs 946 grams without the keyboard. The device is made of die-cast magnesium alloy and packs all the sensors and features common to tablets, such as a 5MP rear and 2MP front camera, gyroscopes, and a fingerprint reader.

The Minisforum V3 is powered by AMD's Ryzen 7 8840U (8C/16T, 3.30 GHz – 5.10 GHz, up to 28W) with built-in Radeon 780M graphics (768 stream processors) and is mated to up to 32 GB of LPDDR5-6400 memory as well as an up to 2 TB M.2-2280 SSD with a PCIe interface. To ensure consistent performance under high loads, Minisforum squeezed a cooling system into the tablet with four copper tubes and two fans, a rare feature. 

When it comes to connectivity, the Minisforum V3 comes with a Wi-Fi 6E + Bluetooth 5.3 adapter, two USB4 ports, a V-Link connector (a USB-C port that acts as a DisplayPort input), a UHS-II SD card reader, and a 3.5-mm headset jack. Thanks to the V-Link connector, the V3 can serve not only as a tablet and a laptop, but also as an external display for another notebook.

The Minisforum V3 comes with an integrated 50 Wh battery, which is more or less in line with what other thin-and-light 14-inch laptops offer. To balance long battery life against maximum performance, the V3 has three power profiles: power-saving (15W), balanced (18W – 22W), and high-performance (28W). Minisforum has not yet published a battery life estimate for the device.
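While Minisforum has not published battery life numbers, the power profiles at least bound the problem: dividing the 50 Wh battery by each profile's sustained power limit gives a naive ceiling that ignores the display and the rest of the platform entirely, so real-world runtimes will be noticeably shorter. A rough Python sketch of that upper bound:

battery_wh = 50.0
profiles_w = {"power-saving": 15.0, "balanced (upper limit)": 22.0, "high-performance": 28.0}

for name, soc_watts in profiles_w.items():
    # Naive ceiling: battery energy divided by the SoC power limit only;
    # display, memory, and platform power are ignored.
    print(f"{name}: <= {battery_wh / soc_watts:.1f} h of sustained SoC load")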

Minisforum is expected to announce the pricing of its V3 hybrid PC next month.
