SK Hynix today announced that it has begun sampling a new generation of enterprise EDSFF E1.L SSDs based on its 128-layer "4D NAND" flash, in the form of the new PE8111, and is also introducing PCIe 4.0 connectivity for the first time in its new 96-layer U.2/U.3 PE8010 and PE8030 enterprise SSDs.
We had expected the new PE8111 eSSD for some time now, as we reported on SK Hynix's plans to introduce such a product last November. The biggest change here is the company's use of new 128-layer 3D NAND that the company dubs "4D NAND" thanks to a new, denser cell structure design and higher per-die I/O speeds.
The PE8111 still retains a PCIe 3.0 interface, and its performance correspondingly plateaus at 3400MB/s sequential reads and 3000MB/s sequential writes, whilst supporting random reads and writes of up to 700K and 100K IOPS, respectively. Thanks to the long EDSFF E1.L form factor, storage capacity comes in at 16TB, and SK Hynix reports that it is working on a 32TB solution for the future.
The new PE8010 and PE8030 come in a U.2/U.3 form factor and are the company's first SSDs to support PCIe 4.0. These SSDs still rely on the company's 96-layer NAND – but use an in-house controller chip. Bandwidth here is naturally higher, reaching up to 6500MB/s sequential reads and 3700MB/s sequential writes, with random performance falling in at 1100K IOPS for reads and 320K IOPS for writes.
Power consumption for the new U.2/U.3 drives is actually extremely competitive given their jump to PCIe 4.0 – rising only to 17W, as opposed to the 14W of the previous-generation PCIe 3.0 products. This is likely attributable to the new-generation custom controller, which might be better optimised for low power compared to some of the early third-party PCIe 4.0 controllers out there.
The PE8010 and PE8030 are sampling right now with customers – with the PE8111 planned to be sampled in the second half of the year.
Mobile benchmark cheating has a long history in the industry (well – at least in smartphone-industry years), and has been a controversial coverage topic at AnandTech for several years now. I remember back in 2013 when I tipped off Brian and Anand about some of the shenanigans Samsung was pulling with the GPU of the Exynos chipset in the Galaxy S4, only for the story to blow up into a wider analysis of the practice amongst many of the mobile vendors of the time – with all of them found guilty.
In recent years, however, we've seen a big resurgence of such methods, particularly from Chinese vendors. The one big difference is that there's always been somewhat of a firewall in our coverage between what a device vendor did and what chip vendors enabled them to do, and that's where we come to MediaTek's behavior over the last few years. In most past cases we blamed the device vendors for the cheating, as it had been their mechanism and their initiative – we hadn't had evidence of enablement by chipset vendors, at least until now.
With the vast majority of vendors updating their notebook portfolios with Intel 10th-generation processors, ASUS has added a new model to its Chromebook lineup, the Chromebook Flip C436. Two new models are available with Intel's new Comet Lake CPUs, one equipped with a dual-core Intel Core i3-10110U, while the other comes with a quad-core Intel Core i5-10210U.
Adding to its 'premium' range of Chromebooks, which has models starting from entry-level Intel Celeron variants, the new ASUS Chromebook Flip C436 has a 2-in-1 convertible design. It features a compact 14-inch 1080p touchscreen with ASUS's 4-sided NanoEdge display technology, along with a 360° hinge which allows it to be used in multiple configurations, including tablet mode and as a stand. It's constructed from magnesium alloy and is available in two colors, transparent silver and aerogel white.
Looking at the technical specifications, there are two variations currently available for purchase from the ASUS Store. The cheaper model comes with an Intel Core i3-10110U, a 128 GB PCIe 3.0 x2 NVMe M.2 SSD, 8 GB of DDR3L memory, and a Wi-Fi 6 wireless interface with BT 5.0 support. The more expensive version of the C436 has a quad-core Intel Core i5-10210U processor, a 512 GB PCIe 3.0 x2 NVMe M.2 SSD, 16 GB of DDR3L memory, and the same Wi-Fi 6 adaptor with BT 5.0 connectivity.
Both models include two USB 3.1 G1 Type-C ports, both of which support display output and power delivery. Along the top bezel is an HD webcam, while the base houses an illuminated chiclet keyboard and a fingerprint sensor integrated into the power button. For sound, the ASUS Chromebook Flip C436 uses a pair of Harman Kardon stereo speakers, with a 3.5 mm headphone-out and audio-in combo jack for users looking to use headphones or headsets.
The ASUS Chromebook Flip C436 conforms to Intel's Project Athena certification, with a reported battery life of up to 12 hours from its 42 Wh 3S1P 3-cell Li-ion battery. It also weighs just 2.6 lbs and sits as ASUS's premium Chromebook model, succeeding the C434 we reported on last year.
It's one of the most extravagant 2-in-1 Chromebooks on the market at present, with prices starting at $800 for the Core i3 model and $1000 for the Core i5 model. Both are currently available to buy from the ASUS Store.
Danish peripherals and gaming headset manufacturer SteelSeries has announced its acquisition of A-Volute, the independent developer of the Nahimic audio software. The Danish manufacturer is looking to take full advantage of A-Volute's experience to bolster its audio range, with its Arctis series already established in the gaming headset market.
SteelSeries is no stranger to the peripherals market, with experience spanning the best part of two decades and a string of successful product launches, including its popular World of Warcraft-branded gaming mice. So in a bid to boost its audio range, which is spearheaded by its premium Arctis line, it has acquired A-Volute.
A-Volute's portfolio is impressive, and its software is used by many system integrators, including Dell, GIGABYTE, and MSI. For those unfamiliar with it, the Nahimic audio software allows users to set up various audio enhancements and adjustments via a control panel application. Among other things, the Nahimic software can provide virtual surround sound mixing, as well as audio equalization settings including bass, treble, and voice when used with a microphone. It remains to be seen how SteelSeries will implement Nahimic into its gaming products, but its Arctis Pro already comes with a GameDAC, which could shed some light on possible use cases.
The Nahimic 3 Audio Control Panel bundled with the MSI MEG X570 Godlike Motherboard
SteelSeries CEO Ehtisham Rabbani said this about the purchase of A-Volute:
"With our award-winning innovations that have redefined the gaming audio experience, and our best-in-class SteelSeries Engine software, bringing A-Volute into the SteelSeries family seemed like a natural fit and we are extremely excited about partnering with Tuyen and his team,” said Ehtisham Rabbani, CEO of SteelSeries. “With their excellence in audio software, they’ll help us improve gamers’ audio experiences even further".
No details regarding the financials of the transaction have been revealed, but SteelSeries says the deal will close later this spring.
As devices become ever more interconnected and gain the ability to sense the world through different kinds of sensors, there's an ever-increasing stream of data being created. Naturally, not all of that data is useful, and the vast majority of it gets thrown away. To differentiate between useful data and less useful noise, there's an increasing need for processing power in the brains of newer-generation devices.
In the past we generally had simpler sensors, such as microphones or accelerometers, as the main data sources, and that's where we know the term "sensor hub" from – coined when mobile devices first tried to optimise the handling of smartphone sensors. At first these were discrete chips, but they have since been integrated into SoCs.
As data complexity rises, and as new and more complex sensor types appear, CEVA seeks to address the need for higher-performance sensor hubs. Today's announcement covers the new SensPro IP family from CEVA, a new IP architecture that leverages the company's existing IP expertise, combining various processing capabilities and configuration flexibility into a single self-contained product offering.
SensPro is a ground-up design that focuses on maximising power efficiency, combining the ML capabilities found in CEVA's NeuPro designs, the image processing prowess of the XM6, and the company's in-house BX2 scalar DSP microarchitecture, which serves as the control unit for the whole new IP.
The idea of combining these elements, which are usually separate individual IPs, into a single processing block is said to be a first in the industry – hence CEVA calling it the first ever "high-performance sensor hub DSP". The ambitions here are extremely high in terms of the flexibility of the design and the range of use-cases it's meant for. We usually think of smartphones as the first such use-case, but it's actually in other areas – devices without as much processing capability on board – where we'd see the SensPro have a larger impact. Quoted use-cases include robotics, automotive, AR/VR headsets, voice assistants, smart home devices, and, more importantly, new industrial applications, where we're seeing a larger shift to more integrated and smarter automation in areas such as production lines.
Combining scalar and vector processing into one IP, with the ability to also handle floating-point operations, is quite unique – you'd think a CPU could do that as well, but CEVA's advantage lies in its ability to do all of this extremely efficiently in a low-power design.
From a performance standpoint, the new SensPro is a major architectural upgrade over what was previously offered by familiar IP such as the XM6. CEVA quotes figures of 400 GFLOPs at a 1.6GHz design target for FP performance, achieved through either 64 32-bit FP MACs or 128 16-bit FP operations per cycle. FP capability is said to be important for higher-precision arithmetic use-cases where higher dynamic range is required, with radar being one data type brought up in this context.
There’s also the fixed-point vector processing pipelines whose configuration contains up to 1024 8x8 MACs, allowing for up to 3 TOPs 8x8 inferencing. CEVA actually also has an execution mode for binary neural networks and promises here up to 20TOPs inferencing throughput, which is a wild number, but we have to remember that this only to applies to specific models that are able to work with only 2-bits of data.
The IP’s data bandwidth capabilities are actually quite massive, employing a super-wide 2048-bit load unit alongside a 1024-bit store unit, which corresponds to 400GB/s of data ingestion and 200GB/s of output. It sounds like a lot, but we have to remember that the IP would be handling immense data streams coming from a myriad of different sensors.
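For a sanity check, all of these quoted peak figures fall straight out of the 1.6GHz design target. A rough back-of-the-envelope sketch (assuming each MAC counts as two operations, multiply plus accumulate):

```python
# Back-of-envelope check of CEVA's quoted SensPro peak figures,
# assuming the 1.6 GHz design target and 2 ops per MAC.
CLOCK_HZ = 1.6e9

fp16_gflops = 128 * 2 * CLOCK_HZ / 1e9    # 128 FP16 MACs per cycle
int8_tops   = 1024 * 2 * CLOCK_HZ / 1e12  # 1024 8x8 MACs per cycle
load_gbs    = (2048 / 8) * CLOCK_HZ / 1e9  # 2048-bit load unit, bytes/s
store_gbs   = (1024 / 8) * CLOCK_HZ / 1e9  # 1024-bit store unit, bytes/s

print(fp16_gflops)          # 409.6 GFLOPs, in line with the quoted "400"
print(int8_tops)            # ~3.3 TOPs, in line with the quoted "3"
print(load_gbs, store_gbs)  # 409.6 and 204.8 GB/s
```

The small gaps between the computed values and the round 400 GFLOPs / 3 TOPs / 400 GB/s figures are simply marketing rounding.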
From a high-level perspective, what's important to note in the block diagram is the configuration flexibility that the IP offers. Generally, amongst the processing units, the scalar processor as well as one vector processing unit form the minimum configuration of the design. Within a vector unit, though, things can get quite a bit more complicated:
A SensPro VCU consists of different execution units handling either fixed-point MACs or floating-point MACs, alongside specifically dedicated special function units with their own instructions.
The configurability for customers is even more fine-grained than just choosing the number and type of units integrated into the IP. In the floating-point units, for example, CEVA also offers a choice between different throughput designs: doubling the single-precision throughput, with the optional possibility of doubling throughput again for FP16 operations.
CEVA's initial IP configurations consist of three designs – the SP250, SP500F and SP1000 – with each incremental step corresponding to the 8-bit MAC configuration.
The SP500F is the only starting design that implements the floating-point execution units and is more targeted towards vision and SLAM use-cases with radar or LIDAR. For consumer electronics we’ll most likely see the SP250 being used in devices such as smartphones, IP cameras and other similar products.
Ran Snir, Vice President of Research and Development at CEVA, commented:
“With the growth in the number and variety of sensors in modern systems, and their substantially different computation needs, we set out to design a new architecture from the ground up to address this challenge. We constructed SensPro as a highly configurable, holistic architecture that could handle these intensive workloads using a combination of scalar, vector processing and AI acceleration, while utilizing the latest micro-architecture design techniques of deep pipelining, parallelism, and multi-tasking. The result is the most powerful DSP architecture ever conceived for sensor hubs and we’re truly excited to work with our customers and partners to bring contextually-aware products to market based on it.”
The IP is targeted for general licensing in Q3 2020 – meaning it'll be a few years before we see any kind of silicon design-ins, let alone products with the new IP.
Even though AMD has already released its Ryzen Mobile 4000 series of processors, Origin PC has gone one step further for its new AMD gaming laptop, offering a machine that uses AMD's Ryzen 3000 desktop processors. Dubbed the EON15-X AMD, the high-end gaming laptop is available with a choice of three different Ryzen 3000 desktop SKUs, including the 12-core Ryzen 9 3900. And yet even with a desktop-class processor, this isn't a luggable, desktop-replacement laptop; the 15-inch notebook is only 1.2 inches thick and weighs less than 6 pounds.
The new EON15-X AMD is the latest update to Origin PC's lineup of EON15 gaming laptops, with a specific emphasis on supporting AMD's Ryzen desktop processors. The EON15-X AMD is available in 6-, 8-, and 12-core configurations, which is perfect for gamers and content creators looking to utilize the multi-core performance of AMD's 7 nm Zen 2 architecture. This is paired with various memory configurations, with the notebook able to accommodate up to 64 GB of DDR4-2666.
And since this is a gaming laptop, it's of course equipped with high-end display and GPU options. The EON15-X AMD comes with a 15.6" 1080p 144 Hz screen, which is fitting given current trends in gaming laptops, as well as being a good fit for the resolutions current-generation mobile GPUs can handle. Speaking of GPUs, Origin PC is offering users a choice between an NVIDIA GeForce RTX 2060 6 GB and a GeForce RTX 2070 8 GB graphics card.
Overall, Origin is offering a fairly comprehensive set of customizations on its official product page. Along with the memory options mentioned earlier, the laptop offers support for up to two M.2 SSDs and a single HDD bay which can accommodate SATA-based drives, resulting in a plethora of storage options. In a maximum configuration, the EON15-X AMD supports up to 2 TB NVMe Gen4 drives, 2 TB M.2 SATA drives, and up to 4 TB of SATA SSDs, all configurable within the customizer.
|Origin PC EON15-X AMD Specifications|
|CPU||AMD Ryzen 5 3600 6-Core, 3.6 GHz Base, 4.2 GHz Boost
AMD Ryzen 7 3700X 8-Core, 3.6 GHz Base, 4.4 GHz Boost
AMD Ryzen 9 3900 12-Core, 3.1 GHz Base, 4.3 GHz Boost
|GPU||NVIDIA GeForce RTX 2060 6 GB Max-P
NVIDIA GeForce RTX 2070 8 GB Max-Q
|Display||15.6" IPS FHD 1080p 144 Hz|
|Memory||Origin Approved DDR4-2400
8 GB (2 x 4 GB)
16 GB (2 x 8 GB)
16 GB (4 x 4 GB)
32 GB (4 x 8 GB)
32 GB (2 x 16 GB)
64 GB (4 x 16 GB)
G.Skill Ripjaws DDR4-2400
16 GB (2 x 8 GB)
Kingston HyperX Impact DDR4-2400
16 GB (2 x 8 GB)
32 GB (2 x 16 GB)
64 GB (4 x 16 GB)
Kingston HyperX Impact DDR4-2666
16 GB (2 x 8 GB)
32 GB (2 x 16 GB)
64 GB (4 x 16 GB)
|Storage||NVMe Up To 2 TB
M.2 SATA Up To 2 TB
SATA SSD Up To 4 TB
|Networking||Intel AX200 Wi-Fi 6 + BT 5.0|
|Ports||1 x USB 3.2 G2 Type-C (DisplayPort 1.4)
2 x USB 3.1 G2 Type-A
1 x USB 2.0 Type-A
1 x 3.5 mm Phono/Mic
1 x 3.5 mm Mic
1 x HDMI Output
1 x Mini DisplayPort 1.4 Output
1 x RJ-45
|Battery||62 Wh Li-ion|
|Dimensions (WxDxH)||14.2 x 10.1 x 1.2 inches|
|Price (USD)||Starts at $1624|
On the connectivity front, the Origin PC EON15-X AMD has plenty to shout about, with two USB 3.1 G2 Type-A ports, one USB 2.0 port, and a USB 3.2 G2 Type-C port with DisplayPort 1.4 alt mode functionality. The laptop supports up to three external screens via the Type-C DisplayPort 1.4 output, the included Mini DisplayPort 1.4 output, and a single HDMI video output. Gamers looking for RGB are in luck with a full-size customizable keyboard with integrated RGB LEDs, along with a touchpad which includes an embedded fingerprint reader for extra security.
Users with deep pockets can also add a bit of flair to their system, as Origin PC offers painting and customization options for the EON15-X AMD. Its HD UV printing service starts at $149, with a metallic finish starting at $175. Origin PC even offers a custom hydro-dipping option which begins at $199.
The Origin PC EON15-X AMD is currently available from the Origin PC website, with prices starting at $1624 for the base model, while a fully customized configuration can easily surpass $3400.
While this past week's wave of new laptop announcements was focused squarely around the launch of Intel's 10th gen Comet Lake-H mobile processors, a couple of vendors have also been using the occasion to update their 15 Watt U-series laptops as well. Among these was HP, who has updated its Envy 17 family of notebooks. Joining the existing Comet Lake models, the HP Envy 17 series now also features models with Intel's Ice Lake-U processors, with various configurations available up to Intel's Core i7-1065G7 CPU.
Designed for professional users looking for a sleek and stylish design, the updated HP Envy 17 range includes multiple models, among them a customizable touchscreen variant (17t-cg000 Touch). The new HP Envy 17 can be fully configured to a suitable specification depending on the user's requirements, with multiple memory, CPU, graphics, and storage options available. Users can also equip it with either a standard Intel AC9560 Gigabit Wi-Fi wireless adapter or an Intel AX201 Wi-Fi 6 adapter. A 55 Wh Li-ion polymer battery is included, with weight starting at 6.02 lb, dependent on the chosen configuration.
Starting off with displays, the latest-generation Envy 17 features a 17.3" IPS WLED-backlit display, with options for either a 1080p or 4K panel. Both displays are rated for similar performance, with a screen brightness of around 300 nits. HP is also offering optional touchscreen functionality on some models, though only with the 1080p display. Overall, the HP Envy 17 weighs around 6 lb and has dimensions of 15.71 (W) x 10.20 (D) x 0.76 (H) inches.
Under the hood, the HP Envy 17 is powered by Intel's Core i5 and Core i7 Ice Lake processors. Curiously, HP is taking a very binary route here: the only CPU options are the slowest Core i5, the i5-1035G1, or the fastest i7, the i7-1065G7. Both processor options offer 4 CPU cores, but along with clockspeed differences, the i5's integrated GPU is only half as powerful as the i7's. Perhaps that's why HP is also including a discrete GPU with all of the Envy 17s, using NVIDIA's GeForce MX330, which comes with either 2GB or 4GB of GDDR5 memory.
Meanwhile storage options inside the silver sandblasted anodized aluminum frame run the full gamut, from Optane-cached rotating rust all the way up to a 1TB PCIe SSD. All models come with some form of solid state storage, starting with a 1TB HDD and 16GB of Optane Memory at the low end, as well as other combinations of HDDs, SSDs, and Optane Memory including a 512GB PCIe SSD with a 32GB Optane cache. As for the memory, HP offers between 8GB and 32GB of DDR4-3200 SDRAM, including a curious 12 GB configuration with one 8 GB stick and one 4 GB stick in an unbalanced dual-channel mode.
Also included in the HP Envy 17 is either an Intel AC9560 Wi-Fi 5 adapter or one of Intel's newer AX201 Wi-Fi 6 wireless adapters. For connectivity, the laptop offers a USB 3.2 G2 10 Gbps Type-C port with support for DisplayPort 1.4, as well as three USB 3.1 G1 Type-A ports, an AC Smart pin, and a headphone and microphone combo port. For users with HDMI, the HP Envy 17 also has a single HDMI 2.0 video output. Meanwhile, along the top of the bezel is a wide-vision HD webcam with a built-in microphone, with the Envy 17's sound coming from a pair of integrated Bang & Olufsen speakers.
|HP Envy 17 Intel 10th Gen Refresh Specifications|
|CPU||Intel Core i7-1065G7||Intel Core i5-10210U||Intel Core i7-10510U|
|GPU||GeForce MX330 (2 GB)
GeForce MX330 (4 GB)
|GeForce MX330 (2 GB)||GeForce MX250 (2 GB)||GeForce MX250 (2 GB)|
|Display||17.3" FHD IPS
17.3" FHD IPS Touch
17.3" 4K UHD IPS
|17.3" FHD IPS||17.3" FHD IPS||17.3" FHD IPS|
|Memory||8 GB DDR4-3200 (2 x 4 GB)
12 GB DDR4-3200 (1 x 4 GB, 1 x 8 GB)
16 GB DDR4-3200 (1 x 16 GB)
32 GB DDR4-3200 (2 x 16 GB)
|12 GB DDR4-3200 (1 x 4GB, 1 x 8GB)||8 GB DDR4-2666 (2 x 4 GB)||16 GB DDR4-2666 (1 x 16 GB)|
|Storage||1 TB HDD + 16 GB Optane
1 TB HDD + 128 GB M.2
1 TB HDD + 256 GB NVMe M.2
512 GB NVMe M.2
512 GB NVMe M.2 + 32 GB Optane
1 TB NVMe M.2
|512 GB M.2
32 GB Intel Optane
|512 GB M.2
16 GB Intel Optane
|512 GB M.2
32 GB Intel Optane
|Networking||Intel AC9560 Wi-Fi 5
Intel AX201 Wi-Fi 6
|Intel AX201 Wi-Fi 6||Gigabit LAN
Intel AC9560 Wi-Fi 5
Intel AC9560 Wi-Fi 5
|Power||65 W AC Adaptor|
|Battery||55 Wh Li-ion||52 Wh Li-ion|
|Ports||1 x SD Card Reader
1 x USB 3.2 G2 Type-C
3 x USB 3.1 G1 Type-A
1 x 3.5 mm Phono/Mic
1 x HDMI 2.0
|1 x SD Card Reader
1 x USB 3.1 G2 Type-C
3 x USB 3.1 G1 Type-A
1 x 3.5 mm Phono/Mic
1 x Gigabit RJ45
1 x HDMI 2.0
|Dimensions (WxDxH)||15.71 x 10.20 x 0.76 inches||15.94 x 10.47 x 0.88 inches|
|Weight||6.02 lb||6.22 lb|
|Price (USD)||Starts at $950||$1250||Starts at $730||Starts at $950|
Every model in the new HP Envy 17 Intel 10th Generation refresh comes equipped with a 65 W AC adaptor and a multi-media SD card reader. Pricing varies by model, with the BTO models starting at $950 and ranging up to $2070 for the top-spec model, while the pre-configured 17-cg0013dx SKU for Best Buy is available for pre-order at $1250.
Tucked inside NVIDIA’s announcement of their spring refresh of their mobile GPU lineup, the company included a new low-end mobile part, the GeForce GTX 1650 GDDR6. Exactly as it says on the tin, this was a version of the company’s GTX 1650 accelerator, except with newer GDDR6 instead of the GDDR5 it launched with. Now, in one of NVIDIA’s more poorly kept secrets, their desktop product stack is getting a version of the card as well.
While not a launch (as NVIDIA likes to frame it), the desktop GTX 1650 GDDR6 has nonetheless finally become an official product this past Friday, with partners unveiling their cards and NVIDIA adding the specifications to their website. Sitting alongside the existing GDDR5 version, the GDDR6 version is intended to be a parallel, generally equal SKU. As NVIDIA makes the transition from GDDR5 to GDDR6 at the bottom edge of their product lineup, the updated card gets access to faster memory, but interestingly the GPU clockspeeds are also tapered back a bit.
|NVIDIA GeForce Specification Comparison|
|GTX 1660||GTX 1650 Super||GTX 1650 (G6)||GTX 1650 (G5)|
|Memory Clock||8Gbps GDDR5||12Gbps GDDR6||12Gbps GDDR6||8Gbps GDDR5|
|Memory Bus Width||192-bit||128-bit||128-bit||128-bit|
|Single Precision Perf.||5 TFLOPS||4.4 TFLOPS||2.85 TFLOPS||3 TFLOPS|
|Manufacturing Process||TSMC 12nm "FFN"||TSMC 12nm "FFN"||TSMC 12nm "FFN"||TSMC 12nm "FFN"|
By the numbers, the new GDDR6 version is largely the same as the GDDR5 version. Both are 75W cards based on NVIDIA's entry-level Turing TU117 GPU. However the GDDR6 version of the card both gains some and loses some in the process. NVIDIA swaps out the GDDR5 for newer GDDR6 – thereby finally confirming that TU117 is GDDR6-capable – but the cards also take a slight clockspeed cut. As a result the GDDR6 version of the card has a whopping 50% more memory bandwidth – bringing it to 192GB/sec – but 5% lower GPU clocks and throughput.
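The bandwidth figure is simple per-pin arithmetic: both cards use a 128-bit memory bus, so the entire uplift comes from the faster per-pin data rate. A quick sketch:

```python
# Peak memory bandwidth for the two GTX 1650 variants,
# assuming a 128-bit bus for both (per NVIDIA's specs).
def mem_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    # Each pin moves gbps_per_pin gigabits/s; divide by 8 for bytes.
    return gbps_per_pin * bus_width_bits / 8

gddr5 = mem_bandwidth_gbs(8, 128)   # 8 Gbps GDDR5
gddr6 = mem_bandwidth_gbs(12, 128)  # 12 Gbps GDDR6
print(gddr5, gddr6)                 # 128.0 vs 192.0 GB/s
print((gddr6 / gddr5 - 1) * 100)    # 50.0 (% uplift)
```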
In discussing the matter with NVIDIA, we were told that the GPU clockspeed change was to equalize performance and power consumption between the two parts. Which makes sense to a degree – the GTX 1650 is a particularly special part in NVIDIA’s lineup since it’s the fastest card they offer that can be powered entirely by a PCIe slot, which is to say it can’t have a TDP over 75 Watts. So with the GDDR5 version already close to that limit, if the switch to GDDR6 memory drives up power consumption at all (be it the memory or the GPU’s memory controllers), then something else has to be dialed back to compensate.
Meanwhile, equalizing performance is something of a secondary goal in this situation, especially because of the potency of GDDR6 memory. NVIDIA doesn't intend for the GDDR6 version of the GTX 1650 to be its own product; the next card up after the GTX 1650 remains the GTX 1650 Super. But given what we’ve seen on other Turing parts such as the GTX 1660 series, where a similar switch netted a further 10% in performance, I would expect the GTX 1650 to see the same kind of modest benefits from the faster memory. This in turn would more than outweigh the 5% GPU clockspeed drop. So don’t be surprised if the GTX 1650 with GDDR6 turns out to be a bit faster than its pre-existing GDDR5 counterpart, though it shouldn’t be by very much.
Otherwise, the GTX 1650 GDDR6 will end up filling the same general role as the original GTX 1650. The entry-level card is the cheapest (and the slowest) of the Turing family, offering as much performance as NVIDIA can pack into a 75 Watt TDP. And while the cards should still be relatively small, I do find it interesting that NVIDIA lists the length for the (non-public) reference card at 5.7-inches, 0.6-inches longer than the GDDR5 version. GDDR6 cards require a new PCB, so this raises the curious question of whether GDDR6 designs can’t be made quite as compact as GDDR5 designs.
Overall, this low-key release should mark a more important turning point in the state of GDDR memory. If NVIDIA and its partners are now willing to release GDDR6 versions of low-end cards, then this is a strong indicator that GDDR6 has finally lost most of its new technology price premium, and that memory prices have fallen by enough to be competitive with 8Gbps GDDR5. GDDR6 prices were a sticking point for the profit-sensitive NVIDIA during the original Turing product stack launch, so while it has taken an extra year, the company is finally offering a top-to-bottom GDDR6-based product stack.
NVIDIA’s partners, in turn, are already rolling out their cards, with designs from Gigabyte, MSI, EVGA, and others. As with the original GTX 1650 cards, it looks like many of these will be factory overclocked, throwing out the 75W power limit in order to get some extra performance out of the TU117 GPU. Meanwhile, pricing for the GDDR6 cards appears to be identical to their GDDR5 counterparts, underscoring the transitionary nature of this release.
In a blink-and-you'll-miss-it moment, AMD quietly dropped distribution and support for the company's StoreMI software at the start of this month. The technology, launched back in 2018, was AMD's answer to Apple's Fusion Drive and other hybrid drive programs that allow an SSD and an HDD to be merged into a single logical volume. However it looks like AMD has decided to take a different direction with their hybrid drive efforts, as the company has dropped the software in favor of another program that's expected to be launched this quarter.
In a product change advisory published to their website last month (but only noticed recently), AMD announced that they would be halting the distribution of and support for the StoreMI software. The software itself will continue to work, but starting March 31st AMD is no longer providing the means for any new installations, nor is the company providing support.
A relatively clean break like this is rather uncommon for most CPU vendor software, but given what we know about StoreMI, it's not too surprising. StoreMI came out of an existing relationship between AMD and Enmotus, a software developer who had already created their similar FuzeDrive software that AMD was, for a time, recommending for use with their systems. So while it's ultimately an internal matter for AMD, it looks like the company has decided to wrap up their relationship with Enmotus – which would mean that AMD would no longer have the rights to distribute the software.
In its place, the PCA reveals that the company is "focus[ing] its internal development resources on a replacement solution," which is set to be released this quarter. The fact that AMD is explicitly noting the use of "internal" resources, in turn, strongly suggests that whatever the company is working on, it's an in-house solution rather than a licensed solution like StoreMI. Which means AMD has presumably started from scratch here, but it would also be a lot cleaner with respect to ownership and all the associated issues that come with it (StoreMI famously only allowed SSD partitions up to 256GB, in order to not undermine Enmotus's commercial software).
At any rate, barring any delays, we should be seeing the fruits of AMD’s software labors in the next couple of months.
Back in November last year, we reported that SK Hynix had developed and deployed its first DDR5 DRAM. Fast forward to the present, and we also know SK Hynix has recently been working on its DDR5-6400 DRAM, but today the company has showcased that it has plans to offer up to DDR5-8400, with on-die ECC, and an operating voltage of just 1.1 Volts.
With CPU core counts rising amid the fierce battle between Intel and AMD in the desktop, professional, and now mobile markets, the demand for higher throughput performance is high on the agenda. Memory bandwidth by comparison has not been increasing as much, and at some point the beast needs to be fed. Announcing more technical details on its official website, SK Hynix has been working diligently on perfecting its DDR5 chips, with capacities of up to 64 Gb per chip.
SK Hynix had previously been working on its DDR5-6400 DRAM, a 16 Gb design organized into 32 banks across 8 bank groups, offering double the available bandwidth and access potential compared with DDR4-3200 memory; for reference, DDR4 uses 16 banks in 4 bank groups. A key lever for improving access throughput is the burst length, which has been doubled to 16 from DDR4's 8. Another element to consider is that DDR4 can't run operations on a bank while it's refreshing. DDR5 adds a same-bank refresh function (SBRF), which allows the system to use other banks while one is refreshing, in theory improving memory access availability.
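To put the doubled data rate in concrete terms, peak per-module bandwidth scales linearly with the transfer rate. A hedged back-of-envelope sketch, assuming the standard 64-bit DIMM data path:

```python
# Peak per-DIMM bandwidth from the transfer rate,
# assuming a conventional 64-bit (8-byte) data path.
def dimm_bandwidth_gbs(megatransfers_per_s, bus_bits=64):
    return megatransfers_per_s * (bus_bits / 8) / 1000  # GB/s

ddr4 = dimm_bandwidth_gbs(3200)  # DDR4-3200
ddr5 = dimm_bandwidth_gbs(6400)  # DDR5-6400
print(ddr4, ddr5)                # 25.6 vs 51.2 GB/s
```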
As we've already mentioned, SK Hynix already has DDR5-6400 in its sights, built upon its second-generation 10nm-class fabrication node, and the company has now stated that it plans to develop speeds of up to DDR5-8400. Similar in methodology to the DDR5-6400 DRAM, DDR5-8400 likewise uses 32 banks split into 8 bank groups, though the much higher data rate requires considerably more forethought in its design.
Not content with just increasing overall memory bandwidth and access performance over DDR4, the new DDR5 will run at an operating voltage of 1.1 V. This marks a roughly 8% reduction versus DDR4's 1.2 V, designed to make DDR5 more power-efficient, with SK Hynix reporting that it aims to reduce power consumption per unit of bandwidth by over 20% compared to DDR4.
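As a rough illustration of where the efficiency comes from, dynamic power scales approximately with the square of the supply voltage, so the 1.2 V to 1.1 V drop alone accounts for a meaningful chunk of SK Hynix's >20% per-bandwidth target, with the rest coming from architectural changes and the higher data rates:

```python
# Hedged first-order estimate: dynamic power ~ V^2.
v_ddr4, v_ddr5 = 1.2, 1.1

voltage_drop_pct = (1 - v_ddr5 / v_ddr4) * 100
dynamic_power_drop_pct = (1 - (v_ddr5 / v_ddr4) ** 2) * 100

print(round(voltage_drop_pct, 1))        # 8.3 (% lower voltage)
print(round(dynamic_power_drop_pct, 1))  # 16.0 (% lower dynamic power)
```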
To improve performance and increase reliability in server scenarios, DDR5-8400 will use on-die ECC (Error Correction Code) and ECS (Error Check and Scrub), which is a milestone in the production of DDR5. This is expected to reduce overall costs, with ECS recording any defects present and sending the error count to the host. This is designed to improve transparency, with the aim of providing enhanced reliability and serviceability within a server system. Also integrated into the design of the DDR5-8400 DRAM is Decision Feedback Equalization (DFE), which is designed to eliminate reflective noise when running at high speeds; SK Hynix notes that this increases the speed per pin by a large amount.
In the specification comparison between DDR4 and DDR5 from SK Hynix above, one interesting thing to note is that it mentions DRAM chips with densities of up to 64 gigabit. We already know that the chip size of DDR5 is 65.22 mm², with a data rate of 6.4 Gbps per pin, built on the company's 1y-nm 4-metal DRAM manufacturing process. It is worth pointing out that the DDR5-5200 RDIMM we reported on back on November 18 uses 16 Gb DRAM chips, with a further path to 32 Gb reported. SK Hynix aims to double that again to 64 Gb chips, doubling the density at the lower 1.1 V operating voltage.
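For context on what those data rates mean in practice, peak theoretical bandwidth per standard 64-bit DIMM is simply the transfer rate multiplied by the bus width (a textbook calculation, not a vendor figure):

```python
# Peak theoretical bandwidth of one 64-bit DIMM: transfers/s x bytes/transfer.
def peak_bw_gb_s(mt_per_s: int, bus_width_bits: int = 64) -> float:
    return mt_per_s * bus_width_bits / 8 / 1000

print(peak_bw_gb_s(3200))  # DDR4-3200 -> 25.6 GB/s
print(peak_bw_gb_s(6400))  # DDR5-6400 -> 51.2 GB/s
print(peak_bw_gb_s(8400))  # DDR5-8400 -> 67.2 GB/s
```

So DDR5-8400 would offer over 2.6x the peak bandwidth of today's DDR4-3200, per module.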
Sungsoo Ryu, Head of DRAM Product Planning at SK Hynix, stated that:
"In the 4th Industrial Revolution, which is represented by 5G, autonomous vehicle, AI, augmented reality (AR), virtual reality (VR), big data, and other applications, DDR5 DRAM can be utilized for next-gen high-performance computing and AI-based data analysis".
SK Hynix, if still on schedule amid the current COVID-19 pandemic, looks set to enter mass production of DDR5 later this year.
As many of us are stuck at home these days and are quickly going mad, a couple of weeks ago we kicked off a race of sorts with our loyal opposition, Tom’s Hardware. Challenging each other to put an end to the very thing that’s keeping us at home – the novel coronavirus SARS-CoV-2 – we have been racing to see which team can contribute the most work towards the Folding@Home project’s coronavirus distributed computing research efforts. The popular project has already passed an exaFLOPS of compute performance thanks to Team AnandTech, Tom’s Hardware, and numerous other contributors around the world, and there is still much work to be done for its important research tasks.
Meanwhile, as we’re now just past the half-way point in our four-week race, I wanted to stop and take stock of things. To see how the humble Team AnandTech was faring against the boastful brutes that are the Tom’s Hardware team. And after two weeks, it looks like things are coming up great for Team AnandTech.
Since the race started on March 18th, Team AnandTech has generated 2.45 billion points in work for the Folding@Home project. In the same time period, the Tom’s Hardware team has generated a sizable, but not quite as massive 2 billion points of work. This has put Team AnandTech 445 million points ahead of Tom’s Hardware, or to put this in terms of the ongoing rate, Team AnandTech has been turning in 1.2 points’ worth of work for every point that Tom’s Hardware turns in. Which in the big picture, is actually a rather close race.
As such, with two weeks to go, this race is far from over. Our loyal competition could still turn things around, and so Team AnandTech cannot rest on its laurels. That means we still need you! Both to help Team AnandTech cross the finish line, and to hopefully get out of our homes just that much sooner.
So please stop by the AnandTech Distributed Computing forum to see how you can download the Folding@Home client and join Team AnandTech.
Ultimately this race is for fun, but it’s also for a good cause. The SARS-CoV-2 virus is a world-changing event, and, along with the immediate medical risks of the virus, the containment measures it requires are intense. The Folding@Home project is working on several simulations to improve humanity’s understanding of the virus and the disease it causes, with a goal of jump-starting new treatments and bringing the virus under control. It’s a worthy cause, and as a result I’d like to encourage everyone to take part in what’s left of our race over the next two weeks.
Carousel Image Courtesy of: CDC/Alissa Eckert, MS
Along with many other OEMs in the notebook segment at the moment, Razer has joined in the fray with the launch of two new models of its Blade 15 series of gaming notebooks. Building upon Intel's newly announced 10th generation Comet Lake-H processors, both models also include options for using NVIDIA's new RTX Super mobile GPUs.
Starting off with the new flagship Blade 15 Advanced model, Razer claims it to be the world's smallest laptop with a 15.6" screen, with a weight of just 2.2 kg. Included in the Advanced model is the new Intel Core i7-10875H eight-core Comet Lake-H processor, with a max turbo of up to 5.1 GHz and a base clock of 2.3 GHz. Some of the core features include Intel Thunderbolt 3 Type-C and a USB 3.1 G2 Type-C port supporting USB-C 20 V PD 3.0 charging capabilities. Powering the laptop is a built-in 80 Wh rechargeable lithium-ion polymer battery, with a compact 230 W power adapter supplied with both models.
The Advanced model is available with a choice between a 300 Hz Full HD TFT LCD for hardcore gamers, and a more creator-focused OLED 100% DCI-P3 4K touch panel with a 1 ms response time. Powering the display are NVIDIA's current lineup of notebook GPUs, with the top option being the GeForce RTX 2080 Super with 8 GB of GDDR6 memory. As for storage, Razer has equipped the Blade 15 with PCIe 3.0 NVMe SSDs, with capacities up to 1 TB. Keeping the components cool in the Advanced model is a vapor chamber design, while the base model uses a standard heat pipe design.
Meanwhile the base model comes equipped with the six-core Intel Core i7-10750H processor, while the GPU choice goes up to the NVIDIA GeForce RTX 2070, also using Optimus. Also available with two display types, the base model can come with either a 144 Hz Full HD display with a matte screen or with an OLED 100% DCI-P3 panel. Providing power is a slightly lower-spec 65 Wh polymer battery, with Intel Thunderbolt 3 Type-C, an HDMI 2.0b video output, and dual USB 3.1 G1 Type-A ports.
Both models come finished with a black frame with a backlit green Razer logo and are equipped with 16 GB of dual-channel DDR4-2933 memory, benefit from an Intel AX201 Wi-Fi 6 adapter with BT 5.0 support, and include a precision glass touchpad.
Neither variant of the Razer Blade 15 is cheap, with the base model starting at $1,600, while the Advanced model begins at $2,600. Both models look set to be available in retail channels in May.
It’s been a long couple of weeks, but the wait is now finally over. Today we’re ready to go on a deep dive into Samsung’s most important phones of 2020; the new Galaxy S20 series represents a huge jump for the Korean company, and also for the wider smartphone industry. The new devices have a lot of features premiering in mainstream flagship devices, and some cutting-edge capabilities that are outright new to the industry as a whole.
The S20 series are probably best defined by their picture capturing capabilities, offering a slew of new camera hardware that represents Samsung’s most ambitious smartphone camera update ever. From a “periscope” design telephoto lens with 4x optical magnification and up to a quoted 100x digital magnification, to a new and humongous 108MP main camera sensor with a brand-new pixel array setup, the new Galaxy S20 Ultra is definitely an exotic device when it comes to its photography features. The new Galaxy S20+ also sees some massive new upgrades, ranging from a new, larger main camera sensor, to the innovative use of a 64MP wide-angle module that allows for high magnification hybrid crop-zooming. Overall it too is a big step-up in the camera department and certainly shouldn’t be overshadowed by its Ultra sibling. The phones are not only the first smartphones able to capture 8K video – but they’re also amongst the first consumer grade hardware out on the market with the capability, which is certainly an eye-catching feature.
The new S20 series are also among the first devices to come with the latest generation of processors on the market, pioneering the usage of the new Snapdragon 865 as well as the new Exynos 990 SoCs. In recent years, it’s always been a contentious topic for Samsung’s flagship phones as the company continues to dual-source the SoCs powering its devices – with some years the differences between the two variants being larger than one would hope for. We have both chipset variants of the Galaxy S20 Ultra as well as an Exynos variant of the S20+ for today’s review, and we’ll be uncovering all the differences between the models.
AMD’s FreeSync Premium Pro certification promises quite a lot when it comes to features and quality, but unfortunately there are fewer than a dozen such displays available on the market today. Thankfully, that market will be getting one more entry courtesy of ASUS, who recently announced its second FreeSync Premium Pro monitor, the ROG Strix XG27WQ. Touting support for superior capabilities, the 27-inch monitor is one of the most feature-packed FreeSync Premium Pro monitors to date, and it promises to be less expensive than some of its larger rivals.
The ASUS ROG Strix XG27WQ monitor relies on a curved 27-inch VA panel with a 2560×1440 resolution. Altogether, the monitor offers a peak brightness of 450 nits, a 3000:1 contrast ratio, 178°/178° horizontal/vertical viewing angles, a 1 ms MPRT response time, and a 165 Hz maximum refresh rate. The LCD offers one DisplayPort 1.2 input and two HDMI 2.0 ports to connect to its host, and also has a dual-port USB 3.0 hub along with a headphone output.
AMD mandates that FreeSync Premium Pro (previously FreeSync 2) monitors support a wide variable refresh rate range (48 – 144 Hz, or 48 – 165 Hz in the case of the XG27WQ), feature Low Framerate Compensation, be capable of low-latency tone mapping to the monitor’s native color space, meet HDR brightness and contrast requirements roughly equivalent to DisplayHDR 500, and reproduce at least 90% of the DCI-P3 color gamut (92% in the ROG's case). The capabilities of the ASUS ROG Strix XG27WQ monitor actually exceed AMD’s requirements, which makes it a rather potent choice for gamers.
In addition to VESA’s Adaptive-Sync/AMD’s FreeSync VRR, the display also supports ASUS’s Extreme Low Motion Blur (ELMB) that makes fast-paced scenes look sharper even when a variable refresh rate technology is enabled. The ROG Strix XG27WQ also supports a variety of genre-specific game modes, ASUS's Shadow Boost feature to make dark scenes look brighter, and enhancements like crosshair overlay for easier targeting in FPS titles.
Since we are dealing with an ASUS ROG-branded monitor, the Strix XG27WQ not only features a stand that can adjust height, tilt, and swivel, but also one that has Aura Sync addressable RGB lighting as well as a projector that casts a logotype onto the table below.
| | The ASUS ROG Strix XG27WQ |
|---|---|
| Native Resolution | 2560 × 1440 |
| Maximum Refresh Rate | 165 Hz |
| Response Time | 1 ms MPRT |
| Brightness | 450 cd/m² (peak) |
| Viewing Angles | 178°/178° horizontal/vertical |
| Color Gamut | 125% sRGB/BT.709 |
| Dynamic Refresh Rate Tech | AMD FreeSync Premium Pro (DisplayPort: 48 - 165 Hz, HDMI: 48 - 144 Hz) |
| Pixel Pitch | 0.2331 mm |
| Pixel Density | 108 PPI |
| Inputs | 1 × DisplayPort 1.2, 2 × HDMI 2.0 |
| Audio | 3.5 mm output |
| USB Hub | 2 × USB 3.0 Type-A connectors, 1 × USB 3.0 Type-B input |
| Stand | Swivel: -50° ~ +50°, Tilt: -5° ~ +20°, Height: 100 mm |
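For the curious, the pixel density and pitch figures in the table follow directly from the panel's resolution and 27-inch diagonal; a quick sketch of the geometry (plain Pythagorean math, not ASUS's own calculation, so the result lands within rounding distance of the quoted figures):

```python
import math

# Pixel density = diagonal pixel count / diagonal size in inches.
def pixel_density_ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    return math.hypot(h_px, v_px) / diagonal_in

ppi = pixel_density_ppi(2560, 1440, 27)
pitch_mm = 25.4 / ppi  # one pixel's linear size in millimetres
print(round(ppi), round(pitch_mm, 4))  # ~109 PPI, ~0.2335 mm
```

Small deviations from the quoted 108 PPI / 0.2331 mm come down to the panel's exact active-area diagonal versus the nominal 27 inches.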
Finally, it's worth keeping in mind that ASUS sometimes formally introduces its products well ahead of their actual release date. As things currently stand, the company has not revealed anything about an actual launch date or pricing for the ROG Strix XG27WQ, so it remains to be seen when the monitor will actually hit the streets.
Alongside this morning’s launch of their new laptop SKUs, NVIDIA is also rolling out a couple of new technologies aimed at high-end laptops. Being placed under their Max-Q banner, the company is unveiling new features to better manage laptop TDP allocations, and for the first time, the ability to have G-Sync in an Optimus-enabled laptop. These new technologies are separate from the new hardware SKUs being launched today – they can technically be built into any future GeForce laptop – so I wanted to touch upon them separately from the hardware itself.
With this week marking the launch of AMD’s Ryzen Mobile 4000 APUs and Intel’s Comet Lake-H mobile CPUs, this week is essentially the kick-off point for the next generation of laptops. OEMs and vendors across the spectrum are gearing up to roll out new and updated laptops based on the latest silicon, as they set themselves up for the next year or so of laptop sales.
Not one to be left out, NVIDIA is also using this week’s launches to roll out some new laptop graphics adapters, which partners will be pairing with those new Ryzen and Core processors. The company is also unveiling a rather important set of additions to their laptop technology portfolio, introducing new features to better manage laptop TDP allocations, and for the first time, the ability to have G-Sync in an Optimus-enabled laptop. Overall while this week is primarily focused on AMD and Intel, NVIDIA is making sure that they are giving partners (and consumers) something new for this generation of laptops.
First and foremost, NVIDIA is launching two new mobile graphics adapters this morning. The GeForce RTX 2080 Super and RTX 2070 Super, both of which were launched on the desktop last summer, are now coming to laptops. Like their desktop counterparts, the new adapters are based on NVIDIA’s existing TU104 silicon, so there aren’t any new GPUs to speak of today, but their launch gives OEMs additional options for dGPUs for their high-end gaming laptops.
As has been the case for NVIDIA throughout this generation, while the company doesn’t have distinct, mobile-labeled SKUs, the new laptop parts do have their own set of specifications. Specifically, while the mobile parts have the same CUDA core counts and memory support as their desktop brethren, they have different clockspeed and TDP profiles, owing to the limitations of the laptop form factor. All told, the new Super parts are designed for 80W+ laptops, with the flagship RTX 2080 Super approved for 150W (or more) designs, giving vendors the option to push the adapter about as hard as they think they can get away with in the luggable desktop-replacement machines that populate the ultra-high-powered end of the laptop market.
Otherwise, these are fairly typical GeForce RTX SKUs. Boost clocks will range from 1080MHz to 1560MHz, depending on what laptop vendors opt for in terms of power and performance. The RTX 2080 Super will have a fully-enabled, 3072 CUDA core TU104 GPU, while the RTX 2070 Super gets a 2560 core version of the same GPU.
Meanwhile, memory is the only other notable change here: while both adapters come with 8GB of GDDR6 memory, unlike the desktop RTX 2080 Super, the mobile version won’t come with 15.5Gbps GDDR6. Instead, it ships with 14Gbps memory like the rest of the RTX lineup. Overclocked VRAM is rather expensive in terms of power, so it’s not too surprising to see NVIDIA drop it here.
NVIDIA is also using this opportunity to roll out some smaller hardware updates to its laptop portfolio. On the memory front once more, the company has confirmed for the first time that it has been working with memory vendors on low voltage GDDR6 memory. Unfortunately the details here are slim – it’s not clear whether the low voltage RAM NVIDIA is using is any different than the 1.25v GDDR6 already offered by memory suppliers – but even 1.25v would be a notable decrease over normal 1.35v memory. NVIDIA pegs VRAM power consumption at around 20 to 25 watts for their laptop solutions, so being able to shave off even 10% of that is a couple more watts that can be shifted over to the GPU itself for more performance.
And keeping with the power efficiency theme, NVIDIA tells us that they’ve also been working with partners to get better VRMs in laptops. This is another area where details are quite slim, but VRMs have been an ongoing focus area for the company. Voltage regulation is a game of efficiency – any power you lose is waste heat that eats into a laptop’s thermal budget – so the goal is always to maximize efficiency. Coupled with NVIDIA’s new Dynamic Boost technology, the need for more efficient VRMs (particularly high wattage solutions) is at an all-time high.
Alongside their new high-end hardware, NVIDIA is also launching a pair of new low-end SKUs for the mobile space. These are the GeForce GTX 1650 Ti, and a GDDR6 version of the GTX 1650.
The GTX 1650 Ti is a particularly interesting matter, as it has no desktop counterpart. Up until now, NVIDIA has been launching desktop parts first, and then having laptop parts launch in-concert with the desktop parts, or at a later time entirely. But for the GTX 1650 Ti, we have a purely mobile part, at least for the time being.
The hardware itself shouldn’t be too much of a surprise. Here NVIDIA is reusing its TU117 GPU, which is the same GPU that powered the original mobile GTX 1650. The big change here is that the Ti SKU gets much better definition: whereas the regular GTX 1650 has “up to” 1024 CUDA cores and comes with a couple of different memory types, the GTX 1650 Ti is guaranteed to have 1024 CUDA cores as well as GDDR6 memory. Coupled with a slightly higher maximum TDP of 55W, and it should deliver better performance. Though it’s still going to leave a noticeable gap between this fully-enabled TU117 part and the next part up in the stack, the TU116-based mobile GTX 1660 Ti.
Joining the GTX 1650 Ti will be another GTX 1650 SKU, the GTX 1650 with GDDR6. As alluded to in the name, this is a mobile GTX 1650 with GDDR6 memory instead of GDDR5. NVIDIA isn’t outlining any performance figures for the new part, so performance expectations will have to be left up to the reader’s imagination, but at otherwise equivalent specifications, this would be a 50% bump in memory bandwidth, from 8Gbps GDDR5 to 12Gbps GDDR6.
However it’s going to be up to laptop vendors to decide what GTX 1650 configuration they’re using, as well as how to disclose it. The GDDR6 version isn’t getting its own canonical SKU name, so a laptop with it could have anything from an 896 core model with GDDR5 to a 1024 core model with GDDR6. Ultimately the minimum configuration hasn’t changed, but laptop OEMs now have another option for a slightly more powerful configuration. Or one could go with the GTX 1650 Ti and skip the uncertainty entirely.
With the addition of the new RTX 2080 Super, RTX 2070 Super, GTX 1650 Ti, and GTX 1650 (GDDR6) adapters to its portfolio, NVIDIA is using this week’s launch to rebalance the entire laptop product stack. As a result, some products are being discontinued, and others are being pushed down in price to fill spots previously covered by other parts.
First and foremost, like the desktop realm, the regular RTX 2080 is now gone from laptops as well. With the RTX 2080 Super taking up the flagship spot – and not being massively different from the original RTX 2080 – NVIDIA has excised the original entirely. The RTX 2070 Super is instead NVIDIA’s second-tier adapter for laptops.
The RTX 2070, on the other hand, is still staying around. Instead, it’s getting pushed down the product stack to the third-tier position. NVIDIA now expects RTX 2070 to start showing up in laptops as cheap as $1199.
The RTX 2060 is also along for the ride. And this one is a particularly notable shift, as the RTX 2060 will now be NVIDIA’s anchor SKU for $999 laptops. This spot was previously held by the GTX 1660 Ti, and while NVIDIA does not explicitly discuss laptop part pricing, reading between the lines it’s clear that the company has cut laptop adapter prices to make this new product stack happen. So, as NVIDIA likes to promote, RTX laptops now start at $999.
In fact of all the new mobile SKUs being launched today, the now lower-priced RTX 2060 is definitely getting the greatest focus from NVIDIA. The company’s OEM partners are announcing 5 new/updated laptops with the part, and the promise of more to come. As in the desktop space, NVIDIA is eager to dislodge its own legacy parts and entice gamers to upgrade to a laptop with a newer GeForce SKU, and while NVIDIA is certainly delivering the goods there, their case isn’t being helped by the relatively stagnant Intel. Thankfully AMD’s new Zen 2-based APUs have just launched, and while the market isn’t going to shift overnight, it gives the Green Team some new performance opportunities with the Black Team (or is that ex-Green Team?).
Finally, the new and updated GeForce GTX 1650 SKUs will be fleshing out the low-end of the NVIDIA laptop product stack. The Pascal-based GTX 1050, the last GeForce GTX-branded holdover from the previous generation, is now on its way out. In its place, the GTX 1650 is being shifted down to take over. GTX 1650 laptops, in turn, will be hitting the market for as little as $699. In between that and the RTX 2060 will be the GTX 1660 Ti, as well as the new GTX 1650 Ti. And below $699 we’ll see the usual mishmash of last-generation laptops, as well as NVIDIA’s entry-level, non-GTX laptop parts, the GeForce MX3xx series.
Wrapping things up, as with this week’s laptop CPU launches, laptops featuring the new and updated GeForce SKUs are set to hit the market shortly. While the ongoing coronavirus pandemic has thrown a spanner into exact release dates, AMD Ryzen Mobile 4000 laptops are already shipping, and Intel Comet Lake-H laptops should be shipping soon. Accordingly, we’re already seeing ASUS Ryzen laptops shipping with GeForce dGPUs, while Comet Lake-H laptops with the new parts should hit the market in a couple of weeks.
To coincide with today’s launch of both the latest 10th generation Intel Core H-Series parts, as well as NVIDIA’s launch of their new RTX Super laptop GPUs, MSI is announcing a trio of new models to cover a wide-spectrum of the market, with two gaming-focused models in the GS66 Stealth and GE66 Raider, as well as the content-creator focused Creator 17.
MSI 10th Gen Intel Core Launch Lineup:

| | GS66 Stealth | GE66 Raider | Creator 17 |
|---|---|---|---|
| DRAM | 64 GB Max | 64 GB Max | 64 GB Max |
| GPU | NVIDIA RTX 2060 6GB, RTX 2070 Max-Q 8GB, RTX 2070 Super Max-Q 8GB, or RTX 2080 Super Max-Q 8GB | NVIDIA RTX 2070 8GB, RTX 2070 Super 8GB, or RTX 2080 Super Max-Q 8GB | NVIDIA RTX 2060 6GB, RTX 2070 Max-Q 8GB, RTX 2070 Super Max-Q, or RTX 2080 Super Max-Q |
| Display | 15.6-inch 1920x1080: 144 Hz sRGB IPS-Level, 240 Hz, or 300 Hz | 15.6-inch 1920x1080: 240 Hz or 300 Hz | 17.3-inch: 1920x1080 thin-bezel IPS-level 144 Hz sRGB, or 3840x2160 HDR1000 mini LED 60 Hz P3 |
| Storage | 512 GB - 1 TB NVMe | 512 GB - 1 TB NVMe | 512 GB - 2 TB NVMe |
| Networking | Intel AX201 Wi-Fi 6, Killer E3100 Ethernet | Killer AX1650 Wi-Fi 6, Killer E3100 Ethernet | Intel 9560 Wi-Fi 5, Intel I225 Ethernet |
| Power Adapter | 180-230 W slim adapter | 230 W slim adapter | 230 W slim adapter |
| Ports | Thunderbolt 3 x 1, USB 3.2 Gen 2 x 3 | USB Type-C Gen 2 x 1, USB 3.2 Gen 1 x 2, USB 3.2 Gen 2 x 1, SD card reader, S/PDIF (ESS Sabre HiFi) | Thunderbolt 3 x 1, USB 3.2 Gen 2 x 3 |
| Keyboard / Audio | SteelSeries per-key RGB keyboard, Dynaudio 2W x 2 speakers | SteelSeries per-key RGB keyboard, Dynaudio speakers with passive radiator x 2 | White-backlit keyboard (84-key) |
| Dimensions | 14.17 x 9.65 x 0.71 inches | 14.09 x 10.51 x 0.92 inches | 15.59 x 10.21 x 0.8 inches |
| Weight | 4.63 lbs | 5.25 lbs | 5.29-5.51 lbs |
| Available | April 15 | April 15 | April 15 |
Gaming laptops tend to be flashy affairs, and there is certainly a segment of the market that would prefer the same performance and capabilities, but with a more understated look. Meet the MSI GS66 Stealth. Featuring a sandblasted finish, the all-black GS66 Stealth features a design which does its name proud. This is the ultimate sleeper from MSI. At 4.63 lbs and 0.71-inches thick, the 15.6-inch laptop is also very portable, and despite the small size, MSI has crammed in a 99.9 Wh battery, which is the largest allowed as carry-on in an airplane. The chassis still features the SteelSeries per-key RGB keyboard, which is one of the best in the gaming market, so you can still turn on a bit of bling if you are in the mood.
The GS66 Stealth offers up to a Core i9-10980HK and up to 32 GB of DDR4, expandable to 64 GB. On the GPU side MSI has tapped the brand new NVIDIA GeForce RTX Super lineup as options, with the RTX 2080 Super Max-Q at the top end, RTX 2070 Super Max-Q, RTX 2070 Max-Q, or RTX 2060 options as well. Due to the thin and light design, Max-Q is a necessity despite the new Cooler Boost Trinity+ system which MSI has designed with 0.1 mm fan blades.
MSI has also stepped up to the new 300 Hz display territory with the GS66, although the base model offers “just” 144 Hz, and the mid-tier features a 240 Hz 1920x1080 IPS-Level display.
Rounding out the features, MSI offers NVMe storage up to 1 TB, Wi-Fi 6 thanks to the Intel AX201, Ethernet featuring the Killer E3100, USB Type-C with Thunderbolt 3, and three USB 3.2 Gen2 ports.
The new GS66 Stealth is available for pre-order today starting at $1599, and will be shipping on April 15th.
If the Stealth was too laid back in the styling department to suit your tastes, don’t worry. MSI has you covered. The new GE66 Raider is a larger, heavier, and flashier version of the GS66 Stealth. The cool-touch aluminum chassis features the MSI Mystic Light panoramic RGB light bar on the front, offering 16.7 million colors. The bottom of the laptop showcases dragon armor carving with hexagons, offering more grip and style, and MSI has tweaked the GE66 Raider’s hinge as well to make it more durable.
If you like really flashy laptops, MSI will be offering a Star Wars themed “Dragonshield Edition”, designed in-part with Industrial Light and Magic veteran Colie Wertz. This theming isn’t just skin deep either. The design is actually laser etched into the laptop, providing contrast and texture.
The GE66 Raider offers much of the same internal offerings as the GS66 Stealth, with up to a Core i9-10980HK and up to 32 GB of RAM, but thanks to the chassis being a bit thicker (0.92” vs 0.71”) and slightly heavier (5.25 lbs vs 4.63 lbs), MSI was able to skip Max-Q on the RTX 2070 and RTX 2070 Super, although the top-end offering still needs Max-Q thermals for the RTX 2080 Super Max-Q.
The 15.6-inch laptop offers either a 240 Hz 1920x1080 display, or a 300 Hz panel on the higher-tier models.
This laptop offers the Killer Double Shot feature, with the Killer E3100 Gigabit Ethernet coupled with the Intel-based Killer AX1650 wireless, and although it offers USB Type-C, unlike the Stealth there is no Thunderbolt 3 support. This laptop also ships with the same 99.9 Wh battery, so despite the powerful internals, battery life should be reasonable.
The GE66 Raider will be available on April 15th starting at $1799.
MSI has seen a large growth segment in the creator market, and has found that many content creators have been purchasing their gaming laptops to get access to the more-powerful CPUs and beefy GPUs that gaming laptops offer. The company has started to offer models targeted at this crowd now, and their latest model is the Creator 17, which features the first Mini LED display in a laptop.
The 17-inch Creator 17 offers a 144 Hz 1920x1080 IPS panel offering sRGB on the base models, but the top models step up to a 3840x2160 resolution mini LED display, offering P3 color gamut, and HDR1000. The mini LED display offers 240 zones of local dimming, 1000 nits brightness, and a 100,000:1 contrast ratio thanks to the new backlighting. The laptop somewhat surprisingly offers user-choice of both the DCI-P3 as well as the P3 D65 color space. The majority of devices marketed as DCI-P3 are actually P3 D65, whereas the DCI-P3 color space is the one used in digital cinema, so offering both on a device like this is a smart move. In addition to the True Color gamut selection, the laptop will feature per-unit factory calibration which is verified by CalMAN – the same software we leverage for our laptop reviews.
On the CPU side, MSI is only offering the Core i7-10875H, which is an eight-core, sixteen-thread processor which can turbo up to 5.1 GHz. This should offer plenty of muscle for most content tasks, and for GPU-accelerated workflows, MSI will offer the NVIDIA GeForce RTX 2060, RTX 2070 Max-Q, RTX 2070 Super Max-Q, and RTX 2080 Super Max-Q, so you can pick your performance level depending on your GPU needs. The base model ships with 16 GB of DDR4, and MSI offers 32 GB on the higher-tier units, and all models support up to 64 GB.
Creators need storage. MSI is shipping up to 2 TB of NVMe storage, along with micro SD, and for external storage there is a Thunderbolt 3 port. The Thunderbolt port can also be used to charge the laptop in a pinch, and provides 27-Watts of power for charging external devices.
Despite the impressive performance inside, the Creator 17 still comes in at a starting weight of 5.29 lbs, although the mini LED model adds another 0.22 lbs to the total, and the laptop is just 0.8” thick. For a 17-inch laptop, that is quite reasonable.
The MSI Creator 17 will be available on April 15th starting at $1799.
Lenovo is announcing some updated products today featuring the new 10th generation Intel Core H-Series processors and NVIDIA RTX Super mobile GPUs, and is taking advantage of the new NVIDIA Advanced Optimus as well, allowing better battery life while still providing G-SYNC.
The Lenovo Legion 7i and Legion 5i are replacing the Legion Y740 and Y540 models, with the 7i being a 17-inch gaming laptop, and the 5i being a 15-inch version. Both will feature the new NVIDIA Advanced Optimus, which means they will offer G-SYNC on their displays, but be able to switch off the dGPU for battery savings when needed. For those unfamiliar, one of the drawbacks of G-SYNC previously was that it required the dGPU to be directly connected to the display, which removed the capability of using NVIDIA’s Optimus to leverage the iGPU for light-duty tasks to save power. Some manufacturers worked around this by offering a multiplexer, but the added complexity and cost, coupled with the fact that the user would need to reboot the laptop to turn it on or off, meant it was a useful, but niche solution. Lenovo will be one of the first to offer the new dynamic switching of Advanced Optimus which no longer has the reboot requirement, so we should hopefully see more laptops offering this along with G-SYNC.
Both laptops will offer 10th generation Intel Core H-Series, meaning the 45-Watt processors, but Lenovo hasn’t indicated what exact models they will be offering. On the GPU side, the 15-inch Legion 5i will have up to a NVIDIA RTX 2060, and the larger 17-inch Legion 7i will go all the way up to the RTX 2080 Super Max-Q.
Although details are a bit light at the moment, Lenovo is coming in with some very reasonable pricing for the new laptops which will be coming out later this year. The Legion 5i with RTX 2060 will start at just $999, and the Legion 7i with RTX 2070 starts at $1199.
Two of the big announcements out of CES this year were both mobile related: Intel and AMD announced they would be launching new gaming laptop processors into the market in the first half of this year. 45 W parts, also known as H-series in the business, provide the basis for productivity and gaming notebooks that use additional graphics to give some oomph. These systems span from thin and light with GPU requirements, through ‘luggables’ that are just about portable, all the way up to desktop replacement designs. Intel’s newest 10th Gen H-Series are based on the Comet Lake family, the fifth iteration of Intel’s 14nm Skylake designs, and they’re going all the way up to 5.3 GHz*.
With the announcement of the latest Intel Core H-Series and NVIDIA’s RTX Super lineup, Acer is announcing a refresh today of a couple of their gaming laptop models. Both make the jump to the 10th generation Intel Core lineup of processors, and the Triton 500 also gets the new RTX Super GPUs.
We got a chance to review this laptop back in 2019, and it offered quite a bit of performance in a very small and light chassis, with some unique features as well. Today Acer is refreshing the lineup with even more performance with the latest CPUs from Intel, and GPUs up to the NVIDIA RTX 2080 Super Max-Q. But Acer has also added a few new features as well, including an optional 300 Hz IPS display, up from 144 Hz last year, and Wi-Fi 6 thanks to the Killer AX1650i. And as a bonus, the new model offers a per-key RGB backlit keyboard, stepping up from the zoned keyboard backlighting last year.
One of the key features of the Predator Triton 500 was its portable design, and luckily Acer hasn’t had to make the device any thicker or heavier. It still weighs just 4.63 lbs and is only 0.7 inches thick, the same dimensions as last year. To help with thermals, Acer has tweaked the cooling with its Vortex Flow design, offering three 4th generation AeroBlade 3D fans with serrated edges, and five heat pipes. Overall, Acer says they are getting 33% better thermal performance than the 2019 model.
The updated Triton 500 will be available in May starting at $2199.99 USD.
We also reviewed the Acer Nitro 5 last year, albeit the AMD-powered model. The Nitro 5 sits all the way at the other end of the spectrum from Acer’s Triton 500, but still offers great performance in a much less expensive design. For 2020, Acer is adding some nice upgrades which should help address some of the shortcomings of the previous model.
On the CPU side, Acer will offer up to a Core i7-10750H, which offers six cores, twelve threads, and up to 5 GHz of frequency. This coupled with the GeForce GTX 1650, 1650 Ti, and RTX 2060, should offer some great gaming performance in this price range. There are two M.2 PCIe slots, as well as a 1 TB HDD offering, and up to 32 GB of DDR4 which is user-replaceable.
One key shortcoming of the 2019 model was the display, but the 2020 model is shipping with two new display panels which will hopefully address the limited color gamut. What it definitely adds is high refresh rates, with both 120 Hz and 144 Hz IPS panels at 1920x1080 resolution, which Acer claims are capable of 3 ms response times and 300 nits of brightness.
Acer has also tweaked the cooling, with a new dual-fan design. There are four heat vents, and overall the new cooling system offers a 25% improvement over the 2019 model, which is not insignificant.
The 2020 version also features the Intel AX201 Wi-Fi 6 network card, and Killer E2600 Ethernet.
The best part of the Nitro 5 is its price, and for 2020 it continues to be one of the easiest ways into a gaming laptop. The new Nitro 5 will be available in May starting at $749.99.
As AMD’s latest Ryzen 3000/X570 platforms with PCIe 4.0 support become more widespread on the market, SSD vendors are continuing to ramp up the releases of their matching PCIe 4.0-based SSDs. Joining the party, KINGMAX, a known maker of components for enthusiasts, has revealed its first PCIe 4.0 SSD family, the PX4480.
The KINGMAX Zeus PX4480 SSDs are based on the Phison PS5016-E16 controller paired with 3D TLC NAND memory, and are available in 500 GB, 1 TB, and 2 TB configurations. A surprising thing about these drives is the fact that unlike most Phison PS5016-E16-based SSDs, KINGMAX’s PX4480 devices are not equipped with a heat sink, but come with a sticker made of a plastic-like material, which improves their physical compatibility, but might affect their performance under high loads.
Speaking of performance, KINGMAX says that the PX4480 drives are rated for up to 5000 MB/s sequential read speeds, up to 4400 MB/s sequential write speeds (when pSLC caching is enabled), and up to 600K/500K random read/write IOPS, which is in line with competing devices that use the same controller.
As far as endurance is concerned, KINGMAX rates its ‘4x4’ SSDs for up to 3600 terabytes to be written (TBW) depending on the exact model. Meanwhile, the drives are backed by a three-year warranty.
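For context, endurance ratings like these are often expressed as drive writes per day (DWPD) over the warranty period. A quick back-of-the-envelope sketch, using the rated capacities and TBW figures together with the three-year warranty (the function name is ours, for illustration):

```python
def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float = 3) -> float:
    # Drive writes per day = total write allowance (TBW) divided by
    # (capacity x number of days in the warranty period).
    return tbw_tb / (capacity_tb * warranty_years * 365)

# Capacities (TB) and TBW ratings for the three PX4480 models
for capacity, tbw in [(0.5, 850), (1, 1800), (2, 3600)]:
    print(f"{capacity} TB model: {dwpd(tbw, capacity):.2f} DWPD")
```

All three models work out to roughly 1.6 full drive writes per day over the warranty period, which is consistent across the lineup.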
**KINGMAX's PX4480 SSDs**

| Capacity | 500 GB | 1 TB | 2 TB |
|----------|--------|------|------|
| Controller | Phison PS5016-E16 (PCIe 4.0 x4) | Phison PS5016-E16 (PCIe 4.0 x4) | Phison PS5016-E16 (PCIe 4.0 x4) |
| NAND Flash | 3D TLC NAND | 3D TLC NAND | 3D TLC NAND |
| Form-Factor, Interface | M.2-2280, PCIe 4.0 x4, NVMe 1.3 | M.2-2280, PCIe 4.0 x4, NVMe 1.3 | M.2-2280, PCIe 4.0 x4, NVMe 1.3 |
| Sequential Read | 5000 MB/s | 5000 MB/s | 5000 MB/s |
| Sequential Write | 2500 MB/s | 4400 MB/s | 4400 MB/s |
| Random Read IOPS | 400K | 600K | 600K |
| Random Write IOPS | 500K | 500K | 500K |
| DRAM Buffer | ? | 1 GB | 2 GB |
| TCG Opal Encryption | No | No | No |
| Power Consumption | 6.3 W | 6.5 W | 7 W |
| MTBF | 1.7 million hours | 1.7 million hours | 1.7 million hours |
| TBW | 850 TB | 1800 TB | 3600 TB |
Considering that the PX4480 SSDs are powered by a widespread controller and the fact that KINGMAX already lists its PX4480 drives on its website, expect them on the market shortly. Prices should be comparable to similar products from competing suppliers.
As part of the company's second quarter financial earnings call, Micron has revealed that it is about to start volume production of its 4th Generation 3D NAND memory devices. Based around the company's new replacement gate (RG) architecture, the memory manufacturer is gearing up to begin production in the current fiscal quarter (Q3'FY20), with commercial shipments set to begin in the fourth quarter. Overall, this will mark the start of a major technology transition for the manufacturer.
As previously detailed by Micron, the company’s 4th Gen 3D NAND features up to 128 active layers and uses replacement gate (RG) technology, which replaces the traditional floating gate technology that has been used by Intel and Micron for years. The switch is a substantial design change, and an important one going forward, as it's at the core of Micron's long-term technology plans. It also happens to be the company’s first flash memory technology in quite some time that has been designed solely by Micron, and not in conjunction with former partner Intel. Micron hopes that switching to gate replacement will enable it to reduce die sizes, lower costs, improve performance, and enable easier transition to next-generation nodes presumably with more active layers.
Micron does not have plans to transition all of its products to its 4th Generation RG-based 3D NAND technology, and it has already warned its investors not to expect a meaningful company-wide cost-per-bit reduction this year as a result of this technology transition. Nonetheless, it is tremendously important to kick off volume production as early as possible, because learning how to produce replacement gate 3D NAND with decent yields is important for Micron’s subsequent generation of 3D NAND, which is projected to be deployed broadly in FY2021 (which starts in late September 2020).
Micron said that it plans to start shipments of its 128-layer replacement gate-based 3D NAND products in the fourth quarter of its FY2020, which means this summer. Meanwhile, Micron has yet to disclose which products it plans to build using this technology.
Sanjay Mehrotra, CEO and president of Micron, said the following:
In NAND, we made significant progress on our replacement gate, or RG, transition and expect to begin volume production in our current quarter, with revenue shipments to follow in our FQ4. We expect replacement gate production to be a meaningful portion of our total NAND supply by the end of this calendar year.
MSI has announced its first display that uses a Fast IPS panel, which boasts a 240 Hz refresh rate. Like many gaming LCDs, the Optix MAG251RX is NVIDIA G-Sync compatible as well as VESA DisplayHDR 400 certified. Meanwhile, unlike most gaming monitors, the new product comes with a USB-C input.
Based on a 24.5-inch 8-bit+FRC IPS panel, the MSI MAG251RX features a 1920×1080 resolution, 400 nits brightness, a 1000:1 contrast ratio, 178°/178° viewing angles, a 1 ms response time, and a maximum refresh rate of 240 Hz. The monitor supports VESA’s Adaptive-Sync variable refresh rate technology and is NVIDIA G-Sync-compatible certified.
The monitor can display 1.07 billion colors and can reproduce 107% of the sRGB as well as 84% of the DCI-P3 color gamut, which is slightly better color reproduction than most other monitors based on a Fast IPS panel. In addition, the LCD is VESA DisplayHDR 400 certified, so it also supports HDR10 transport. Last but not least, the monitor supports various gaming modes as well as the so-called Night Vision technology that enhances dark scenes.
For connectivity, the MSI MAG251RX uses one DisplayPort 1.2a input, two HDMI 2.0 ports, and one USB Type-C port. In addition, the monitor has a triple-port USB 2.0 hub and a headphone output.
One of the advantages of the MSI MAG251RX advertised by the manufacturer is the company’s Gaming OSD App 2.0, which allows users to easily configure display settings using a keyboard and mouse. Also, the app supports hotkey options to quickly switch settings in-between titles.
In a bid to provide users the right viewing angles, the MSI MAG251RX monitor has a stand that can adjust height and tilt. As an added bonus, the backside of the LCD is equipped with addressable RGB LEDs for further customization.
**The MSI Optix MAG251RX 24.5-Inch IPS LCD with 240 Hz Refresh Rate**

| Spec | MSI Optix MAG251RX |
|------|--------------------|
| Panel | 24.5-inch class IPS |
| Native Resolution | 1920 × 1080 |
| Maximum Refresh Rate | 240 Hz |
| Dynamic Refresh Technology | VESA Adaptive-Sync, NVIDIA G-Sync Compatible |
| Viewing Angles | 178°/178° horizontal/vertical |
| Response Time | 1 ms GtG |
| Pixel Pitch | ~0.2825 mm |
| Color Gamut Support | 107% sRGB, 84% DCI-P3 |
| Stand | Height: ±130 mm; Tilt: 5° to 20°; built-in cable management |
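The ~0.2825 mm pixel pitch follows directly from the panel's 24.5-inch diagonal and 1920×1080 resolution. A quick sanity check (the function name is ours, for illustration):

```python
import math

def pixel_pitch_mm(diagonal_inches: float, h_px: int, v_px: int) -> float:
    # Pitch = physical diagonal (converted to mm) divided by the
    # diagonal pixel count of the panel.
    diagonal_px = math.hypot(h_px, v_px)
    return diagonal_inches * 25.4 / diagonal_px

print(round(pixel_pitch_mm(24.5, 1920, 1080), 4))  # 0.2825
```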
MSI’s Optix MAG251RX is now available from retailers like Amazon for $359.99.
Intel has announced that it will be discontinuing some of its Lynx Point based chipsets, which are most commonly associated with its Haswell processors on socket LGA 1150. Along with the long-standing H81 chipset, other Intel Lynx Point chipsets entering the End-of-Life cycle include Q87, C226, QM87, and HM86.
Originally introduced to the market back in 2013, Intel's H81 chipset is the latest casualty of Intel's product discontinuance strategy. The H81 chipset, along with the others entering product discontinuance, is based on 32 nm lithography. The H81 chipset was built for Intel's 4th generation Haswell processors and acted as the budget-conscious version of the Z87 chipset, minus some of its premium features including overclocking support.
Intel states that although its product discontinuance program support began on March 30, 2020, customers will still be able to place orders for the H81, Q87, C226, QM87, and HM86 chipsets until March 31, 2021. The last shipment will be distributed on September 30, 2021, while orders not cancelled before March 31, 2021 will become non-cancelable. The H81 chipset is notably a desktop chipset, while the C226 is from Intel's server portfolio, and the QM87 and HM86 are part of its mobile segment. The QM87 and HM86 chipsets were both expected to enter discontinuance in Q4 2015, but lasted nearly five years longer than anticipated.
Directly related to the above, Intel announced last year that it was resurrecting its previously discontinued Haswell-based Intel Pentium G3420 processor, seemingly due to an increase in customer demand.
Customers looking for a low-cost long term chipset are advised to look towards such chipsets as H310 designed for Intel's Coffee Lake CPUs.
As with any processor vendor, having a detailed list of what the processor does and how to optimize for it is important. Helping programmers plan for what’s coming is also vital. To that end, we often get glimpses of what is coming in future products by keeping track of these updates. Not only does it give detail on the new instructions, but it often verifies code names for products that haven’t ‘officially’ been recognized. Intel’s latest update to its ISA Extensions Reference manual does just this, confirming Alder Lake as a future product, and identifies what new instructions are coming in future platforms. Perhaps the biggest news here is the continuation of BFLOAT16 support, which was originally supposed to be Cooper Lake only (bearing in mind that Cooper Lake will have a limited launch), but will now also be included in the upcoming Sapphire Rapids generation, set for deployment in the Aurora supercomputer in late 2021.
According to a report from Reuters, Samsung Display will cease production of traditional LCD displays by the end of the year. The move comes as the company is apparently turning its full efforts away from traditional liquid crystal displays and towards the company's portfolio of quantum dot technology. Building off of the Reuters report, ZDNet is reporting that Samsung is dropping LCD production entirely – including its quantum dot-enhanced "QLED" LCDs – and that their retooled efforts will focus on QD-enhanced OLED displays. A decision with big ramifications for the traditional LCD market, this means that by the end of the year, the LCD market will be losing one of its bigger (and best-known) manufacturers.
As recently as last year, Samsung Display had two LCD production facilities in South Korea and another two LCD plants in China. Back in October 2019, the company halted production at one of the South Korean factories, and now it plans to suspend production of LCDs at the remaining three facilities due to the low profitability and oversupply of traditional LCDs.
Instead, the company will be turning its attention towards the quantum dot-enhanced OLED displays. A new technology for Samsung, this would be distinct from the company's current QLED displays, which use quantum dots to enhance LCD displays. Samsung previously announced their plans to invest a whopping $11 billion in QD-OLED production, and now those plans are moving one step closer to completion as the company gets ready to wind-down traditional LCD production.
To that end, one of the two South Korean LCD lines will be converted to produce displays and TVs featuring quantum dot-enhanced OLED panels. Samsung Display hopes that their sizable investment will pay off as the new technology promises unprecedented image quality and lower cost compared to regular OLED panels. Meanwhile, Samsung’s longer-term plans include building of two QD-OLED lines, though it's unclear for now whether this will include any of the company's Chinese facilities, or what may happen to those lines once they shut down at the end of the year.
Overall, Samsung is neither the first nor the only LCD panel manufacturer to reduce its production. LG Display has converted at least one of its LCD factories to an OLED facility, whereas Panasonic last year decided to cease LCD manufacturing by 2021.
AKiTiO has introduced a new Thunderbolt 3 eGFX enclosure that has been designed specifically with professional users in mind. The Node Titan can house power-hungry professional-grade graphics cards due to its 650 W power supply unit.
AKiTiO was among the first companies to introduce a TB3 eGFX chassis for video cards back in late 2016. A little over three years later, after learning from its customers about their needs, AKiTiO has come up with the Node Titan, which upgrades the original Node in every possible way. The new enclosure is somewhat more compact, yet it can house full-length (32 cm), full-height (17 cm), 2.5-slot-wide (6 cm) graphics cards that consume up to 500 W of power and need two 8-pin PCIe power connectors. In particular, the box can accommodate all the latest video cards from AMD and NVIDIA and is certified for high-end professional boards, including the NVIDIA Quadro RTX 4000.
To ensure that the cards used inside AKiTiO’s Node Titan get enough cooling, the enclosure is equipped with two fans: one is used for the PSU and the other cools down the board itself. Meanwhile, the enclosure has a handle to make it easier to carry around. As for dimensions, the enclosure measures 35.7 × 13.5 × 26.6 cm (14.06 × 5.31 × 10.47 inches), so it is actually more compact than its predecessor. Still, since the box is made of stainless steel rather than aluminum, it is not exactly lightweight.
**Comparison of Thunderbolt 3 eGFX Chassis**

| | AKiTiO Node | AKiTiO Node Titan |
|---|-------------|-------------------|
| Chassis Length | 42.8 cm | 35.7 cm |
| Max Graphics Card Length | ? | 32 cm |
| Maximum GPU Power | 300 W (?) | 500 W |
| PSU Wattage | 400 W | 650 W |
| Cooling Fans | 1 × 120 mm | 2 × ?? mm |
| Thunderbolt | 1 × TB3 | 1 × TB3 |
AKiTiO’s Node Titan is available directly from the company as well as from its partners. Notably, the Node Titan is a pure eGFX enclosure and does not feature a GbE port or a USB hub, so it is relatively cheap by eGFX chassis standards at $334.75.
The latest in a surprisingly busy week for PC hardware, Maingear has released a new and improved version of its RUSH gaming system. Catering to the high-end gaming market, Maingear is launching models with both Intel and AMD desktop/HEDT processors. Furthermore, the company has partnered with ASUS to certify its RGB LED capabilities for better integration and seamless support throughout the system.
The latest RUSH systems are built inside the highly customizable Lian Li PC-O11D XL chassis. Maingear is also offering a custom painting service through which users can have their RUSH system coated in a luxury automotive paint within its custom workshop. Each custom RUSH system is advertised as being hand-crafted and built by a 'single master craftsman' for a unique take which Maingear describes as "One man, one machine".
Touching on the specifications, Maingear allows buyers to customize RUSH systems with a variety of CPU and chipset options, with both AMD and Intel systems available. These options range from desktop parts up to the AMD Ryzen 9 3950X (X570) and Intel Core i9-9900K (Z390). They also stretch to the more powerful HEDT platforms, including the AMD Threadripper series featuring the 3990X (TRX40) and Intel's Core i9-10980XE (X299), which of course bumps the price up massively. Keeping in mind the ASUS collaboration, each configuration of the RUSH, regardless of chipset and platform selected, is based around an ASUS ROG motherboard, for maximum compatibility with its ROG Aura RGB ecosystem.
For graphics, users can select an AMD or NVIDIA setup including up to dual NVIDIA GeForce Titan RTX 24 GB graphics cards, as well as up to a dual AMD Radeon VII 16 GB setup. As for memory, all setups can be configured to run up to 128 GB of DDR4 memory, with AMD's TRX40 for Threadripper offering up to 256 GB. The storage options vary – being dependent on the motherboard chipset – but most allow for up to two NVMe SSDs to be installed, with up to seven SATA 2.5" drives, or four SATA 3.5" drives.
The most notable aspect of the new RUSH gaming system is that it can be configured with Maingear's Apex liquid cooling solution. The Apex is a fully custom cooling solution which features an integrated pump designed for silent operation, with flow-rate sensing and a high-capacity reservoir. We reported on the Apex integrated cooling solution back at CES 2018, when Maingear refreshed its F131 system. It uses a custom-milled acrylic baseplate for striking aesthetics, with a parallel graphics card bridge and a custom radiator bridge. All of the components used, including the Lian Li PC-O11D XL chassis, carry ASUS's ROG certification.
The new and updated RUSH series from Maingear starts from $1899 for the base models, while for those with especially deep pockets, configurations adding custom paint jobs and ultra-high-end hardware such as the AMD Threadripper 3970X and NVIDIA GeForce Titan RTX graphics card run for over $15000.
Originally announced back at CES 2020, AMD this week has finally launched its new "Renoir" Ryzen Mobile 4000 APUs. And with it, AMD's laptop partners have begun rolling out their first wave of Ryzen 4000 laptops.
While we're still working on our full review for next Monday, we wanted to take a moment to take stock of the laptop market thus far, and look at the Ryzen Mobile 4000 laptops that have been released this week or are due in the coming weeks. So far, Acer, ASUS, Dell, and MSI have introduced their notebooks, and between the four OEMs, they're aiming for a wide range of the consumer market.
Acer was among the first to introduce its AMD Ryzen Mobile 4000-based laptops earlier this year, and this month, Acer finally started sales of its new notebooks, which are available in 14-inch and 15-inch sizes.
The Acer Swift 3 (SF314-42) is a 14-inch ultraportable laptop that weighs 1.17 kilograms and runs up to AMD’s eight-core Ryzen 7 4700U APU, which is paired with 8 GB of LPDDR4 memory as well as an SSD. The PC has everything that one has come to expect from a 2020 ultrathin notebook, including Bluetooth 5.0, Wi-Fi 6, USB 3.2 Gen 2 ports, and a fingerprint scanner.
The laptop comes with an IPS Full-HD display panel with thin bezels, so it is pretty portable. Since the Swift 3 is designed primarily with road warriors in mind, it can work for 11.5 hours on one charge, according to the manufacturer. The Swift 3 SF314-42 will be available this April at a price starting at $629.99.
Acer’s Aspire 5 (A515-44) is aimed at those looking for something bigger and less portable. This machine is equipped with a Full-HD IPS 15.6-inch LCD and uses AMD’s six-core Ryzen 5 4500U mobile CPU that is accompanied by up to 24 GB of RAM, up to 1 TB PCIe SSD, and a 2 TB hard drive. This system will hit the market in June at an MSRP starting at $519.99.
Among gaming notebook vendors, ASUS was the first company to start using AMD’s desktop Ryzen CPUs with eight cores inside its ROG laptop. So it is not surprising that the company is also among the first with its high-end ROG Zephyrus G14 notebook powered by AMD’s Ryzen 9 4900HS and Ryzen 7 4800HS mobile APUs.
The eight-core Ryzen Mobile 4000-series processor works together with up to 32 GB of DDR4-3200 RAM, an up to 1 TB M.2 NVMe SSD, and NVIDIA’s GeForce RTX 2060 or GTX 1660 Ti discrete graphics processor. The powerful guts are accompanied by rather decent connectivity technologies, including Wi-Fi 6, Bluetooth 5.0, USB 3.1 Gen 1/2 Type-A/Type-C ports, and a DisplayPort 1.4 output.
The ASUS ROG Zephyrus G14 is obviously meant for gamers on the go, and so ASUS has set out to strike a balance between performance and portability. As the name suggests, the laptop comes with a 14-inch display featuring a 2560x1440 or 1920x1080 resolution as well as a 60 Hz or 120 Hz refresh rate with VESA Adaptive-Sync on top. Interestingly, select SKUs even come with Pantone Validated LCDs to appeal to those who want to do color-critical workloads on their Republic of Gamers laptop. The machine weighs 1.7 kilograms and is 1.79 cm – 1.99 cm thick depending on the version.
The ROG Zephyrus G14 is not ASUS’s only AMD Ryzen Mobile 4000-series-based notebook aimed at gamers and performance-demanding enthusiasts. The company also has lower-tier TUF Gaming A15 machine, which also brings decent specifications and performance.
The ASUS TUF Gaming A15 is based on AMD’s Ryzen 7 4800H and Ryzen 5 4600H processors that are paired with NVIDIA’s GeForce RTX 2060 or GTX 1660 Ti discrete GPUs, up to 32 GB of DDR4-3200 memory, an SSD up to 1 TB in capacity, and a 1 TB 5400 RPM HDD. On the I/O side of things, the laptop has Wi-Fi 5, USB 3.2 Gen 1/2 Type-A/Type-C, a GbE port, and an HDMI output.
As per its name, the TUF Gaming A15 is equipped with a 15.6-inch Full-HD IPS panel with a 60 Hz or a 144 Hz refresh rate that is supported by VESA’s Adaptive-Sync technology.
One interesting thing to note about the TUF Gaming A15 laptops is that in addition to being ruggedized, these machines will be available in two different finishes: one Fortress Gray looks minimalistic, whereas another — Bonfire Black — looks futuristic.
The ASUS TUF Gaming A15 is already available from retailers like Amazon starting at prices of $999.99.
Dell introduced its G5 15 SE gaming laptop ahead of all of its rivals back at CES 2020. What is, perhaps, more important is that this machine uses key components only from AMD, so along with a Ryzen 4000 APU it also comes with AMD’s Radeon RX 5600M discrete GPU (Navi architecture). The notebook is currently the only PC that supports AMD’s SmartShift technology, which dynamically shifts power and thermal headroom between the CPU and the GPU to maximize performance.
The 15.6-inch G5 15 Special Edition Ryzen gaming notebook is equipped with a Full-HD panel with a 144Hz maximum refresh rate as well as variable refresh support. Meanwhile, the system comes with DDR4 DRAM, a SSD up to 1TB in size, and a 2 TB 5400 RPM HDD. As far as I/O is concerned, the mobile PC features Wi-Fi, Bluetooth, GbE, USB-A, USB-C, mDP, HDMI, SD card reader, a 3.5-mm audio jack, and a webcam with IR sensors.
Dell’s G5 15 Special Edition Ryzen has yet to make it to the market, but back in January it was said that the notebook is due in early April. As for pricing, the machine is expected to start at $799.
MSI is yet another company using AMD’s latest six-core Ryzen 5 4600H and eight-core Ryzen 7 4800H APUs, paired with the company’s latest Radeon RX 5500M discrete GPU, though it is unclear whether the latest Bravo 15 notebook actually supports SmartShift technology.
MSI’s Bravo 15 laptops that are currently available for pre-order are equipped with 16 GB of DDR4 memory as well as a 512 GB NVMe SSD, which is in line with what we expect from sub-$1000 gaming notebooks. Meanwhile, the systems are equipped with a 15.6-inch Full-HD IPS LCD panel featuring a variable refresh rate of up to 120 Hz with VESA’s Adaptive-Sync on top.
So far, PC makers have introduced several higher-end midrange gaming laptops based on AMD’s Ryzen Mobile 4000 processors. And given AMD's ongoing success with the similar Zen 2-based Ryzen 3000 CPUs on the desktop, the company is certainly putting its best foot forward for the mobile space as well. So as supplies ramp up (and the coronavirus situation ramps down), expect more computer manufacturers to introduce Ryzen 4000 notebooks in the coming months.
Traditionally, AMD has done well with gamers, so it is likely that at some point we are going to see true desktop replacement notebooks featuring the company’s latest processors paired with top-of-the-range GPUs. Meanwhile, what remains to be seen is how successful AMD will be with ultraportables, which are a traditional Intel stronghold. To date, only Acer has unveiled an ultrathin Ryzen 4000 notebook, but companies like Lenovo should catch up shortly.
Sources: AMD, Acer, ASUS, Dell, MSI
Transcend has unveiled a new series of microSD memory cards that support pseudo-SLC caching to boost burst write speeds. The new USD230I memory cards offer data transfer speeds of up to 100 MB/s as well as random read/write performance of up to 3,400 IOPS.
Transcend’s USD230I lineup includes microSD cards featuring 8 GB, 16 GB, 32 GB, and 64 GB capacities. The cards carry the A1 as well as the V30 badges, so they can be used to install Google Android applications and guarantee a minimum sequential write speed of 30 MB/s, which is good enough for 4K video shooting.
Pseudo-SLC caching was introduced into the standard by the SD Association back in early 2017, but until now no actual memory cards have used this technology. Meanwhile, since Transcend’s USD230I cards use 3D TLC NAND memory, the only way to boost their write performance is indeed through pSLC caching. Unfortunately, the manufacturer does not specify the size of its pSLC cache.
As far as endurance is concerned, the 8 GB model is rated for 36 terabytes to be written (TBW), the 16 GB/32 GB models are rated for 70 TBW, whereas the 64 GB variant is rated for 140 TBW.
ASUS brought its TUF Gaming sub-brand to the market a couple of years ago to address the needs of mainstream gamers. But as requirements evolve, the company has added premium features to TUF Gaming-branded products every now and then. This time around ASUS has introduced a new TUF-branded 27-inch curved monitor that boasts AMD’s FreeSync Premium certification, a wider-than-sRGB color gamut, and a 165 Hz refresh rate.
The ASUS TUF Gaming VG27VH1B monitor is based on a 27-inch curved VA panel featuring a 1920×1080 resolution, 250 nits luminance, a 3000:1 contrast ratio, 178°/178° viewing angles, a 1 ms MPRT response time, and a 165 Hz maximum refresh rate. The LCD can reproduce 120% of the sRGB as well as 90% of the DCI-P3 color gamuts, which is rather good for a monitor that is supposed to be (at least relatively) inexpensive.
One of the key selling points of the TUF Gaming VG27VH1B is that the monitor features a scaler that supports VESA’s Adaptive-Sync variable refresh rate technology. The display is also certified to meet AMD’s FreeSync Premium requirements, which, as you'd expect for a high refresh rate display, means it officially supports low framerate compensation (LFC) mode. All told, the monitor supports refresh rates from 50 Hz up to 165 Hz.
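Low framerate compensation is workable here because the 165 Hz ceiling is more than twice the 50 Hz floor: when a game drops below the VRR window, the monitor can repeat each frame an integer number of times to land back inside it. A simplified illustration of the idea (a sketch of the general LFC concept, not ASUS's actual scaler logic):

```python
import math

VRR_MIN, VRR_MAX = 50, 165  # Hz, per the VG27VH1B's supported range

def effective_refresh(fps: float) -> float:
    # Inside the VRR window, the panel simply follows the game's framerate.
    if fps >= VRR_MIN:
        return min(fps, VRR_MAX)
    # Below the window, LFC repeats each frame enough times that the
    # resulting refresh rate falls back into the supported range.
    multiplier = math.ceil(VRR_MIN / fps)
    return fps * multiplier

print(effective_refresh(30))  # 60 – each frame shown twice
print(effective_refresh(20))  # 60 – each frame shown three times
```

This frame multiplication is only possible when the maximum refresh rate is at least double the minimum, which is why LFC support is part of the FreeSync Premium tier rather than baseline FreeSync.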
As for other technologies, the TUF Gaming VG27VH1B also fully supports ASUS’s ELMB (extreme low motion blur) technology, which is designed to make fast-action scenes look sharper. What is particularly important about this ELMB implementation is that it can work together with Adaptive-Sync/FreeSync, so that it isn't an either/or situation. Other notable capabilities of the new TUF monitor include in-game enhancements techniques like Shadow Boost, GamePlus modes (Crosshair, Timer, FPS Counter, Display Alignment), and GameVisual genre-tailored modes.
One interesting thing to note about the TUF Gaming VG27VH1B is its set of inputs that includes one D-Sub connector for legacy PCs as well as two HDMI 2.0 ports to connect modern PCs, but there aren't any DisplayPort inputs. On the audio side of things, the monitor has 2W stereo speakers along with a line-in and a headphone out connector.
As for ergonomics, the ASUS VG27VH1B comes with a stand that can adjust tilt and swivel, but not height. Also, the display has VESA 100×100 mounting holes.
**The ASUS TUF Gaming VG27VH1B Monitor**

| Spec | TUF Gaming VG27VH1B |
|------|---------------------|
| Native Resolution | 1920 × 1080 |
| Refresh Rate | 165 Hz |
| Dynamic Refresh Rate Technology | AMD FreeSync Premium |
| Dynamic Refresh Rate Range | 50 Hz – 165 Hz (HDMI) |
| Response Time | 1 ms MPRT |
| Color Gamut | 125% sRGB |
| Viewing Angles | 178°/178° horizontal/vertical |
| Inputs | 1 × D-Sub, 2 × HDMI 2.0 |
| Audio | 2 W stereo speakers |
| Proprietary Enhancements | GamePlus: Crosshair/Timer/FPS Counter/Display Alignment; GameVisual: Scenery/Racing/Cinema/RTS/RPG/FPS/sRGB/MOBA Modes |
| Tilt | +23° ~ -5° |
| Swivel | +15° ~ -15° |
| Power Consumption (Idle) | 0.5 W |
ASUS already lists its TUF Gaming VG27VH1B monitor on its website, so expect it to hit the market in the foreseeable future (COVID-19 willing).
In a second SSD snafu in as many years, Dell and HPE have revealed that the two vendors have shipped enterprise drives with a critical firmware bug, one that will eventually cause data loss. The bug, seemingly related to an internal runtime counter in the SSDs, causes them to fail once they reach 40,000 hours of runtime, losing all data in the process. As a result, both companies have needed to issue firmware updates for their respective drives, as customers who have been running them 24/7 (or nearly as much) are starting to trigger the bug.
Ultimately, both issues, while announced/documented separately, seem to stem from the same basic flaw. HPE and Dell both used the same upstream supplier (believed to be SanDisk) for SSD controllers and firmware for certain, now-legacy, SSDs that the two computer makers sold. And with the oldest of these drives having reached 40,000 hours runtime (4 years, 206 days, and 16 hours), this has led to the discovery of the firmware bug and the need to quickly patch it. To that end, both companies have begun rolling out firmware updates.
As reported by Blocks & Files, the actual firmware bug seems to be a relatively simple off-by-one error that nonetheless has significant repercussions.
The fault fixed by the Dell EMC firmware concerns an Assert function which had a bad check to validate the value of a circular buffer’s index value. Instead of checking the maximum value as N, it checked for N-1. The fix corrects the assert check to use the maximum value as N.
Overall, Dell EMC shipped a number of the faulty SAS-12Gbps enterprise drives over the years, ranging in capacity from 200 GB to 1.6 TB, all of which will require the new D417 firmware update to avoid an untimely death at 40,000 hours.
Meanwhile, HPE shipped 800 GB and 1.6 TB drives using the faulty firmware. These drives were, in turn, used in numerous server and storage products, including HPE ProLiant, Synergy, Apollo 4200, Synergy Storage Modules, D3000 Storage Enclosures, and StoreEasy 1000 Storage, and require HPE's firmware update to secure their stability.
As for the supplier of the faulty SSDs, while HPE declined to name its vendor, Dell EMC did reveal that the affected drives were made by SanDisk (now a part of Western Digital). Furthermore, based on an image of HPE’s MO1600JVYPR SSDs published by Blocks & Files, it would appear that HPE’s drives were also made by SanDisk. To that end, it is highly likely that the affected Dell EMC and HPE SSDs are essentially the same drives from the same maker.
Overall, this is the second time in less than a year that a major SSD runtime bug has been revealed. Late last year HPE ran into a similar issue at 32,768 hours with a different series of drives. So as SSDs are now reliable enough to be put into service for several years, we're going to start seeing the long-term impact of such a long service life.
IPS technology has recently evolved to the point where 240 Hz refresh rates have started to enter the territory of displays for hardcore gamers that was previously dominated by TN panels. However, TN technology still has a trick up its sleeve, and that is very low grey-to-grey response times. Taking advantage of this last technical superiority, BenQ this week introduced its latest gaming display for e-sports professionals, the Zowie XL2746S. As expected from a Zowie monitor, it has a host of features aimed at gamers, going beyond just the capabilities of its panel.
BenQ’s Zowie XL2746S LCD uses a 27-inch Full-HD TN panel featuring up to 320 nits brightness, a 1000:1 contrast ratio, a 240 Hz maximum refresh rate, and a 0.5 ms GtG response time. Otherwise the TN-type gaming-focused monitor is nothing to write home about with respect to viewing angles, and the backlighting only provides a wide enough gamut to cover the sRGB color space.
The Zowie XL2746S monitor supports VESA’s Adaptive-Sync technology and carries AMD’s FreeSync badge. In addition, the display supports DyAc+ technology that makes fast-paced action scenes look less blurry (keep in mind that this cannot co-exist with FreeSync/Adaptive-Sync), Black eQualizer to enhance dark scenes, and Color Vibrance to adjust color tones to make scenes more defined.
Designed specifically for hardcore gamers and e-sports athletes, BenQ’s Zowie monitors feature a special hood to reduce distractions and possible light glare, and also provide some protection against prying eyes during tournaments. They also come with a stand that can be adjusted in height, swivel, and tilt; and they are equipped with a hockey puck-shaped controller pad that can activate an appropriate profile quickly.
As for connectivity, the Zowie XL2746S has a DisplayPort 1.2a, a DVI-D DL, and two HDMI (2.0 and 1.4) inputs. In addition, the LCD also has audio connectors, as well as a dual-port USB 3.0 hub.
|BenQ's Display w/ a 240 Hz Refresh & 0.5 ms Response Time|
|The Zowie XL2746S|
|Panel||27-inch class TN|
|Native Resolution||1920 × 1080|
|Maximum Refresh Rate||240 Hz|
|Viewing Angles||170°/160° horizontal/vertical|
|Response Time||0.5 ms GtG|
|Pixel Pitch||~0.3113 mm|
|Pixel Density||~81 PPI|
|Color Gamut Support||sRGB (?)|
BenQ’s Zowie XL2746S monitor is now available in Europe directly from the manufacturer for €629.
JEDEC still has not published the DDR5 specification officially, yet it looks like DRAM makers and SoC designers are preparing for the DDR5 launch at full steam. Cadence, which was vocal about the new technology back in 2018, and has since released provisional DDR5 IP (the DDR5 controller and PHY) commercially, this week presented some additional information about the upcoming DDR5 market release as well as the technology's progress.
On the SoC side of matters, we already know that AMD’s EPYC ‘Genoa’ as well as Intel’s Xeon Scalable ‘Sapphire Rapids’ will support DDR5 DRAM when they launch in the 2021 ~ 2022 timeframe. What is noteworthy is that Cadence’s provisional DDR5 IP has ‘over a dozen design-ins’, so there are over 12 SoCs supporting DDR5 in various stages of development right now. Some of these system-on-chips will come earlier and some will be available later, but it is evident that there is serious interest in the technology among SoC developers.
Cadence is confident that its DDR5 controller and PHY are compliant with the formal JEDEC specification, so SoCs that use its IP will be compatible with upcoming DDR5 memory modules.
Here is what Marc Greenberg, director of DRAM IP marketing at Cadence, said:
“Close participation in the JEDEC working groups is an advantage. We get insight into how the standard will develop. We are a controller and PHY vendor and can anticipate any potential changes on the way to final standardization. In the early days of the standardization, we were able to adopt standard elements under development and work together with our partners to get very early working silicon. As we approach the release of the standard, we get more proof points to indicate that our IP will support DDR5 devices compliant to the standard.”
The transition to DDR5 represents a major challenge for DRAM makers because the chips are set to increase capacity, raise data transfer rates, increase effective performance (per clock and per channel), and lower power consumption all at the same time (read more here and here). In addition, DDR5 is expected to make it easier to stack multiple DRAM devices, which will allow DRAM capacity in servers to increase (from what we have today).
Micron and SK Hynix have already announced sampling to partners of their DDR5 memory modules based on their 16 Gb chips. Samsung has not formally confirmed any sampling, but we know from its ISSCC 2019 announcement that the company has been preparing and evaluating its 16 Gb DDR5 devices and modules internally for a while now. Anyhow, DDR5 will likely be available at launch from all three major DRAM producers.
Cadence is confident that the DDR5 ramp will begin with 16 Gb DRAMs at a 4800 MT/sec/pin data transfer rate (something that was indirectly confirmed by SK Hynix’s DDR5-4800 module showcase at CES 2020). From there, DDR5 will evolve in two directions: capacity and performance. Capacity-wise, DDR5 will grow to 24 Gb (so expect DDR5 modules of odd capacities like 24 GB, 48 GB, etc.) and then to 32 Gb. As for performance, Cadence expects DDR5 to evolve to a 5200 MT/sec/pin data rate 12 – 18 months after the DDR5-4800 launch and then to 5600 MT/s in another 12 – 18 months, so the performance progress of DDR5 in servers will occur at a pretty regular cadence.
On the client side, a lot will depend on controllers and memory module vendors, but enthusiast-grade DIMMs will certainly be faster than those used in servers.
Mr. Greenberg, said the following:
“DDR4 went to 3200 just this year. Adoption of DDR speed grades happens quite slowly. DDR5 is the next step. It is a big leap in bit rate performance. But it will then hang there for 12-18 months, then go up to 5200, and 5600 after that. We are back on the treadmill of one speed grade every 12-18 months.”
In fact, the step from DDR4-3200 to DDR5-4800 will bring a huge performance bump, but it does not end there for servers. Because of 16 Gb chips, internal DDR5 architecture optimizations, new server architectures, and usage of RDIMMs instead of LRDIMMs, single-socket systems with 256 GB DDR5 modules will get a nice performance increase in terms of latency (vs. today’s LRDIMMs).
Here is what Mr. Greenberg said:
“A lot of these machines have 8 channels on a processor [socket], each [channel] with 512 GB, making a 4 TB memory machine where you can access any byte in under 100 ns. If a database index is 4 TB, you can imagine how big a database could be supported. Quite a beast.”
Keeping in mind that AMD’s EPYC ‘Rome’ CPUs already have eight memory channels and support up to 4 TB of DDR4 DRAM per socket using 256 GB RDIMMs, one can take advantage of low latency (vs. LRDIMMs) even today, but not at DDR5’s speeds. Meanwhile, systems with LRDIMM support can have up to 4.5 TB per socket, but at a cost of additional latency.
As noted above, AMD’s Genoa and Intel’s Sapphire Rapids are not due until very late 2021, or rather early 2022, but Cadence seems to be optimistic and believes that ‘2020 will be the year of DDR5’. From Cadence’s perspective, this might mean tapeouts of actual DDR5-supporting SoCs (which is about time), but the company’s internal analysis shows that it expects DRAM vendors to actually start shipments of DDR5 memory this year.
Memory makers tend to start volume shipments of new types of DRAM ahead of the general availability of platforms. Even so, shipping a year before AMD’s Genoa and Intel’s Sapphire Rapids seems a bit early, but there are several reasonable explanations: AMD’s and Intel’s DDR5-supporting processors may be closer than the two companies have communicated; other DDR5-supporting SoCs may be coming to market well ahead of those from AMD and Intel; and system makers need time to test DDR5 modules and stock them ahead of major product launches.
In any case, if the DDR5 specification is at the Final Draft stage, it is possible for major DRAM makers to kick off volume production even without a published standard. Theoretically, SoC developers can also send their designs to manufacturing at this stage. Meanwhile, it is hard to imagine DDR5 capturing any sizeable market share in the 2020 – 2021 timeframe without support from the major CPU vendors.
One of the world's largest memory module manufacturers, TeamGroup, has unveiled its first DDR4 memory kits featuring 32 GB sticks under its gaming-focused T-Force brand. The T-Force Vulcan Z and T-Force Dark Z will be the first from the brand to be offered in 2 x 32 GB dual-channel kits.
Starting with its T-Force Vulcan Z range, TeamGroup intends to release two different speeds with its 32 GB single stick options. It will be made available in DDR4-2666 and DDR4-3000 32 GB x 2 kits, which can operate in both single and dual-channel. The T-Force Vulcan Z features an aluminium heat spreader which is available in red or silver, with TeamGroup claiming that it uses selected memory IC chips for stability and performance.
Looking at the latency timings of the new T-Force Vulcan Z 2 x 32 GB kits, the DDR4-2666 kit has timings of CL 18-18-18-43 with an operating voltage of 1.2 V, while the DDR4-3000 kit has timings of CL 16-18-18-38 at 1.35 V.
The T-Force Dark Z 2 x 32 GB will be available in just DDR4-3000, with CL 16-18-18-38 latency timings and an operating voltage of 1.35 V. Like the Vulcan Z, the Dark Z also features aluminium heat spreaders, with an armoured design and individually selected memory ICs. The Dark Z range will also be available with a choice of two colors, grey and red.
TeamGroup doesn't specifically go into detail about which memory ICs its new 2 x 32 GB kits will feature, which opens the door for the manufacturer to change which vendor's memory chips it uses. All of TeamGroup's DDR4 kits support XMP 2.0 memory profiles, with the T-Force Vulcan Z and Dark Z kits compatible with both Intel and AMD platforms.
Each T-Force Vulcan Z and Dark Z memory kit across its range has a lifetime warranty. At present, TeamGroup hasn't stated when stock will hit retail channels, nor has it stated its intended pricing structure for the new 2 x 32 GB memory kits.
Greenliant revealed on Wednesday that it has started shipments of its new industrial-grade ArmourDrive M.2 SSDs. The enhanced-durability drives are rated to operate in a much wider range of temperatures than commercial drives and are available in both NVMe and SATA formats, with capacities from 240 GB up to 1.92 TB.
Greenliant’s ArmourDrive 88PX-series NVMe M.2-2280/PCIe 3.0 x4 SSDs and ArmourDrive 87PX-series SATA M.2-2280 SSDs are designed to operate in temperatures between -40°C and +85°C. The drives use 3D TLC NAND memory, feature a DRAM cache, and are based on an unknown/unlisted controller that supports LDPC-based ECC, end-to-end data protection, dynamic and static wear leveling, AES-256/TCG OPAL encryption, and Secure Erase capabilities.
As far as performance is concerned, the Greenliant ArmourDrive 88PX NVMe SSDs are rated for up to 3400 MB/s sequential read speeds as well as up to 1100 MB/s sequential write speeds. Meanwhile, the Greenliant ArmourDrive 87PX SATA SSDs offer up to 550 MB/s sequential read speeds as well as up to 520 MB/s sequential write speeds.
|Greenliant's ArmourDrive 88PX and 87PX-Series SSDs|
|Capacity||240 GB||480 GB||960 GB||1920 GB|
|Controller||NVMe 1.3 or AHCI
End-to-End Data Protection
Dynamic and Static Wear Leveling|
|NAND Flash||3D TLC NAND|
|Form-Factor, Interface, Protocol||M.2-2280, PCIe 3.0 x4 or SATA|
|Sequential Read||PCIe||up to 3400 MB/s|
|SATA||up to 550 MB/s|
|Sequential Write||PCIe||up to 1100 MB/s|
|SATA||up to 520 MB/s|
|DRAM Buffer||Yes, capacity unknown|
|Encryption||AES-256, TCG Opal 2.0|
|Power Consumption||PCIe||Active mode:
1.92TB: 5,200 mW
960GB: 5,000 mW
480GB: 4,100 mW
240GB: 3,900 mW
Idle mode: < 2,000 mW|
|SATA||Active mode:
1.92TB: < 2,100 mW
960GB: < 2,000 mW
480GB: < 1,800 mW
240GB: < 1,500 mW
Idle mode: < 900 mW|
Greenliant is not the first company to ship TLC-based M.2 drives that can work in extreme environments, but it is among the first suppliers to start selling 1.92 TB drives rated for industrial temperature ranges. Building high-capacity SSDs for industrial applications is not particularly easy, since they use multi-layer 3D NAND chips, all of which have to work reliably when it is extremely cold or extremely hot.
The company does not disclose prices of its ArmourDrive 88PX NVMe and ArmourDrive 87PX SATA SSDs, as prices depend on the quantity ordered as well as other factors.
PowerColor this week has announced that it is extending its warranties to existing customers by three months. The second manufacturer this month to extend its existing product warranties, PowerColor is making the extension due to the SARS-CoV-2 Coronavirus global pandemic and all of the resulting lockdown-related restrictions on non-essential shipping.
With the novel Coronavirus affecting many daily aspects of life and industry, provisions of basic necessities are being prioritized over items classified as non-essential. With much of the world focused on remaining at home during these times, PowerColor has announced a three-month warranty extension program for customers whose warranties were due to expire in the next few months (March to June 2020).
The brief announcement from PowerColor doesn't specify which products would benefit from this three-month warranty extension, but it is likely to stretch across its entire product portfolio. The most notable products in PowerColor's portfolio are its AMD Radeon graphics cards with aftermarket coolers. With shipping delays likely due to the ongoing coronavirus outbreak, as well as workplace restrictions in place, this is beneficial to users currently in the process of an RMA.
"With the global crisis of COVID-19, we understand that these are critical times for everyone. This has impacted all aspects of our lives, and we understand that during these times, priorities are placed on more health-concerned matters. Many in the process of RMAs may find difficulty in shipping out cards for repair or service at this time, so we will be adding a 3-month extension to customers with warranties expiring between March through June 2020. PowerColor remains committed to deliver great products and services to our customers, and want to assure that we will continue to do so during these trying times.
Wishing health and safety for all. - The PowerColor team"
Whilst the Mi 10 and Mi 10 Pro haven’t been secret devices, having been launched in China over a month ago, today Xiaomi is catching up on what was originally planned to be a MWC2020 global product reveal and global launch event.
The new Mi 10 and Mi 10 Pro represent Xiaomi’s mainline flagship devices for 2020, featuring the latest Snapdragon 865 SoC, as well as a slew of different camera hardware, including the famed HMX 108MP camera sensor that was developed in collaboration between Samsung and Xiaomi.
Bundled in their latest earnings call, Micron has revealed that later this year the company will finally introduce its first HBM DRAM for bandwidth-hungry applications. The move will enable the company to address the market for high-bandwidth devices such as flagship GPUs and network processors, which in the last five years have turned to HBM to meet their ever-growing bandwidth needs. And as the third and final of the "big three" memory manufacturers to enter the HBM market, this means that HBM2 memory will finally be available from all three companies, introducing a new wrinkle of competition into that market.
Overall, while Micron has remained on the cutting-edge of memory technologies, the company has been noticeably absent from HBM thus far. Previous efforts have instead focused on GDDR5X, as well as a different take on fast-and-stacked memory with Hybrid Memory Cube (HMC). First announced back in 2011 as a joint effort with Samsung and IBM, HMC was a similar stacked DRAM type for bandwidth hungry applications, which featured a low-width bus & extremely high data rates to offer memory bandwidth that by far exceeded that of then-standard DDR3. As a competing solution to HBM, HMC did see some usage in the market, particularly in products like accelerators and supercomputers. Ultimately, however, HMC lost the battle against more widespread HBM/HBM2 and Micron folded the project in 2018 in favor of GDDR6 and HBM.
In the end, it has taken Micron around two years to develop its first HBM2 memory devices, and these will finally become available in 2020. Given the broad, financial nature of the call, Micron isn't disclosing the specifications of its first HBM2 devices at this time, though it is a safe bet that the underlying DRAM cells will be produced using the company’s 2nd or 3rd Generation 10 nm-class process technologies (1y or 1z). Meanwhile, Micron will obviously do its best to be competitive against Samsung and SK Hynix both in terms of performance and capacity.
Sanjay Mehrotra, president and chief executive officer, had the following to say:
“In FQ2, we began sampling 1Z-based DDR5 modules and are on track to introduce high-bandwidth memory in calendar 2020. We are also making good progress on our 1-alpha node.”
Plugable this week has become the latest peripheral manufacturer to start producing 2.5 Gigabit Ethernet dongles, with the release of their own adapter. Designed to add support for faster networking speeds to PCs with USB 3.0 Type-A and Type-C ports, Plugable is pushing the "inexpensive" aspect of the network adapter hard, launching it at just $30.
Like most other 2.5GbE adapters we've seen to date, the Plugable 2.5G USB Ethernet Adapter (USBC-E2500) is based on Realtek's RTL8156 controller, which supports 2.5GBASE-T and on down, all over standard Cat5e cabling. The Realtek chip supports such features as 9k Jumbo frame support, auto MDI-X (crossover detection and correction), and IEEE 802.1Q VLAN. Since some of these capabilities require OS support, the dongle comes with drivers for Apple MacOS (10.12 and newer), Microsoft Windows 7/8/10, and Linux (kernel 3.2).
Meanwhile, recognizing that the industry as a whole is in the middle of a transition from USB Type-A to USB Type-C, the USB-C native Plugable 2.5G USB Ethernet Adapter comes with a USB-C to USB-A adapter that's conveniently tethered to the dongle's cable. For USB dongles that even bother to account for both port types, we normally see loosely packed adapters, so this is an interesting choice that should make the adapter a lot harder to lose. Otherwise, the device is made of plastic and looks fairly small, so it should be lightweight and plenty easy to carry around.
The Plugable 2.5G USB Ethernet Adapter is now available directly from the company as well as from leading retailers. The official MSRP of the device is $39.99, but for a limited time the product will be available for $29.99 from Amazon via an instant $10 coupon. The adapter is being released in the US, UK, EU, Australia, Canada, and Japan.
With the COVID-19 outbreak and work from home initiatives being enforced around the globe, this might not be the best time to introduce 2.5G Ethernet dongles that are primarily meant for offices. None the less, we're happy to see the continued proliferation of faster Ethernet controllers and dongles – and hope that cheaper network switches will catch up soon.
EIZO this week expanded the availability of its 21.6-inch 4K OLED Foris Nova display. The display was originally launched back in October as a limited-edition product for the Japanese market. Overall, just 500 units were to be made from that production run. However it would seem that EIZO has modified their plans since then, as according to a press release issued by EIZO China, the Foris Nova is now available globally.
The EIZO Foris Nova uses a 21.6-inch printed OLED panel with a 3840×2160 resolution. The display offers a typical/peak brightness range of 132 - 330 nits, a contrast ratio of 1,000,000:1, and a black-white-black response time of 0.04 ms. The monitor can display 1.07 billion colors, covers 80% of the BT.2020 color space, and supports the HDR10 and HLG HDR formats. As for connectivity, the Foris Nova connects to hosts using two HDMI 2.0 inputs. It also has 1 W stereo speakers, one headphone output, and one line out.
EIZO is officially positioning the Foris Nova as a personal entertainment display, though its support for HLG and BT.2020 color gamut makes it handy in professional use cases as well.
Meanwhile, the company's plans to expand the availability of the monitor are a bit odd. As previously noted, when EIZO first announced the monitor they stated they would only make 500 units; but they've yet to actually announce a change to this cap (the official EIZO website still says '500 units' to be made). None the less, the monitor is set to become available to a much larger audience, with the global launch making it available in China and beyond.
|EIZO Foris Nova Specifications|
|Native Resolution||3840 × 2160|
|Maximum Refresh Rate||60 Hz|
|Response Time||0.04 ms (black-white-black)|
|Brightness||minimum: 0.0005 cd/m²
typical: 132 cd/m²
maximum: 330 cd/m²|
|Viewing Angles||178°/178° horizontal/vertical|
|Pixel Pitch||0.1245 mm|
|Pixel Density||204 ppi|
|Display Colors||1.07 billion|
|Color Gamut Support||DCI-P3: ?
sRGB/Rec 709: ?
Adobe RGB: ?
SMPTE C: ?|
|Stand||Tilt and height adjustable|
|Inputs||2 × HDMI (2.0a? 2.0b?)|
|Global Price & Date||Q2 2020|
The worldwide release of EIZO’s Foris Nova monitor could be good news for the manufacturer of its 21.6-inch printed OLED panel, JOLED (a division of Japan Display Inc., JDI). Expanded availability of the product could indicate that JOLED has started volume production of its 21.6-inch 4K printed OLED panels, which is why EIZO can now expand availability to China and other markets.
As part of its Snapdragon Elite Gaming initiative, Qualcomm previously announced its intentions to release quarterly driver updates for its Adreno GPUs. And now at long last, the first update is set to arrive. In addition, the company has developed an Android GPU Inspector tool to help game designers to optimize their applications for better performance.
While standalone driver updates are still a new concept to smartphones, they are a tried and true aspect of PCs. As a result of being able to deliver periodic driver updates separate from the OS, PC GPU vendors have been able to boost gaming performance and fix bugs in games at a fairly rapid pace, to the benefit of PC gamers everywhere. Now, as part of their Snapdragon Elite Gaming program, Qualcomm wants to bring those same benefits to smartphones, shipping their own regular driver updates to phones so that these performance and feature updates are more readily available to smartphone gamers.
Overall, Qualcomm has stated that it wants to release new drivers for its Snapdragon SoCs every quarter for two to three years after launch. However, it should be noted that the company will not be going around handset vendors in delivering driver updates; the drivers will be sent to smartphone manufacturers, who in turn have to push them to the Google Play Store (or app stores in China). Which means that while Qualcomm hopes that their OEM partners will stick to the quarterly release schedule, it does not have control over what the OEMs ultimately do.
The first SoCs to get quarterly GPU driver updates are the current-generation Snapdragon 865 and Snapdragon 765/765G, as well as the previous-generation Snapdragon 855. The first smartphones to be updated, in turn, will be the Samsung Galaxy S10, Samsung Galaxy Note 10, and Google’s Pixel 4 series, with other handsets to be updated later.
In addition to drivers set to be updated quarterly, Qualcomm has also teamed up with Google to create the Android GPU Inspector tool, which promises to help discover performance optimization opportunities. According to Qualcomm, the tool helped Google and an unnamed game developer find an optimization that ‘saved the game 40% in GPU utilization’ on the Pixel 4 XL, which enabled smoother gameplay and longer battery life.
And this kind of close collaboration with game designers will not end with the Android GPU Inspector tool. Select game studios will get beta versions of Adreno GPU software driver in a bid to provide feedback to Qualcomm and, possibly, optimize their titles better.
Today Huawei launched its latest generation of photography focused smartphone: the P40 series. This series consists of the P40, the P40 Pro, and the P40 Pro+, starting at €799 for the cheapest going up to €1399 for the high-end model, which features a 40W wireless charge mode, a 6.58-inch OLED 90 Hz display, 10x optical zoom, up to 100x zoom, Wi-Fi 6, and a range of new photography features to get the best shot.
After the launch, Huawei’s Consumer Business Group (CBG) CEO Richard Yu invited the press to a group question and answer session. There were two main topics that dominated the session - how the prevalence of COVID-19 is affecting Huawei’s strategy, but also how the continuation of the US ban on Huawei interacting with US companies is affecting users and in particular the available apps on Huawei’s own App Gallery that can’t use Google’s services.
Today, Huawei is doubling down on its efforts to regain western market share, revealing brand-new hardware as well as expanding the company’s AppGallery app store, introducing the new P40, P40 Pro as well as the P40 Pro+.
The trio of phones are successors to the company’s photography-focused P series, yet again pushing the envelope in terms of innovative camera hardware, adding to the mix some new exclusive sensors, including a new large 1/1.28” 52MP RYYB unit, as well as coming with an array of various other modules – including an expansive telephoto module selection, and the first ever 10x optical zoom module in the industry.
Folding@home has announced that the cumulative compute performance of systems participating in the project has exceeded 1.5 ExaFLOPS, or 1,500,000,000,000,000,000 floating point operations per second. The level of performance currently available from Folding@home participants is an order of magnitude higher than that of the world’s most powerful supercomputer.
Right now, the cumulative performance of active CPUs and GPUs (which have returned Work Units within the last 50 days) participating in the Folding@home project exceeds 1.5 ExaFLOPS, which is 10 times faster than IBM’s Summit supercomputer, benchmarked at 148.6 PetaFLOPS. To get there, Folding@home had to employ 4.63 million CPU cores as well as nearly 430 thousand GPUs. Considering the nature of distributed computing, not all CPU cores and GPUs are online at all times, so the performance available for Folding@home projects varies depending on the availability of hardware.
|Folding@home Active CPUs & GPUs
Reported on Wed, 25 Mar 2020 23:04:31 GMT|
|AMD GPUs||NVIDIA GPUs||CPUs||CPU Cores||TFLOPS||x86 TFLOPS|
|Note:||CPUs and GPUs which have returned Work Units within the last 50 days are considered Active.|
The outbreak of COVID-19 has been taxing for a number of computational biology and chemistry projects. IBM recently formed its COVID-19 High Performance Computing Consortium that pools together major supercomputers run by various research institutions and technology companies in the USA to run research simulations in epidemiology, bioinformatics, and molecular modeling. Cumulative performance of supercomputers participating in IBM’s COVID-19 HPC Consortium is 330 PetaFLOPS.
The Folding@home distributed computing project uses compute capabilities to run simulations of protein dynamics in a bid to better understand them and find cures for various diseases. Recently F@H started to run projects simulating theoretically druggable protein targets from SARS-CoV-2, which has attracted a lot of attention as SARS-CoV-2 and COVID-19 are clearly the hottest topics these days.
We at AnandTech also have our own Folding@Home team, which is currently in a race against our sister site Tom's Hardware. If you have a spare GPU that's not too old, think about joining us in our battle. We are Team 198.
Source: Folding@Home Twitter
Following the cancellation earlier this year of the 2020 Mobile World Congress trade show, GSMA, the organizer behind the event, has finally disclosed details regarding the compensation packages that it will provide to attendees and exhibitors who had already paid to attend the show. The organization will refund the price of tickets to individual visitors, while exhibitors will have two options, depending on how much they've spent.
OWC has announced a new version of its Mercury Elite Pro DAS, the company's entry-level external storage box. The refreshed DAS can house one 3.5-inch hard drive, allowing it to provide capacities of up to 16 TB using today's HDDs.
The OWC Mercury Elite Pro DAS is available in 1 TB, 2 TB, 4 TB, 6 TB, 8 TB, 12 TB, 14 TB, and 16 TB versions. The devices can be stacked, so those who need greater capacities can easily get them. All the SKUs are powered by 7200 RPM hard drives, so they offer a rather decent level of performance, up to 283 MB/s, which is good enough for music, videos, photos, and business files. Externally, the DAS has a USB 3.2 Gen 1 interface with up to 5 Gbps of throughput.
The Mercury Elite Pro DAS comes in a brushed aluminum chassis with venting, so it does not rely on active cooling, making the hard drive inside the only major noise source.
OWC’s new entry-level DAS is compatible with Apple macOS, Microsoft Windows, Linux, Sony PlayStation 4, Xbox consoles, and Smart TVs. In addition, it supports Apple Time Machine and Windows File History backups.
OWC has already started sales of the Mercury Elite Pro. Just the enclosure itself is priced at $49, a 2 TB SKU costs $129, whereas the top-of-the-range 16 TB model carries a $579 price tag.
The workstation and server markets are big business for not only chip manufacturers such as Intel and AMD, but for motherboard vendors too. Since AMD's introduction of its Zen-based EPYC processors, its prosumer market share has been slowly, but surely, creeping back. One example of a single socket solution available on the market is the GIGABYTE MZ31-AR0. With support for AMD's EPYC family of processors, the MZ31-AR0 has some interesting components including its 2 x SFP+ 10 G Ethernet ports powered by a Broadcom BCM57810S controller, and four SlimSAS slots offering up to sixteen SATA ports.
It was recently brought to our attention that three new Ice Lake CPUs were listed on Intel’s online ARK database of products: the Core i7-1060NG7, the Core i5-1030NG7, and the Core i3-1000NG4. These differ from the ‘consumer’ released products by having an ‘N’ in them, and specification-wise these CPUs have a slightly higher TDP along with a slightly higher base clock, as well as coming in a smaller package. We reached out to Intel, but in the meantime we also noticed that the CPUs line up perfectly with what Apple is providing in its latest MacBook Air.
Intel’s Ice Lake family is the first generation of 10nm processors that the company has made widely available. We’ve covered Intel’s ups and downs with the 10nm process, and last year it launched Ice Lake as part of its 10th Generation Core family, focusing more on premium products that need graphics horsepower or AI acceleration. In the initial announcement, Intel stated that there would be nine different Ice Lake processors coming to market, however we learned that the lower-power parts would take longer to arrive.
These three new CPUs actually fall under that ‘lower power’ bracket, meaning they were meant to be coming out about this time, but are labelled differently to the processors initially announced. This is because these new CPUs are officially listed as ‘off-roadmap’, which is code for ‘not available to everyone’. Some OEMs, particularly the big ones like Apple, or sometimes HP and others, will make a request to Intel to develop a special version of their products just for them. This product is usually the same silicon as before, but binned differently, often to tighter constraints: it might differ in frequency, TDP, core count, or the way it is packaged. This more often happens in the server space, but can happen for notebooks as well, assuming you can order a larger amount.
|Intel Ice Lake-Y Variants|
|SKU||i7-1060NG7||i7-1060G7||i5-1030NG7||i5-1030G7||i3-1000NG4||i3-1000G4|
|Cores / Threads||4 / 8||4 / 8||4 / 8||4 / 8||2 / 4||2 / 4|
|L3 Cache||8 MB||8 MB||6 MB||6 MB||4 MB||4 MB|
|Base Freq (GHz)||1.20||1.00||1.10||0.80||1.10||1.10|
|Turbo Freq (GHz)||3.80||3.80||3.50||3.50||3.20||3.20|
|TDP||10 W||9 W||10 W||9 W||9 W||9 W|
|GPU Freq (MHz)||1100||1100||1050||1050||900||900|
These new CPUs are different because they have an ‘N’ in the name. This translates, in the case of the Core i7, to +1 W on the TDP, +200 MHz on the base frequency, and a much smaller package size. They are all classified as Iris Plus graphics, and the G7 indicates 64 EUs while the G4 indicates 48 EUs. Interestingly, the new CPUs have Intel’s TXT and Optane Memory support disabled. Increasing the TDP by 11% and the base frequency by 20% is reasonable – ultimately the TDP is what governs sustained performance, which is likely what customers requesting custom versions are optimizing for.
Another aspect is the smaller package size. For its Ice Lake CPUs, Intel traditionally has two packages – a Type 3 at 50 x 25 mm, and a Type 4 at 26.5 x 18.5 mm. With Type 4, the CPU and I/O chips sit close together and have a shim to stiffen the package. This new package also seems to be off-roadmap, this time without the shim – a 'Type 5' package, if you will. The smaller package also helps in designing the system, leaving more room for other components. Arguably this is the biggest change with these CPUs, shrinking the package from 26.5 mm x 18.5 mm to 22.0 mm x 16.5 mm, a 26% reduction in area.
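The percentage claims above are easy to sanity-check from the spec table, using the Core i7-1060NG7 versus its consumer i7-1060G7 counterpart:

```python
# Verify the quoted deltas: i7-1060NG7 vs. i7-1060G7, plus the
# area saving of the smaller off-roadmap 'Type 5' package.
tdp_delta = (10 - 9) / 9 * 100           # 9 W -> 10 W
base_delta = (1.20 - 1.00) / 1.00 * 100  # 1.0 GHz -> 1.2 GHz

type4_area = 26.5 * 18.5                 # mm^2
type5_area = 22.0 * 16.5                 # mm^2
area_reduction = (1 - type5_area / type4_area) * 100

print(f"TDP increase:        {tdp_delta:.0f}%")       # ~11%
print(f"Base clock increase: {base_delta:.0f}%")      # 20%
print(f"Package area cut:    {area_reduction:.0f}%")  # ~26%
```

All three numbers come out as quoted in the article.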
We suspect these are the CPUs in the most recent updates to Apple’s MacBook Air line. Apple historically does not list exactly which processors it uses in its devices, but the website shows the following:
These specifications line up. Two of the three CPUs already have Geekbench benchmark results submitted to the online database.
We approached Intel asking what these CPUs were, and the official line is:
“The ‘N’ notes a slightly differentiated, customer-specific version of those SKUs. Those slight differences require a signifier for our internal SKU management and ordering systems. The N is not a new subfamily or directly connected to a specific set of features, for example.”
This goes in line with what we stated above about customer-specific binning. Apple will no doubt be ordering a few million of these CPUs, so Intel is prepared to add an extra binning step just for the business.
Samsung is on track to start volume production of DDR5 and LPDDR5 memory next year using a manufacturing technology that takes advantage of extreme ultraviolet lithography (EUVL). In fact, Samsung has been experimenting with EUV-enabled DRAM fabrication processes for a while and has already validated DDR4 memory with select partners.
To date, Samsung has produced and shipped a million DDR4 DRAM modules based on chips made using the company’s D1x process technology, which uses EUV lithography. These modules have completed customer evaluations, which proves that Samsung’s 1st Generation EUV DRAM technology can reliably build fine circuits. Samsung’s D1x is an experimental EUVL fabrication process that was used to make experimental DDR4 DRAMs, though it will not be used any further, the company said.
Instead, to produce DDR5 and LPDDR5 next year, the company will use its D1a node, a highly advanced 14 nm-class process with EUV layers. This technology is expected to double per-wafer productivity (DRAM bit output) when compared to D1x, which indicates that it uses finer geometries. Samsung did not reveal whether D1a also uses other innovations (in addition to EUVL), such as pillar cell capacitors and dual work function layers for buried wordline gates, as anticipated by analysts from TechInsights, who believe that current DRAM cell transistor and capacitor structures have limited room to scale further.
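Doubling per-wafer bit output at the same wafer size and die capacity implies roughly halving the area each bit occupies, which corresponds to a linear feature shrink of about 1/√2 ≈ 0.71x. This is a rough geometric argument under idealized assumptions (similar yield and array efficiency), not a figure Samsung has published:

```python
import math

# If D1a doubles DRAM bit output per wafer versus D1x, each bit
# occupies ~half the area (assuming similar yield/array efficiency),
# so each linear dimension shrinks by roughly the square root of that.
density_gain = 2.0
area_per_bit_ratio = 1 / density_gain          # 0.5x the area per bit
linear_shrink = math.sqrt(area_per_bit_ratio)  # ~0.71x each dimension

print(f"Area per bit:   {area_per_bit_ratio:.2f}x")
print(f"Linear scaling: {linear_shrink:.2f}x")  # ~0.71x
```

That order of shrink is consistent with a move from a 1z-class node to a 14 nm-class (1a) node.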
|Timeline of Samsung DRAM Milestones|
|2021||4th-gen 10nm-class (1a) EUV-based 16Gb DDR5/LPDDR5 mass production|
|March 2020||4th-gen 10nm-class (1a) EUV-based DRAM development|
|September 2019||3rd-gen 10nm-class (1z) 8Gb DDR4 mass production|
|June 2019||2nd-gen 10nm-class (1y) 12Gb LPDDR5 mass production|
|March 2019||3rd-gen 10nm-class (1z) 8Gb DDR4 development|
|November 2017||2nd-gen 10nm-class (1y) 8Gb DDR4 mass production|
|September 2016||1st-gen 10nm-class (1x) 16Gb LPDDR4/4X mass production|
|February 2016||1st-gen 10nm-class (1x) 8Gb DDR4 mass production|
|October 2015||20nm (2z) 12Gb LPDDR4 mass production|
|December 2014||20nm (2z) 8Gb GDDR5 mass production|
|December 2014||20nm (2z) 8Gb LPDDR4 mass production|
|October 2014||20nm (2z) 8Gb DDR4 mass production|
|February 2014||20nm (2z) 4Gb DDR3 mass production|
|February 2014||20nm-class (2y) 8Gb LPDDR4 mass production|
|November 2013||20nm-class (2y) 6Gb LPDDR3 mass production|
|November 2012||20nm-class (2y) 4Gb DDR3 mass production|
|September 2011||20nm-class (2x) 2Gb DDR3 mass production|
|July 2010||30nm-class 2Gb DDR3 mass production|
|February 2010||40nm-class 4Gb DDR3 mass production|
|July 2009||40nm-class 2Gb DDR3 mass production|
Usage of EUVL will enable Samsung (and eventually other memory makers) to reduce (or eliminate) the use of multi-patterning, which enhances patterning accuracy and therefore improves performance and yields. The latter will be particularly beneficial for production of high-performance, high-capacity DDR5 chips, as they are meant to increase both performance (up to DDR5-6400 speeds) and capacity (up to 32 Gb per die). Samsung has not officially revealed how many EUV layers its D1x and D1a process technologies use.
In addition to revealing its EUV-related achievements, Samsung also said that its P2 fab near Pyeongtaek, South Korea, will begin operations in the second half of this year. Initially, the facility will ‘make next-generation premium DRAMs’.
Jung-bae Lee, executive vice president of DRAM Product & Technology at Samsung Electronics, said the following:
"With the production of our new EUV-based DRAM, we are demonstrating our full commitment toward providing revolutionary DRAM solutions in support of our global IT customers. This major advancement underscores how we will continue contributing to global IT innovation through timely development of leading-edge process technologies and next-generation memory products for the premium memory market."