Alongside Intel’s Skylake Core CPU architecture, Intel’s other CPU workhorse for the last few years has been the Goldmont Plus Atom core. First introduced in 2017 as part of the Gemini Lake platform, Goldmont Plus was a modest update to Intel’s Atom architecture, and it has served as the backbone of the cheapest Intel-based computers ever since. However, Goldmont Plus’s days have been numbered since the announcement of the Tremont Atom architecture, and now it is taking another step out the door with the news that Intel has begun End-Of-Life procedures for the Gemini Lake platform.
Intel’s bread and butter budget platform for the last few years, Gemini Lake chips have offered two or four CPU cores, as well as the interesting UHD Graphics 600/605 iGPUs, which ended up incorporating a mix of Intel’s Gen9 and Gen10 GPU architectures. These chips have been sold under the Pentium Silver and Celeron N-series brands for both desktop and mobile use, with TDPs ranging from 6W to 10W. All told, Gemini Lake doesn’t show up in too many notable PCs for obvious reasons, but it has carved out a bit of a niche in mini-PCs, where its native HDMI 2.0 support and VP9 Profile 2 support (for HDR) have given it a leg up over Intel’s 7th/8th/9th generation Core parts.
Nonetheless, after almost three years on the market, Gemini Lake’s days are numbered. In the long run, it’s set to be replaced by designs using Intel’s new Tremont architecture. Meanwhile, in the short run, Intel’s budget lineup will be anchored by the Gemini Lake Refresh platform, which Intel quietly released in late 2019 as a stopgap ahead of Tremont. As a result, Intel has started the process of retiring the original Gemini Lake platform, which has become redundant.
Under a set of Product Change Notifications published yesterday, Intel has laid out a pretty typical EOL plan for the processors. Depending on the specific SKU, customers have until either October 23rd or January 22nd to make their final chip orders. Meanwhile those final orders will ship by April 2nd of 2021 or July 9th of 2021 respectively.
All told, this gives customers roughly another year to wrap up business with a platform that itself was supplanted the better part of a year ago.
Announced a couple of weeks ago, the new AMD Ryzen 3000XT models with increased clock frequencies should be available today in primary markets. These new processors offer slightly higher performance than their similarly named 3000X counterparts for the same price, with AMD claiming to be taking advantage of a minor update in process node technology in order to achieve slightly better clock frequencies.
Speakers aren’t traditionally part of our coverage, but today’s announcement of xMEMS’ new speaker technology is something that everybody should take note of. Voice coil speakers as we know them have been around in one form or another for over a hundred years, and have been the basis of how we experience audio playback.
In the last few years, semiconductor manufacturing has become more prevalent and accessible, with MEMS (Microelectromechanical systems) technology now having advanced to a point that we can design speakers with characteristics that are fundamentally different from traditional dynamic drivers or balanced armature units. xMEMS’ “Montara” design promises to be precisely such an alternative.
xMEMS is a new start-up, founded in 2017 with headquarters in Santa Clara, CA and with a branch office in Taiwan. To date the company had been in stealth mode, not having publicly released any product until today. The company’s stated motivation is to break decades-old speaker technology barriers and to reinvent sound with innovative pure-silicon solutions, drawing on the extensive experience its founders have collected over the years at different MEMS design houses.
The manufacturing of xMEMS’ pure silicon speaker is very different from that of a conventional speaker. The speaker is essentially one monolithic piece, produced via a typical lithography process, much like other silicon chips. Thanks to this monolithic design, the manufacturing line has significantly less complexity than voice coil designs, which have a plethora of components that need to be precision-assembled – a task that is quoted to require thousands of factory workers.
The company didn’t want to disclose the actual process node of the design – they only confirmed that it is a 200mm wafer technology – but we’d expect something quite coarse, in the micron range.
Besides the simplification of the manufacturing line, another big advantage of the lithographic nature of a MEMS speaker is that its manufacturing precision and repeatability are significantly superior to those of a more variable voice coil design. The mechanical aspects of the design also have key advantages, for example more consistent membrane movement, which allows higher responsiveness and lower THD for active noise cancellation.
xMEMS’ Montara design comes in an 8.4 x 6.06 mm silicon die (50.9mm²) with 6 so-called speaker “cells” – the individual speaker MEMS elements that are repeated across the chip. The speaker’s frequency response covers the full range from 10Hz up to 20kHz, something which current dynamic drivers or balanced armature drivers have issues with, and why we see multiple such speakers being employed to cover different parts of the frequency range.
The design is said to have extremely good distortion characteristics, able to compete with planar magnetic designs, and promises only 0.5% THD across 200Hz – 20kHz.
As these speakers are capacitive piezo-driven rather than current-driven, they are able to cut power consumption to a fraction of that of a typical voice coil driver, using only 42µW of power.
Size is also a key advantage of the new technology. Currently xMEMS is producing a standard package solution, with the sound coming perpendicularly out of the package, in the aforementioned 8.4 x 6.06 x 0.985mm footprint. We’ll also see a side-firing solution with the same dimensions, but which allows manufacturers to better manage internal earphone design and component positioning.
In the above crude 3D-printed unit, with no optimisations whatsoever in terms of sound design, xMEMS easily managed to design an earphone of similar dimensions to current standard designs. Commercial products are likely to look much better and to better take advantage of the size and volume savings that such a design allows.
One key aspect of the capacitive piezo drive is that it requires a different amplifier design than a classical speaker. Montara can be driven with signals of up to 30V peak-to-peak, which is well above the range of existing amplifier designs. As such, customers wishing to deploy a MEMS speaker such as the Montara will require an additional companion chip, such as Texas Instruments’ LM48580.
In my view this is one of the big hurdles for more widespread adoption of the technology, as it will limit usage to more integrated solutions that offer the proper amplifier design to drive the speakers – a lot of existing audio solutions out there would need an extra adapter/amp if any vendor decides to make a non-integrated “dumb” earphone design (as in, your classical 3.5mm ear/headphones).
TWS (true wireless stereo) headphones are the obvious prime target market for the Montara, as the amplifier aspect can be addressed at design time, and such products can fully take advantage of the size, weight and power advantages of the new speaker technology.
In measurements using the crude 3D-printed earphone prototype depicted earlier, xMEMS showcases that the Montara MEMS speaker has significantly higher SPL than any other earphone solution, with production models fully achieving the targeted 115dB SPL (the prototype only had 5 of the 6 cells active). The native frequency response is much stronger in the higher frequencies, allowing vendors headroom to adapt and filter the sound signature in their designs – filtering down is much easier than boosting at these frequencies.
THD at 94dB SPL is also significantly better than even an unnamed pair of $900 professional IEMs – and again, there’s emphasis that this is just a crude design with no audio optimisations whatsoever.
In terms of cost, xMEMS didn’t disclose any precise figure, but shared with us that it’ll be in the range of current balanced armature designs. xMEMS’ Montara speaker is now sampling to vendors, with expected mass production kicking in around spring next year – with commercial devices from vendors also likely to see the light of day around this time.
Just shy of a year ago, SK Hynix threw their hat into the ring, as it were, by becoming the second company to announce memory based on the HBM2E standard. Now the company has announced that their improved high-speed, high density memory has gone into mass production, offering transfer rates up to 3.6 Gbps/pin, and capacities of up to 16GB per stack.
As a quick refresher, HBM2E is a small update to the HBM2 standard to improve its performance, serving as a mid-generational kicker of sorts to allow for higher clockspeeds, higher densities (up to 24GB with 12 layers), and the underlying changes that are required to make those happen. Samsung was the first memory vendor to ship HBM2E with their 16GB/stack Flashbolt memory, which runs at up to 3.2 Gbps in-spec (or 4.2 Gbps out-of-spec). This in turn has led to Samsung becoming the principal memory partner for NVIDIA’s recently-launched A100 accelerator, which was launched using Samsung’s Flashbolt memory.
Today’s announcement by SK Hynix means that the rest of the HBM2E ecosystem is taking shape, and that chipmakers will soon have access to a second supplier for the speedy memory. As per SK Hynix’s initial announcement last year, their new HBM2E memory comes in 8-Hi, 16GB stacks, which is twice the capacity of their earlier HBM2 memory. Meanwhile, the memory is able to clock at up to 3.6 Gbps/pin, which is actually faster than the “just” 3.2 Gbps/pin that the official HBM2E spec tops out at. So like Samsung’s Flashbolt memory, it would seem that the 3.6 Gbps data rate is essentially an optional out-of-spec mode for chipmakers who have HBM2E memory controllers that can keep up with the memory.
At those top speeds, this gives a single 1024-pin stack a total of 460GB/sec of memory bandwidth, which rivals (or exceeds) most video cards today. And for more advanced devices which employ multiple stacks (e.g. server GPUs), this means a 6-stack configuration could reach as high as 2.76TB/sec of memory bandwidth, a massive amount by any measure.
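For those wanting to verify the math, the per-stack figure falls straight out of the interface width and data rate. A quick back-of-envelope sketch, using only the numbers quoted above:

```python
# Back-of-envelope HBM2E bandwidth math (figures from the announcement).
PINS_PER_STACK = 1024   # HBM2(E) stack interface width
DATA_RATE_GBPS = 3.6    # per-pin transfer rate, Gbps

# Per-stack bandwidth in GB/s: pins * Gbps-per-pin / 8 bits-per-byte
per_stack_gbs = PINS_PER_STACK * DATA_RATE_GBPS / 8
print(f"Per stack:  {per_stack_gbs:.1f} GB/s")    # 460.8 GB/s

# A 6-stack configuration, as on a high-end server GPU
six_stack_tbs = 6 * per_stack_gbs / 1000
print(f"Six stacks: {six_stack_tbs:.2f} TB/s")    # 2.76 TB/s
```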
Finally, for the moment SK Hynix isn’t announcing any customers, but the company expects the new memory to be used on “next-generation AI (Artificial Intelligence) systems including Deep Learning Accelerator and High-Performance Computing.” An eventual second-source for NVIDIA’s A100 would be among the most immediate use cases for the new memory, though NVIDIA is far from the only vendor to use HBM2. If anything, SK Hynix is typically very close to AMD, who is due to launch some new server GPUs over the next year for use in supercomputers and other HPC systems. So one way or another, the era of HBM2E is quickly ramping up, as more and more high-end processors are set to be introduced using the faster memory.
For the past eighteen months, Intel has paraded its new ‘Lakefield’ processor design around the press and the public as a paragon of new processor innovation. Inside, Intel pairs one of its fast peak performance cores with four of its lower power efficient cores, and uses novel technology in order to build the processor in the smallest footprint it can. The new Lakefield design is a sign that Intel is looking into new processor paradigms, such as hybrid processors with different types of cores, but also different stacking and packaging technologies to help drive the next wave of computing. With this article, we will tell you all you need to know about Lakefield.
We still haven’t had any official announcements from Samsung regarding the Note20 series; if past release dates are any indication, we expect the company to reveal the new phone series sometime in early to mid-August. Yet in a surprise blunder, the company has managed to publicly upload two product images of the upcoming Note20+ or Ultra (naming uncertain) on one of its Ukrainian pages.
Whilst we usually don’t report on leaks or unofficial speculations as part of our editorial standards – a first party blunder like this is very much an exception to the rule.
The leak showcases what appears to be the bigger sibling of the Note20 series, as it features the full camera housing and seemingly the same modules as the Galaxy S20 Ultra. There’s been a design aesthetic change, as the cameras are now accentuated by a ring element around the lenses, making the modules appear more consistent with each other, even though there are still clearly different-sized lenses along with the rectangular periscope zoom module. The images showcase actual depth on the part of the ring elements, so they may extend in three dimensions.
The new gold/bronze colour also marks Samsung’s return to a more metallic finish option.
We expect the Note20 series to be a minor hardware upgrade over the S20 devices, with the most defining characteristic naturally being the phone’s integrated S-Pen stylus.
Back in April, Intel released its Z490 chipset for its 10th generation Comet Lake processors, with over 44 models for users to select from. One of the more enthusiast-level Z490 models announced by ASUS was the ROG Maximus XII Apex, with solid overclocking-focused traits, but equally with enough features for performance users and gamers too. ASUS has announced that the ROG Maximus XII Apex is now available to purchase, with some of its most prominent features including three PCIe 3.0 x4 M.2 slots, a 16-phase power delivery, an Intel 2.5 GbE Ethernet controller, and an Intel Wi-Fi 6 wireless interface.
Not all motherboards are created equal, and not all are built for a single purpose, e.g. content creation, gaming, or workstation use. One of ASUS's most distinguished brands is the Republic of Gamers series, with its blend of premium controllers, striking aesthetics, and generally feature-packed models. The Apex series comprises the brand's overclocking-focused models, and there have been some fantastic Apex boards across the chipsets. ASUS has just put the new ROG Maximus XII Apex into North American retail channels.
Some of the most notable features of the ASUS ROG Maximus XII Apex include support for up to three PCIe 3.0 x4 M.2 drives, via the ROG DIMM.2 module included in the accessories bundle. Looking at storage, the Apex includes eight SATA ports which use a friendly V-shaped design to allow easier installation of SATA drives. Despite this board being ATX, ASUS includes just two memory slots, with support for up to 64 GB of DDR4-4800 memory; this is likely to improve latencies and overall memory performance when overclocking memory. There are two full-length PCIe 3.0 slots which operate at x16 and x8/x8, with a half-length PCIe 3.0 x4 slot and a single PCIe 3.0 x1 slot. On the rear panel is plenty of USB connectivity, with four USB 3.2 G2 Type-A, one USB 3.2 G2 Type-C, and five USB 3.2 G1 Type-A ports. For networking, ASUS includes an Intel I225-V 2.5 GbE Ethernet controller and an Intel AX201 Wi-Fi 6 interface, which also includes support for BT 5.1 devices. The board also includes a SupremeFX S1220A HD audio codec which adds five 3.5 mm audio jacks and a single S/PDIF optical output on the rear.
Underneath the large power delivery heatsink is a big 16-phase setup with sixteen TDA21490 90 A power stages, driven by an ASP1405I PWM controller operating in 7+1 mode. This is because ASUS opts to use teamed power stages, with fourteen for the CPU and two for the SoC; the teamed design is intended to improve transient response compared to setups that use doublers. Providing power to the CPU is a pair of 12 V ATX CPU power inputs, while a 4-pin Molex is present to provide additional power to the PCIe slots.
The ASUS ROG Maximus XII Apex is currently available to purchase at Digital Storm and Cyberpower in the US, with stock expected to land at both Amazon and Newegg very soon. Stockists and retailers such as Scan Computers in the UK also have stock at present.
Samsung's second-generation QLC NAND is here, but it's still held back by a SATA interface. The new Samsung 870 QVO is probably big enough to be your only SSD, but may not be fast enough to satisfy.
Today Qualcomm is making a big step forward in its smartwatch SoC offerings by introducing the brand-new Snapdragon Wear 4100 and Wear 4100+ platforms. The new chips succeed the aging Wear 3100 platforms, which originated in 2018, significantly upgrading the hardware specifications and bringing to the table all-new IP for the CPU, GPU and DSPs, all manufactured on a newer, lower-power process node.
One of the two leading manufacturers of tape cartridge storage, FujiFilm, claims that they have a technology roadmap through to 2031 which builds on the current magnetic tape paradigm to enable 400 TB per tape.
Following last week’s release of NVIDIA’s first Hardware-Accelerated GPU Scheduling-enabled video card driver, AMD this week has stepped up to the plate to do the same. The Radeon Software Adrenalin 2020 Edition 20.5.1 Beta with Graphics Hardware Scheduling driver (version 20.10.17.04) has been posted to AMD’s website, and as the name says on the tin, the driver offers support for Windows 10’s new hardware-accelerated GPU scheduling technology.
As a quick refresher, hardware acceleration for GPU scheduling was added to the Windows display driver stack with WDDM 2.7 (shipping in Win10 2004). And, as alluded to by the name, it allows GPUs to more directly manage their VRAM. Traditionally Windows itself has done a lot of the VRAM management for GPUs, so this is a distinctive change in matters.
Microsoft has been treating the feature as a relatively low-key development – relative to DirectX 12 Ultimate, they haven’t said a whole lot about it – meanwhile AMD’s release notes make vague performance improvement claims, stating “By moving scheduling responsibilities from software into hardware, this feature has the potential to improve GPU responsiveness and to allow additional innovation in GPU workload management in the future”. As was the case with NVIDIA’s release last week, don’t expect anything too significant here, otherwise AMD would be more heavily promoting the performance gains. But it’s something to keep an eye on over the long term.
In the meantime, AMD seems to be taking a cautious approach here. The beta driver has been published outside their normal release channels and only supports products using AMD’s Navi 10 GPUs – so the Radeon 5700 series, 5600 series, and their mobile variants. Support for the Navi 14-based 5500 series is notably absent, as is Vega support for both discrete and integrated GPUs.
Additional details about the driver release, as well as download instructions, can be found on AMD’s website in the driver release notes.
Finally, on a tangential note, I'm aiming to sit down with The Powers That Be over the next week or so in order to better dig into hardware-accelerated GPU scheduling. Since it's mostly a hardware developer-focused feature, Microsoft hasn't talked about it much in the consumer context or with press. So I'll be diving into more on the theory behind it: what it's meant to do, future feature prospects, and as well as the rationale for introducing it now as opposed to earlier (or later). Be sure to check back in next week for that.
It’s been a couple of months since OnePlus released the new OnePlus 8 & OnePlus 8 Pro, and both devices have received plenty of software updates improving their experience and camera quality. Today, it’s time to finally go over the full review of both devices, which OnePlus no longer really calls “flagship killers”, but rather outright flagships.
The OnePlus 8, and especially the OnePlus 8 Pro, are big step-up redesigns from the company, significantly raising the bar in regards to the specifications and features of the phones. The OnePlus 8 Pro is essentially a check-marked wish-list of characteristics that were missing from last year’s OnePlus 7 Pro, as the company has addressed some of its predecessors’ biggest criticisms. The slightly smaller and cheaper regular OnePlus 8 more closely follows its predecessors’ ethos as well as competitive pricing, all whilst adopting the new design language that’s been updated with this year’s devices.
It was recently announced that the Fugaku supercomputer, located at Riken in Japan, has scored the #1 position on the TOP500 supercomputer list, as well as #1 positions in a number of key supercomputer benchmarks. At the heart of Fugaku isn’t any standard x86 processor, but one based on Arm – specifically, the A64FX 48+4-core processor, which uses Arm’s Scalable Vector Extensions (SVE) to enable high-throughput FP64 compute. At 435 PetaFLOPs and 7.3 million cores, Fugaku beat the former #1 system by 2.8x in performance. Currently Fugaku is being used for COVID-19 related research, such as modelling transmission rates and the spread of the virus in liquid droplet dispersion.
The Fujitsu A64FX card is a unique piece of kit, offering 48 compute cores and 4 control cores, each with monumental bandwidth to keep the 512-bit wide SVE units fed. The chip runs at 2.2 GHz, and can operate in FP64, FP32, FP16 and INT8 modes for a variety of AI applications. There is 1 TB/sec of bandwidth from the 32 GB of HBM2 on each card, and because there are four control cores per chip, it runs by itself without any external host/device situation.
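As a rough sanity check on those figures, here's a back-of-envelope sketch of the chip's peak FP64 rate. It assumes two 512-bit SVE FMA pipelines per core – a publicly documented A64FX detail, but an assumption on our part rather than a figure from this announcement:

```python
# Rough peak FP64 throughput for one A64FX chip. The two-SVE-pipe
# configuration is assumed from Fujitsu's public A64FX disclosures.
CORES = 48                 # compute cores (the 4 control cores excluded)
FREQ_GHZ = 2.2
LANES_FP64 = 512 // 64     # 8 FP64 lanes per 512-bit SVE unit
PIPES = 2                  # SVE FMA pipes per core (assumed)
FLOPS_PER_FMA = 2          # one fused multiply-add = 2 FLOPs

peak_tflops = CORES * FREQ_GHZ * LANES_FP64 * PIPES * FLOPS_PER_FMA / 1000
print(f"{peak_tflops:.2f} TFLOPs FP64 peak per chip")  # ≈ 3.38
```

The lower-precision modes follow from the same lane math: FP32 doubles and FP16 quadruples the per-cycle lane count, which is where the chip's AI-oriented throughput comes from.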
It wasn’t ever clear if the A64FX module would be available on a wider scale beyond supercomputer sales; however, today’s news confirms that it is, with the Japan-based HPC Systems set to offer a Fujitsu PrimeHPC FX700 server that contains up to eight A64FX nodes (at 1.8 GHz) within a 2U form factor. Each node is paired with 512 GB of SSD storage and gigabit Ethernet capabilities, with room for expansion (InfiniBand EDR etc.). The current deal at HPC Systems is for a 2-node implementation, at a price of ¥4,155,330 (~$39,000 USD), with the deal running to the end of the year.
The A64FX card already has listed support for the quantum chemical calculation software Gaussian16, the molecular dynamics software AMBER, and the non-linear structure analysis software LS-DYNA. Other commercial packages in the structure and fluid analysis fields will come on board in due course. There is also Fujitsu’s Software Compiler Package v1.0 to enable developers to build their own software.
The arrival of the AMD B550 chipset is an exciting prospect for PC builders, as it’s the first to bring the potential of PCIe 4.0 to the forefront for mainstream builders. ASUS has a diverse selection of new motherboards to choose from with this chipset, and this useful B550 motherboard guide will help you figure out which one is right for you.
In ASUS B550 motherboards, the main PCIe x16 and M.2 slots are PCIe 4.0-capable. They also feature up to four USB 3.2 Gen 2 ports that clock in with a maximum supported speed of 10Gbps each. The chipset’s built-in lanes now have PCIe 3.0 connectivity as well, which is great to see. Additionally, AMD has noted that future CPUs built on the Zen 3 architecture will be fully compatible with B550 motherboards, making them a safe and long-lasting investment for people who wish to upgrade to those new processors down the line.
Absent from the discrete GPU space for over 20 years, Intel is set to see the first fruits from their labors to re-enter that market this year. The company has been developing their new Xe family of GPUs for a few years now, and the first products are finally set to arrive in the coming months with the Xe-LP-based DG1 discrete GPU, as well as Tiger Lake’s integrated GPU, kicking off the Xe GPU era for Intel.
But those first Xe-LP products are just the tip of a much larger iceberg. Intending to develop a comprehensive top-to-bottom GPU product stack, Intel is also working on GPUs optimized for the high-power discrete market (Xe-HP), as well as the high-performance computing market (Xe-HPC).
That high end of the market, in turn, is arguably the most important of the three segments for Intel, as well as being the riskiest. The server-class GPUs will be responsible for broadening Intel’s lucrative server business beyond CPUs, along with fending off NVIDIA and other GPU/accelerator rivals, who in the last few years have ridden the deep learning wave to booming profits and market shares that increasingly threaten Intel’s traditional market dominance. The server market is also the riskiest market, due to the high-stakes nature of the hardware: the only thing bigger than the profits are the chips, and thus the costs to enter the market. So under the watchful eye of Raja Koduri, Intel’s GPU guru, the company is gearing up to stage a major assault into the GPU space.
That brings us to the matter of this week’s teaser. One of the benefits of being a (relatively) upstart rival in the GPU business is that Intel doesn’t have any current-generation products that they need to protect; without the risk of Osborning themselves, they’re free to talk about their upcoming products even well before they ship. So, as a bit of a savvy social media ham, Koduri has been posting occasional photos of Intel's Xe GPUs, as Intel brings them up in their labs.
BFP - big ‘fabulous’ package😀 pic.twitter.com/e0mwov1Ch1— Raja Koduri (@Rajaontheedge) June 25, 2020
Today’s teaser from Koduri shows off a tray with three different Xe chips of different sizes. While detailed information about the Xe family is still limited, Intel has previously commented that the Xe-HPC-based Ponte Vecchio would be taking a chiplet route for the GPU, using multiple chiplets to build larger and more powerful designs. So while Koduri's tweets don't make it clear what specific GPUs we're looking at – if they're all part of the Xe-HP family or a mix of different families – the photo is an interesting hint that Intel may be looking at a wider use of chiplets, as the larger chip sizes roughly correlate to 1x2 and 2x2 configurations of the smallest chip.
And with presumably multiple chiplets under the hood, the resulting chips are quite sizable. With a helpful AA battery in the photo for reference, we can see that the smaller packages are around 50mm wide, while the largest package is easily approaching 85mm on a side. (For reference, an Intel desktop CPU is around 37.5mm x 37.5mm).
Finally, in a separate tweet, Koduri quickly talks about performance: “And..they let me hold peta ops in my palm(almost:)!” Koduri doesn’t go into any detail about the numeric format involved – an important qualifier when talking about compute throughput on GPUs that can process lower-precision formats at higher rates – but we’ll be generous and assume INT8 operations. INT8 has become a fairly popular format for deep learning inference, as the integer format offers great performance for neural nets that don’t need high precision. NVIDIA’s A100 accelerator, for reference, tops out at 0.624 PetaOPs for regular tensor operations, or 1.248 PetaOps for a sparse matrix.
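To put "peta ops" in context against those A100 figures, here's a quick hypothetical comparison. The 1.0 PetaOPS value is simply our literal reading of the tweet, not an official Intel number:

```python
# Hypothetical comparison of Koduri's "peta ops" claim against NVIDIA's
# published A100 INT8 tensor throughput. The 1.0 PetaOPS figure is our
# literal (and assumed) reading of the tweet.
a100_dense = 0.624    # PetaOPS, regular INT8 tensor operations
a100_sparse = 1.248   # PetaOPS, with structured sparsity

claim = 1.0           # PetaOPS (assumed)
print(f"{claim / a100_dense:.2f}x A100 dense INT8")    # 1.60x
print(f"{claim / a100_sparse:.2f}x A100 sparse INT8")  # 0.80x
```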
And that is the latest on Xe. With the higher-end discrete parts likely not shipping until later in 2021, this is likely not going to be the last word from Intel and Koduri on their first modern family of discrete GPUs.
Update: A previous version of the article called the large chip Ponte Vecchio, Intel's Xe-HPC flagship. We have since come to understand that the silicon we're seeing is likely not Ponte Vecchio, making it likely to be something Xe-HP based.
One of the stories bubbling away in the background of the industry is AMD’s self-imposed ‘25x20’ goal. Starting from its 2014 baseline, AMD committed to itself, to customers, and to investors that it would achieve an overall 25x improvement in ‘Performance Efficiency’ by 2020, a function of raw performance and power consumption. At the time AMD defined its Kaveri mobile product as the baseline for the challenge – admittedly a very low bar – and each year since, AMD has updated us on its progress. With this year being 2020, the question on my lips ever since the launch of Zen2 for mobile was whether AMD had achieved its goal, and if so, by how much? The answer is yes, and by a lot.
In this article we will recap the 25x20 project, how the metrics are calculated, and what this means for AMD in the long term.
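As a preview of how such a metric works: a performance-efficiency multiple of this sort is a blended performance score divided by measured energy consumption, normalized against the 2014 Kaveri baseline. A minimal sketch with entirely hypothetical numbers (AMD's actual benchmark blend and energy methodology are recapped below):

```python
# Generic sketch of a "performance efficiency" multiple like AMD's 25x20
# goal: blended performance divided by energy use, compared against the
# 2014 baseline. All input numbers here are hypothetical illustrations.
def efficiency(perf_score, energy_kwh):
    return perf_score / energy_kwh

baseline = efficiency(perf_score=100, energy_kwh=1.00)   # 2014 Kaveri
current  = efficiency(perf_score=500, energy_kwh=0.16)   # 2020 part

multiple = current / baseline
print(f"{multiple:.1f}x improvement")   # 31.2x with these made-up inputs

# For context: hitting 25x over six years requires a compound annual
# improvement of 25 ** (1/6), roughly 1.71x per year.
print(f"{25 ** (1/6):.2f}x per year needed")
```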
NVIDIA sends word this morning that the company has posted their first DirectX 12 Ultimate-compliant driver. Published as version 451.48 – the first driver out of NVIDIA’s new Release 450 driver branch – the new driver is the first release from the company to explicitly support the latest iteration of DirectX 12, enabling support for features such as DXR 1.1 ray tracing and tier 2 variable rate shading. As well, this driver also enables support for hardware accelerated GPU scheduling.
As a quick refresher, DirectX 12 Ultimate is Microsoft’s latest iteration of the DirectX 12 graphics API, with Microsoft using it to synchronize the state of the API between current-generation PCs and the forthcoming Xbox Series X console, as well as to set a well-defined feature baseline for future game development. Based around the capabilities of current generation GPUs (namely: NVIDIA Turing) and the Xbox Series X’s AMD RDNA2-derived GPU, DirectX 12 Ultimate introduces several new GPU features under a new feature tier (12_2). This includes an updated version of DirectX’s ray tracing API, DXR 1.1, as well as tier 2 variable rate shading, mesh shaders, and sampler feedback. The software groundwork for this has been laid in the latest version of Windows 10, version 2004, and now is being enabled in GPU drivers for the first time.
|DirectX 12 Feature Levels||12_2 (DX12 Ultimate)||12_1||12_0|
|Introduced as of||NVIDIA: Turing||NVIDIA: Maxwell 2||NVIDIA: Maxwell 2|
|Variable Rate Shading||Yes (Tier 2)||No||No|
|Raster Order Views||Yes||Yes||No|
|Typed UAV Load||Yes||Yes||Yes|
In the case of NVIDIA’s recent video cards, the underlying Turing architecture has supported these features since the very beginning. However, their use has been partially restricted to games relying on NVIDIA’s proprietary feature extensions, due to a lack of standardized API support. Overall it’s taken most of the last two years to get the complete feature set added to DirectX, and while NVIDIA isn’t hesitating to use this moment to proclaim their GPU superiority as the first vendor to ship DirectX 12 Ultimate support, to some degree it’s definitely vindication of the investment the company put into baking these features into Turing.
In any case, enabling DirectX 12 Ultimate support is an important step for the company, though one that’s mostly about laying the groundwork for game developers, and ultimately, future games. At this point no previously-announced games have confirmed that they’ll be using DX12U, though this is just a matter of time, especially with the Xbox Series X launching in a few months.
Perhaps the more interesting aspect of this driver release, though only tangential to DirectX 12 Ultimate support, is that NVIDIA is enabling support for hardware accelerated GPU scheduling. This mysterious feature was added to the Windows display driver stack with WDDM 2.7 (shipping in Win10 2004), and as alluded to by the name, it allows GPUs to more directly manage their VRAM. Traditionally Windows itself has done a lot of the VRAM management for GPUs, so this is a distinctive change in matters.
At a high level, NVIDIA is claiming that hardware accelerated GPU scheduling should offer minor improvements to the user experience, largely by reducing latency and improving performance thanks to more efficient video memory handling. I would not expect anything too significant here – otherwise NVIDIA would be heavily promoting the performance gains – but it’s something to keep an eye out for. Meanwhile, absent any other details, I find it interesting that NVIDIA lumps video playback in here as a beneficiary as well, since video playback is rarely an issue these days. At any rate, the video memory handling changes are being instituted at a low level, so hardware scheduling is not only for DirectX games and the Windows desktop, but also for Vulkan and OpenGL games as well.
Speaking of Vulkan, the open source API is also getting some attention with this driver release. 451.48 is the first GeForce driver with support for Vulkan 1.2, the latest version of that API. An important housekeeping update for Vulkan, 1.2 promotes a number of previously optional feature extensions into the core Vulkan API, such as Timeline Semaphores, and improves cross-API portability by adding full support for HLSL (i.e. DirectX) shaders within Vulkan.
Finally, while tangential to today’s driver release, NVIDIA has posted an interesting note on its customer support portal regarding Windows GPU selection that’s worth making note of. In short, Windows 10 2004 has done away with the “Run with graphics processor” contextual menu option within NVIDIA’s drivers, which prior to now has been a shortcut method of forcing which GPU an application runs on in an Optimus system. In fact, it looks like control over this has been removed from NVIDIA’s drivers entirely. As noted in the support document, controlling which GPU is used is now handled through Windows itself, which means laptop users will need to get used to going into the Windows Settings panel to make any changes.
As always, you can find the full details on NVIDIA’s new GeForce driver, as well as the associated release notes, over on NVIDIA’s driver download page.
Western Digital is introducing a new high-end enterprise NVMe SSD, the Ultrastar DC SN840, and an NVMe over Fabrics 2U JBOF using up to 24 of these SSDs.
The Ultrastar DC SN840 uses the same 96L TLC and in-house SSD controller as the SN640, but the SN840 offers more features, higher performance, and greater endurance to serve a higher market segment than the more mainstream SN640. The SN840 uses a 15mm thick U.2 form factor compared to 7mm U.2 (and M.2 and EDSFF options) for the SN640, which allows the SN840 to handle much higher power levels and to accommodate higher drive capacities in the U.2 form factor. The controller is still a PCIe 3 design so peak sequential read performance is barely faster than the SN640, but the rest of the performance metrics are much faster than the SN640: random reads now saturate the PCIe 3 x4 link and write performance is much higher across the board. Power consumption can reach 25W, but the SN840 provides a range of configurable power states to limit it to as little as 11W.
|Western Digital Ultrastar DC Enterprise NVMe SSD Specifications|
| |Ultrastar DC SN840|Ultrastar DC SN640|Ultrastar DC SN340|
|Form Factor|2.5" 15mm U.2|2.5" 7mm U.2|2.5" 7mm U.2|
|Interface|PCIe 3 x4 or x2+x2 dual-port|PCIe 3 x4|PCIe 3 x4|
|NAND Flash|Western Digital 96L BiCS4 3D TLC|Western Digital 96L BiCS4 3D TLC|Western Digital 96L BiCS4 3D TLC|
|Write Endurance|1 DWPD / 3 DWPD|0.8 DWPD / 2 DWPD|0.3 DWPD|
|Sequential Read|3.3 GB/s|3.1 GB/s|3.1 GB/s|
|Sequential Write|3.1 GB/s / 3.2 GB/s|2 GB/s|1.4 GB/s|
|Random Read IOPS|780k|472k / 473k|429k|
|Random Write IOPS|160k / 257k|65k / 116k|7k (32kB writes)|
|Random 70/30 Mixed IOPS|401k / 503k|194k / 307k|139k (32kB writes)|
|Active Power|25 W|12 W|6.5 W|
(Where two values are listed, they correspond to the drive's two endurance tiers, lower-endurance tier first.)
The SN840 supports dual-port PCIe operation for high availability, a standard feature for SAS drives but usually only found on enterprise NVMe SSDs that are top of the line or special-purpose models. Other enterprise-oriented features include optional self-encrypting drive capability and support for configuring up to 128 NVMe namespaces.
The SN840 will be available in two endurance tiers, rated for 1 drive write per day (DWPD) and 3 DWPD—fairly standard, but a step up from the 0.8 DWPD and 2 DWPD tiers offered by the SN640. The high-endurance tier will offer capacities from 1.6 TB to 6.4 TB, while the lower-endurance tier has slightly higher usable capacities at each level, and adds a 15.36 TB capacity at the top. (The SN640 is due to get a 15.36 TB option in the EDSFF form factor only.)
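As a back-of-the-envelope illustration of what those endurance ratings mean in practice, DWPD converts to total writes as capacity × DWPD × days in the warranty period. (The 5-year warranty term below is an assumption for illustration, not a figure from WD's announcement.)

```python
def rated_writes_tb(capacity_tb, dwpd, warranty_years=5):
    """Approximate rated endurance in TB written: capacity times
    drive-writes-per-day times days in the warranty period."""
    return capacity_tb * dwpd * warranty_years * 365

# A 6.4 TB SN840 at 3 DWPD vs. a 15.36 TB SN840 at 1 DWPD:
print(round(rated_writes_tb(6.4, 3)))    # 35040 TB written
print(round(rated_writes_tb(15.36, 1)))  # 28032 TB written
```

Notably, the high-capacity 1 DWPD model still absorbs a comparable total volume of writes to the smaller 3 DWPD drives, simply by virtue of its size.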
Between the SN840, SN640 and SN340, Western Digital's enterprise NVMe SSDs now cover a wide range of use cases, all with their latest 96L 3D TLC NAND and in-house controller designs. Shipments of the SN840 begin in July.
Using the new Ultrastar DC SN840 drives, Western Digital is also introducing a new product to its OpenFlex family of NVMe over Fabrics products. The OpenFlex Data24 is a fairly simple Ethernet-attached 2U JBOF enclosure supporting up to 24 SSDs (368TB total). These drives are connected through a PCIe switch fabric to up to six ports of 100Gb Ethernet, provided by RapidFlex NVMeoF controllers that were developed by recent WDC acquisition Kazan Networks. The OpenFlex Data24 is a much more standard-looking JBOF design than the existing 3U OpenFlex F3100 that packs its storage in 10 modules with a proprietary form factor; the Data24 also has a shorter depth to fit into more common rack sizes. The OpenFlex Data24 will also be slightly cheaper and much faster than their Ultrastar 2U24 SAS JBOF solution.
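A quick sanity check on the quoted 368TB figure, which simply assumes the largest 15.36 TB SN840 in every one of the 24 bays:

```python
# Raw capacity of a fully-populated OpenFlex Data24
bays = 24
max_drive_tb = 15.36  # largest SN840 capacity point
print(round(bays * max_drive_tb, 2))  # 368.64 -> quoted as "368TB total"
```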
The OpenFlex Data24 will launch this fall.
During Intel's unveiling of the Z490 chipset and Intel Core 10th generation Comet Lake processors, Intel also announced its series of Xeon W-1200 processors. To accompany this announcement, without much fanfare, Intel also launched the W480 chipset, which likewise uses the LGA1200 socket. Aiming for a more professional feel for processors with ECC support, vendors have announced a variety of W480 models: some target content creators, others workstation environments. These boards are paired solely with Xeon W-1200 processors, and support both ECC and non-ECC DDR4 memory.
Western Digital originally launched their Red lineup of hard disk drives for network-attached storage devices back in 2012. The product stack later expanded to service professional NAS units with the Red Pro. These drives have traditionally offered very predictable performance characteristics, thanks to the use of conventional magnetic recording (CMR). More recently, with the advent of shingled magnetic recording (SMR), WD began offering drive-managed versions in the direct-attached storage (DAS) space for consumers, and host-managed versions for datacenters.
Towards the middle of 2019, WD silently introduced WD Red hard drives (2-6TB capacities) based on drive-managed SMR. There was no fanfare or press-release, and the appearance of the drives in the market was not noticed by the tech press. Almost a year after the drives appeared on the shelves, the voice of customers dissatisfied with the performance of the SMR drives in their NAS units reached levels that WD could no longer ignore. In fact, as soon as we heard about the widespread usage of SMR in certain WD Red capacities, we took those drives off our recommended HDDs list.
Finally, after starting to make amends towards the end of April 2020, Western Digital has gone one step further at last, and cleaned up their NAS drive branding to make it clear which drives are SMR-based. Re-organizing their Red portfolio, the vanilla WD Red family has become a pure SMR lineup. Meanwhile a new brand, the Red Plus, will encompass the 5400 RPM CMR hard drives that the WD Red brand was previously known for. Finally, the Red Pro lineup remains unchanged, with 7200 RPM CMR drives for high performance configurations.
WD NAS Hard Drives for Consumer / SOHO / SMB Systems (Source: Western Digital Blog)
While Western Digital (and consumers) should have never ended up in this situation in the first place, it's nonetheless an important change to WD's lineup that restores some badly-needed clarity to their product lines. The technical and performance differences between CMR and SMR drives are significant, and having the two used interchangeably in the Red line – in a lineup that previously didn't contain any SMR drives to begin with – was always going to be a problem.
In particular, a look at various threads in NAS forums indicates that most customers of these SMR Red drives faced problems with certain RAID and ZFS operations. The typical consumer use-case for NAS drives – even just 1-8 bays – may include RAID rebuilds, RAID expansions, and regular scrubbing operations. The nature of drive-managed SMR makes it unsuitable for those types of configurations.
It was also not clear what WD hoped to achieve by using SMR for lower-capacity drives. Certain capacity points, such as 2TB and 4TB, have one fewer platter in the SMR version compared to the CMR version, which should result in lower production costs. But the trade-offs – harming drive performance in certain NAS configurations, and subsequently ruining the reputation of Red drives in the minds of consumers – should have been considered.
In any case, it seems probable that the lower-capacity SMR WD Red drives were launched more as a beta test for the eventual launch of SMR-based high-capacity drives. Perhaps launching these drives under a different brand – say, Red Archive – instead of polluting the WD Red branding would have been better from a marketing perspective.
As SMR became entrenched in the consumer space, it was perhaps inevitable that NAS drives utilizing the technology would appear in the market. However in the process, WD has missed a golden chance to educate consumers on situations where SMR drives make sense in NAS units.
For our part, while the updated branding situation is a significant improvement, we do not completely agree with WD's claim about SMR Reds being suitable for SOHO NAS units. This may lead to non-tech savvy consumers using them in RAID configurations, even in commercial off-the-shelf (COTS) NAS units such as those from QNAP and Synology. Our recommendation is to use these SMR Reds for archival purposes (an alternative to tape backups for the home - not that consumers are doing tape backups today!), or, in WORM (Write-Once Read-Many) scenarios in a parity-less configuration such as RAID1 or RAID10. It is not advisable to subject these drives to RAID rebuilds or scrubbing operations, and ZFS is not even in the picture. The upside, at least, is that in most cases users contemplating ZFS are tech-savvy enough to know the pitfalls of SMR for their application.
All said, WD has one of the better implementations of SMR (in the DAS space), as we wrote earlier. But that is for direct-attached storage, which gives SMR drives plenty of idle time to address their garbage-collection needs. Consumer NAS workloads – particularly background activity that is not explicitly user-triggered – may not give the drives the same luxury.
Consumers considering the WD Red lineup prior to the SMR fiasco can now focus on the Red Plus drives. We do not advise consumers to buy the vanilla Red (SMR) unless they are aware of what they are signing up for. To this effect, consumers need to become well-educated regarding the use-cases for such drives. Seagate's 8TB Archive HDD was launched in 2015, but didn't meet with much success in the consumer market for that very reason (and had to be repurposed for DAS applications). The HDD vendors' marketing teams have their work cut out for them if high-capacity SMR drives for consumer NAS systems are in their product roadmap.
One of the key drivers in the Arm server space over the last few years has been the cohesion of the different product teams attempting to build the next processor to attack the dominance of x86 in the enterprise market. A number of companies and products have come and gone (Qualcomm’s Centriq) or been acquired (Annapurna by Amazon, Applied Micro by Ampere), with varying degrees of success, some of which is linked to the key personnel in each team. One of our readers recently alerted us to a notable move in this space: Gopal Hegde, the VP/GM of the ThunderX Processor Business Unit at Marvell, has now left the company.
Today at the Next@Acer conference, Acer is announcing an updated version of their compact gaming desktop, the Predator Orion 3000, and the company was able to send us a pre-production unit for a hands-on. As this is a pre-production unit, final performance is not yet fine-tuned, but we can go over the new chassis design, as well as the internals of this mid-sized tower PC.
|Acer Predator Orion 3000 Desktop|
|CPU|10th Generation Intel Core i5 or Core i7|
|GPU|NVIDIA GeForce GTX 1650 / GTX 1660, or NVIDIA GeForce RTX 2060 / RTX 2060 Super / RTX 2070 Super|
|RAM|Up to 64 GB DDR4-2666|
|Storage|128 GB / 256 GB / 512 GB / 1 TB PCIe NVMe M.2 2280; 2 x 3.5-inch SATA bays (up to 2 x 3 TB HDD)|
|Networking|Killer E2600 Gigabit Ethernet, Wi-Fi 6|
|Cooling|Dual Predator Frostblade RGB fans|
|I/O - Rear|4 x USB 3.2, 2 x USB 2.0, 3.5 mm audio|
|I/O - Front|1 x USB Type-A, 1 x USB Type-C|
|Dimensions|15.4 x 6.8 x 15.2 inches (HxWxD)|
Acer’s updated Orion 3000 chassis is a well-thought-out design, with some excellent features, in a compact and stylish package that would fit well on any gaming desk. Acer offers the Orion 3000 with a black perforated side panel, or you can opt for an EMI-compliant tempered glass side if you want to check out the RGB-lit interior. At 18 liters, the Orion 3000 is also surprisingly compact considering the powerful components inside.
Keeping everything cool are two Predator “Frostblade” fans, with 16.7 million colors to choose from in the PredatorSense App. The RGB also continues with two accent lights along the front of the case, and with or without the clear side panel, the lighting is plenty to create a glow around the system. Powering up the system was impressive, not only because of the random RGB color scheme, but also because the Frostblade fans were tuned for a very low noise level. The system, even as a pre-production sample, was nearly silent at idle.
The Orion 3000 isn’t just about style though. Acer has some wonderful functional elements in their design as well. The top of the case houses a built-in carrying handle, which makes the small desktop very easy to move around, and while I am not sure if Acer came up with the idea of building a headset holder into the chassis, it’s a brilliant idea and one I wish my own case offered. The power button is very prominent and easy to access, and for the new design Acer has moved the front panel ports behind a small door to keep them concealed when not in use. Whether or not you’d like them behind a door probably depends on how often you use them, but the door looks like it could be removed without too much effort.
As this is a pre-production unit, the cable management will likely be adjusted somewhat in the next couple of months, but even so it did not impede airflow at all.
The case has room for two 3.5-inch SATA drives, as well as an NVMe slot for the built-in storage, of which Acer is offering up to 1 TB for the boot drive. The system will have a single PCIe x16 slot for the GPU, so any expansion will have to be over USB. There’s onboard Gigabit Ethernet and Wi-Fi 6 to cover any networking needs.
Acer will be offering a wide range of performance, with Core i5 and Core i7 models, and up to 64 GB of DDR4-2666 memory. On the GPU front, Acer is offering the NVIDIA GeForce GTX 1650 and 1660, and RTX 2060, 2060 Super, and 2070 Super options. The sample we were provided featured a 500-Watt power supply, which should be plenty to handle everything Acer is offering.
The redesigned Predator Orion 3000 will be available in September, starting at $999.99 USD.
With the advent of higher performance Arm based cloud computing, a lot of focus is being put on what the various competitors can do in this space. We’ve covered Ampere Computing’s previous eMag products, which actually came from the acquisition of Applied Micro, but the next generation hardware is called Altra, and after a few months of teasing some high performance compute, the company is finally announcing its product list, as well as an upcoming product due for sampling this year.
After many months of rumors and speculation, Apple confirmed this morning during their annual WWDC keynote that the company intends to transition away from using x86 processors at the heart of their Mac family of computers. Replacing the venerable ISA – and the exclusively-Intel chips that Apple has been using – will be Apple’s own Arm-based custom silicon, with the company taking their extensive experience in producing SoCs for iOS devices, and applying that to making SoCs for Macs. With the first consumer devices slated to ship by the end of this year, Apple expects to complete the transition in about two years.
The last (and certainly most anticipated) segment of the keynote, Apple’s announcement that they are moving to using their own SoCs for future Macs was very much a traditional Apple announcement. Which is to say that it offered just enough information to whet developers’ (and consumers’) appetites without offering too much in the way of details too early. So while Apple has answered some very important questions immediately, there’s also a whole lot more we don’t know at the moment, and likely won’t know until late this year when hardware finally starts shipping.
What we do know, for the moment, is that this is the ultimate power play for Apple, with the company intending to leverage the full benefits of vertical integration. This kind of top-to-bottom control over hardware and software has been a major factor in the success of the company’s iOS devices, both with regards to hard metrics like performance and soft metrics like the user experience. So given what it’s enabled Apple to do for iPhones, iPads, etc, it’s not at all surprising to see that they want to do the same thing for the Mac. Even though the OS itself isn’t changing (much), the ramifications of Apple building the underlying hardware down to the SoC means that they can have the OS make full use of any special features that Apple bakes into their A-series SoCs. Idle power, ISPs, video encode/decode blocks, and neural networking inference are all subjects that are potentially on the table here.
One of the key metrics we’ve been waiting for since AMD launched its Zen architecture was when it would re-enter the top 10 supercomputer list. The previous best AMD system, built on Opteron CPUs, was Titan, which held the #1 spot in 2012 but slowly dropped out of the top 10 by June 2019. Now, in June 2020, AMD scores a big win for its Zen 2 microarchitecture by getting to #7. But there’s a twist in this tale.
Amongst Apple’s historic WWDC announcements today, such as the company’s switch from x86 to Arm processor architectures, we also saw the launch of the new iOS 14 and iPadOS 14, which bring new features to the company’s mobile devices.
High performance computing is now at a point in its existence where to be the number one, you need very powerful, very efficient hardware, lots of it, and lots of capability to deploy it. Deploying a single rack of servers to total a couple of thousand cores isn’t going to cut it. The former #1 supercomputer, Summit, is built from 22-core IBM Power9 CPUs paired with NVIDIA GV100 accelerators, totaling 2.4 million cores and consuming 10 MegaWatts of power. The new Fugaku supercomputer, built at Riken in partnership with Fujitsu, takes the #1 spot on the June 2020 list, with 7.3 million cores and consuming 28 MegaWatts of power.
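Putting those headline numbers side by side gives a rough sense of scale. This is only a sketch using the core counts and power figures quoted above; it says nothing about per-core performance.

```python
# Core density per megawatt for the two systems discussed above
systems = {
    "Summit": {"cores": 2.4e6, "megawatts": 10},
    "Fugaku": {"cores": 7.3e6, "megawatts": 28},
}
for name, s in systems.items():
    print(f"{name}: {s['cores'] / s['megawatts']:,.0f} cores per MW")
# -> Summit: 240,000 cores per MW
# -> Fugaku: 260,714 cores per MW
```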
While COVID may have put a crimp on the tech industry, for Apple the show must still go on. Join us at 10am Pacific/17:00 UTC for our live blog coverage of this year's Apple Worldwide Developers Conference (WWDC), which like so many other shows is taking a uniquely virtual tack this year.
The morning keynote for the developer-focused show is typically a rapid-fire two-hour run through Apple's ecosystem, covering everything from macOS and iOS to individual Apple applications and more, and it sounds like Apple will be sticking to that strategy for their virtual show. Meanwhile there's always the lingering question over whether we'll also see a new hardware announcement this year – Apple tends to be about 50/50 with hardware at WWDC – something which has taken on an even greater significance this year as Apple is widely believed to be working on transitioning the Mac platform to its own Arm-based SoCs. Even if we don't get hardware details at this year's WWDC, even confirmation of that project and Apple's transition plans would mark the kick-off point for a huge shift in the Apple ecosystem, and an event that could reverberate into the PC ecosystem as well.
So join us at 10am Pacific to see just what Apple is working on for this year and beyond.
This year, at the international VLSI conference, Intel’s CTO Mike Mayberry gave one of the plenary presentations, which this year was titled ‘The Future of Compute’. Within the presentation, a number of new manufacturing technologies were discussed, including going beyond FinFET to Gate-All-Around structures, or even to 2D Nano-sheet structures, before eventually potentially leaving CMOS altogether. In the Q&A at the end of the presentation, Dr. Mayberry stated that he expects nanowire transistors to be in high volume production within five years, putting a very distinctive mark in the sand for Intel and others to reach.
With the launch of their Ampere architecture and new A100 accelerator barely a month behind them, NVIDIA this morning is announcing the PCIe version of their accelerator as part of the start of the now-virtual ISC Digital conference for high performance computing. The more straight-laced counterpart to NVIDIA’s flagship SXM4 version of the A100 accelerator, the PCIe version of the A100 is designed to offer A100 in a more traditional form factor for customers who need something that they can plug into standardized servers. Overall the PCIe A100 offers the same peak performance as the SXM4 A100; however, with a lower 250 Watt TDP, real-world performance won’t be quite as high.
The obligatory counterpart to NVIDIA’s SXM form factor accelerators, NVIDIA’s PCIe accelerators serve to flesh out the other side of NVIDIA’s accelerator lineup. While NVIDIA would gladly sell everyone SXM-based accelerators – which would include the pricey NVIDIA HGX carrier board – there are still numerous customers who need to be able to use GPU accelerators in standard, PCIe-based rackmount servers. Or for smaller workloads, customers don’t need the kind of 4-way and higher scalability offered by SXM-form factor accelerators. So with their PCIe cards, NVIDIA can serve the rest of the accelerator market that their SXM products can’t reach.
The PCIe A100, in turn, is a full-fledged A100, just in a different form factor and with a more appropriate TDP. In terms of peak performance, the PCIe A100 is just as fast as its SXM4 counterpart; NVIDIA this time isn’t shipping this as a cut-down configuration with lower clockspeeds or fewer functional blocks than the flagship SXM4 version. As a result the PCIe card brings everything A100 offers to the table, with the same heavy focus on tensor operations, including the new higher precision TF32 and FP64 formats, as well as even faster integer inference.
|NVIDIA Accelerator Specification Comparison|
| |A100 (PCIe)|A100 (SXM4)|V100|P100|
|FP32 CUDA Cores|6912|6912|5120|3584|
|Memory Clock|2.4Gbps HBM2|2.4Gbps HBM2|1.75Gbps HBM2|1.4Gbps HBM2|
|Memory Bus Width|5120-bit|5120-bit|4096-bit|4096-bit|
|Single Precision|19.5 TFLOPs|19.5 TFLOPs|14.1 TFLOPs|9.3 TFLOPs|
|Double Precision (1/2 FP32 rate)|9.7 TFLOPs|9.7 TFLOPs|7.0 TFLOPs|4.7 TFLOPs|
|INT8 Tensor|624 TOPs|624 TOPs|N/A|N/A|
|FP16 Tensor|312 TFLOPs|312 TFLOPs|112 TFLOPs|N/A|
|TF32 Tensor|156 TFLOPs|156 TFLOPs|N/A|N/A|
|Relative Performance (SXM Version)|90%|100%|N/A|N/A|
|NVLink|12 Links (600GB/sec)|12 Links (600GB/sec)|4 Links (200GB/sec)|4 Links (160GB/sec)|
|Manufacturing Process|TSMC 7N|TSMC 7N|TSMC 12nm FFN|TSMC 16nm FinFET|
|Interface|PCIe 4.0|SXM4|PCIe 3.0|SXM|
But because the dual-slot add-in card form factor is designed for lower TDP products, offering less room for cooling and typically less access to power as well, the PCIe version of the A100 does have to ratchet down its TDP from 400W to 250W. That’s a sizable 38% reduction in power consumption, and as a result the PCIe A100 isn’t going to be able to match the sustained performance figures of its SXM4 counterpart – that’s the advantage of going with a form factor with higher power and cooling budgets. All told, the PCIe version of the A100 should deliver about 90% of the performance of the SXM4 version on single-GPU workloads, which for such a big drop in TDP, is not a bad trade-off.
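That trade-off is easy to quantify from the figures above: roughly 90% of the performance at 62.5% of the power works out to a substantial perf-per-watt gain. A rough sketch, noting that sustained results will vary by workload:

```python
def relative_perf_per_watt(perf_ratio, tdp_new_w, tdp_old_w):
    """Perf-per-watt of the new part relative to the old:
    relative performance divided by relative power."""
    return perf_ratio / (tdp_new_w / tdp_old_w)

# PCIe A100 (~90% of SXM4 performance, 250W) vs. SXM4 A100 (400W):
print(round(relative_perf_per_watt(0.90, 250, 400), 2))  # 1.44
```

In other words, the PCIe card should be in the neighborhood of 44% more efficient per watt than its SXM4 sibling, at the cost of peak sustained throughput.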
And on this note, I should give NVIDIA credit where credit is due: unlike the PCIe version of the V100 accelerator, NVIDIA is doing a much better job of documenting these performance differences. This time around NVIDIA is explicitly noting the 90% figure in their specification sheets and related marketing materials. So there should be a lot less confusion about how the PCIe version of the accelerator compares to the SXM version.
Other than the form factor and TDP changes, the only other notable deviation for the PCIe A100 from the SXM version is how NVLink connections work. For their PCIe card NVIDIA is once again using NVLink bridges connected across the top of A100 cards, allowing for two (and only two) cards to be linked together. The upshot is that with 3 NVLink connectors, all 12 of the GA100's GPU physical links are being exposed, meaning that the card has full access to its NVLink bandwidth. So although you can only talk to one other PCIe A100 card, you can do so at a speedy 300GB/sec in each direction, 3x the rate a pair of V100 PCIe cards communicated at.
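The bandwidth arithmetic here is straightforward: GA100 exposes 12 NVLink links, each good for 50 GB/sec of bidirectional bandwidth (25 GB/sec each way).

```python
# GA100 NVLink bandwidth arithmetic
links = 12
per_link_gbs = 50            # GB/s per link, both directions combined
aggregate = links * per_link_gbs
print(aggregate)             # 600 GB/s aggregate
print(aggregate // 2)        # 300 GB/s in each direction
```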
Otherwise the PCIe A100 comes with the usual trimmings of the form factor. The card is entirely passively cooled, designed to be used with servers with powerful chassis fans. And though not pictured in NVIDIA’s official shots, there are sockets for PCIe power connectors. Meanwhile, with the reduced usage of NVLink in this version of the card, A100’s native PCIe 4 support will undoubtedly be of increased importance here, underscoring the advantage that an AMD Epyc + NVIDIA A100 pairing has right now since AMD is the only x86 server vendor with PCIe 4 support.
Wrapping things up, while NVIDIA isn’t announcing specific pricing or availability information today, the new PCIe A100 cards should be shipping soon. The wider compatibility of the PCIe card has helped NVIDIA to line up over 50 server wins at this point, with 30 of those servers set to ship this summer.
One of the interesting elements about NVIDIA’s A100 card is the potential compute density offered, especially for AI applications. There is set to be a strong rush to enable high-density AI platforms that can take advantage of all the new features that A100 offers in the PCIe form factor, and GIGABYTE was the first in my inbox with news of its new G492 server systems, built to take up to 10 new A100 accelerators. These machines are built on AMD EPYC, which allows for PCIe Gen4 support, as well as offering GPU-to-GPU direct access, direct transfers, and GPUDirect RDMA.
The G492 servers use dual EPYC CPUs, allowing for 128 PCIe 4.0 lanes in total, however in order to expand support to 10 GPUs as well as up to 12 additional NVMe storage drives, PCIe 4.0 switches are used (Broadcom PEX9000 in the G492-Z51, Microchip in the G492-Z50). This also allows an additional three PCIe x16 links and an OCP 3.0 slot for add-on upgrade cards for SAS drives or networking, such as Ethernet or Mellanox Infiniband.
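The lane accounting makes it clear why switches are needed. A sketch using the device counts above (the per-device lane widths are the usual x16/x4 allocations, an assumption on my part):

```python
# Downstream PCIe 4.0 lane demand in a fully-populated G492
cpu_lanes = 128  # lanes exposed by a dual-EPYC configuration
demand = {
    "GPUs (10 x x16)":       10 * 16,
    "NVMe drives (12 x x4)": 12 * 4,
    "extra slots (3 x x16)":  3 * 16,
}
total = sum(demand.values())
print(total)              # 256 lanes demanded
print(total > cpu_lanes)  # True -> PCIe switches are required
```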
The use of dual EPYC CPUs, at up to 280W each, also allows for up to 8 TiB of DDR4-3200 memory support. The system comes with three 2200W 80 PLUS Platinum redundant power supplies. The 10 GPU slots are rated for a 250W TDP apiece, and the system comes equipped with dual 10GBase-T and AST2500 management as standard.
Gigabyte customers interested in deploying G492 should get in contact with their local distributor.
When picking a flagship smartphone today, you generally get more or less the same fundamental formula no matter the vendor you choose. It’s a glass slab with a screen, and more often than not even the internal hardware powering the phones isn’t all that different, with just a few exceptions. Whilst most vendors try to differentiate themselves in their designs and ergonomics, some with more success than others, the one area where smartphones can seemingly still be very different from each other is the camera department.
This year we’ve seen smartphones with more variety than ever in terms of their camera setups. The last few years have seen an explosion of fast-paced innovation in the image capture abilities of smartphones, with vendors focusing on this last aspect of a phone where they can truly differentiate themselves from others, and trying to one-up the competition.
We’re halfway through 2020, and almost all vendors have released their primary flagship devices – many of which we have yet to cover in full reviews. This was a perfect opportunity to put all of the new generation devices up against each other and compare their camera systems, to really showcase just how different (or similar) they are to each other. Today’s article is a battle-royale for smartphone photography, providing an apples-to-apples comparison across the most important devices available today.
Intel has yet to launch their first CPUs supporting PCIe 4.0, but other parts of the business are keeping pace with the transition: network controllers, FPGAs, and starting today, SSDs. The first PCIe 4.0 SSDs from Intel are based on their 96-layer 3D TLC NAND flash memory, slotting into Intel's product line just below Optane products and serving as Intel's top tier of flash-based SSDs. The new Intel D7-P5500 and D7-P5600 are codenamed Arbordale Plus, a codename Intel revealed last fall without providing any other information except that the original Arbordale product was never released.
The two new SSD product lines are the first to fall into the D7 tier under the new naming scheme adopted by Intel in 2018. The P5500 and P5600 are closely-related products that differ primarily in their overprovisioning ratios and consequently their usable capacities, write speed and write endurance. The P5500 is the 1 drive write per day (DWPD) lineup with capacities ranging from 1.92 TB up to 7.68 TB, while the P5600 is the 3 DWPD tier with capacities from 1.6 TB to 6.4 TB. These serve as the successors to the P4510 and P4610 Cliffdale Refresh drives, and as such we expect some follow-on models to introduce the EDSFF form factor options and QLC-based drives that are still due for an update.
|Intel PCIe 4.0 Enterprise SSDs|
| |D7-P5500|D7-P5600|
|Form Factor|U.2 2.5" 15mm|U.2 2.5" 15mm|
|Interface|PCIe 4.0 NVMe 1.3c|PCIe 4.0 NVMe 1.3c|
|NAND|Intel 96L 3D TLC|Intel 96L 3D TLC|
|Sequential Read|7000 MB/s|7000 MB/s|
|Sequential Write|4300 MB/s|4300 MB/s|
|Random Read (4 kB)|1M IOPS|1M IOPS|
|Random Write (4 kB)|130k IOPS|260k IOPS|
|Write Endurance|1 DWPD|3 DWPD|
The switch to PCIe 4.0 enables a big jump in maximum throughputs supported: from 3.2 GB/s up to 7 GB/s for sequential reads, while sequential writes show a more modest increase from 3.2 GB/s to 4.3 GB/s. Random reads now hit 1M IOPS compared to about 651k IOPS from the previous generation, and random writes are still bottlenecked by the flash itself with a peak of 260k IOPS from the new P5600.
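Those figures line up with raw link bandwidth: a x4 link tops out near 3.94 GB/s on PCIe 3.0 and 7.88 GB/s on PCIe 4.0. A quick sketch, accounting only for the 128b/130b line encoding and not higher-level protocol overhead:

```python
def x4_bandwidth_gbs(gen):
    """Approximate raw bandwidth of a PCIe x4 link in GB/s:
    lanes x transfer rate (GT/s) x 128b/130b encoding / 8 bits per byte."""
    rate_gts = {3: 8, 4: 16}[gen]
    return 4 * rate_gts * 128 / 130 / 8

print(round(x4_bandwidth_gbs(3), 2))  # 3.94 -> the old ~3.2 GB/s ceiling
print(round(x4_bandwidth_gbs(4), 2))  # 7.88 -> room for 7 GB/s reads
```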
Intel hasn't shared information about the internal architecture of the new SSDs, so we don't know if they're still using a 12-channel controller design like their previous generation. Intel does tout improved QoS and a handful of new features, including a re-working of their TRIM implementation to reduce its interference with the performance of more important IO commands.
We’ve known about Intel’s Cooper Lake platform for a number of quarters. Initially planned, as far as we understand, as a custom silicon variant of Cascade Lake for Intel’s high-profile customers, it was subsequently productized, aimed at filling a gap in Intel’s roadmap caused by the development of 10nm for Xeon. Originally set to be a full range update to the product stack, in the last quarter Intel declared that its Cooper Lake platform would end up solely in the hands of its priority customers, only as a quad-socket or higher platform. Today, Intel launches Cooper Lake, and confirms that Ice Lake is set to come out later this year, aimed at the 1P/2P markets.
SilverStone is a well-known name amongst advanced users and enthusiasts. The company earned its reputation from its first PSUs and original case designs, and soon diversified into cooling-related products. Their products are usually designed to be cost-effective, with a focus on practicality and quality instead of extravagant aesthetics. That tactic has served SilverStone very well in the past, and some of their CPU tower coolers have become very good values for the price.
Given SilverStone's success with air coolers, today we are switching tracks to liquid coolers and taking a look at SilverStone’s latest all-in-one (AIO) “Permafrost” cooler series. With multiple models covering the most popular cooler sizes, SilverStone is looking to tap into what has continued to be a popular market for alternative high-performance coolers. And with the inclusion of Addressable RGB (ARGB) lighting in the new AIO coolers, SilverStone is perhaps bowing a bit to market pressures as well.
At the high-end of Lenovo’s ThinkPad designs, where professionals need server-grade features like ECC and graphics focused on compute or rendering, we get the P1 model which is updated for 2020 as the P1 Gen3. This notebook refresh is a 15.6-inch design, offering an OLED display, choice of Intel 10th Gen or Xeon processors, and Quadro-level graphics. The underlying design of the chassis is carbon fiber, aiming to be sturdy yet lightweight, with a fingerprint resistant finish to enhance the aesthetic of a premium system.
The ThinkPad P1 Gen3 is a 15.6-inch design with options that include a 3840x2160 OLED touch display at HDR500, a 3840x2160 LCD IPS variant up to 600 nits, or a lower-cost 1920x1080 IPS 500-nit HDR option. Under the hood it supports Intel’s 10th Gen Core mobile 45 W processors, or their Xeon equivalents, the latter extending support to up to 64 GB of ECC memory via two SODIMM slots. Graphics are available up to an NVIDIA Quadro T2000. There are two M.2 slots in the system, allowing for up to 4 TB of NVMe SSDs in RAID 0/1, and the system comes with an 80 Wh battery. Two power supplies are available – a base 135 W slim model or a 170 W slim model. Operating system options include Windows 10 Home, Pro, Pro for Workstations, Ubuntu, Red Hat (certified), or Fedora.
For professional users, the P1 Gen3 supports TPM, has a touch fingerprint reader for easy log-in, and a shutter mechanism for the 720p webcam. There is also an optional separate Hybrid IR camera. On the connectivity side, Intel’s AX201 Wi-Fi 6 solution is included as standard, while a CAT16 LTE modem in the M.2 form factor is an optional extra. The system is certified for a number of professional applications, such as AutoCAD, CATIA, NX, SolidWorks, Revit, Creo, and Inventor.
From the design, the unit comes with the usual ThinkPad bells and whistles. The keyboard includes the TrackPoint in the middle of the keyboard, and the track pad at the bottom has physical keys above it. The keyboard is backlit and spill resistant. Ports on the side include two USB 3.2 Gen 1 Type-A ports, two USB-C Thunderbolt 3 ports, a HDMI 2.0 video output, a 3.5mm jack, and an SD Card Reader.
The P1 Gen3 comes with Lenovo’s ThinkShield software, and will also be the recipient of Lenovo’s new Ultra Performance Mode that allows the user to adjust the performance settings in order to achieve a desired performance or thermal characteristics of the system. Lenovo believes this is mostly relevant to users who need full turbo to get a project completed on time, or for those who use the system with VR and require a minimum standard of performance without any potential thermal disruptions.
The P1 Gen3 has a starting weight of 3.75 lbs (1.7 kg), which will increase with the addition of discrete graphics, more memory, more storage, and so on. The Lenovo ThinkPad P1 Gen3 will be available from July, starting at $2019.
For the high-performance commercial space, Lenovo's ThinkPad X1 Extreme line is a popular choice. For the current generation, Lenovo is unveiling the new X1 Extreme Gen3, aimed specifically at those commercial users that need performance from both the CPU and GPU in a typical ThinkPad-style design. Highlights include an Intel Core H-series processor, optional NVIDIA GeForce GTX 1650 Ti graphics, a 600-nit 15.6-inch display, as well as Wi-Fi 6 capabilities and an optional Cat16 LTE modem.
The new ThinkPad rotates out to a full 180-degree stance, with the bottom of the screen helping lift the laptop to create airflow when at a more user-friendly angle. The X1 Extreme Gen3 will have a multitude of ports, including a card reader, two Type-A ports, two Type-C ports, a built-in full-sized HDMI port, a 3.5mm jack, and Lenovo’s custom power connector. The company hasn’t released all the specifications yet, so we might yet see Thunderbolt 3 support and additional Type-C charging.
New to some of Lenovo’s designs is its new Ultra Performance Mode, which is set to be exclusive to the X1 Extreme and the ThinkPad P series. This mode enables higher power limits and higher thermal limits, as well as locking the hardware in at high frequencies, so that when it is critical for a render to complete on time or for a VR experience not to drop frames, these systems are up to the task (noise and thermals permitting). It will be interesting to see what this does above and beyond the standard Windows Ultimate Performance power mode.
Unfortunately Lenovo isn’t releasing too many details on its new X1 Extreme Gen3 just yet, indicating that it was perhaps planning to show an early engineering sample at what would have been the traditional Computex trade show a couple of weeks ago. The company states that the ThinkPad X1 Extreme Gen3 will be available from July, Price TBD.
Today Qualcomm is announcing an update to its robotics platform, replacing the aging Snapdragon 845-based RB3 with the new RB5 platform, which is based on the newer Snapdragon 865 chipset. Qualcomm is aiming to gain market share in a fast-growing industry that’s projected to reach $170B by 2025.
What might be more interesting for AnandTech readers is the RB5’s potential as an Arm single-board computer platform, as the inclusion of the newest silicon here should represent a significant advantage for designs based on the RB5 platform and its Snapdragon 865-derived “QRB5165” chipset.
The design of the RB5 platform comes in the form of a carrier board and a system-on-module board. The SOM contains the actual SoC alongside core components such as RAM, NAND storage chip, the PMIC powering the SoC and board components, as well as a Wi-Fi/BT module.
The module sits on a carrier board, in this case the Qualcomm Robotics RB5 carrier board, which features a ton of connectivity: 4x HDMI outputs, an SD card slot via SDIO, a USB 3.1 hub, a USB-C connector, Gigabit Ethernet, DSI and CSI connectors for attaching displays and cameras, and various other general-purpose I/O. Two lanes of PCIe are also included.
It’s possible to extend the “core kit” with various add-ons in the form of extra mezzanine boards. Qualcomm will be offering Vision, Sensor, Motor Control, Industrial, and Communications mezzanine boards for expanding the capabilities of the system.
The interesting aspect of the platform of course is its software support. Qualcomm will be offering OS support for both Ubuntu and Yocto Linux. The company will be maintaining its own downstream embedded variants of the operating systems, as well as offering upstream open-source versions. The QRB5165 will be seeing long life software support which extends to Linux.
These latter aspects of the platform make it a quite interesting value proposition for anybody who’s looking for an Arm development system. The Snapdragon 865 and its Cortex-A77 cores are extremely capable and would certainly give other similar SBC offerings, such as Nvidia’s Jetson dev kits, a run for their money. Qualcomm hasn’t mentioned any pricing yet, but partner vendors such as Thundercomm are offering the previous-generation RB3 basic kit for $449, so we’d hope the RB5 will see similar pricing.
Today Qualcomm is extending its 5G SoC portfolio down to the Snapdragon 600-series, introducing the new Snapdragon 690 platform and chip. The new design is a significant upgrade for the 600-series, not only improving the cellular capabilities, but also bringing some of the cornerstone IP blocks up to the newest generation available.
|Qualcomm Snapdragon 600-Range SoCs|

| SoC | Snapdragon 660 | Snapdragon 662 | Snapdragon 665 | Snapdragon 670 | Snapdragon 675 | Snapdragon 690 |
|---|---|---|---|---|---|---|
| CPU | 4x Kryo 260 (CA73) + 4x Kryo 260 (CA53) | 4x Kryo 260 (CA73) + 4x Kryo 260 (CA53) | 4x Kryo 260 (CA73) + 4x Kryo 260 (CA53) | 2x Kryo 360 (CA75) + 6x Kryo 360 (CA55) | 2x Kryo 460 (CA76) + 6x Kryo 460 (CA55) | 2x Kryo 560 (CA77) + 6x Kryo 560 (CA55) |
| GPU | Adreno 512 | Adreno 610 | Adreno 610 | Adreno 615 | Adreno 612 | Adreno 619L |
| DSP | Hexagon 680 | Hexagon 683 | Hexagon 686 | Hexagon 686 | Hexagon 685 | Hexagon |
| Camera | 25MP single / 16MP dual | 25MP single / 16MP dual | 25MP single / 16MP dual | 25MP single / 16MP dual | 25MP single / 16MP dual | 48MP single / 32+16MP dual |
| Memory | 2x 16-bit @ 1866MHz | 2x 16-bit @ 1866MHz | 2x 16-bit @ 1866MHz | 2x 16-bit @ 1866MHz | 2x 16-bit @ 1866MHz | 2x 16-bit @ 1866MHz, 1MB system cache |
| Integrated Modem | Snapdragon X12 LTE | Snapdragon X11 LTE: DL 390Mbps (2x20MHz CA, 256-QAM), UL 150Mbps (2x20MHz CA, 64-QAM) | Snapdragon X12 LTE: DL 600Mbps (3x20MHz CA, 256-QAM), UL 150Mbps (2x20MHz CA, 64-QAM) | Snapdragon X12 LTE: DL 600Mbps (3x20MHz CA, 256-QAM), UL 150Mbps (2x20MHz CA, 64-QAM) | Snapdragon X12 LTE: DL 600Mbps (3x20MHz CA, 256-QAM), UL 150Mbps (2x20MHz CA, 64-QAM) | Snapdragon X51: LTE DL 1200 Mbps / UL 210 Mbps; 5G NR DL 2500 Mbps / UL 1200 Mbps |
| Video | H.264 & H.265 | H.264 & H.265 | H.264 & H.265 | H.264 & H.265 | H.264 & H.265 | H.264 & H.265 |
| Mfc. Process | 14nm LPP | 11nm LPP | 11nm LPP | 10nm LPP | 11nm LPP | 8nm LPP |
Although the new Snapdragon 690 maintains the same 2+6 big/little CPU configuration as its predecessors, Qualcomm has managed to include the newest Cortex-A77 IP for the big CPU cores, resulting in a 20% performance uplift thanks to the microarchitectural improvements. The clock speeds remain the same as found in other recent 600-series designs, meaning 2GHz on the big cores and 1.7GHz for the A55 cores.
On the GPU side, the shift to the new Adreno 619L design brings a much bigger jump, with up to a 60% increase in performance compared to the previous-generation Snapdragon 675.
Memory-wise, it’s still a LPDDR4X SoC with dual 16-bit channel support, which is plenty for the bandwidth requirements at this performance segment.
Qualcomm is also trickling down some of the newer higher-end multimedia features to the 600-series, such as the newer generation Spectra ISP, which is able to support up to 192MP still pictures, up to 48MP sensors with multi-frame noise reduction, or a dual-camera setup with 32+16MP sensors in tandem. The chip has a 10-bit capture and display pipeline, allowing for 4K HDR capture and display – although we didn’t see any mention of 4K60 recording.
The key feature of the Snapdragon 690 is its shift towards a 5G modem platform. The integrated X51 modem now adds support for 5G sub-6GHz with global band support. The speeds here scale up to 2500Mbps downstream and 1200Mbps upstream on sub-6 networks, utilising up to 100MHz of spectrum bandwidth. The chip seemingly makes do without mmWave connectivity, which makes a lot of sense given the price range the 600-series is meant for, as well as the general lack of mmWave adoption in most markets.
“This new platform is designed to make 5G user experiences even more broadly available around the world. Snapdragon 690 also supports remarkable on-device AI and vibrant entertainment experiences. HMD Global, LG Electronics, Motorola, SHARP, TCL, and Wingtech are among the OEMs/ODMs expected to announce smartphones powered by Snapdragon 690.”
We’re expecting the new chip to be deployed in devices by various vendors in the second half of the year.
AMD’s budget motherboard ranges are often more successful than the bigger, full-fat versions. In the past, users have gotten almost all of the same chipset features on these motherboards as they did on the X-series range. That changes with the new B500 series, as consumers no longer get PCIe 4.0 from the chipset, instead reverting back to PCIe 3.0. This ultimately should not be an issue, as budget builds are unlikely to have multiple PCIe 4.0 add-in drives, for example. Nonetheless, the highly vocal demand for B550 motherboards, especially after AMD launched Ryzen 3, has not gone unnoticed, and there are over 40 new models in the market, most of which should be on sale from today.
Kioxia (formerly Toshiba Memory) has launched their sixth generation enterprise SAS SSD, the PM6 series. This is the first SSD available to support the latest 24G SAS interface, doubling performance over the existing 12Gb/s SAS standard. Using 96-layer 3D TLC NAND flash memory, the PM6 offers capacities up to 30.72 TB and performance up to 4300 MB/s.
Serial-Attached SCSI (SAS) originated from the simple idea of running the enterprise-grade SCSI protocol over the Serial ATA physical layer, obsoleting parallel SCSI connections in the same way that SATA displaced parallel ATA/IDE in the consumer storage world. The first version of SAS corresponded to the second generation of SATA, with each running at 3 Gbit/s. SATA became a dead-end technology after one more speed increase to 6 Gbit/s, but SAS development has continued to higher speeds: 12Gbit SAS-3 was standardized in 2013 and "24G" SAS-4 was standardized in 2017. The "24G" is in quotes because SAS-4 actually runs at a raw rate of 22.5Gbit/s but delivers a true doubling of usable data rate by switching to lower-overhead error correction: 8b/10b encoding replaced with 128b/150b (actually 128/130 plus 20 bits of extra forward error correction), similar to how PCIe 3.0 switched from 8b/10b to 128b/130b to deliver 96% higher transfer rates with only a 60% increase in raw bit rate. Also similar to PCIe, it takes quite a while to go from release of the interface standard to availability of real products, which is why a 24G SAS SSD is only just now arriving.
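The encoding arithmetic above is easy to verify; here is a short sketch in Python, plugging in the raw rates and encoding ratios quoted from the respective specs:

```python
def effective_gbit(raw_gbit, payload_bits, total_bits):
    """Usable line rate after encoding overhead, in Gbit/s."""
    return raw_gbit * payload_bits / total_bits

sas3 = effective_gbit(12.0, 8, 10)      # 8b/10b:    9.6 Gbit/s usable
sas4 = effective_gbit(22.5, 128, 150)   # 128b/150b: 19.2 Gbit/s usable
print(sas4 / sas3)                      # a true doubling from only 1.875x the raw rate

pcie2 = effective_gbit(5.0, 8, 10)      # 8b/10b:    4.0 Gbit/s per lane
pcie3 = effective_gbit(8.0, 128, 130)   # 128b/130b: ~7.88 Gbit/s per lane
print(pcie3 / pcie2)                    # ~96% gain from a 60% raw-rate bump
```

The same trick appears in both transitions: keeping the raw signaling rate increase modest while recovering most of the bandwidth by slashing encoding overhead.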
Kioxia's enterprise SAS SSDs and their enterprise NVMe SSDs share the same bilingual controller ASIC and consequently the PM6 has a very similar feature set to the previously-announced CM6 PCIe 4.0 SSDs. This includes dual-port interface support for higher performance or for fault tolerance, and enough ECC and parity protection for the drive to survive the failure of two entire flash dies. The SAS-based PM6 series is limited to lower maximum throughput than the CM6, but a dual-lane 24G SAS link is still slightly faster than PCIe 3.0 x4. The higher performance enabled by 24G SAS means the PM6 can require more power than its predecessors—now up to 18W, though the drive can be configured to throttle to lower power levels ranging from 9W to 14W.
|Kioxia Enterprise SSD Specifications|

| Model | PM6 SAS | CM6 NVMe |
|---|---|---|
| Form Factor | 2.5" 15mm U.3 | 2.5" 15mm U.3 |
| Interface, Protocol | Dual-port 24G SAS | PCIe 4.0 x4, NVMe 1.4 |
| NAND Flash | Kioxia 96L 3D TLC | Kioxia 96L 3D TLC |
| Write Endurance | 1 / 3 / 10 DWPD | 1 / 3 DWPD |
| Sequential Read | 4.3 GB/s | 6.9 GB/s |
The PM6 SAS family is available in three endurance tiers: the 1 DWPD and 3 DWPD models closely correspond to CM6 NVMe models, but only the SAS product line gets a 10 DWPD tier. Maximum capacities are 30.72 TB in the 1 DWPD series, 12.8 TB in the 3 DWPD series and 3.2 TB in the 10 DWPD series. Kioxia said that 4 TB class drives are still the most popular, but this will probably be shifting toward the 8 TB models over the next year or so. The 30.72 TB models will remain more of a niche product in the near future, but they expect demand for those capacities to start picking up in 2021 or 2022. Detailed performance specifications for each model are not yet available.
SAS in general is still a growing market both in terms of number of units and bits shipped and is projected to continue growing for at least a few more years, even though NVMe is gradually taking over the enterprise SSD market. Kioxia's customer base for SAS SSDs has been divided between storage array vendors and the traditional enterprise server market. The storage array market has been quicker about migrating to NVMe so this may be the last generation of SAS SSDs to see significant adoption in that market segment. SAS will be hanging around in the enterprise server market for a lot longer, helped in part by the backwards-compatibility with SATA hard drives for cheap high-capacity storage, and the straightforward traditional RAID solutions as compared to the challenges with NVMe RAID. The server market typically doesn't make as much use of the dual-port capability of SAS drives, so the speed boost from 24G SAS will be particularly welcome there, allowing drives to now reach about 2.3GB/s each rather than about 1.1GB/s on 12Gb SAS.
The Kioxia PM6 SAS SSDs are now available for customer qualification and evaluation. The drives have already been validated with 24G SAS host controllers from both Broadcom and Microchip (Microsemi/Adaptec).
One of the more frequent rumors in recent weeks has been that AMD would have some new Ryzen 3000 processors to launch. Today AMD is announcing three new processors in the Ryzen 3000 family, each with the XT name, offering higher frequencies and further filling out their CPU product stack. Each of these processors will be available on shelves in primary regions on July 7th.
Today AMD has officially announced one of the long-rumoured missing Navi parts in the form of the new Radeon Pro 5600M mobile GPU, with the Navi 12 design finally taking shape as a product.
The new high-end mobile GPU is a successor to the Radeon Pro Vega 20 and Vega 16 designs released back in 2018, products that ended up being used in Apple’s MacBook laptops. The new Radeon Pro 5600M makes its debut in the new 16” MacBook Pro that also debuted today. Apple has traditionally had exclusive rights to these mobile Radeon Pro SKUs, so it’s likely this exclusivity also applies to the new Radeon Pro 5600M.
|AMD Radeon Series Mobile Specification Comparison|

| | AMD Radeon Pro 5600M | AMD Radeon RX 5300M | AMD Radeon RX 5500M | AMD Radeon Pro Vega 20 | AMD Radeon RX 560X |
|---|---|---|---|---|---|
| Throughput (FP32) | 5.3 TFLOPs | 4.1 TFLOPs | 4.6 TFLOPs | 3.3 TFLOPs | 2.6 TFLOPs |
| Memory Clock | 1.54 Gbps HBM2 | 14 Gbps GDDR6 | 14 Gbps GDDR6 | 1.5 Gbps HBM2 | 7 Gbps GDDR5 |
| Memory Bus Width | 2048-bit | 96-bit | 128-bit | 1024-bit | 128-bit |
| Typical Board Power | 50W | ? | 85W | ? | ? |
| Architecture | RDNA (1) | RDNA (1) | RDNA (1) | Vega | Polaris |
| GPU | Navi 12 | Navi 14 | Navi 14 | Vega 12 | Polaris 11 |
| Launch Date | Q2 2020 | Q4 2019 | Q4 2019 | 10/2018 | 04/2018 |
The new mobile GPU is characterised by its large compute unit count as well as its use of HBM2 memory. With a CU count of 40, resulting in 2560 stream processors, the Radeon Pro 5600M actually matches AMD’s current best desktop graphics designs such as the Navi 10-based Radeon 5700XT. A key difference here lies in the clocks, as this mobile variant only clocks up to a maximum of 1035MHz, resulting in a theoretical maximum throughput of 5.3 TFLOPs, quite a bit less than its desktop counterpart, which comes in at 9.75 TFLOPs.
In terms of bandwidth however, the mobile chip more than keeps up with its desktop counterpart. AMD is using a 2048-bit HBM2 memory interface to up to 8GB of memory running at 1.54Gbps, resulting in a bandwidth of 394GB/s, only a bit less than the 448GB/s of the Radeon 5700XT.
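Both headline numbers fall straight out of the spec sheet; a quick back-of-the-envelope check (assuming 2 FLOPs per stream processor per clock for an FMA, which is how AMD rates these parts, and the desktop 5700 XT's 256-bit GDDR6 bus for comparison):

```python
def fp32_tflops(stream_processors, clock_ghz):
    # Each stream processor executes one FMA (2 FLOPs) per clock
    return stream_processors * 2 * clock_ghz / 1000

def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # Total bits per second across the bus, divided by 8 for bytes
    return bus_width_bits * data_rate_gbps / 8

print(fp32_tflops(2560, 1.035))         # ~5.3 TFLOPs (Radeon Pro 5600M)
print(mem_bandwidth_gb_s(2048, 1.54))   # ~394 GB/s (2048-bit HBM2 @ 1.54 Gbps)
print(mem_bandwidth_gb_s(256, 14))      # 448 GB/s (desktop 5700 XT, 256-bit GDDR6)
```

The wide-and-slow HBM2 configuration is how the mobile part nearly matches desktop-class bandwidth at a fraction of the memory power.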
The Radeon Pro 5600M is advertised with a total graphics power (TGP) of 50W, identical to the TGP of the Radeon Pro 5500M and the Radeon Pro 5300M. Both of those, in turn, are based on the Navi 14 die, which contains far fewer compute units. This makes the Radeon Pro 5600M an incredibly performant and efficient design – albeit one that's undoubtedly expensive to build.
The new Radeon Pro 5600M is now available inside of Apple’s MacBook Pro 16” as a BTO upgrade option, and comes at a $700 mark-up versus the default Radeon Pro 5500M GPU.
A new generation of gaming consoles is due to hit the market later this year, and the hype cycle for the Xbox Series X and Playstation 5 has been underway for more than a year. Solid technical details (as opposed to mere rumors) have been slower to arrive, and we still know much less about the consoles than we typically know about PC platforms and components during the post-announcement, pre-availability phase. We have some top-line performance numbers and general architectural information from Microsoft and Sony, but not quite a full spec sheet.
The new generation of consoles will bring big increases in CPU and GPU capabilities, but we get that with every new generation and it's no surprise when console chips get the same microarchitecture updates as the AMD CPUs and GPUs they're derived from. What's more special with this generation is the change to storage: the consoles are following in the footsteps of the PC market by switching from mechanical hard drives to solid state storage, but also going a step beyond the PC market to get the most benefit out of solid state storage. These are exciting times, to say the least.
To that end, today we're taking a look at what to expect from the new console SSDs, as well as what it means for the gaming industry as a whole.
After a protracted battle with the SARS-CoV-2 virus, this year’s Computex trade show has finally succumbed to the pathogen.
One of the world’s largest IT trade shows – and frequently a venue for major PC-related announcements – Computex 2020 was scheduled to take place last week. However due to the coronavirus and all of the health and travel restrictions born from it, back in March the show was delayed and rescheduled for late September. But as it turns out, even a 3 month delay won’t be quite enough to make the show work, and as a result event organizer TAITRA has given up on plans to host the trade show this year.
Calling the latest change in plans a “rescheduling” of Computex, the show has been officially moved to June 1st through the 5th of 2021. Which means that although the show overall has not been cancelled and there will be another Computex next year, for all practical purposes the 2020 show has been cancelled.
In the brief announcement, TAITRA cited the ongoing travel restrictions as being the primary reason for cancelling the 2020 show. Taiwan is still largely banning foreign nationals from entering the country, which if still in place in September, would pose an obvious issue to attending the trade show. At the same time, the original plan to reschedule the show to September was always a bit of a dicey proposition, as the delay put the show out of sync with annual product release cycles and fewer companies were planning to attend, leading to TAITRA scaling down the show accordingly.
Notably, this makes 2020 the first year that Computex has been cancelled entirely. Even in the SARS outbreak of 2003, the show was successfully moved to September. Which goes to show how much more serious and disruptive SARS-CoV-2 has turned out to be.
Today during Sony’s “The Future of Gaming” show where the company and its partners revealed a slew of next-generation game titles, we also had a first glimpse of the physical design of the new PlayStation 5.
The new console is a significant departure for Sony’s console hardware, which has retained a standard black design aesthetic ever since the PlayStation 2 (although different colour scheme variants have been available). The new PlayStation 5 immediately stands out with its white-black design, as well as for the fact that Sony is seemingly presenting the new console in a primarily vertical standing position.
The looks of the console are defined by a rounded white body that wraps around a central glossy black middle section like some sort of cape. The black middle section at the top emits a blue light, illuminating the white side panels as well as the ventilation grilles.
Today’s teaser showcased for the first time what the console’s cooling hardware might look like. The new design looks to have ventilation grilles across the whole top of the console as well as the top half of the front of the device, curving along the top corner of the design, with grilles present on both lateral sides. We don’t know if these are exhausts or intakes, or maybe even both, as we haven’t yet seen the back side of the new unit.
Sony’s presentation only showed the console in an upright position, suggesting that the design is meant to be used this way in its most optimal fashion.
Another hint that the console might not be designed for horizontal use is the odd “hump” where the Blu-ray disc drive is located. It’s a pretty unusual asymmetric design choice that will undoubtedly spark a lot of discussion.
Edit: Sony also showcased the console in a horizontal position for a split second in the outro section of the show. It looks like the console is sitting on the "foot" that's depicted in the vertical position shots. This explains why the two feet look different in the previous picture: they both serve as stands for the console in the vertical and horizontal positions, with the odd shape of the foot designed to cup the round side of the console in the horizontal position, and with the Digital Edition console having a different curve to it.
Sony is also announcing a Digital Edition of the PlayStation 5 which doesn’t feature a disc drive, getting rid of the hump in the design. Digital distribution has gained a ton of popularity over the last few years, and Sony releasing a digital-only console certainly signals that the company expects this trend to continue and grow.
Both console variants feature a minimalistic front – we only find a single USB-A port and a single USB-C port, alongside a power button, and for the regular version the disc eject button.
Alongside the two new PS5 variants, Sony also announced several new accessories for the console: The new DualSense controller which we’ve known for some time now, a new DualSense charging station which charges up to two controllers at a time, a stereoscopic HD camera, a media remote, and a new headset dubbed the PULSE 3D Wireless Headset.
3D audio is meant to be a big part of the new PlayStation 5 experience thanks to the console’s new audio hardware capabilities – so Sony releasing a first-party headset tied in with the console release isn’t too big of a surprise.
The Sony PlayStation 5 is scheduled to be launched this holiday season at a yet undisclosed price. It is powered by a custom AMD SoC employing 8 Zen 2 cores up to 3.5GHz, a new customised RDNA 2-based GPU with 36 CUs and up to 2.23GHz frequency, and a new ultra-fast SSD and storage architecture that is said to be multiple times faster than the best PC storage devices on the market.
Intel has just published a news release on its website stating that Jim Keller has resigned from the company, effective immediately, due to personal reasons.
Jim Keller was hired by Intel two years ago in the role of Senior Vice President of Intel’s Silicon Engineering Group, after a string of successes at Tesla, AMD, Apple, AMD (again), and P.A. Semi. As far as we understand, Jim’s goal inside Intel was to streamline a lot of the product development process on the silicon side, as well as to provide strategic platforms through which future products can be developed and optimized for market. We also believe that Jim Keller has had a hand in looking at Intel’s manufacturing processes, as well as a number of future products.
Intel’s press release today states that Jim Keller is leaving the position on June 11th due to personal reasons. However, he will remain with the company as a consultant for six months in order to assist with the transition.
As a result of Jim’s departure, Intel has realigned some of its working groups internally with a series of promotions.
Jim Keller’s history in the industry has been well documented – his work has had a significant effect in a number of areas that have propelled the industry forward. This includes work on Apple’s A4 and A5 processors, AMD’s K8 and Zen high-level designs, as well as Tesla’s custom silicon for self driving, which Tesla’s own competitors have said put the company up to seven years ahead.
In our interview with Jim Keller, conducted several weeks after he took the job at Intel, we learned that Keller went into the company with a spanner. Keller has repeatedly said that he’s a fixer, more than a visionary, and Intel would allow him to effect change at a larger scale than he had ever done previously.
From our interview:
JK: I like the whole pipeline, like, I've been talking to people about how do our bring up labs and power performance characterization work, such as how does our SoC and integration and verification work? I like examining the whole stack. We're doing an evaluation on how long it takes to get a new design into emulation, what the quality metrics are, so yeah I'm all over the place.
We just had an AI summit where all the leaders for AI were there, we have quite a few projects going on there, I mean Intel's a major player in AI already, like virtually every software stack runs on Xeon and we have quite a few projects going on. There's the advanced development stuff, there's nuts and bolts execution, there's process and methodology bring up. Yeah I have a fairly broad experience in the computer business. I'm a ‘no stone unturned’ technical kind of person – when we were in Haifa and I was bugging an engineer about the cleanliness of the fixture where the surface mount packages plug into the test boards.
Jim’s history has shown that he likes to spend a few years at a company and then move on to different sorts of challenges. His two-year stint at Intel has been one of his shortest tenures, and only recently Fortune published a deep exposé on Jim, stating that ‘Intel is betting its chips on microprocessor mastermind Jim Keller’. So the fact that he is leaving relatively early, even by the standards of his previous roles, is somewhat surprising.
Intel’s press release on the matter suggests that this has been known about for enough time to rearrange some of the working groups around to cover Jim’s role. Jim will be serving at Intel for at least another six months it seems, in the role of a consultant, so it might be that long before he lands another spot in the industry.
It should be noted that Jim Keller is still listed to give one of the keynote addresses at this year’s Hot Chips conference on behalf of Intel. We will update this story if that changes.
This news item was updated on 17th June with information regarding the new rearrangement. Points 2 and 4 were added, while (the new) 5 was adjusted.
Some of the recent discussion around motherboard design concerns whether the motherboard manufacturers are actually adhering to the CPU vendors' specifications. If a motherboard manufacturer improves the base power delivery and cooling, should they be allowed to go beyond Intel’s suggested turbo power limits, for example? The question is actually rather moot, given that vendors have been doing this in one form or another, to varying degrees of extreme, for over a decade. As this practice has come more into the public light, especially with Intel’s high-end processors going north of 250 watts, companies like ASUS have come under increased scrutiny. That is why, at least with the Maximus XII Hero we are testing today, ASUS offers two options on first boot: Intel Recommended, and ASUS Optimized.
GeIL has announced its newest family of DDR4 modules, the Orion series. Available in two versions, one standard and one for AMD platforms, the Orion series offers SKUs ranging from single 8 GB sticks up to 64 GB kits with two matching 32 GB memory modules. Meanwhile the new modules will be available at memory speeds ranging from DDR4-2666 up to DDR4-4000.
Clad in either Racing Red or Titanium Grey for something a bit more subtle, GeIL's Orion series of DDR4 memory is offered in kits specially designed for AMD's platforms. And for hardware purists (or closed-case owners) out there, the Orion range omits RGB LEDs for a more clean-cut look. Meanwhile it's interesting to note that, at least going by the photos provided by GeIL, the Orion modules look surprisingly tall for otherwise simple, RGB-free memory. Unfortunately we don't have the physical dimensions of the DIMMs, but users with low-clearance coolers and the like may want to double-check that there will be sufficient room.
Onto the technical specifications, GeIL plans to make the Orion flexible with both single and dual-channel kits available. These range from 8 GB to 32 GB modules, with the highest spec kit topping out at 64 GB of DDR4-4000, with latencies of CL18 and an operating voltage of 1.35 V.
**GeIL Orion DDR4 Memory Specifications**

| Speed | Timings | Voltage | Capacities |
|---|---|---|---|
| DDR4-2666 | 19-19-19-43 | 1.20 V | 8 GB (1 x 8 GB), 16 GB (1 x 16 GB), 16 GB (2 x 8 GB), 32 GB (1 x 32 GB), 32 GB (2 x 16 GB), 64 GB (2 x 32 GB) |
| DDR4-4000 | CL18 | 1.35 V | 64 GB (2 x 32 GB) |

Across the full range of speeds, the Orion modules operate at between 1.20 V and 1.35 V.
At present, GeIL hasn't unveiled pricing for any kits in its Orion series, nor has it provided details of when they will hit retail channels.
ZADAK, a company that up until now has primarily been known for its memory modules, has just announced its first-ever PCIe 3.0 SSD. The ZADAK Spark PCIe 3.0 x4 M.2 is exactly what the name says on the tin – a PCIe 3.0 x4 M.2 SSD – and like so many other products these days, it includes integrated RGB LED lighting, which is built into the included aluminum heatsink.
In terms of performance metrics and specifications, the ZADAK Spark RGB PCIe 3.0 x4 M.2 is rated for sequential read speeds of up to 3,200 MB/s, while sequential write speeds go up to 3,000 MB/s. Meanwhile the drive will be available in three different capacities: 512 GB, 1 TB, and 2 TB.
One of the drive's more unique design features is the integrated RGB LEDs, which look to be mounted on the rear of the SSD. This design gives the Spark RGB PCIe 3.0 x4 M.2 SSD more of an underglow, as opposed to a direct light source from the top of the black and silver aluminum heatsink. And rather than reinventing the wheel by developing their own lighting control system, ZADAK has opted to focus on making the integrated RGB lighting compatible with the major motherboard manufacturers' existing ecosystems. As a result, the RGB lighting can be used with ASRock, ASUS, MSI, and GIGABYTE's RGB customization software, allowing users to sync the drive's RGB lighting with compatible RGB-lit motherboards and memory modules.
Unfortunately, ZADAK hasn't released a list of detailed specifications for the drive, so we don't currently have any information on the controller, the thickness of the heatsink, or the type of 3D NAND being used. But we do know that the ZADAK Spark RGB PCIe 3.0 x4 M.2 SSD is set to be available in late July, with the 512 GB model starting at $119, while the 2 TB version will go for $389.