As part of this morning’s announcement that AMD would be buying FPGA maker Xilinx for $35 billion in stock, the company also released its Q3 earnings report alongside the buyout news. It was an atypical earnings release to say the least: the early-morning timing let AMD investors and others get a look at the very latest in AMD’s financials while also digesting the decision to use the company’s sizable market capitalization to buy another company – and ultimately helped AMD justify why it’s in such a good position to make the transaction. Coming on the heels of a record Q2, AMD closed out the third quarter of the year setting revenue records once again, all the while trebling its profits.
For the third quarter of 2020, AMD reported $2.8B in revenue, a 56% jump over the same quarter a year ago. As a result, AMD has once again set new revenue records for the company, posting both its best Q3 ever and its best single quarter overall. Driving this was further growth in both of AMD’s major segments, with everything from consumer CPU sales to EPYC and semi-custom sales reported as being on the rise.
| AMD Q3 2020 Financial Results (GAAP) | Q3 2020 | Q3 2019 | Q2 2020 |
|---|---|---|---|
| Earnings Per Share | $0.32 | $0.11 | $0.13 |
Most eye-popping, perhaps, has been AMD’s net income, which more than trebled over the year-ago quarter. For Q3’20, AMD booked $390M in GAAP net income, a 225% increase that dwarfs the $120M it took home at this point last year. Even on a sequential basis AMD’s revenues and profits were up significantly across the board, with AMD again more than doubling its net income versus Q2. In fact, the only aspect of AMD’s financials not showing significant growth at the moment is its gross margin, which at 44% is up just one percentage point over last year. According to the company, gross margin growth is being limited by relatively low-margin semi-custom sales, with the PS5/XSX ramp-up counterbalancing the increase in CPU sales.
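The year-over-year figures are simple to sanity-check against the raw numbers in the report. A minimal sketch (the helper name is ours, the dollar figures are from the report):

```python
def yoy_growth(current: float, prior: float) -> float:
    """Percentage growth of `current` over the year-ago `prior` figure."""
    return (current / prior - 1) * 100

# Q3'20 GAAP net income of $390M vs. $120M a year ago,
# and Q3'20 revenue of $2.8B vs. roughly $1.8B a year ago:
print(round(yoy_growth(390, 120)))    # 225 -> the "225% increase" in net income
print(round(yoy_growth(2800, 1800)))  # 56  -> the "56% jump" in revenue
```

A 225% increase is the same thing as income ending up at 3.25x the year-ago figure, which is why "more than trebled" and "225%" describe the same result.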
Breaking down the numbers by segment, AMD’s Computing and Graphics segment enjoyed a strong quarter based in large part on an increase in Ryzen processor sales. Overall the segment booked $1.67B in revenue, which is up 31% over the year-ago quarter. Carrying the segment were sizable increases in both AMD’s desktop and notebook CPU shipments, with AMD reporting double-digit growth in both and setting a new record for notebook processor shipments. AMD’s graphics division was the odd man out, however; the run-up to the RX 6000 series means that graphics revenue was down versus Q3’19.
|AMD Q3 2020 Computing and Graphics|
As for product average selling prices (ASPs), AMD is reporting that both client processor and graphics ASPs have taken a hit on a yearly basis. Graphics ASPs were down due to AMD’s current-generation RX 5000 products approaching the end of their lifecycle, while CPU ASPs declined due to higher sales of mobile chips, which tend to carry lower prices. Both of these should change for AMD in the next quarter, as the launch of Zen 3 and the Radeon RX 6000 series will put a fresh round of products on the market that can fetch higher prices.
Meanwhile, AMD’s Enterprise, Embedded, and Semi-Custom segment saw another great quarter in Q3, with shipments of everything from EPYC processors to console SoCs on the rise. With the former, AMD has continued to grow its market share in the server space, and on the company’s earnings call CEO Dr. Lisa Su confirmed that server sales have more than doubled over Q3’19, citing improved cloud and enterprise adoption. Meanwhile the ramp-up for the PlayStation 5 and Xbox Series X has pushed semi-custom sales higher as well, and that’s expected to grow even more in Q4.
|AMD Q3 2020 Enterprise, Embedded and Semi-Custom|
And, like the consumer side of AMD’s business, the enterprise side is about to benefit as well from the launch of next-generation products. AMD has confirmed that their Zen 3 architecture-based EPYC “Milan” processors will begin shipping this quarter as previously promised, with the initial chips going out to cloud and “select HPC” customers. Meanwhile general OEM availability will follow in the first quarter of 2021.
Overall, AMD has a lot to be happy about for Q3, and even more to look forward to in Q4. With AMD posting record revenue and their traditionally strongest quarter still to come, the company is expecting an even better Q4. The combination of Zen 3 CPUs for desktops and servers, along with new Radeon hardware will mean that AMD has momentum and new products on their side. All of which comes at a critical time for the industry, as AMD seeks to use its technology advantages to carve a larger piece of the x86 processor market from an uncharacteristically dazed Intel.
It’s a couple of weeks later than originally planned, but this week NVIDIA is finally amping up the last of its high-end video card lineup with the release of the GeForce RTX 3070. Based on the Ampere architecture first launched back in September for the GeForce RTX 3080, the RTX 3070 is NVIDIA’s $500 take on a next-gen video card, incorporating a leaner Ampere GPU and otherwise shrinking Ampere down to something that’s a bit lighter on the wallet. Fittingly, whereas the RTX 3080 was positioned for 4K gaming, the RTX 3070 is being aimed at the 1440p crowd, a lower resolution that the Ampere card is very capable of handling. Reviews of NVIDIA’s Founders Edition (reference) card are going out today, with retail sales of NVIDIA and partner cards set to kick off on October 29th.
After a couple of weeks of rumor, as well as a couple of years of hearsay, AMD has gone feet first into a full acquisition of FPGA manufacturer Xilinx. The deal is an all-stock transaction, leveraging AMD’s sizeable share price to deliver an effective value of $143 per Xilinx share – current AMD stockholders will own 74% of the combined company, while Xilinx stockholders will own 26%. The combined $135 billion entity will employ some 13,000 engineers, and expands AMD’s total addressable market to $110 billion. The key reasons for the acquisition are believed to lie in Xilinx’s adaptive computing solutions for the data center market.
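Because this is an all-stock transaction, the $143 headline number is an implied value rather than a cash price: the merger terms fix an exchange ratio of 1.7234 AMD shares per Xilinx share, so the effective per-share value floats with AMD’s stock. A quick sketch of that arithmetic:

```python
# Per the merger terms, each Xilinx share converts into a fixed number
# of AMD shares; the dollar value therefore tracks AMD's share price.
EXCHANGE_RATIO = 1.7234  # AMD shares issued per Xilinx share

def implied_per_share_value(amd_price: float) -> float:
    """Implied dollar value of one Xilinx share at a given AMD price."""
    return EXCHANGE_RATIO * amd_price

# At an AMD share price of roughly $82.97 (around the announcement):
print(round(implied_per_share_value(82.97), 2))  # ~142.99, i.e. ~$143
```

This also explains why AMD’s pre-market 5% dip matters to Xilinx holders: every dollar AMD’s stock moves shifts the deal value by about $1.72 per Xilinx share.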
Said AMD President and CEO Dr. Lisa Su: “Our acquisition of Xilinx marks the next leg in our journey to establish AMD as the industry’s high performance computing leader and partner of choice for the largest and most important technology companies in the world. This is truly a compelling combination that will create significant value for all stakeholders, including AMD and Xilinx shareholders who will benefit from the future growth and upside potential of the combined company. The Xilinx team is one of the strongest in the industry and we are thrilled to welcome them to the AMD family. By combining our world-class engineering teams and deep domain expertise, we will create an industry leader with the vision, talent and scale to define the future of high performance computing.”
Added Xilinx President and CEO Victor Peng: “We are excited to join the AMD family. Our shared cultures of innovation, excellence and collaboration make this an ideal combination. Together, we will lead the new era of high performance and adaptive computing. Our leading FPGAs, Adaptive SoCs, accelerator and SmartNIC solutions enable innovation from the cloud, to the edge and end devices. We empower our customers to deploy differentiated platforms to market faster, and with optimal efficiency and performance. Joining together with AMD will help accelerate growth in our data center business and enable us to pursue a broader customer base across more markets.”
As part of the acquisition, Victor Peng will join AMD as president responsible for the Xilinx business, and at least two Xilinx directors will join the AMD Board of Directors upon closing.
Part of what enables the acquisition is AMD leveraging its market capitalization of ~$100 billion, and much of the industry will draw parallels to Intel’s acquisition of FPGA manufacturer Altera in December 2015 for $16.7 billion. High-performance FPGAs, along with SmartNICs, adaptive SoCs, and other programmable logic, naturally reside in the data center more than in most other markets. With AMD’s recent growth in the enterprise space with its Zen-based EPYC processor lines, one might conclude that a natural evolution would be synergizing high-performance compute with adaptable logic under one roof – which is precisely the conclusion that Intel also came to several years ago. AMD reported last quarter that it had broken above 10% market share in enterprise with its EPYC product lines, and today’s earnings call is also expected to show growth. AMD is already reporting revenue up 56% year-on-year company-wide, with +116% in its Enterprise, Embedded, and Semi-Custom segment.
The press release states that AMD expects to save $300m in synergistic operational efficiencies within 18 months of closing, due to streamlining shared infrastructure. The deal has been unanimously approved by both sets of directors, and is subject to approval of both sets of shareholders. The transaction is expected to close by the end of Calendar Year 2021.
AMD shares are currently down 5% before the market opens. A conference call will be held at 8am ET to discuss AMD’s Third Quarter Financial results and acquisition plans.
AMD's key product lines include its Zen-based processor lines such as Ryzen and EPYC, its graphics division for Radeon and Radeon Instinct, and its semi-custom and embedded division, which has been developing the latest generation of console processors for both Sony and Microsoft.
Xilinx recently entered the market with its Versal Adaptive SoCs and Alveo accelerator cards, built as a combination of programmable logic plus hardened compute logic and specialized co-processors and accelerators. Its FPGA families include Spartan, Zynq, Artix, Kintex, Virtex, and Virtex UltraScale, used in a wide variety of commercial, embedded, and enterprise markets, including the hardware used to design the processors of the future.
Source: Press Release
Silent computing systems are preferable for a multitude of use-cases ranging from industrial applications (where dust and fans make for a troublesome configuration) to noiseless HTPCs (particularly for audiophiles). Akasa has been providing thermal solutions in multiple computing verticals for more than 20 years, with a particular focus on passive cooling. They have been targeting NUCs since the launch of the Ivy Bridge version in early 2013. The NUC solution was completely re-imagined with the launch of the Turing fanless case for the Bean Canyon NUCs.
Alongside today’s profitable-but-uneasy earnings report from Intel, the company’s earnings presentation also offered a short update on the status of their discrete GPUs. As of today, Intel’s DG1 GPU is now shipping. Meanwhile the company announced their next GPU, appropriately named DG2, which is based on their upcoming Xe-HPG architecture. That GPU is back from the fab and in Intel’s labs, far enough along to have been powered on.
First and foremost we have DG1, or as it’s better known by its commercial product name, Iris Xe Max. Intel’s first discrete GPU in over two decades, the company has been touting it since the beginning of this year as a companion to their Tiger Lake CPUs, pitching it as an upgraded graphics option for thin & light notebooks and a successor of sorts to Intel’s GT3e and GT4e iGPU configurations from past generations. Until recently, we weren’t quite sure when it would show up in commercial products, but recent OEM notebook reveals along with Intel’s earnings announcement are now confirming that the GPU is shipping to OEMs. According to Intel, DG1-equipped notebooks are expected later in Q4. In the meantime, there are still precious few details on DG1 itself, such as expected performance and power consumption; so hopefully Intel will be getting ahead of its OEM partners on this one to set some expectations.
Meanwhile, today’s notes also announce for the very first time the next discrete GPU to come out of Intel, DG2. While obviously still some time off, Intel has completed tape-out and fabbing of the initial alpha silicon, with the company reporting that they’ve powered-on the GPU in their labs.
Somewhat surprisingly, CEO Bob Swan also confirmed that this isn’t just a DG1 successor, but instead a higher-performing GPU based on the company’s forthcoming Xe-HPG(aming) architecture. First revealed this summer, Xe-HPG is Intel’s enthusiast/gamer-focused architecture, incorporating marquee features found in similar dGPUs, such as ray tracing. It’s also being manufactured entirely outside of Intel; while the company hasn’t said which fab and process node is being used, it’s not one of Intel’s own nodes. So this is the first major piece of externally-fabbed silicon that we know of to be up and running at Intel.
But like all teasers/financial disclosures, Intel isn’t saying too much more at this time. Nothing new was revealed about the Xe-HPG architecture, and Intel hasn’t clarified whether DG2 is a big, flagship-grade chip, or a more modest, high-volume part. For now, the company is simply saying that DG2 will “take our discrete graphics capability up the stack into the enthusiast segment.”
Once again kicking off our earnings season coverage for the tech industry is Intel, who reported their Q3 2020 financial results this afternoon. The traditional leader of the pack in more than one way, Intel has been under more intense scrutiny as of late, particularly due to their previously disclosed delay in their 7nm manufacturing schedule. Nonetheless, Intel has been posting record revenues and profits in recent quarters – even with a global pandemic going on – which has been keeping Intel in good shape. It’s only now, with Q3 behind them, that Intel is starting to feel the pinch of market shifts and technical debt – and even then the company is still well into the black.
For the third quarter of 2020, Intel reported $18.3B in revenue, a drop of $0.9B from the year-ago quarter. As previously mentioned, Intel has been setting a string of record revenues in previous quarters, but the boom is coming to an end as margins and revenues slip. Those declines are also having the expected knock-on effect on Intel’s profitability, with the company reporting $4.3B in net income, a 29% drop versus Q3’19.
Today Huawei took the stage to unveil the new Mate 40 series of devices. In the form of the Mate 40, Mate 40 Pro and the Mate 40 Pro+, the new phones represent the company’s leading edge in terms of technology, mostly enabled by the new Kirin 9000 chipset which is manufactured on a new 5nm manufacturing node, promising great leaps in performance and efficiency.
The new phones also feature an updated design with a different camera layout, differentiated display design and improved speakers and charging features.
The new Kirin 9000 is at the core of the discussion – and it’s also Huawei’s biggest problem, as the new silicon has been out of production since September due to US sanctions on the company, representing a much more substantial threat than the already-existing limitations on the company’s products, such as not being able to ship with Google Mobile Services.
At Acer’s global press conference today, one of the hot-ticket items was the announcement of an upcoming laptop featuring an Intel discrete graphics option. The Intel Xe graphics architecture, which debuted in Intel’s 11th Gen Tiger Lake notebook processors, is now set to gain an additional, higher-power discrete graphics option, coming to notebooks first. One of those devices will be the Acer Swift 3X, with both Tiger Lake and discrete Xe MAX inside.
Bus-powered portable flash-based storage solutions are one of the growing segments in the consumer-focused direct-attached storage market. The emergence of 3D NAND with TLC and QLC has brought down the cost of such drives. NAND manufacturers like Western Digital, Samsung, and Crucial/Micron who also market portable SSDs have an inherent advantage in terms of vertical integration. Last year, Crucial/Micron had announced its entry into the segment with the QLC-based Crucial Portable SSD X8. A few months back, Crucial updated the lineup with a 2TB model while adding a lower-performance X6 member to the portfolio. Read on for our review of how these two high-capacity models fare in our evaluation.
Silicon Motion has announced the official launch of their first generation of PCIe 4.0-capable NVMe SSD controllers. These controllers have been on the roadmap for quite a while and have been previewed at trade shows, but the first models are now shipping. The high-end SM2264 and mainstream SM2267/SM2267XT controllers will enable consumer SSDs that move beyond the performance limits of the PCIe 3.0 x4 interface that has been the standard for almost all previous consumer NVMe SSDs.
The high-end SM2264 controller is the successor to Silicon Motion's SM2262(EN) controllers, and it brings the most significant changes, adding up to a doubling of performance. The SM2264 still uses 8 NAND channels, but now supports double the interface speed: up to 1600 MT/s. The controller includes four ARM Cortex R8 cores, compared to two cores on SMI's previous client/consumer NVMe controllers. As with most SSD controllers aiming for the high-end PCIe 4.0 product segment, the SM2264 is fabbed on a smaller node: TSMC's 12nm FinFET process, which allows for substantially better power efficiency than the 28nm planar process used by the preceding generation of SSD controllers. The SM2264 also includes support for some enterprise-oriented features like SR-IOV virtualization, though we probably won't see that enabled on consumer SSD products. Finally, the SM2264 includes the latest generation of Silicon Motion's NANDXtend ECC system, which switches from a 2kB to a 4kB codeword size for LDPC error correction.
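The doubled NAND interface speed is what lets an 8-channel design finally outrun the host link. A rough calculation (assuming the standard 8-bit-wide NAND bus, i.e. one byte per transfer) shows the back end now exceeds what a PCIe 4.0 x4 link can carry, which is why the controller can quote sequential reads in the 7 GB/s range:

```python
# Back-of-envelope bandwidth check, using figures from the article
# (8 channels at up to 1600 MT/s) and assuming an 8-bit NAND bus:
channels = 8
transfer_rate = 1600  # MT/s per channel
bytes_per_transfer = 1

nand_bw_gbs = channels * transfer_rate * bytes_per_transfer / 1000
print(nand_bw_gbs)  # 12.8 GB/s of raw NAND interface bandwidth

# Host-side ceiling: PCIe 4.0 x4 at 16 GT/s per lane, 128b/130b encoding
pcie_bw_gbs = 16 * 4 * (128 / 130) / 8
print(round(pcie_bw_gbs, 2))  # ~7.88 GB/s
```

With roughly 12.8 GB/s of raw NAND bandwidth against a ~7.9 GB/s host link, the SM2264 has the headroom to keep the PCIe 4.0 x4 interface close to saturated even after ECC and flash-management overhead.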
The SM2264 controller will be competing with in-house controllers used by Samsung and Western Digital for their flagship consumer SSDs, and against the upcoming Phison E18 controller. Phison's E16 controller was the first consumer PCIe 4.0 controller to hit the market, but is now being outclassed by a second wave of PCIe 4.0 controllers that come much closer to using the full potential of a PCIe 4.0 x4 interface. The SM2264 controller is currently sampling to drive vendors, but we don't have an estimate for when products will be hitting the shelves.
| Silicon Motion Client/Consumer NVMe SSD Controllers | SM2263 | SM2267 | SM2262EN | SM2264 |
|---|---|---|---|---|
| Market Segment | Mainstream Consumer | Mainstream Consumer | High-End Consumer | High-End Consumer |
| Arm CPU Cores | 2x Cortex | 2x Cortex R5 | 2x Cortex | 4x Cortex R8 |
| Error Correction | 2kB LDPC | 2kB LDPC | 2kB LDPC | 4kB LDPC |
| DRAM Support | LPDDR3, DDR4 | | LPDDR3, DDR4 | LPDDR4, DDR4 |
| Host Interface | PCIe 3.0 x4 | PCIe 4.0 x4 | PCIe 3.0 x4 | PCIe 4.0 x4 |
| NAND Channels, Interface Speed | 4ch | | 8ch | 8ch, 1600 MT/s |
| CEs per Channel | 4 | 8 | | |
| Sequential Read | 2400 MB/s | 3900 MB/s | 3500 MB/s | 7400 MB/s |
| Sequential Write | 1700 MB/s | 3500 MB/s | 3000 MB/s | 6800 MB/s |
| 4KB Random Read IOPS | 300k | | | |
| 4KB Random Write IOPS | 250k | 500k | 420k | 1000k |
For the more mainstream product segments, Silicon Motion's SM2267 and SM2267XT controllers are the replacements for the SM2263 and SM2263XT. These will help bring entry-level NVMe performance up to about the level that used to be standard for high-end PCIe 3.0 SSDs. The SM2267XT is the DRAMless variant of the SM2267 and also has half as many chip enables (CEs), which allows for a much smaller package size suitable for small form factors like M.2 2230. The SM2267(XT) controllers are still manufactured on the cheaper 28nm process. The SM2267 and SM2267XT controllers are in mass production and the first products using those parts have also entered the supply chain: we already have a sample of ADATA's Gammix S50 Lite with the SM2267 controller on our test bench.
The SM2267 will be competing against a mix of older 8-channel controllers like the Phison E12, and newer 4-channel solutions as seen in the SK hynix Gold P31. We expect this to be the most important consumer SSD product segment going into 2021 as these drives will not carry the steep price premium currently seen on high-end 8-channel PCIe 4.0 SSDs, and they'll still be plenty fast for almost all use cases. The DRAMless SM2267XT variant will be competing against controllers like the Phison E19T for entry-level NVMe SSDs that carry little or no price premium over SATA drives. These low-cost NVMe controllers are also increasingly popular for portable SSDs; the performance increases of the SM2267XT over the SM2263XT will not matter for drives using 20Gb/s USB to NVMe bridge chips, but will be helpful for Thunderbolt SSDs.
In a joint press release issued early this morning, SK Hynix and Intel have announced that Intel will be selling the entirety of its NAND memory business to SK Hynix. The deal, which values Intel’s NAND holdings at $9 billion, will see the company transfer over the NAND business in two parts, with SK Hynix eventually acquiring all IP, facilities, and personnel related to Intel’s NAND efforts. Notably, however, Intel is not selling their overarching Non-Volatile Memory Solutions Group; instead the company will be holding on to their Optane memory technology as they continue to develop and sell that technology.
In the realm of processor and product design, having the right series of tools to actually build and simulate a product has been a key driver in minimizing time to market. Cadence is one of the more prolific companies in the electronic design automation (EDA) software space, with tools for designing integrated circuits, PCBs, packaging, SoCs, radio frequency, as well as respective verification tools. What landed in my inbox this morning was an announcement for a new tool in Cadence’s solver portfolio to enable better full-system EM simulation while also scaling across CPU and GPU as well as to other systems.
Cadence’s 3D Transient Solver uses a finite difference time domain (FDTD) model to essentially simulate an anechoic chamber over a wide frequency range, covering both electromagnetic interference (EMI) and electromagnetic compatibility (EMC). This is what CE/FCC certification enablement is all about in order to enable sales of a product in a region – the more accurate (and scalable) the EM simulation work is, the more closely the results from any anechoic chamber testing should match the simulation, resulting in fewer prototype samples needed in advance of the certification test.
The Transient Solver is designed to work in conjunction with Cadence’s other tools, such as FEM and MoM, for whole-product simulation – due to capacity, it has often been the case that products would be simulated part-by-part, and then a full-scale simulation result would be interpolated. Cadence points to its full-product potential, as the software is designed to scale almost linearly across as many CPU cores as can be thrown at it (either in a single system or in a multi-system environment), as well as leveraging GPU acceleration (note, something I did in my PhD for FDTD but for chemical simulations). Cadence also points to its ability to keep the simulation memory footprint low, often a difficult task with these sorts of simulations (especially FDTD), such that systems no longer need terabytes of DRAM just to run. The software can also be run via cloud services and scaled as needed. Due to the scalability, it also allows for quicker deformation testing, to avoid issues such as the iPhone 4’s antenna problems.
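FDTD solvers discretize Maxwell’s equations onto a grid and leapfrog the electric and magnetic fields through time, with each cell updating from only its immediate neighbors – which is exactly why the method parallelizes so well across cores and GPUs. The toy 1D loop below is purely an illustration of the method class (normalized units, a hard Gaussian source, arbitrary grid sizes), not Cadence’s implementation:

```python
import math

# Toy 1D FDTD update loop: E and H fields leapfrog in time, and every
# cell depends only on its neighbors, so the grid can be partitioned
# cleanly across cores or GPUs.
N_CELLS, N_STEPS = 200, 300
ez = [0.0] * N_CELLS  # electric field
hy = [0.0] * N_CELLS  # magnetic field

for t in range(N_STEPS):
    # E-field update from neighboring H cells (Courant factor 0.5)
    for k in range(1, N_CELLS):
        ez[k] += 0.5 * (hy[k - 1] - hy[k])
    # Inject a Gaussian pulse at the center of the grid (hard source)
    ez[N_CELLS // 2] = math.exp(-0.5 * ((t - 40) / 12) ** 2)
    # H-field update from neighboring E cells
    for k in range(N_CELLS - 1):
        hy[k] += 0.5 * (ez[k] - ez[k + 1])

print(max(abs(v) for v in ez))  # field stays bounded: the scheme is stable
```

A production solver like Clarity works in 3D with real materials, boundary conditions, and meshing, but the nearest-neighbor update structure – and hence the near-linear scaling Cadence advertises – is the same.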
Anechoic chambers are marvellous things – if you ever get a chance to go in one, I highly recommend it. If the 30% reduction in cycle time that one of Cadence’s partners is quoting is anything to go by, then there might be some chambers sitting empty for a bit – the perfect time to convince your company that an employee tour is needed.
Cadence’s Clarity 3D Transient Solver is due for release in Q1 2021.
Over the years, the Standard Performance Evaluation Corporation's SPECviewperf benchmark has become the industry standard for workstation GPU benchmarking. Owing to the fact that, unlike video games, there's little concept of a "standard" workload for CAD, content creation, and visual data analysis tools, as well as the sheer complexity of these applications, there is an ongoing need for a standardized multi-application benchmark. This is both to offer a wider, holistic view of GPU performance under workstation applications, and even more fundamental than that, to provide a proper apples-to-apples testing environment.
With the previous release of SPECviewperf dating back to 2013, the benchmark has been due for a refresh – and that has finally arrived this week with the release of SPECviewperf 2020. An incremental update over SPECviewperf 13, the 2020 version makes some important changes including support for 4K resolutions, as well as updating the workload traces.
Meanwhile, with a few systems already set up for other needs, we decided to take a quick test drive of the new version of SPECviewperf, giving us a better look at what's new along with an idea of where performance lies with the updated benchmark workloads.
The announcement of the new Ryzen 5000 processors, built on AMD’s Zen 3 microarchitecture, has caused waves of excitement and questions as to the performance. The launch of the high-performance desktop processors on November 5th will be an interesting day. In advance of those disclosures, we sat down with AMD’s CTO Mark Papermaster to discuss AMD’s positioning, performance, and outlook.
Today we are taking a look at the largest member of Corsair's new Elite Capellix AIO cooler family, the H150i Elite Capellix. An upgrade for the renowned H150i Pro RGB, the H150i Elite Capellix features a 400 mm long radiator that holds three 120 mm fans, all the while incorporating improved iCUE integration and more powerful cooling fans.
Samsung had last revamped their SDXC cards lineup back in 2014 to delineate them into the standard, PRO, and EVO categories. Since then, the company slowly phased out the full-sized cards from their lineup, and started to focus on microSDXC cards. This week, they are aiming to get back into the SDXC cards market for creators and professionals in the content capture market segment. Two new product families are being announced - the PRO Plus, and the EVO Plus. Both families are UHS-I cards, with the PRO Plus aimed at professionals, and the EVO Plus at creators. Samsung sampled the 128GB capacity cards in both families ahead of the retail launch. This review takes the cards out for a spin and attempts to analyze their value proposition.
A little later in the year than usual, but today we finally saw the announcement of Apple’s newest line-up of iPhones. This time around we didn’t get two, or even three phones, but a total of four new devices ranging both in size as well as in pricing. The iPhone 12 series is a major leap for Apple as they represent the company’s first ever 5G devices, preparing the company for the next generation of cellular networks for the better part of this decade.
The iPhone 12 Pro and 12 Pro Max are both straightforward upgrades to the 11 Pro series, whilst the regular iPhone 12 represents the mainstream option as a successor to the iPhone 11. The new entry in the line-up is the iPhone 12 mini – an incredibly exciting device for people who are looking for a more diminutive form-factor device, being smaller and more light-weight than even the iPhone SE released earlier in the year.
Thanks to the new A14 SoC, we’re seeing upgraded performance across the board, as well as greatly improved image processing on the camera systems, with the iPhone 12 Pro Max in particular standing out in terms of its cameras.
Now that NVIDIA’s second GTC event of the year has wrapped up, we’ve finally gotten a chance to follow up with NVIDIA on last week’s announcement of their RTX A6000 video card, and what that means for the Quadro brand. In short, NVIDIA has confirmed that the Quadro brand is going away for sure, and as we suspected, it’s largely due to the overlap between graphics and compute.
As a quick refresher, last week NVIDIA launched their new professional visualization-focused video card, the RTX A6000. Based on the new GA102 GPU, the card ticks all the boxes for a high-end, pro-grade video card; and under normal circumstances, it would be part of NVIDIA’s Quadro family of products. However the card was notably excluded from the Quadro family in something of a last-minute change. At the time it wasn’t clear just what this meant for the Quadro brand as a whole, but now that GTC has wrapped we’ve been given some better insights into what’s going on.
First and foremost, NVIDIA has confirmed that the Quadro brand is being retired, or “streamlined” as the company calls it. Similar to the Tesla brand a couple of years back, the brand is set to be slowly retired from the market, as new professional visualization cards are released without the Quadro branding. Going forward, all of these cards will be given brand-less names, such as the “NVIDIA RTX A6000” and “NVIDIA A40”.
The more interesting aspect to this change is why: why would NVIDIA retire one of its oldest video card brands after so long? After all, the market for pro cards isn’t going away, and it remains a tidy, profitable business for NVIDIA. At the time we suspected that this has to do with the increasing overlap in NVIDIA’s product lines between professional visualization cards and compute cards, and the company has since confirmed that our hunch was correct.
As NVIDIA has continued to expand into the compute market, their professional visualization (ProViz) and compute products have increasingly overlapped in terms of features and pricing. As NVIDIA already charges “full” price for both their compute and ProViz cards, there are few, if any, feature differences between the two: desktop ProViz cards have the same access to compute features as compute cards. And compute cards, though almost exclusively server-mounted, can be provisioned as virtual ProViz cards as well.
One consequence has been that NVIDIA’s own messaging on which cards can do which tasks has become unfocused, never mind potentially confusing to customers. If you need an actively-cooled desktop card for neural network prototyping, for example, what card do you buy? Previously it was a Quadro card, despite the fact that it was a ProViz part. Similarly, the former Tesla V100 makes a great part for provisioning a virtual Quadro instance, even though it’s not a Quadro part.
As a result, NVIDIA has opted to essentially merge their compute and ProViz hardware lineups in an effort to simplify their offerings. NVIDIA wants there to be a single brand – NVIDIA – which covers both markets, reflecting the flexibility of their cards and (largely) eliminating questions over which cards can be used for graphics or compute. At the same time, this also allows NVIDIA to reduce the number of hardware SKUs offered, as they no longer need overlapping products at the fringes of these markets.
Ultimately, the market for ProViz and for computing has quickly become one and the same. Though the two differ in their specific needs, they still use the same NVIDIA hardware and pay the same NVIDIA “premium”. So both are set to become a single product line to cover the needs of all of NVIDIA’s professional and commercial customers, whatever their graphics and compute needs.
Today Apple is holding its second fall 2020 launch event - only a few weeks after the traditional September launch, which saw the unveiling of a new Apple Watch and a new line of iPads, including the new iPad Air sporting the 5nm Apple A14 SoC. What was missing from the September event was any announcement of new iPhones - which this year seem to have slipped slightly in terms of timing.
Today's event should cover the new iPhones, and if industry reports are accurate, we should be seeing quite a slew of new devices in the form of two "regular" iPhones and two Pro models, for a total of four devices. It should mark the first time in three years that Apple introduces a new iPhone design, and this generation should be the first to support 5G connectivity.
As always, we'll be live-blogging the event and hold live commentary on Apple's newest revelations.
The event starts at 10am PDT (17:00 UTC).
It’s been almost a year since Imagination announced its brand-new A-Series GPU IP, a release which at the time the company called its most important in 15 years. The new architecture indeed marked some significant updates to the company’s GPU IP, promising major uplifts in performance and competitiveness. Since then, other than a slew of internal scandals, we’ve heard very little from the company – until today’s announcement of its next generation of IP: the B-Series.
The new Imagination B-Series is an evolution of last year’s A-Series GPU IP release, further iterating through microarchitectural improvements, but most importantly, scaling the architecture up to higher performance levels through a brand-new multi-GPU system, as well as the introduction of a new functional safety class of IP in the form of the BXS series.
Intel's introduction of the Tiger Lake U-series processors with support for a range of TDPs up to 28W has resulted in vendors launching a number of interesting systems with a twist on the original NUC's 100mm x 100mm ultra-compact form-factor (UCFF). Notable among these have been GIGABYTE's BRIX PRO (3.5" SBC form-factor) and ASRock Industrial's STX-1500 mini-STX board, with the latter adopting the embedded versions of the Tiger Lake-U processors. ASRock Industrial also happens to be one of the first to adopt the Tiger Lake-U series for traditional UCFF systems with the launch of their NUC 1100 BOX series.
Intel's Tiger Lake-based NUCs (Panther Canyon and Phantom Canyon) are an open secret in tech circles, but are yet to be officially announced. ASRock Industrial's Tiger Lake NUCs such as the NUC BOX-1165G7 have also been hinted at in Intel's marketplace - a retail follow-up to the embedded market-focused iBOX 1100 and NUC 1100 solutions. Today's announcement makes the Tiger Lake NUCs from ASRock Industrial official. The company is launching three models in this series - NUC BOX-1165G7, NUC BOX-1135G7, and NUC BOX-1115G4. The specifications are summarized in the table below.
|ASRock Industrial NUC 1100 BOX (Tiger Lake-U) Lineup|
|Model||NUC BOX-1115G4||NUC BOX-1135G7||NUC BOX-1165G7|
|CPU||Intel Core i3-1115G4
1.7 - 4.1 GHz (3.0 GHz)
12 - 28 W (28W)
|Intel Core i5-1135G7
0.9 - 4.2 GHz (2.4 GHz)
12 - 28 W (28W)
|Intel Core i7-1165G7
1.2 - 4.7 GHz (2.8 GHz)
12 - 28 W (28W)
|GPU||Intel UHD Graphics for 11th Gen Intel Processors (48EU) @ 1.25 GHz||Intel Iris Xe Graphics (80EU) @ 1.3 GHz||Intel Iris Xe Graphics (96EU) @ 1.3 GHz|
|DRAM||Two DDR4 SO-DIMM slots
Up to 64 GB of DDR4-3200 in dual-channel mode
|Motherboard||4.02" x 4.09" UCFF|
|Storage||SSD||1x M.2-2280 (PCIe 4.0 x4 (CPU-direct) or SATA III)|
|DFF||1 × SATA III Port (for 2.5" drive)|
|Wireless||Intel Wi-Fi 6 AX200
2x2 802.11ax Wi-Fi + Bluetooth 5.1 module
|Ethernet||1 × GbE port (Intel I219-V)
1 × 2.5 GbE port (Intel I225-LM)
|USB||Front||1 × USB 3.2 Gen 2 Type-A
2 × USB 3.2 Gen 2 Type-C
|Rear||2 × USB 3.2 Gen 2 Type-A|
|Display Outputs||1 × HDMI 2.0a
1 × DisplayPort 1.4
2 × DisplayPort 1.4 (using Front Panel Type-C ports)
|Audio||1 × 3.5mm audio jack (Realtek ALC233)|
|Dimensions||Length: 117.5 mm
Width: 110 mm
Height: 47.85 mm
The striking aspect of the NUC 1100 BOX-series chassis is the similarity to the 4X4 BOX-4000U series.
According to the products' datasheet, ASRock Industrial plans to get the two Type-C ports in the front panel certified for USB4. Since the certification plan is still pending, they are being advertised as USB 3.2 Gen 2 for now. They also believe that Thunderbolt 3 devices can be used in the front Type-C ports (since Intel claims four USB4 / Thunderbolt 4 ports on Tiger Lake) - that would be interesting to test out, given the logo on the chassis only indicates SuperSpeed 10Gbps with DP-Alt Mode support.
The key updates compared to the existing NUCs from various vendors (based on Comet Lake-U) are support for four simultaneous 4Kp60 displays along with the 2.5 GbE wired LAN interface. The performance advantages provided by the 10nm Tiger Lake with its new microarchitecture will likely help the NUC BOX-1100 series gain the edge over the 4X4 BOX-4000U series (based on the Renoir APUs) in single-threaded workloads. On the multi-threaded and GPU-intensive side, it is shaping up to be an interesting tussle - one we hope to analyze in more detail in our hands-on review.
Since the unit also targets the embedded market, it has the usual bells and whistles, including an integrated watchdog timer and an on-board TPM. Pricing is slated to be announced towards the end of October 2020.
Acer has had a big year in 2020, thanks to its close relationship with AMD. Acer has long been a strong partner of AMD, through the good times and the bad, and right now is about as good a time to be an AMD partner as there has ever been. AMD’s Renoir platform has been a revolution for the company's mobile device efforts. AMD has had strong desktop offerings ever since it launched the Ryzen platform in 2017, but those successes did not translate over to the laptop space. With the latest Ryzen 4000 series processors, aka Renoir, all of that has changed.
As part of today’s Zen 3 desktop CPU announcement from AMD, the company also threw in a quick teaser from the GPU side of the company in order to show off the combined power of their CPUs and GPUs. The other half of AMD is preparing for their own announcement in a few weeks, where they’ll be holding a keynote for their forthcoming Radeon RX 6000 video cards.
With the recent launch of NVIDIA’s Ampere-based GeForce RTX 30 series parts clearly on their minds, AMD briefly teased the performance of a forthcoming high-end RX 6000 video card. The company isn’t disclosing any specification details of the unnamed card – short, of course, of confirming that it’s an RDNA2-based RX 6000 part – but the company did disclose a few choice benchmark numbers from their labs.
Dialing things up to 4K at maximum quality, AMD benchmarked Borderlands 3, Gears of War 5, and Call of Duty: Modern Warfare (2019). And while these are unverified results being released for marketing purposes – meaning they should be taken with a grain or two of salt – the implied message from AMD is clear: they’re aiming for NVIDIA’s GeForce RTX 3080 with this part.
Assuming these numbers are accurate, AMD’s Borderlands 3 performance is practically in lockstep with the 3080. However, the Gears 5 results are a bit more modest, and 73fps would have AMD trailing by several percent. Finally, Call of Duty does not have a standardized benchmark, so although 88fps at 4K looks impressive, it’s impossible to say how it compares to other hardware.
Meanwhile, it’s worth noting that as with all vendor performance teases, we’re likely looking at AMD’s best numbers. And of course, expect to see a lot of ongoing fine tuning from both AMD and NVIDIA over the coming weeks and months as they jostle for position, especially if AMD’s card is consistently this close.
Otherwise, the biggest question that remains for another day is which video card these performance numbers are for. It’s a very safe bet that this is AMD’s flagship GPU (expected to be "Big Navi", Navi 21), however AMD is purposely making it unclear if this is their lead configuration, or their second-tier configuration. Reaching parity with the 3080 would be a big deal on its own; however if it’s AMD’s second-tier card, then that would significantly alter the competitive landscape.
Expect to find out the answers to this and more on October 28th, when AMD hosts their Radeon RX 6000 keynote.
Dr. Lisa Su, the CEO of AMD, has today announced the company’s next-generation mainstream Ryzen processors. The new family, known as the Ryzen 5000 series, includes four parts and supports up to sixteen cores. The key element of the new products is the core design, with AMD’s latest Zen 3 microarchitecture promising a 19% raw increase in performance-per-clock, well above recent generational improvements. The new processors are socket-compatible with existing 500-series motherboards, and will be available at retail from November 5th. AMD is putting a clear marker in the sand, calling one of its halo products ‘The World’s Best Gaming CPU’. We have details.
One of the most anticipated launches of 2020 is now here. AMD's CEO, Dr. Lisa Su, is set to announce and reveal the new Ryzen 5000 series processors using AMD's new Zen 3 microarchitecture. Aside from confirming the product is coming this year, there are very few concrete facts to go on: we are expecting more performance as well as a competitive product. The presentation is scheduled to last 30 minutes, so we hope there is some juicy information to go on.
Come back at Noon ET for reporting and analysis at AnandTech.
Today Western Digital is announcing a major expansion of their WD Black family of gaming-oriented storage products. In a digital event later today on Twitch, Western Digital will introduce their first PCIe Gen4 SSD, a new high-end PCIe Gen3 SSD, and their first Thunderbolt Dock.
The new WD Black SN850 is Western Digital's first PCIe 4 SSD and the successor to their WD Black SN750. The SN850 features Western Digital's second generation in-house NVMe SSD controller and can hit speeds of 7GB/s (sequential) and 1M IOPS (random). The SN850 will initially be available as a standard M.2 NVMe SSD, suitable for gaming PCs and expected to work in the upcoming Sony PS5. Western Digital is also working on a version of the WD Black SN850 that will add a heatsink and RGB lighting. The plain M.2 version will be hitting the market later this fall with capacities from 500GB to 2TB, while the RGB+heatsink version likely will not be ready until next year.
|WD Black SN850 Specifications|
|Capacity||500 GB||1 TB||2 TB|
|Form Factor||M.2 2280 single-sided
|Interface||PCIe 4 x4 NVMe|
|Controller||Western Digital in-house, second generation|
|NAND Flash||SanDisk 3D TLC|
|Sequential Read||7000 MB/s|
|Sequential Write||4100 MB/s||5300 MB/s||5100 MB/s|
|Write Endurance||300 TB
For gamers on desktops that only support PCIe Gen3 speeds, Western Digital is introducing a new high-end SSD option. The WD Black AN1500 PCIe 3 x8 add-in card SSD puts two of their SN730 SSDs (OEM equivalents of the SN750) in a RAID-0 configuration for increased performance and capacity. The AN1500 uses the Marvell 88NR2241 NVMe RAID chip, which we reported on earlier this week as part of HPE's new RAID1 card for server boot drives. Thanks to that hardware RAID capability, the AN1500 operates as a single drive with a PCIe 3.0 x8 uplink allowing for read speeds of 6.5GB/s and write speeds of 4.1GB/s. Since the AN1500 internally uses a pair of SN730/SN750 M.2 SSDs, the AN1500's capacities are doubled: the smallest model is 1TB and the largest option is 4TB. The card is armored by a substantial aluminum heatsink and backplate that match the recent WD_BLACK design language, including customizable RGB lighting around the edge.
Single-chip NVMe SSD controllers supporting a PCIe 3 x8 interface do exist, but they're only used in high-end enterprise SSDs. That means the WD Black AN1500 is the first consumer NVMe SSD capable of using an 8-lane interface, without the hassle of software RAID as used by competing NVMe RAID solutions. The AN1500 does not require PCIe port bifurcation support from the host system, and is also usable (with reduced performance) in PCIe slots that only provide four lanes of PCIe.
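For illustration, the RAID-0 striping performed by the Marvell controller can be sketched in a few lines of Python. This is a toy model only, with a hypothetical 128 KiB stripe size; Western Digital has not disclosed the AN1500's actual stripe parameters.

```python
STRIPE = 128 * 1024  # hypothetical stripe size; not a disclosed AN1500 spec

def stripe_raid0(data: bytes, n_drives: int = 2, stripe: int = STRIPE):
    """Distribute a write round-robin across member drives, RAID-0 style."""
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe):
        drives[(i // stripe) % n_drives] += data[i:i + stripe]
    return drives

# Each drive ends up holding half the data, which is why two 2TB SN730s
# present as a single 4TB volume, and why large sequential transfers can
# be serviced by both drives in parallel.
halves = stripe_raid0(bytes(4 * STRIPE))
assert all(len(h) == 2 * STRIPE for h in halves)
```

Because the 88NR2241 implements this in hardware and speaks NVMe on the host side, the striping is invisible to the operating system, unlike software RAID.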
|WD Black AN1500 Specifications|
|Capacity||1 TB||2 TB||4 TB|
|Form Factor||PCIe add-in card|
|Interface||PCIe 3 x8|
|Controller||2x WD in-house NVMe + Marvell 88NR2241 RAID-0|
|NAND Flash||SanDisk 3D TLC|
|Sequential Read||6500 MB/s|
|Sequential Write||4100 MB/s|
|4kB Random Read IOPS||760k||780k||780k|
|4kB Random Write IOPS||690k||700k||710k|
The WD Black family of products for external storage is also getting a new member. The current lineup consists of the P10 portable hard drive, P50 portable SSD, and D10 desktop 3.5" external hard drive. The obvious gap is a desktop-oriented external SSD, but the new Western Digital WD Black D50 goes a bit beyond that: rather than merely provide Thunderbolt-attached NVMe storage, the D50 is a full Thunderbolt 3 dock providing a variety of port expansion. The D50 Game Dock will be available with either 1TB or 2TB of NVMe storage, and in a dock-only version without built-in storage. None of the three models are intended to allow the user to upgrade the storage. Customizable RGB lighting is of course present.
The WD Black D50's natural competition will be Seagate's similar FireCuda Gaming Dock. Seagate's dock comes with a 4TB hard drive, an empty M.2 PCIe slot for the user to install the SSD of their choice, and slightly more ports. The WD Black D50 Game Dock is smaller overall, provides power to a connected laptop, and is intended to be used in a vertical orientation—it has a weighted base to help keep it upright.
The WD Black D50 with no built-in storage has a MSRP of $319.99, the 1TB model is $499.99, and the 2TB model is $679.99.
As Western Digital continues moving their WD Black brand toward a focus specifically on gaming, the products have inevitably been infected with RGB lighting. Western Digital's own WD_BLACK Dashboard software for Windows can control these lighting elements, but Western Digital is also working to integrate with other RGB control systems. They currently have support for Gigabyte RGB Fusion 2.0, MSI Mystic Light Sync and ASUS Aura, and support for Razer Chroma RGB will be ready soon.
In a blog post on Medium today, Intel’s John Bonini has confirmed that the company will be launching its next-generation desktop platform in Q1 2021. This is confirmed as Rocket Lake, presumably under Intel’s 11th Gen Core branding, and will feature PCIe 4.0 support. After several months (and Z490 motherboards) mentioning Rocket Lake and PCIe 4.0 support, this note from Intel is the primary source that confirms it all.
The blog post doesn’t go into any further detail about Rocket Lake. From our side of the fence, we assume this is another 14nm processor, with questions as to whether it is built upon the same Skylake architecture as the previous five generations of 14nm parts, or is a back-port of Intel’s latest Cove microarchitecture designs. As for PCIe 4.0 support rather than PCIe 3.0, there’s no specific indication at this time that there will be an increase in PCIe lane counts from the CPU, although that idea has been floated. Some motherboards, such as the ASRock Z490 Aqua, seem to have been built with the idea of a PCIe 4.0-specific M.2 storage slot, which when in use makes the PCIe 3.0 slot no longer accessible.
It is notable that in the blog, John Bonini (VP/GM for Intel’s Desktop/Workstation/Gaming) cites high processor frequencies as a key metric for high performance in games and popular applications, mentioning Intel’s various Turbo Boost technologies. In the same paragraph, he then cites overclocking Intel’s processors to 7 GHz, failing to mention that this sort of overclocking isn’t done for the sake of gaming or workflow. The blog post also seems to bounce between talking about enthusiast gamers on the bleeding edge squeezing out every bit of performance at the top end, and casual gamers on mobile graphics; it comes across as erratic. Note that this blog post is also posted on Medium, rather than Intel’s own website, for whatever reason, and it also seems to change font size mid-paragraph in the version we were sent.
The reason why this blog post is being posted today, in my opinion, is two-fold. Firstly, recent unconfirmed leaks regarding Intel’s roadmap have placed the next generation of desktop processors firmly into that Q1/Q2 crossover in 2021. By coming out and confirming a Q1 launch window, Intel is at least putting those rumors to bed. The second reason is down to what the competition is announcing: AMD has a Zen 3-related presentation on October 8th, and so with Intel’s footnote, we at least know what’s going on with both team blue and team red.
It's been nearly two years to the day since NZXT last released a motherboard, the Z370 N7. NZXT initially used ECS as its motherboard OEM, but has opted to use ASRock this time round for the new N7 model. It sports the same N7-style armor, albeit using a mix of metal and plastic instead of just metal, which reduces the overall cost. Aiming for the mid-range market, NZXT's N7 Z490 features 2.5 GbE, Wi-Fi 6, dual M.2, and four SATA ports, and we give it our focus in this review.
Today we posted a news article about SK hynix’s new DDR5 memory modules for customers – 64 GB registered modules running at DDR5-4800, aimed at the preview systems that the big hyperscalers start playing with 12-18 months before anyone else gets access to them. It is interesting to note that SK hynix did not publish any sub-timing information about these modules, and as we look through the announcements made by the major memory manufacturers, one common theme has been a lack of detail about sub-timings. Today we can present information across the full range of DDR5 specifications.
In 2018 Marvell announced the 88NR2241 Intelligent NVMe Switch: the first—and so far, only—NVMe hardware RAID controller of its kind. Now that chip has scored its first major (public) design win with Hewlett Packard Enterprise. The HPE NS204i-p is a new RAID adapter card for M.2 NVMe SSDs, intended to provide RAID-1 protection to a pair of 480GB boot drives in HPE ProLiant and Apollo systems.
The HPE NS204i-p is a half-height, half-length PCIe 3.0 x4 adapter card designed by Marvell for HPE. It features the 88NR2241 NVMe switch and two M.2 PCIe x4 slots that connect through the Marvell switch. This is not a typical PCIe switch as often seen providing fan-out of more PCIe lanes, but one that operates at a higher level and natively understands the NVMe protocol.
The NS204i-p adapter is configured specifically to provide RAID-1 (mirroring) of two SSDs, presenting them to the host system as a single NVMe device. This is the key advantage of the 88NR2241 over other NVMe RAID solutions: the host system doesn't need to know anything about the RAID array and continues to use the usual NVMe drivers. Competing NVMe RAID solutions in the market are either SAS/SATA/NVMe "tri-mode" RAID controllers that require NVMe drives to be accessed using proprietary SCSI interfaces, or are software RAID systems with the accompanying CPU overhead.
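Conceptually, the mirroring behavior can be sketched as follows. This is a toy Python model of the idea, not Marvell's actual firmware logic; the dict-backed "drives" are purely illustrative.

```python
class Raid1Mirror:
    """Toy model of NVMe RAID-1: the host sees a single namespace, while
    every write is duplicated to both member drives behind the scenes."""

    def __init__(self):
        self.drives = [{}, {}]  # two member SSDs, modeled as LBA -> data

    def write(self, lba: int, data: bytes) -> None:
        for drive in self.drives:  # mirror the write to both members
            drive[lba] = data

    def read(self, lba: int) -> bytes:
        # Either copy can satisfy the read, so the array survives the
        # failure of one member drive.
        return self.drives[0].get(lba) or self.drives[1].get(lba)
```

From the host's perspective there is just one block device to read and write; the duplication happens below the NVMe interface, which is what lets the OS keep using stock NVMe drivers.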
Based on the provided photos, it looks like HPE is equipping the NS204i-p with a pair of SK hynix NVMe SSDs. The spec sheet indicates these are from a read-oriented product tier, so the endurance rating should be 1 DWPD (somewhere around 876 TBW for 480GB drives).
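That TBW estimate is simple arithmetic, assuming the 5-year warranty period typically used for enterprise endurance ratings:

```python
def dwpd_to_tbw(capacity_gb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Convert a drive-writes-per-day rating to total terabytes written."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

print(dwpd_to_tbw(480, 1.0))  # 876.0 TB for a 480GB drive at 1 DWPD
```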
This solution is claimed to offer several times the performance of SATA boot drive(s), and can achieve high availability of the OS and log storage without using up front hot-swap bays on a server. The HPE NS204i-p is now available for purchase from HPE, but pricing has not been publicly disclosed.
Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumer. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK hynix is launching today, or at least the chips that go into these modules.
USB has emerged as the mainstream interface of choice for data transfer from computing platforms to external storage devices, with Thunderbolt traditionally thought of as a high-end alternative. However, USB has made rapid strides in the last decade in terms of supported bandwidth - from a top speed of 5 Gbps in 2010, the ecosystem moved to devices supporting 10 Gbps in 2015. Late last year, we saw the retail availability of 20 Gbps support with USB 3.2 Gen 2x2 on both the host and device sides. Almost a year down the line, how is the ecosystem shaping up in terms of future potential? Do the Gen 2x2 devices currently available in the retail market live up to their billing? What can consumers do to take advantage of the standard without breaking the bank? Read on to find out.
Continuing this morning’s run of GTC-related announcements, NVIDIA is offering yet another update on the state of their Data Processing Unit (DPU) project. An initiative inherited from Mellanox as part of that acquisition, NVIDIA and Mellanox have been talking up their BlueField-2 DPUs for the better part of the last year. And now the company is finally nearing a release date, with BlueField-2 DPUs sampling now, and set to ship in 2021.
Originally hatched by Mellanox before the NVIDIA acquisition, the DPU was Mellanox’s idea for the next generation of SmartNICs, combining their networking gear with a modestly powerful Arm SoC to offload various tasks from the host system, such as software-defined networking and storage, along with dedicated acceleration engines. Mellanox had been working on the project for some time, and while the original BlueField products saw a relatively low-key release last year, the company has been hard at work on BlueField-2, which NVIDIA has since elevated to a much greater position.
This second generation of DPU-accelerated hardware will go under the BlueField-2 name, and the two companies have been talking about it for most of the past year. Based on a custom SoC, the BlueField 2 SoC uses 8 Arm Cortex-A72 cores along with a pair of VLIW acceleration engines. All of this is then paired with a ConnectX-6 DX NIC for actual network connectivity. At a high level, the DPU is intended to be the next step in the gradual movement towards domain-specific accelerators within the datacenter, offering a more specialized processor that can offload networking, storage, and security workloads from the host CPU.
Coming off of their success in the datacenter market from broadening the applications for GPUs, it’s easy to see NVIDIA’s interest in the DPU project: this is another piece of silicon they can sell to server builders and datacenter operators, and it further undermines the importance of the one thing NVIDIA doesn’t have, a server-class CPU. So although not a project started by NVIDIA, it’s one they’re fully embracing and expanding upon.
As the bulk of today’s DPU announcement is a recap for NVIDIA, the actual product plans for BlueField-2 have not changed. NVIDIA will be releasing two DPU-equipped cards, the BlueField-2 and the BlueField-2X. The former is a more traditional SmartNIC, pairing the DPU with two 100Gb/sec Ethernet/InfiniBand ports. This allows it to be used for networking as well as storage tasks like NVMe-over-Fabrics.
Meanwhile the larger BlueField-2X incorporates a DPU as well as one of NVIDIA’s Ampere GPUs for further acceleration via what NVIDIA likes to call in-network computing. NVIDIA hasn’t disclosed the GPU used on the BlueField-2X, but if these renders are accurate, then the number of memory chips indicates it’s GA102, the same chip going into NVIDIA’s high-end video cards. That would make BlueField-2X a very potent card with regards to compute performance.
And NVIDIA’s plans don’t stop with the BlueField-2 products. The company has planned out a series of successor cards, which will be released as the BlueField-3 and BlueField-4 families in successive years. BlueField-3 will be a souped-up version of BlueField-2, with separate DPU and DPU + GPU cards. Meanwhile BlueField-4 will be the first part where NVIDIA’s influence makes it into the core silicon, with the company planning a single high-performance DPU that would be able to significantly outperform the earlier discrete DPU + GPU designs. All told, NVIDIA is expecting BlueField-4 to offer 400 TOPS of AI performance.
All of this, in turn, will come with NVIDIA’s traditional embrace of both hardware and software. The company is looking to mirror its CUDA strategy with DPUs, offering the Data Center Infrastructure-on-a-Chip Architecture (DOCA) as the software stack and programming model for BlueField-2 and later DPUs. This means assembling high-grade SDKs for developers to use, and then extending support for those SDKs and libraries over multiple generations. NVIDIA is clearly just getting DOCA off the ground, but if history is any indication, software will play a huge role in the growth of the SmartNIC market, just as it did for GPUs a decade prior.
Wrapping things up, the first BlueField-2 cards are now sampling to NVIDIA’s partners. Meanwhile commercial shipments will kick off in 2021, and BlueField-3 shipments may follow as soon as 2022.
NVIDIA’s second GTC of 2020 is taking place this week, and as has quickly become a tradition, one of CEO Jensen Huang’s “kitchenside chats” kicks off the event. As the de facto replacement for GTC Europe, this fall virtual GTC is a bit of a lower-key event relative to the Spring edition, but it’s still one that is seeing some NVIDIA hardware introduced to the world.
Starting things off, we have a pair of new video cards from NVIDIA – and a launch that seemingly indicates that NVIDIA is getting ready to overhaul its professional visualization branding. Being announced today and set to ship at the end of the year is the NVIDIA RTX A6000, NVIDIA’s next-generation, Ampere-based professional visualization card. The successor to the Turing-based Quadro RTX 8000/6000, the A6000 will be NVIDIA’s flagship professional graphics card, offering everything under the sun as far as NVIDIA’s graphics features go, and chart-topping performance to back it up. The A6000 will be a Quadro card in everything but name – literally.
|NVIDIA Professional Visualization Cards|
|Memory Clock||16Gbps GDDR6||14.5Gbps GDDR6||14Gbps GDDR6||1.7Gbps HBM2|
|Memory Bus Width||384-bit||384-bit||384-bit||4096-bit|
|Half Precision||?||?||32.6 TFLOPS||29.6 TFLOPS|
|Single Precision||?||?||16.3 TFLOPS||14.8 TFLOPS|
|Tensor Performance||?||?||130.5 TFLOPS||118.5 TFLOPS|
|Manufacturing Process||Samsung 8nm||Samsung 8nm||TSMC 12nm FFN||TSMC 12nm FFN|
|Launch Date||12/2020||Q1 2021||Q4 2018||March 2018|
The first professional visualization card to be launched based on NVIDIA’s new Ampere architecture, the A6000 will have NVIDIA hitting the market with its best foot forward. The card uses a fully-enabled GA102 GPU – the same chip used in the GeForce RTX 3080 & 3090 – and with 48GB of memory, is packed with as much memory as NVIDIA can put on a single GA102 card today. Notably, the A6000 is using GDDR6 here and not the faster GDDR6X used in the GeForce cards, as 16Gb density RAM chips are not available for the latter memory at this time. As a result, despite being based on the same GPU, there are going to be some interesting performance differences between the A6000 and its GeForce siblings, as it has traded memory bandwidth for overall memory capacity.
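As a rough illustration of that trade, peak memory bandwidth follows directly from the per-pin data rate and bus width. A quick sketch (the 19.5 Gbps GDDR6X figure used for comparison is the GeForce RTX 3090's):

```python
def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin rate (Gb/s) x bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth_gb_s(16.0, 384))   # A6000, 16 Gbps GDDR6: 768.0 GB/s
print(mem_bandwidth_gb_s(19.5, 384))   # RTX 3090, 19.5 Gbps GDDR6X: 936.0 GB/s
```

In other words, the A6000 gives up roughly 18% of the 3090's memory bandwidth in exchange for double the memory capacity.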
In terms of performance, NVIDIA is promoting the A6000 as offering nearly twice the performance (or more) of the Quadro RTX 8000 in certain situations, particularly tasks taking advantage of the significant increase in FP32 CUDA cores or the similar performance increase in RT core throughput. Unfortunately NVIDIA has either yet to lock down the specifications for the card or is opting against announcing them at this time, so we don’t know what the clockspeeds and resulting performance in FLOPS will be. Notably, the A6000 only has a TDP of 300W, 50W lower than the GeForce RTX 3090, so I would expect this card to be clocked lower than the 3090.
Otherwise, as we saw with the GeForce cards launched last month, Ampere itself is not a major technological overhaul to the previous Turing architecture. So while newer and significantly more powerful, there are not many new marquee features to be found on the card. Along with the expanded number of data types supported in the tensor cores (particularly BFloat16), the other changes most likely to be noticed by professional visualization users is decode support for the new AV1 codec, as well as PCI-Express 4.0 support, which will give the cards twice the bus bandwidth when used with AMD’s recent platforms.
Like the current-generation Quadro, the upcoming card also gets ECC support. NVIDIA has never listed GA102 as offering ECC on its internal pathways – this is traditionally limited to their big, datacenter-class chips – so this is almost certainly partial support via “soft” ECC, which offers error correction against the DRAM and DRAM bus by setting aside some DRAM capacity and bandwidth to function as ECC. The cards also support a single NVLink connector – now up to NVLink 3 – allowing for a pair of A6000s to be bridged together for more performance and to share their memory pools for supported applications. The A6000 also supports NVIDIA’s standard frame lock and 3D Vision Pro features with their respective connectors.
For display outputs, the A6000 ships with a quad-DisplayPort configuration, which is typical for NVIDIA’s high-end professional visualization cards. Notably, this puts the A6000 in a bit of an odd spot this generation, since DisplayPort 1.4 is slower than the HDMI 2.1 standard also supported by the GA102 GPU. I would expect that it’s possible for the card to drive an HDMI 2.1 display with a passive adapter, but this is going to be reliant on how NVIDIA has configured the card and whether HDMI 2.1 signaling will tolerate such an adapter.
Finally, the A6000 will be the first of today’s video cards to ship. According to NVIDIA, the card will be available in the channel as an add-in card starting in mid-December – just in time to make a 2020 launch. The card will then start showing up in OEM systems in early 2021.
Joining the new A6000 is a very similar card designed for passive cooling, the NVIDIA A40. Based on the same GA102 GPU as the A6000, the A40 offers virtually all of the same features as the active-cooled A6000, just in a purely passive form factor suitable for use in high density servers.
By the numbers, the A40 is a similar flagship-level graphics card, using a fully enabled GA102 GPU. It’s not quite a twin to the A6000, but other than the cooling difference, the only other change under the hood is the memory configuration. Whereas the A6000 uses 16 Gbps GDDR6, A40 clocks it down to 14.5 Gbps. Otherwise NVIDIA has not disclosed expected GPU clockspeeds, but with a 300W TDP, we’d expect them to be similar to the A6000.
Overall NVIDIA is no stranger to offering passively cooled cards; however it’s been a while since we last saw a passively cooled high-end Quadro card. Most recently, NVIDIA’s passive cards have been aimed at the compute market, with parts like the Tesla T4 and P40. The A40, on the other hand, is a bit different and a bit more ambitious, and a reflection of the blurring lines between compute and graphics in at least some of NVIDIA’s markets.
The most notable impact here is the inclusion of display outputs, something that was never on NVIDIA’s compute cards for obvious reasons. The A40 includes three DisplayPort outputs (one fewer than the A6000), giving the server-focused card the ability to directly drive a display. In explaining the inclusion of display I/O in a server part, NVIDIA said that they’ve had requests from users in the media and broadcast industry, who have been using servers in places like video trucks, but still need display outputs.
Ultimately, this serves as something of an additional feature differentiator between the A40 and NVIDIA’s official PCIe compute card, the PCIe A100. As the A100 lacks any kind of video display functionality (the underlying A100 GPU was designed for pure compute tasks), the A40 is the counterpoint to that product, offering something with very explicit video output support both within and outside of the card. And while it’s not specifically aimed at the edge compute market, where the T4 still reigns supreme, make no mistake: the A40 is still capable of being used as a compute card. Though lacking in some of A100’s specialty features like Multi-Instance GPU (MIG), the A40 is fully capable of being provisioned as a compute card, including support for the Virtual Compute Server vGPU profile. So the card is a potential alternative of sorts to the A100, at least where FP32 throughput might be of concern.
Finally, like the A6000, the A40 will be hitting the streets in the near future. Designed to be sold primarily through OEMs, NVIDIA expects it to start showing up in servers in early 2021.
For long-time observers, perhaps the most interesting development from today’s launch is what’s not present: NVIDIA’s Quadro branding. Despite being aimed at their traditional professional visualization market, the A6000 is not being branded as a Quadro card, a change that was made at nearly the last minute.
Perhaps because of that last-minute change, NVIDIA hasn’t issued any official explanation for the decision. At face value it’s certainly an odd one, as Quadro is one of NVIDIA’s longest-lived brands, second only to GeForce itself. NVIDIA still controls the lion’s share of the professional visualization market as well, so there seems to be little reason for NVIDIA to shake up a very stable market.
With all of that said, there are a couple of factors in play that may be driving NVIDIA’s decision. First and foremost is that the company has already retired one of its other product brands in the last couple of years: Tesla. Previously used for NVIDIA’s compute accelerators, Tesla was retired and never replaced, leaving us with the likes of the NVIDIA T4 and A100. Of course, Tesla is something of a special case, as the name has increasingly become synonymous with the electric car company, despite in both cases being selected as a reference to the famous scientist. Quadro, by comparison, has relatively little (but not zero) overlap with other business entities.
But perhaps more significant than that is the overall state of NVIDIA’s professional businesses. An important cornerstone of NVIDIA’s graphics products, professional visualization is a fairly stable market – which is to say it’s not a major growth market in the way that gaming and datacenter compute have been. As a result, professional visualization has been getting slowly subsumed by NVIDIA’s compute parts, especially in the server space where many products can be provisioned for either compute or graphics needs. In all these cases, both Quadro and NVIDIA’s former Tesla lineup have come to represent NVIDIA’s “premium” offerings: parts that get access to the full suite of NVIDIA’s hardware and software features, unlike the consumer GeForce products which have certain high-end features withheld.
So it may very well be that NVIDIA doesn’t see a need for a specific Quadro brand much longer, because the markets for Quadro (professional visualization) and Tesla (computing) are one and the same. Though the two differ in their specific needs, they still use the same NVIDIA hardware, and frequently pay the same high NVIDIA prices.
At any rate, it will be interesting to see where NVIDIA goes from here. Even with the overlap in audiences, branding segmentation has its advantages at times. And with NVIDIA now producing GPUs that lack critical display capabilities (GA100), it seems like making it clear what hardware can (and can’t) be used for graphics is going to remain important going forward.
As part of this morning’s fall GTC 2020 announcements, NVIDIA is revealing that they are releasing an even cheaper version of their budget embedded computing board, the Jetson Nano. Initially introduced back in 2015 as the Jetson TX1, an updated version of NVIDIA’s original Jetson kit with their then-new Tegra X1 SoC, the company has since kept the Jetson TX1 around in various forms as a budget option. Most recently, the company re-launched it in 2019 as the Jetson Nano, their pint-sized, $99 entry level developer kit.
Now, NVIDIA is lowering the price tag on the Jetson Nano once again with the introduction of a new, cheaper SKU. Dubbed the Jetson Nano 2GB, this is a version of the original Jetson Nano with 2GB of DRAM instead of 4GB. Otherwise the performance of the kit remains unchanged from the original Nano, with 4 Cortex-A57 CPU cores and the 128 CUDA core Maxwell GPU providing the heavy lifting for CPU and GPU compute, respectively.
|NVIDIA Jetson Family Specifications|
| ||Xavier NX||AGX Xavier||Jetson Nano (4GB)||Jetson Nano (2GB)|
|GPU||Volta, 384 Cores||Volta, 512 Cores||Maxwell, 128 Cores||Maxwell, 128 Cores|
|Accelerators||2x NVDLA||2x NVDLA||N/A||N/A|
|Memory||8GB LPDDR4x, 128-bit bus||16GB LPDDR4X, 256-bit bus||4GB LPDDR4, 64-bit bus||2GB LPDDR4, 64-bit bus|
|Storage||16GB eMMC||32GB eMMC||16GB eMMC||microSD|
|USB||4x USB-A 3.1 Gen 2||2x USB-C 3.1 + 1x USB-A 3.0||4x USB-A 3.0||1x USB-A 3.0 + 2x USB-A 2.0 + 1x USB-C (Power)|
|AI Perf.||21 TOPS||32 TOPS||N/A||N/A|
|Dimensions||45mm x 70mm||100mm x 87mm||45mm x 70mm||45mm x 70mm|
Meanwhile, though not mentioned in NVIDIA’s official press release, it looks like the company has simplified the carrier board a bit as part of getting the price tag down. Relative to the original 4GB Nano, the Nano 2GB is pictured without a DisplayPort output, and with one fewer USB port. Furthermore, those USB ports are no longer blue, hinting at USB 2.0 rather than USB 3.0; NVIDIA has confirmed that just one port is USB 3.0-capable, while the other two are USB 2.0. Finally, the barrel power connector has been replaced with a USB Type-C connector, and it looks like various pins have also been removed.
Overall, NVIDIA is pitching the cost-reduced Jetson Nano as a true starter kit for embedded computing, suitable for early training and learning. Despite receiving a minor neutering, the Nano 2GB can still run all of NVIDIA’s Jetson SDKs, allowing it to be used as a stepping stone of sorts towards learning NVIDIA’s ecosystem, and eventually moving on to more powerful products like the company’s GPU accelerators and Jetson Xavier NX kits. Ultimately, with their efforts to position it as a starter kit for teaching purposes, I imagine NVIDIA is gunning for the educational market, particularly with the continued uptick in STEM-focused programs.
The kit will go on sale later this month through NVIDIA’s usual distribution channels.
In a brief news post made to their GeForce website last night, NVIDIA has announced that they have delayed the launch of the upcoming GeForce RTX 3070 video card. The high-end video card, which was set to launch on October 15th for $499, has been pushed back by two weeks. It will now be launching on October 29th.
Indirectly referencing the launch-day availability concerns for the RTX 3080 and RTX 3090 last month, NVIDIA is citing a desire to have “more cards available on launch day” for the delay. NVIDIA does not disclose their launch supply numbers, so it’s not clear just how many more cards another two weeks’ worth of stockpiling will net them – it likely still won’t be enough to meet all demand – but it should at least improve the odds.
|NVIDIA GeForce Specification Comparison|
|RTX 3070||RTX 3080||RTX 3090||RTX 2070|
|Memory Clock||16Gbps GDDR6||19Gbps GDDR6X||19.5Gbps GDDR6X||14Gbps GDDR6|
|Memory Bus Width||256-bit||320-bit||384-bit||256-bit|
|Single Precision Perf.||20.4 TFLOPs||29.8 TFLOPs||35.7 TFLOPs||7.5 TFLOPs|
|Tensor Perf. (FP16)||81.3 TFLOPs||119 TFLOPs||143 TFLOPs||59.8 TFLOPs|
|Tensor Perf. (FP16-Sparse)||163 TFLOPs||238 TFLOPs||285 TFLOPs||59.8 TFLOPs|
|Manufacturing Process||Samsung 8nm||Samsung 8nm||Samsung 8nm||TSMC 12nm "FFN"|
|Launch Price||MSRP: $499||MSRP: $699||MSRP: $1499||MSRP: $499|
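As a sanity check, the single precision figures in the table follow directly from shader count and clockspeed. A quick sketch of the math, using CUDA core counts and boost clocks from NVIDIA's published specifications (our own figures, not listed in the table above):

```python
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    # FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock
    return cuda_cores * 2 * boost_ghz / 1000

# Core counts and boost clocks per NVIDIA's public spec sheets
for name, cores, clock in [("RTX 3070", 5888, 1.73),
                           ("RTX 3080", 8704, 1.71),
                           ("RTX 3090", 10496, 1.70)]:
    print(f"{name}: {fp32_tflops(cores, clock):.1f} TFLOPs")
```

The results (20.4, 29.8, and 35.7 TFLOPs) line up with the table, which is simply this product of cores and clocks rather than a measured figure.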
Interestingly, this delay also means that the RTX 3070 will now launch after AMD’s planned Radeon product briefing, which is scheduled for October 28th. NVIDIA has already shown their hand with respect to specifications and pricing, so the 3070’s price and performance are presumably locked in. But this does give NVIDIA one last chance to react – or at least, distract – should they need it.
It is no secret that Intel's 10th generation processors are power-hungry. Intel has been squeezing every last drop of MHz out of the 14 nm process with its fastest desktop processors yet, but sometimes conventional air cooling just won't suffice for those wanting to push the limits even further. ASRock understands this, and building on the success of the elegant (yet wallet-emptying) X570 Aqua for AMD platforms, the company has unveiled an Intel version.
The Z490 Aqua is a premium, feature-laden motherboard. Its large integrated monoblock cools both the CPU and the board's power delivery, and it now features a very cool OLED display. There is also integrated Thunderbolt 3 and 10 gigabit Ethernet, as well as a large 16-phase power delivery.
Microsoft’s Surface lineup started with the Surface RT tablet, but it now features a wide range of devices targeting different markets, price brackets, and levels of performance. More often than not, though, Microsoft aims for the higher end of the price range with the Surface lineup, which has kept its products out of reach for a lot of consumers. Today the company is announcing the Surface Laptop Go, broadening the laptop audience considerably with an entry point of just $549 USD.
Microsoft’s Surface Laptop 3 is a competent device, with two device sizes at 13.5 and 15-inches, featuring an all-aluminum chassis, and the 3:2 PixelSense display which is one of the highlights of any Surface device. But with an entry price of $999, there is a large part of the market that has been left vacant by Microsoft, until today.
|Microsoft Surface Laptop Go|
|CPU||Intel Core i5-1035G1
4C / 8T, 1.0-3.6 GHz
Gen 11 Graphics with 32 EUs
|Memory||4 / 8 GB LPDDR4x
16 GB LPDDR4x available on commercial models
|Display||12.4-inch PixelSense touchscreen
1536 x 1024 resolution, 148 PPI
3:2 aspect ratio
|Storage||64 GB eMMC
128 GB or 256 GB SSD
|I/O||1 x USB Type-C
1 x USB Type-A
|Battery||Up to 13 hours|
|Dimensions||278 x 206 x 15.7 mm
10.95 x 8.10 x 0.62 inches
|Starting Price (USD)||$549|
|Availability||Preorder Now, Available Oct 13|
The Surface Laptop Go offers up a 12.4-inch PixelSense touchscreen display, making the device a bit smaller than the Laptop 3, and it weighs a bit less as well, at 2.45 lbs. The display does lose some sharpness compared to the Laptop 3 though, with a 1536 x 1024 3:2 resolution, which is only 148 pixels-per-inch. That is a steep decline compared to the Laptop 3 with its 200 pixels-per-inch display. Microsoft does color-calibrate all of its displays to sRGB, so despite the lesser display than its Surface brethren, it should still be a step ahead of most of the displays in this price range.
Unlike the Surface Go 2, the Surface Laptop Go avoids the Y-series processors and packs in a proper 15-Watt Intel Core i5-1035G1, meaning four cores and eight threads based on Intel’s Ice Lake platform. If you feel it is a bit odd to see a 10th Generation Intel processor in a newly announced device just as the 11th Gen products are arriving, you are right. But Microsoft has had a tendency to release on its own cadence, rather than follow the annual product updates from Intel. Still, the Core i5-1035G1 is a great pick for this class of device. Ice Lake also means the Laptop Go gets Wi-Fi 6, thanks to Intel’s AX201.
What is not so great is the baseline offering in terms of memory and storage, with the $549 entry-level device offering just 4 GB of RAM and 64 GB of eMMC storage. This is unacceptable for a 2020 laptop. Microsoft can be difficult to figure out, as they want to offer premium products, but then offer configurations which are going to make the people who purchase them not enjoy them. It would have been best to see this configuration skipped entirely, as it should not be purchased. The Laptop Go will be offered with 8 GB of LPDDR4x, and even 16 GB on the commercial variant, with 128 GB and 256 GB SSD options.
Microsoft rates the new laptop at up to 13 hours of battery life with typical device usage – no longer do they rate their devices based on local video playback. It also offers a fast-charge with 80% battery life in just an hour of charging.
Although the device is small, Microsoft has managed to include a proper keyboard, and the Surface keyboards are generally some of the best around, so hopefully the Laptop Go continues that trend. It features 1.3 mm of travel, and offers backlighting, which is not always a given at this starting price. The glass trackpad which is 115 mm x 77 mm should be a nice step up from other laptops in this class, which generally feature plastic trackpads of mediocre quality.
Sadly this will be the first Surface product without an IR camera since Windows Hello-based facial recognition was first added on the Surface Pro 4. Instead, Microsoft is including a fingerprint reader in the power button, except on the base model, which gets no biometric support. The webcam is a 720p unit with an f/2.0 lens.
It will also be the first Surface device to not offer an all-metal design. The Laptop Go will have an aluminum top mated to a polycarbonate and fibre resin base, featuring 30% post-consumer recycled content.
Microsoft clearly made some cuts to bring a premium design to the mid-range, but as long as the base model is avoided, the Surface Laptop Go looks to be a nice new entrant. It will be available in three colors: Ice Blue, Sandstone, and Platinum. The new Surface Laptop Go is available for pre-order today, with a launch date of October 13th.
Microsoft’s Surface Pro X seems to be a very divisive device. Being the only current generation Surface product powered by an Arm-based processor, it thrusts its users directly into the world of WoA: Windows on Arm – and all of the caveats that exist there. It is not too often we see Microsoft do a mid-cycle refresh, but the Surface Pro X gets to be the exception here as well. Today Microsoft is announcing some new updates to the Surface Pro X to make it faster, and flashier.
|Microsoft Surface Pro X|
|Memory||8 / 16 GB LPDDR4x|
|Display||13-inch PixelSense
2880 x 1920 (267 PPI)
3:2 aspect, 10-point multitouch
|Storage||128 / 256 / 512 GB removable SSD|
|Connectivity||Qualcomm Snapdragon X24 LTE|
|I/O||2 x USB Type-C Gen 2|
|Webcam||5.0 MP front camera, 1080p video
10 MP rear camera, autofocus, 4K video
|Battery||Up to 15 hours
60 Watt adapter
|Dimensions||287 x 208 x 7.3 mm
11.3 x 8.2 x 0.28 inches
|Weight||774 grams / 1.7 lbs (no keyboard)|
|Starting Price (USD)||$999
$1499 for new SQ2 processor
The big change is that Microsoft is going to be offering their new Microsoft SQ2 processor as an optional upgrade over the SQ1 found in the Surface Pro X. We’ve reached out to the company to get clarification on the changes, but have only been told so far that the new processor is an enhanced version of the Qualcomm-built SQ1, offering more CPU and GPU performance. At this point our best guess is that the SQ2 is a version of Qualcomm's 8CX Gen 2 SoC, similar to how the SQ1 was based on the original 8CX.
Under the hood of the SQ2, the GPU upgrade comes courtesy of the Adreno 690, compared to the Adreno 685 in the SQ1. We have not been told frequencies yet but the SQ1 was 3 GHz peak, so expect a number higher than that. More performance is always welcome, so we hope we can review this model to see how it fares.
The performance increases also go hand-in-hand with yesterday’s news that x64 emulation is coming to the Windows Insider Program in November, which likely means a rollout to full Windows 10 on Arm sometime next year. This, coupled with more programs being natively compiled for Arm, such as Teams, should help get the Surface Pro X over the hump for more people. If more of the apps you use are natively compiled, the performance and battery impact of emulation will be less noticeable, so that is always going to be the goal. But Microsoft has never been able to get every developer on board with major changes like this, so the x64 emulation is a big step in making the Surface Pro X more usable for more people.
Other than the new, optional CPU, the other big change is that Surface Pro X will now be available in Platinum, rather than just the matte black that it was before.
As this is just a refresh, not much else is changing. Surface Pro X still comes with LTE availability with the Qualcomm X24 LTE modem, a 13-inch PixelSense display with a 2880x1920 resolution for 267 pixels-per-inch, 8 or 16 GB of LPDDR4x RAM, and 128 / 256 / 512 GB SSD drives which are removable.
The Surface Pro X starts at $999.99 USD, with the new SQ2 powered update starting at $1499.99.
Microsoft is also announcing new accessories today, including new keyboard colors for the Surface Pro X, with Platinum, Ice Blue, and Poppy Red. There are also new Designer Compact Keyboards with Bluetooth, offering two years of battery life, and three-device support, as well as matching number pads.
Microsoft is offering a wide-range of colors on the Microsoft Modern Mobile Mouse (Quad M? Impressive) with a new sandstone color joining the mix.
If you prefer something with a bit more shape, Microsoft also is announcing the Bluetooth Ergonomic Mouse, priced at $49.99.
Finally, there is a new 4K Display Adapter from Microsoft, priced at $69.99.
As a journalist and editor who in any given year travels around the world to attend events, having the best device for writing news and generating content is always on my mind. More battery life, more performance, resilience to dodgy Wi-Fi connections, and the ability to process photos and video, all with a good display at a nice price: these are all factors. Weight is also an important one, as the device gets lugged around for 12+ hours a day, and I don’t want to be carrying around too many dongles for everything. The new XMG DJ 15 might be a contender for one of the best devices to do this with.
The XMG DJ 15 is a 15-inch notebook designed to be light while offering an array of connectivity unlike any other notebook I’ve seen in this segment. Despite the size, there is no discrete graphics card in this design, saving space and allowing the cooling system to be reduced from that of a typical gaming notebook. This keeps it thin, measuring only 19.9mm at its tallest point, while the use of aluminium keeps it light, weighing in at only 1.6 kg.
For connectivity, the unit has two USB 3.1 Type-A ports, a Thunderbolt 3/Type-C with 60W fast charging, a full-sized HDMI port, a mini-DisplayPort, an SD card reader, a gigabit Ethernet port (!), two 3.5mm jacks for headphone/microphone, and a USB 2.0 Type-A port for legacy.
Inside is an Intel 10th Gen Comet Lake processor, with the base specification using the Core i5-10210U, 16 GB of DDR4-2666 memory, and a 1 TB Samsung 970 EVO Plus PCIe 3.0 x4 NVMe storage drive, all for a pre-tax price of €1052 / $1237. The memory is SO-DIMM and the storage is M.2, making both upgradeable.
That also gets Wi-Fi 5, the 15.6-inch 1920x1080 IPS thin-bezel display, an HD webcam, a backlit keyboard with number pad, a Microsoft Precision Touchpad with a fingerprint sensor, and a 54.4 Wh battery. Moving up to Wi-Fi 6 is another $5.
This device isn’t so much aimed at workers, but DJs. XMG claims the unit is built with components that minimise DPC latency, even when Wi-Fi and Bluetooth are enabled. It is pre-tested with all the major DJ software, such as Serato, Traktor, Rekordbox and Virtual DJ. This makes it suitable even for simple DAW projects. It will ship with an optimized installation of Windows 10 Pro, and XMG quotes a maximum DPC Latency of a millisecond. XMG will also work with DJs that have custom needs and offers a bespoke service for those that might need additional features.
The XMG DJ 15 will be available in a traditional silver color or a more striking red design. Even in the base configuration, that price seems great for a workhorse machine on the road, and hopefully the display quality is decent. The version I’d be interested in, with the i7-10510U, Wi-Fi 6, and 32 GB of memory, runs at €1,337 (incl. tax). That can’t be a coincidence.
Today, through the company’s rather short virtual launch event, among other novelties, Google has officially announced the new Pixel 4a (5G) and the new Pixel 5. Both phones had been teased for some time, as Google pre-announced them back in early August alongside the Pixel 4a.
The new Pixel 4a (5G) is very much what its name implies, a variant of the Pixel 4a with added 5G connectivity through the addition of a Snapdragon 765 SoC. The phone here is very similar to its 4G variant, although Google had to grow the device’s dimensions a bit, and a more apt name for it would have been the 4a XL (5G) but that’s quite a mouthful.
The new Pixel 5 is a quite different phone for Google’s mainstream line-up, as here the company has abandoned any attempt at making a flagship device, relegating itself to the mid-range to premium price segment. Also featuring a Snapdragon 765, the phone’s other specs are rather more conservative compared to other devices in 2020 – it’s somewhat of a risky move at a still rather high $699 price point.
Today Xiaomi is announcing their late-year flagship devices with the unveiling of the new Mi 10T and Mi 10T Pro. The refreshes this year are a bit more unconventional as they aren’t exactly direct successors to the Mi 10 and Mi 10 Pro, but rather lower-cost alternatives. Still, the new devices promise a slew of new software camera features, as well as being the first phones on the market to adopt new 144Hz displays with AdaptiveSync variable refresh rate functionality.
The Mi 10T series’ biggest selling point is probably its reduced prices, starting at only 499€ for the base model Mi 10T – which still delivers a Snapdragon 865 SoC, a competitive camera, the aforementioned 144Hz display, and a massive 5000mAh battery.
External bus-powered storage devices have grown in both storage capacity and speed over the last decade. Thanks to rapid advancements in flash technology (including the advent of 3D NAND and NVMe) as well as faster host interfaces (such as Thunderbolt 3 and USB 3.x), we now have palm-sized flash-based storage devices capable of delivering 2GBps+ speeds. While those speeds can be achieved with Thunderbolt 3, mass-market devices have to rely on USB. This review discusses the performance and characteristics of Western Digital's latest offerings (2020 catalog) supporting USB 3.2 Gen 2 (10 Gbps) speeds.
High-performance external storage devices use either Thunderbolt 3 or USB 3.2 Gen 2 for the host interface. Traditional SATA SSDs (saturating at 560 MBps) can hardly take full advantage of the bandwidth offered by USB 3.2 Gen 2. In 2020, we have seen the market move en-masse to NVMe SSDs behind a USB 3.2 Gen 2 bridge for this market segment.
Western Digital brought NVMe support to their My Passport SSD product line last month. Today, the company is launching the SanDisk Extreme Portable SSD v2 (along with the Extreme PRO Portable SSD v2). The Extreme v2 is of particular interest here, as both the feature set and the performance specifications tally with that of the My Passport SSD. The company provided us with review samples of the 1TB versions of the My Passport SSD as well as the SanDisk Extreme Portable SSD v2.
The two products are packaged similarly, and both come with short (15cm) USB 3.2 Gen 2 Type-C to Type-C cables. A Type-C to Type-A adaptor is supplied, similar to the ones bundled with Western Digital's previous-generation external SSDs. The industrial design of the units is quite different, each appealing to its own target market. The carabiner loop on the SanDisk Extreme / PRO line has proved to be a useful complement to the gumstick form factor enforced by the use of an M.2 NVMe SSD, and has been particularly appreciated by content creators (photographers and videographers) on the go. The My Passport SSD, with its rounded edges and grooves and availability in multiple colors, may appeal to the mainstream, style-conscious audience. As we shall see further down in the 'Device Features & Characteristics' section, the internal hardware is identical. The rest of the review also tackles another interesting aspect: does the same internal hardware lead to similar performance profiles for the two SSDs?
In this review, we compare the SanDisk Extreme Portable SSD v2 and the WD My Passport SSD (2020) against each other, as well as the following DAS units that we have reviewed before.
A quick overview of the internal capabilities of the storage devices is given by CrystalDiskInfo.
The SanDisk Extreme Portable SSD v2 and the WD My Passport SSD (2020) use the same internal SSD - the Western Digital SN550E. The SN550 is available in retail under the WD Blue branding. We believe that the 'E' suffix stands for 'External' - WD did confirm that the SSD being used was SN550-class, and it contained specific firmware tweaks for use as an external SSD. Like almost every other M.2 NVMe SSD behind a USB 3.2 Gen 2 bridge, the SanDisk Extreme Portable SSD v2 and the WD My Passport SSD (2020) support S.M.A.R.T. passthrough and TRIM (though it is not explicitly evident in the CrystalDiskInfo screenshot).
The gallery above presents some pictures of the internals of the WD My Passport SSD (2020). We see that the two sides of the My Passport SSD clamshell are held together by industrial-strength double sided tape. Prying apart the two at the seam was relatively painless - in fact, it was the easiest portable SSD to take apart (out of all the ones that I had worked on earlier). The SanDisk Extreme Portable SSD v2's top segment holds on to the bottom segment using a series of plastic clips in the inside perimeter - this is straightforward to take out using opening picks. Selected pictures are available in the gallery below.
Inside the unit, we see a thermal pad right across the M.2 SSD (in the Extreme v2 teardown) and another on the reverse side (in the My Passport SSD teardown). In addition to helping carry heat away, these also ensure that the boards sit snug inside the enclosure and can withstand shocks and vibrations. The ASMedia ASM2362 bridge chip can be seen on the main board, while the SanDisk 20-82-10023 controller can be seen on the M.2 SSD.
Evaluation of DAS units on Windows is done with a Hades Canyon NUC configured as outlined below. We use one of the rear USB Type-C ports enabled by the Alpine Ridge controller for both Thunderbolt 3 and USB devices.
|AnandTech DAS Testbed Configuration|
|CPU||Intel Core i7-8809G
Kaby Lake, 4C/8T, 3.1GHz (up to 4.2GHz), 14nm+, 8MB L2
|Memory||Crucial Technology Ballistix DDR4-2400 SODIMM
2 x 16GB @ 16-16-16-39
|OS Drive||Intel Optane SSD 800p SSDPEK1W120GA
(118 GB; M.2 Type 2280 PCIe 3.0 x2 NVMe; Optane)
|SATA Devices||Intel SSD 545s SSDSCKKW512G8
(512 GB; M.2 Type 2280 SATA III; Intel 64L 3D TLC)
|Chassis||Hades Canyon NUC|
|PSU||Lite-On 230W External Power Brick|
|OS||Windows 10 Enterprise x64 (v1909)|
|Thanks to Intel for the build components|
Our evaluation methodology for direct-attached storage devices adopts a judicious mix of synthetic and real-world workloads. While most DAS units targeting a particular market segment advertise similar performance numbers and also meet them for common workloads, the real differentiation is brought out on the technical side by the performance consistency metric and the effectiveness of the thermal solution. Industrial design and value-added features may also be important for certain users. The remaining sections in this review tackle all of these aspects after analyzing the features of the drives in detail.
Prior to looking at the usage characteristics of the SanDisk Extreme Portable SSD v2 and the WD My Passport SSD (2020), it is helpful to compare their specifications against other similar SSDs.
|Direct-Attached Storage Characteristics|
|Upstream Port||USB 3.2 Gen 2 Type-C||USB 3.2 Gen 2 Type-C|
|Bridge / Controller||ASMedia ASM2362 + SanDisk 20-82-10023||ASMedia ASM2362 + SanDisk 20-82-10023|
|Flash||SanDisk BiCS 4 96L 3D TLC||SanDisk BiCS 4 96L 3D TLC|
|Power||Bus Powered||Bus Powered|
|Physical Dimensions||52.42 mm x 100.54 mm x 8.95 mm||55 mm x 100 mm x 9 mm|
|Weight||63 grams (without cable)||54 grams (without cable)|
|Cable||15 cm USB 3.2 Gen 2 Type-C to Type-C
Type-C to Type-A Adaptor
|15 cm USB 3.2 Gen 2 Type-C to Type-C
Type-C to Type-A Adaptor
|Encryption Support||Hardware (SanDisk SecureAccess App)||Hardware (WD Security App)|
The two SSDs have the shortest supplied cable lengths at 15cm. Tower desktop users with USB-C ports in the rear panel may need to keep this in mind. The drives feel solid in hand, thanks to their 50g+ weight. The WD My Passport SSD (2020) is slightly wider than the SanDisk Extreme v2, but both of them fit easily in pockets for carrying around.
SanDisk claims speeds of up to 1050 MBps for the two SSDs, and these are almost backed up by the ATTO benchmarks provided below. Unfortunately, these access traces are not very common in real-life scenarios.
|Drive Performance Benchmarks - ATTO|
Speeds top out at 989 MBps reads and around 912 MBps writes for the two SSDs. An interesting point to note here is that the SanDisk Extreme Pro Portable SSD from last year had similar read numbers, but the writes went up to 929 MBps.
CrystalDiskMark, despite being a canned benchmark, provides a better estimate of the performance range with a selected set of numbers.
|Drive Performance Benchmarks - CrystalDiskMark|
As evident from the screenshot above, the performance can dip to as low as 21 MBps for 4K random reads.
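For perspective, that floor works out to only about 5,100 I/O operations per second. A quick sketch of the conversion (CrystalDiskMark reports decimal megabytes per second; 4K accesses are 4096 bytes):

```python
def mbps_to_iops(mbps: float, block_bytes: int = 4096) -> int:
    # Decimal MB/s divided by the access block size gives operations/sec
    return round(mbps * 1_000_000 / block_bytes)

print(mbps_to_iops(21))  # → 5127
```

Low queue-depth 4K random reads are latency-bound, so the round trip through the USB bridge chip adds up quickly compared to a directly attached NVMe drive.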
Our testing methodology for DAS units takes into consideration the usual use-case for such devices. The most common usage scenario is transfer of large amounts of photos and videos to and from the unit. Other usage scenarios include the use of the DAS as a download or install location for games and importing files directly off the DAS into a multimedia editing program such as Adobe Photoshop. Some users may even opt to boot an OS off an external storage device.
The AnandTech DAS Suite tackles the first use-case. The evaluation involves processing three different workloads:
Each workload's data set is first placed in a 25GB RAM drive, and a robocopy command is issued to transfer it to the DAS under test (formatted in NTFS). Upon completion of the transfer (write test), the contents from the DAS are read back into the RAM drive (read test). This process is repeated three times for each workload. Read and write speeds, as well as the time taken to complete each pass are recorded. Bandwidth for each data set is computed as the average of all three passes.
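The per-workload flow described above can be sketched in Python as follows. This is a simplified illustration, not our actual harness: `shutil.copytree` stands in for robocopy, and all directory names are hypothetical.

```python
import shutil
import time
from pathlib import Path

def copy_bandwidth(src: Path, dst: Path) -> float:
    """Copy the src tree into dst and return the achieved bandwidth in MBps."""
    total = sum(f.stat().st_size for f in src.rglob('*') if f.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst / src.name)
    elapsed = time.perf_counter() - start
    return total / max(elapsed, 1e-9) / 1e6

def das_suite_pass(dataset: Path, das_root: Path, scratch: Path, passes: int = 3):
    """Write the dataset to the DAS, read it back, repeat; average both directions."""
    writes, reads = [], []
    for i in range(passes):
        w_dst = das_root / f"write_{i}"      # write test: RAM drive -> DAS
        w_dst.mkdir()
        writes.append(copy_bandwidth(dataset, w_dst))
        r_dst = scratch / f"read_{i}"        # read test: DAS -> RAM drive
        r_dst.mkdir()
        reads.append(copy_bandwidth(w_dst / dataset.name, r_dst))
    return sum(writes) / passes, sum(reads) / passes
```

In the real testbed, `dataset` and `scratch` live on the 25GB RAM drive so that the source and destination are never the bottleneck, and the procedure is repeated for each of the three workloads.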
It can be seen that there is no significant gulf in the numbers between the different units. For all practical purposes, the casual user will notice no difference between them in the course of normal usage. However, power users may want to dig deeper to understand the limits of each device. To address this concern, we also instrumented our evaluation scheme for determining performance consistency.
Aspects influencing the performance consistency include SLC caching and thermal throttling / firmware caps on access rates to avoid overheating. This is important for power users, as the last thing that they want to see when copying over 100s of GB of data is the transfer rate going down to USB 2.0 speeds.
In addition to tracking the instantaneous read and write speeds of the DAS when processing the AnandTech DAS Suite, the temperature of the drive was also recorded at the beginning and end of the processing. In earlier reviews, we used to track the temperature throughout. However, we have observed that reading out the SMART temperature of NVMe SSDs behind USB 3.2 Gen 2 bridge chips ends up negatively affecting the actual transfer rates. To avoid this problem, we have restricted ourselves to recording the temperature at either end of the actual workload set. The graphs below present the recorded data.
|Performance Consistency and Thermal Characteristics|
The first three sets of writes and reads correspond to the photos suite. A small gap (for the transfer of the video suite from the internal SSD to the RAM drive) is followed by three sets for the video suite. Another small RAM-drive transfer gap is followed by three sets for the Blu-ray folder. An important point to note here is that each of the first three blue and green areas corresponds to 15.6 GB of writes and reads, respectively. The consistency shown across different passes of the same workload indicates that no thermal throttling is at play for either SSD. The thermal solutions in both perform very similarly for normal workloads – around a 3-4C rise in temperature after around 250GB of reads and writes.
There are a number of storage benchmarks that can subject a device to artificial access traces by varying the mix of reads and writes, the access block sizes, and the queue depth / number of outstanding data requests. We saw results from two popular ones - ATTO and CrystalDiskMark - in a previous section. More serious benchmarks, however, actually replicate access traces from real-world workloads to determine the suitability of a particular device for a particular workload. Real-world access traces may be used to simulate the behavior of computing activities that are limited by storage performance. Examples include booting an operating system or loading a particular game from the disk.
PCMark 10's storage bench (introduced in v2.1.2153) includes four storage benchmarks that use relevant real-world traces from popular applications and common tasks to fully test the performance of the latest modern drives: the Full System Drive Benchmark, the Quick System Drive Benchmark, the Data Drive Benchmark, and the Drive Performance Consistency Test.
Despite the data drive benchmark appearing most suitable for testing direct-attached storage, we opted to run the full system drive benchmark as part of our evaluation flow. Many of us use portable flash drives as boot drives and storage for Steam games. These types of use-cases are addressed only in the full system drive benchmark.
The Full System Drive Benchmark comprises 23 different traces. For the purpose of presenting results, we classify them under five different categories: booting, creative workloads, office workloads, gaming, and file transfers.
PCMark 10 also generates an overall score, bandwidth, and average latency number for quick comparison of different drives. The sub-sections in the rest of the page reference the access traces specified in the PCMark 10 Technical Guide.
The read-write bandwidth recorded for each drive in the boo access trace is presented below.
Both SSDs appear in the top half of the chart, trailing the leader by only a small margin.
The read-write bandwidth recorded for each drive in the sacr, saft, sill, spre, slig, sps, aft, ill, ind, psh, and psl access traces is presented below.
In almost all of the creative workloads, the two SSDs miss the top spot by a whisker, losing to the HP Portable SSD P700.
The read-write bandwidth recorded for each drive in the exc and pow access traces are presented below.
The SSDs come out in the top half again, with at least one pole position - though the HP P700 performs almost as well in the office workloads.
The read-write bandwidth recorded for each drive in the bf, cod, and ow access traces are presented below.
The observations repeat for the gaming workloads - the two SSDs are neck-and-neck with the HP P700 (for the write-heavy workloads) and the Crucial Portable SSD X8 (for the read-heavy ones).
The read-write bandwidth recorded for each drive in the cp1, cp2, cp3, cps1, cps2, and cps3 access traces are presented below.
Mixed workloads involving large file sizes seem to trip up the two SSDs, but the drives emerge in pole position in the other file transfer workloads.
PCMark 10 reports an overall score based on the observed bandwidth and access times for the full workload set. The score, bandwidth, and average access latency for each of the drives are presented below.
From an overall perspective, the SanDisk Extreme Portable SSD v2 and the WD My Passport SSD (2020) come out on top by a significant margin. This points to well-rounded performance, whereas other competing SSDs are optimized for only one type of workload.
The performance of the drives in various real-world access traces as well as synthetic workloads was brought out in the preceding sections. We also looked at the performance consistency for these cases. Power users may also be interested in performance consistency under worst-case conditions, as well as drive power consumption. The latter is particularly important when the drives are used with battery-powered devices such as notebooks and smartphones. Pricing is also an important aspect. We analyze each of these in detail below.
Flash-based storage devices tend to slow down in unpredictable ways when subjected to a large number of small-sized random writes. Many benchmarks use that scheme to pre-condition devices prior to actual testing in order to get a representative worst-case number. Fortunately, such workloads are uncommon for direct-attached storage devices, whose workloads are largely sequential in nature. However, the use of SLC caching, as well as firmware caps to prevent overheating, may cause a drop in write speeds when a flash-based DAS device is subjected to sustained sequential writes.
Our Sequential Writes Performance Consistency Test configures the device as a raw physical disk (after deleting configured volumes). A fio workload is set up to write sequential data to the raw drive with a block size of 128K and iodepth of 32 to cover 90% of the drive capacity. The internal temperature is recorded at either end of the workload, while the instantaneous write data rate and cumulative total write data amount are recorded at 1-second intervals.
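For reference, the workload described above can be expressed as a fio job file along these lines. This is a sketch, not our exact configuration: the device path is a placeholder, and the logging options assume a Linux host with libaio available.

```ini
; Sequential-write consistency sketch (illustrative, not our exact job file)
[seq-write-consistency]
filename=/dev/sdX        ; placeholder: raw disk of the DAS under test
rw=write                 ; sequential writes
bs=128k                  ; 128K block size
iodepth=32               ; 32 outstanding requests
ioengine=libaio          ; asynchronous I/O (Linux)
direct=1                 ; bypass the page cache
size=90%                 ; cover 90% of the drive capacity
log_avg_msec=1000        ; average data-rate samples over 1-second intervals
write_bw_log=seq-write   ; dump instantaneous bandwidth to a log file
```

The bandwidth log produced by `write_bw_log` is what the per-second write-rate plots below are built from.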
|Sequential Write to 90% of Disk Capacity - Performance Consistency|
The Extreme v2 maintains speeds between 815 and 850 MBps throughout the write workload (up to 90% of drive capacity), with the temperature ending up at 70C (an 11C delta). In terms of performance numbers, it is almost as good as the Extreme PRO from 2019, which stayed at 850 MBps throughout. The My Passport SSD (2020) starts off at speeds similar to the Extreme v2 and stays there for around 15 seconds (~13GB of write data) before dropping down to around 670 MBps, which is sustained for the rest of the workload. Its temperature ends up at 76C (a 21C delta). Normally, one would attribute the 13GB change-over to SLC caching, but the absence of a similar cliff in the Extreme v2 points to something else - the Extreme v2's thermal solution has slightly more capacity than that of the My Passport SSD (2020), allowing the former to sustain slightly higher performance. In other words, Western Digital is more proactive in throttling the My Passport when there is a possibility of rapid temperature rise.
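As a sanity check, a quick back-of-the-envelope calculation ties the change-over point to the time spent at the initial speed. The figures below are simply read off the graph above, so they are approximations:

```python
# Figures read off the My Passport SSD (2020) write-consistency graph above.
burst_speed_mbps = 850    # MBps during the initial fast region
burst_duration_s = 15     # seconds before the write speed drops
sustained_mbps = 670      # MBps sustained after the drop

# Data written before the drop - consistent with the ~13 GB change-over point
changeover_gb = burst_speed_mbps * burst_duration_s / 1000
print(f"Change-over after ~{changeover_gb:.2f} GB")

# Time to finish the rest of the 90%-capacity (900 GB) write at the sustained rate
remaining_minutes = (900 - changeover_gb) * 1000 / sustained_mbps / 60
print(f"Remaining writes take ~{remaining_minutes:.0f} minutes")
```

Fifteen seconds at 850 MBps works out to about 12.75 GB, matching the ~13GB change-over observed on the graph.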
Bus-powered devices can configure themselves to operate within the power delivery constraints of the host port. While Thunderbolt 3 ports are guaranteed to supply up to 15W to client devices, USB 3.x ports are only guaranteed to deliver 4.5W (900mA @ 5V). In this context, it is interesting to have a fine-grained look at the power consumption profile of the various drives. Using the Plugable USBC-TKEY, the bus power consumption of the drives was tracked while processing the CrystalDiskMark workloads (separated by 30s intervals). The graphs below plot the instantaneous bus power consumption against time, while singling out the maximum and minimum power consumption numbers.
|Drive Power Consumption - CrystalDiskMark Workloads|
The two SSDs clock in between 2.1W and 5W for the workloads. These are not the most power-efficient external SSDs, as the Samsung T7 Touch operates between 0.6W and 4W for the same workloads. These numbers are fine for usage with desktops and high-performance notebooks, but this aspect needs to be kept in mind when using the drives with mobile phones and tablets.
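To put the measured numbers in context, a quick comparison against guaranteed port power budgets is illustrative. This is a sketch: the budgets below are spec-guaranteed minimums for each port type, and the 5W figure is the observed peak for the two drives:

```python
# Spec-guaranteed minimum power budgets per port type (W).
port_budget_w = {
    "USB 3.x (900 mA @ 5 V)": 4.5,
    "USB Type-C (1.5 A @ 5 V)": 7.5,
    "Thunderbolt 3 client": 15.0,
}
peak_draw_w = 5.0  # observed peak for the SanDisk Extreme v2 / My Passport SSD

# Which port types can cover the drives' peak draw from guaranteed budget alone
sufficient = {port: budget >= peak_draw_w for port, budget in port_budget_w.items()}
for port, ok in sufficient.items():
    print(f"{port}: {'OK' if ok else 'insufficient'}")
```

A 5W peak exceeds the 4.5W a baseline USB 3.x Type-A port is required to supply, which is one reason mobile devices with constrained power delivery are a tougher fit for these drives.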
The prices of flash-based storage devices tend to fluctuate quite a bit over time. However, the relative difference between different models usually doesn't change much. The table below summarizes the product links and pricing for the various units discussed in this review.
|External Flash Storage Devices - Pricing|
|Product||Model Number||Capacity (GB)||Street Price (USD)||Price per GB (USD/GB)|
|ADATA SE800 1TB||ASE800-1TU32G2-CBK||1000||$130||0.13|
|Crucial Portable SSD X8 1TB||CT1000X8SSD9||1000||$150||0.15|
|WD My Passport SSD (2020) 1TB||WDBAGF0010BGY-WESN||1000||$150||0.15|
|Patriot PXD 1TB||PXD1TBPEC||1000||$170||0.17|
|HP P700 1TB||5MS30AA#ABC||1000||$175||0.175|
|Lexar SL100 Pro 1TB||LSL100P-1TBRBNA||1000||$190||0.19|
|Samsung Portable SSD T7 Touch 1TB||MU-PC1T0S/WW||1000||$190||0.19|
|SanDisk Extreme Pro Portable SSD 1TB||SDSSDE80-1T00-A25||1000||$190||0.19|
|SanDisk Extreme Portable SSD v2 1TB||SDSSDE61-1T00||1000||$240||0.24|
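The price-per-GB column follows directly from the street prices. As a quick illustration (using a subset of the prices listed above, all 1000 GB models):

```python
# Street prices (USD) for a subset of the 1 TB models from the table above.
street_price_usd = {
    "ADATA SE800": 130,
    "Crucial Portable SSD X8": 150,
    "WD My Passport SSD (2020)": 150,
    "SanDisk Extreme Pro Portable SSD": 190,
    "SanDisk Extreme Portable SSD v2": 240,
}
CAPACITY_GB = 1000  # all listed drives are 1 TB (1000 GB) models

# Price per GB, sorted from best to worst value
price_per_gb = {name: usd / CAPACITY_GB for name, usd in street_price_usd.items()}
for name, ppg in sorted(price_per_gb.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ppg:.3f}/GB")
```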
The WD My Passport SSD (2020) offers excellent value for money. The verdict on the SanDisk Extreme Portable SSD v2 can be given only after the street pricing is known. That said, even if it were to be $200, we can say that the performance and consistency are worth it.
After careful analysis of various aspects (including benchmark numbers, temperatures, power consumption, and pricing), it is clear that the WD My Passport SSD (2020) and the SanDisk Extreme Portable SSD v2 are both excellent choices for a wide variety of applications. However, as the adage goes - one can't have one's cake and eat it too. Both SSDs deliver their performance and consistency at the cost of increased power consumption and slightly higher temperatures. Optimizing for those metrics would mean losing out on the aspects that deliver instant gratification - getting transfers done quickly without any throttling. However, those very metrics might turn out to be of key concern in certain scenarios. Therefore, the right choice depends on the use-case. Based on our tests, the SanDisk Extreme Portable SSD v2 effectively serves the needs of content creators who need to use it in the field (thanks to its carabiner loop, IP55 rating, and 2m drop protection). The WD My Passport SSD (2020) is a better fit for the casual home / business user.
The SanDisk Extreme PRO Portable SSD (2019) continues to be our favorite / recommended portable SSD as long as it is not EOL-ed. The use of a high-end SSD with DRAM and a better thermal solution ensures that it surpasses the SanDisk Extreme v2 and the My Passport SSD across all metrics except power consumption. Despite the suggested retail price of $200 for the new 1TB Extreme v2, e-tailers are pricing it at $240. The premium is not justified as long as the $190 Extreme PRO (2019) is available in the market. Once the street price starts approaching the 20c/GB mark, the SanDisk Extreme Portable SSD v2 will climb to the second spot. The WD My Passport SSD (2020) is recommended for casual users who do not need the IP55 rating and want an economical yet stylish portable drive. Even if its performance consistency doesn't match the Extreme v2's, its value and performance proposition is quite strong compared to the other options in the market.
The SanDisk Extreme PRO Portable SSD released in 2019 has been one of the top performers in the external flash storage market segment. Putting a high-end WD Black SN750-class M.2 PCIe 3.0 x4 NVMe SSD behind an ASMedia ASM2362 bridge helped it deliver speeds of up to 1050 MBps when used with USB 3.2 Gen 2 ports. One of the key differentiators was its performance consistency under heavy sequential writes, with no SLC caching effects (write speed cliff) or thermal throttling. Coupled with its handy industrial design (in particular, the carabiner loop integrated into an easy-to-carry casing), it has been on top of our list of recommended USB 3.2 Gen 2 external SSDs since the beginning of this year. Along with the 2019 PRO, SanDisk also offered the lower-priced 2018 SanDisk Extreme - a SATA SSD behind a USB 3.2 Gen 1 bridge. Despite thermal throttling under stress, the performance and price made the 2018 Extreme Portable SSD an attractive option for casual users.
Today, Western Digital is upgrading both the SanDisk Extreme and Extreme PRO Portable SSD models with a v2 suffix. Accompanying that is an approximate doubling of the peak bandwidth numbers for both models. In short, the SanDisk Extreme Portable SSD v2 is now a USB 3.2 Gen 2 device with speeds of up to 1050 MBps, while the SanDisk Extreme PRO Portable SSD v2 is a USB 3.2 Gen 2x2 device with speeds of up to 2000 MBps.
A detailed review of the 1TB model of the SanDisk Extreme Portable SSD v2 (along with the recently introduced WD My Passport SSD (2020) 1TB version) is available here. A DRAM-less SN550-class NVMe SSD is used behind an ASMedia bridge, allowing for lower power consumption and a relaxed thermal design compared to the 2019 Extreme PRO Portable SSD (which also claimed read/write speeds of up to 1050 MBps).
The Extreme PRO Portable SSD v2 uses the WD Black SN730E SSD, which is essentially the OEM version of the SN750, but upgraded to 96L BiCS 4 3D NAND flash, and has firmware tweaked for use with external drives. The PCIe 3.0 x4 NVMe interface allows the Extreme PRO Portable SSD v2 to support read/write speeds of up to 2000 MBps. The forged aluminum heat sink used in the 2019 SanDisk Extreme PRO is carried over to prevent thermal throttling. The SSD also has a 2m drop protection. New to the v2 SSDs is official compatibility with a range of USB Type-C smartphones.
The USB 3.2 Gen 2x2 port in the PRO v2 is enabled by the ASMedia ASM2364 bridge chip. The SanDisk Extreme PRO Portable SSD v2 is not the first USB 3.2 Gen 2x2 drive from the Western Digital stable. Late last year, WD had started selling the WD_BLACK P50 with similar advertised speeds. Targeting the gaming market, the WD_BLACK P50 had a unique industrial design and utilized a SN750E internal SSD (64L BiCS 3 3D NAND flash) - a version of the SN750 with tweaked firmware.
Despite the appearance of USB 3.2 Gen 2x2 ports in certain high-end motherboards, and the announcement of several USB 3.2 Gen 2x2 PCIe cards, the uptake of this high-speed interface in the computing world has been limited. Announced cards (such as the GIGABYTE GC-USB 3.2 GEN2X2) are yet to become available in the retail market. In fact, only the WD_BLACK P50 and the Seagate Firecuda Gaming SSD appear to be USB 3.2 Gen 2x2 client devices from well-known manufacturers available for purchase today. They are now joined by the SanDisk Extreme PRO Portable SSD v2. Western Digital indicated that they expect increased adoption of USB 3.2 Gen 2x2 next year on the host side. Given that the Tiger Lake platform doesn't support USB 3.2 Gen 2x2, it is going to be interesting to watch how this plays out in the near future. We will have some additional comments on the state of this market segment and hands-on reviews of some USB 3.2 Gen 2x2 gear in the coming days.
Both of the drives being introduced today have an operating temperature range of 0 to 45C, and come with an IP55 ingress protection rating. The drives are also getting hardware encryption support, which brings them on par with the WD My Passport SSD as far as security is concerned. Western Digital also indicated that their confidence in BiCS 4 flash is allowing them to upgrade the warranty on the SanDisk Extreme model from the usual 3 years to 5 years. On the pricing front, the USB 3.2 Gen 2x2 Extreme PRO Portable SSD v2 is priced at $300 and $500 for the 1TB and 2TB versions. The Extreme Portable SSD v2 has a suggested retail price of $120, $200, and $355 for the 500GB, 1TB, and 2TB versions respectively.
Intel's Tiger Lake launch was focused on ultrabooks and notebooks, as various SKUs with TDPs ranging from 7 to 28W were launched. The performance of Intel's low-power parts (U- and Y-series) has been good enough to land them inside small and ultra-compact form-factor systems. These systems have become a big hit in the market (not least, Intel's own NUC systems) since they gained prominence in the early 2010s. Vendors such as ASRock, ASUS, ECS, and GIGABYTE also jumped on this bandwagon to market 'NUCs' under their own branding. GIGABYTE was one of the early ones to do so with their BRIX series of mini-PCs. These SFF and UCFF systems find applications in multiple areas including content creation, productivity, and gaming, as well as embedded systems applications such as digital signage.
Intel's Tiger Lake-based NUCs (Panther Canyon and Phantom Canyon) are an open secret in tech circles. ASRock Industrial's Tiger Lake NUCs such as the NUC BOX-1165G7 have also been hinted at in Intel's marketplace - a retail follow-up to the embedded market-focused iBOX 1100 and NUC 1100 solutions. GIGABYTE, however, became the first vendor to officially announce Tiger Lake-based mini-PCs targeting the retail market with the launch of the GIGABYTE BRIX PRO. Three models (BSi3-1115G4, BSi5-1135G7, and the BSi7-1165G7) are being introduced. Their specifications are summarized in the table below.
|GIGABYTE BRIX PRO (Tiger Lake-U) Lineup|
|CPU||Intel Core i3-1115G4
1.7 - 4.1 GHz (3.0 GHz)
12 - 28 W (28W)
|Intel Core i5-1135G7
0.9 - 4.2 GHz (2.4 GHz)
12 - 28 W (28W)
|Intel Core i7-1165G7
1.2 - 4.7 GHz (2.8 GHz)
12 - 28 W (28W)
|GPU||Intel® UHD Graphics for 11th Gen Intel® Processors (48EU) @ 1.25 GHz||Intel® Iris® Xe Graphics (80EU) @ 1.3 GHz||Intel® Iris® Xe Graphics (96EU) @ 1.3 GHz|
|DRAM||Two DDR4 SO-DIMM slots
Up to 64 GB of DDR4-3200 in dual-channel mode
|Storage||SSD||1x M.2-2280 (PCIe 4.0 x4 (CPU-direct))
1x M.2-2280 (PCIe 3.0 x4 or SATA)
|DFF||1 × SATA III Port (for SATA DOM? No space for 2.5-inch drive?)|
|Wireless||Intel Wi-Fi 6 AX201
2x2 802.11ax Wi-Fi + Bluetooth 5.1 module
|Ethernet||1 × GbE port (Intel I219-V)
1 × 2.5 GbE port (Intel I225-V)
|USB||Front||4 × USB 3.2 Gen 2 Type-A|
|Rear||2 × USB 3.2 Gen 2 Type-A|
|Thunderbolt||1 x Thunderbolt 4 (Type-C Rear Panel)|
|Display Outputs||4 × HDMI 2.0a
1 × DisplayPort 1.4 (using Thunderbolt 4 Type-C)
(Only four simultaneous display outputs are supported)
|Audio||1 × 3.5mm audio jack (Realtek ALC255)|
|Warranty||Typical, varies by country|
|Dimensions||Length: 196.2 mm
Width: 140 mm
Height: 44.4 mm
The Tiger Lake-based BRIX PRO eschews the NUC form-factor (approx. 4"x4" / 100mm x 100mm) for the 3.5" single-board computer form-factor popular in embedded markets. The motherboard's actual dimensions are 5.75" x 4" (146mm x 102mm), and the system's dimensions come in at 196.2mm x 140mm x 44.4mm. At around 1.22L in volume, it is still a compact machine. The Tiger Lake-U processors in the BRIX PRO units are configured to run at their maximum cTDP-up of 28W.
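As a quick check, the chassis volume follows from the dimensions in the spec table:

```python
# Chassis dimensions of the Tiger Lake BRIX PRO (mm), from the spec table above.
length_mm, width_mm, height_mm = 196.2, 140.0, 44.4

# Convert mm^3 to liters (1 L = 1,000,000 mm^3)
volume_liters = (length_mm * width_mm * height_mm) / 1_000_000
print(f"{volume_liters:.2f} L")
```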
One of the unique aspects of the units is the availability of 4x HDMI 2.0 ports - each capable of driving a 4Kp60 display. In addition, a Thunderbolt 4 port (with a display output capability of 8Kp60) is also available. The system can drive four of those five display outputs simultaneously. Segments of the chassis are metallic, allowing for the Wi-Fi antenna to magnetically clasp to it.
The Tiger Lake-U processor can be configured with different PL2 values depending on the power delivery circuitry. GIGABYTE believes that the robustness of its board design, coupled with the 135W external power adapter, can sustain upwards of 70W for the PL2 setting.
Retail availability of the new BRIX PRO units is expected in November 2020. Pricing hasn't been announced yet. GIGABYTE also hinted at the possibility of UCFF BRIX systems sporting Tiger Lake-U processors reaching the market soon.
Functional safety is an area of computing that is becoming ever more important as we see more and more embedded technologies integrated into our daily lives. Arm's Automotive Enhanced (AE) line of IP launched back in 2018 with the Cortex-A76AE.
Fast-forward two years, and it's time for a new set of AE IP: Arm is now introducing the Cortex-A78AE, bringing a higher-performance CPU core, and for the first time an AE-class GPU and ISP in the form of the Mali-G78AE and Mali-C71AE. With the move, Arm also says it is diversifying beyond the automotive sector, widening its scope to industrial and other autonomous systems.
High-performance bus-powered direct-attached storage units have become very popular, thanks to the advent of high-speed interfaces and technologies such as USB 3.2 Gen 2, NVMe, and 3D NAND flash. Flash manufacturers such as Western Digital / SanDisk, Crucial (Micron), and Samsung produce and market external solid-state drives (SSDs) under their own brand. In addition, manufacturers such as Patriot Memory also buy flash in the open market and bring their own value additions to the external SSD market. Today, we are taking a look at Patriot's PXD external SSD.
MinisForum has been making some interesting moves in the last few months with their computing platforms, ranging from the DMAF5 based on the Ryzen 3000H-series SoCs to the Ice Lake-based DeskMini X35G. In early August, they reached out to us to pitch their first mini-PC sporting a discrete GPU - the EliteMini H31G. Intrigued by their claims of being able to cram a 65W CPU and a 75W GPU into a chassis measuring approximately 15cm x 15cm x 6cm, we accepted their offer of a review unit to put through our standard mini-PC benchmarking process. The sample arrived last week, and while the full review is still in the works, we have a few initial thoughts to share.
The EliteMini H31G is a compact mini-PC, smaller than other dGPU-equipped mini-PCs we have reviewed before such as the ASRock DeskMini Z370 GTX 1060 and the Ghost Canyon NUC. A look at the unit reminded me of the GIGABYTE BRIX Gaming BXi5G-760 we reviewed back in 2014. Our review sample came with a Core i5-9500F 65W CPU pre-installed, along with a GTX 1050 Ti MXM card with a TDP of 75W. As the naming of the kit indicates, the board uses an H310 PCH.
The system uses a unique integrated dual-fan cooling system - the first time we have seen this type of design in a mini-PC. The dual-fan configuration in the BRIX Gaming kit was just two fans crammed into one end of the case, while the cooling in the H31G appears to be much better thought out. Obviously, these are early days, and I am not passing judgement on its effectiveness before putting it through our thermal stress test. The BRIX Gaming kit had to cool a 47W CPU and a 100W GPU, while the H31G handles a 65W CPU and a 75W GPU. There was a reason for GIGABYTE to redesign its Gaming BRIX units from scratch for the newer iterations - so I am really looking forward to seeing how the H31G handles thermal stress from both the CPU and GPU simultaneously. Readers interested in a full breakdown of the cooling system should view the official launch video of the EliteMini H31G.
Our review sample shipped with a single 8GB DDR4 SODIMM, and a 256GB Kingston M.2 2280 NVMe drive. The system's performance is bound to be better with both SODIMM slots occupied, but we are proceeding with the review of the supplied configuration as-is (it happens to be one of the pre-built configurations available for purchase). The gallery below shows some of the internals of the system, and a size comparison against other mini-PCs with dGPUs.
Based on our experience with setting up the system and our first round of benchmarking, we found some minor annoyances and a few interesting aspects:
The barebones version (without a CPU, but with the MXM GTX 1050 Ti module) is priced at $399. Our review configuration (Core i5-9500F, 8GB RAM, 256GB SSD) is priced at $629. The main challenge for MinisForum is that the system design, though unique, carries technology that is almost 3 years old at this point - a Kaby Lake or Coffee Lake desktop CPU, along with a Pascal GPU. Fortunately, the system is priced accordingly. The ASRock Deskmini Z370 GTX 1060 was launched at $800 - the MinisForum EliteMini H31G is being marketed for half of that. It would have been preferable for MinisForum to use a more modern CPU and GPU for this project. That said, having an effective working system based on older components might just give them the impetus to use the design with newer CPUs and GPUs.
The fall rush of laptop announcements is upon us, thanks to Intel announcing their latest 11th generation Core processor, codenamed Tiger Lake, and packaged as part of the Intel Evo program. Today Lenovo is announcing the new ThinkPad X1 Nano, featuring Intel’s Evo platform, as well as a few tweaks to the traditional ThinkPad design.
|Lenovo ThinkPad X1 Nano|
|CPU||Up to 11th Gen Intel Core i7|
|Memory||Up to 16 GB LPDDR4x|
|Display||13-inch 2160x1350 Dolby Vision
100% sRGB 450-nit
With or without Touch
|Storage||Up to 1 TB PCIe NVMe|
|Wireless||Intel AX201 Wi-Fi 6
LTE 5G CAT20
LTE 4G CAT9
|I/O||Thunderbolt 4 x 2
|Webcam||IR with Human Presence|
Up to 65-Watt Type-C Adapter
|Dimensions||292.8 x 207.7 x 13.87 mm
11.5 x 8.15 x 0.55 inches
|Weight||Starting at 962 grams / 2.12 lbs|
|Starting Price (USD)||$1,399|
Powering the new ThinkPad X1 Nano will be Intel’s newest 10 nm design, Tiger Lake, with up to a Core i7 processor. That also means it will feature the full 96 Execution Unit Intel Iris Xe graphics, and up to 16 GB of LPDDR4x memory. The X1 Nano will offer up to 1 TB of PCIe storage, and the 48 Wh battery is rated up to 17.3 hours.
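That 17.3-hour rating implies a very low average platform power draw. A quick sketch of the arithmetic (battery-life ratings of this kind typically assume light workloads and reduced brightness):

```python
# Battery capacity and rated runtime from Lenovo's spec sheet.
battery_wh = 48.0
rated_hours = 17.3

# Average whole-platform power draw implied by the rating
avg_platform_draw_w = battery_wh / rated_hours
print(f"~{avg_platform_draw_w:.2f} W average draw")
```

An average draw under 3W for the whole platform is only plausible at near-idle, which is why real-world battery life usually lands well short of such ratings.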
Lenovo has finally made the jump back to 16:10 displays, with the X1 Nano featuring a 13-inch panel with a somewhat odd, but effective, 2160x1350 display. This “2K” display is a nice step up over a more traditional 1920x1200, coming in at 195 pixels-per-inch. It may seem like a small jump over the 170 pixels-per-inch of the 1920x1200, but will allow 200% scaling to work perfectly. It also won’t impact the battery life as dramatically as a “4K” panel would, so it seems like a nice balance. As seems to be the norm with Lenovo displays of late, this 100% sRGB panel features Dolby Vision, and can be had with or without touch.
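The pixel-density numbers quoted above are straightforward to verify. Assuming a 13.0-inch diagonal for the X1 Nano panel and 13.3 inches for the traditional 1920x1200 panel, the results land within a pixel-per-inch of the quoted figures:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch for a panel of the given resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

nano_ppi = ppi(2160, 1350, 13.0)       # ThinkPad X1 Nano "2K" panel
classic_ppi = ppi(1920, 1200, 13.3)    # a traditional 13.3-inch 1920x1200 panel
print(f"{nano_ppi:.0f} PPI vs {classic_ppi:.0f} PPI")
```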
The new laptop is also light. The ThinkPad X1 Nano weighs in at just 2.12 lbs. The device measures in at 11.5 x 8.15 x 0.55 inches, so it is not the thinnest, nor the lightest, but it is close.
There is plenty of connectivity as well, with Lenovo outfitting the X1 Nano with two Thunderbolt 4 ports. Not only does Thunderbolt 4 offer more performance, security, and features than Thunderbolt 3, it also guarantees full support for data, power, and video on every port, unlike USB, which has a long list of optional features.
Lenovo is implementing Intel’s Wi-Fi 6 solution, which is of course part of the Intel Evo platform, but they are enhancing that with LTE 5G CAT20 for those that need network on the go.
As a proper ThinkPad, the X1 Nano also takes security seriously, with a dTPM 2.0 chip, IR camera and Match on Chip fingerprint reader for Windows Hello logins, and a ThinkShutter camera cover.
The new X1 Nano will be available in Q4 2020, starting at $1399.
Announced back at CES, Lenovo’s ThinkPad X1 Fold is now available for preorder. Combining a foldable 13.3-inch OLED display with Intel’s Lakefield Hybrid CPU, this Always Connected PC ushers in a new form factor for the PC, mirroring some of the development on the smartphone side of the fence.
|Lenovo ThinkPad X1 Fold|
|CPU||Intel Core Processor with Intel Hybrid Technology|
|Memory||8 GB LPDDR4X-4267|
|Display||13.3-inch Flexible OLED
4:3 aspect ratio
95% P3 Gamut
|Storage||Up to 1 TB NVMe M.2 2242|
|Wireless||Intel AX201 Wi-Fi 6
5G sub 6GHz with 4G LTE CAT20 coverage
|I/O||1 x USB Type-C Gen 1
1 x USB Type-C Gen 2
1 x SIM
|Webcam||5MP HD RGB + IR Camera|
65-Watt Type-C Adapter
|Dimensions||299.4 x 236.0 x 11.5 mm open
158.2 x 236.0 x 27.8 mm folded
|Weight||999 grams / 2.2 lbs|
|Starting Price (USD)||$2,499|
The 13.3-inch flexible OLED display features a 2048 x 1536 resolution, 300-nit brightness, and coverage of 95% of the P3 color gamut. It can be used open as a single 13.3-inch tablet, or with each 9.6-inch half of the display working separately.
The ACPC features 5G connectivity, as well as Wi-Fi 6, and offers two USB Type-C ports, one at Gen 1 speeds and the other at Gen 2. The foldable PC offers USB-C docking, and of course supports an active pen.
The final dimensions are 299.4 x 236 x 11.5 mm open, and 158.2 x 236 x 27.8 mm when closed.
If you want to be one of the first to own a foldable PC, it is perhaps unsurprising that the X1 Fold is going to cost. A lot. The new X1 Fold starts at $2499 USD with preorders starting today at Lenovo.com.