One of the big reasons why faster-than-GbE networks have not gained traction in the consumer space is a lack of appropriate network switches. 10 GbE switches are generally aimed at businesses, and they are priced accordingly. Fortunately, the situation is beginning to change: Buffalo Japan has introduced a new six-port switch, featuring two 10 GbE ports and four 2.5 GbE ports, that is designed for home use.
Buffalo’s LXW-10G2/2G4 Giga Switch is aimed at homes with high-speed optical Internet connectivity as well as multiple computers or NAS units with 2.5 GbE or 10 GbE network adapters and/or Gigabit-class Wi-Fi. The switch can automatically prioritize 10 GbE connectivity and also supports loop detection to optimize a network’s configuration and performance. Besides the switch, Buffalo also offers its WXR-5950AX12 10G Wi-Fi router as well as its LUA-U3-A2G 2.5 GbE USB adapter for PCs.
Buffalo’s LXW-10G2/2G4 switch will be available starting from mid-December exclusively in Japan, but nothing stops the company from starting sales of the product elsewhere. The price of the switch will be approximately ¥34,000 including taxes ($312 with VAT, $283 w/o VAT), which is quite expensive even by Japanese standards. Though at least for the time being, it's a rather unique offering in the consumer switch space; similar switches with a mix of ports have generally combined 10 GbE with pure GbE, so the use of 2.5 GbE ports makes for an interesting development.
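For the curious, the dollar figures follow from a straightforward conversion. The snippet below is a quick sketch that assumes an exchange rate of roughly ¥109 per US dollar (approximately the rate at the time) and Japan's 10% consumption tax:

```python
# Convert the LXW-10G2/2G4's Japanese price to US dollars.
# Note: the exchange rate here is an assumption (~109 JPY/USD).
JPY_PER_USD = 109.0
CONSUMPTION_TAX = 0.10          # Japan's consumption tax rate

price_jpy = 34_000              # tax-inclusive retail price
price_jpy_ex_tax = price_jpy / (1 + CONSUMPTION_TAX)

print(round(price_jpy / JPY_PER_USD))         # with tax: ~312
print(round(price_jpy_ex_tax / JPY_PER_USD))  # without tax: ~284
```

The exact ex-tax figure shifts by a dollar or two depending on the precise exchange rate used.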
Rambus has developed a comprehensive PCIe 5.0 and CXL interface solution for chips built using 7 nm process technologies. The interface is now available for licensing by SoC designers and will enable them to bring PCIe 5.0/CXL-supporting hardware to the market faster.
Rambus’ PCIe 5.0 solution includes a controller core – originally developed by Northwest Logic, which Rambus recently acquired – that is backwards compatible with PCIe 2.0, PCIe 3.0, and PCIe 4.0, as well as a PHY that also supports CXL. The solution supports a 32 GT/s per-lane data transfer rate and is designed for advanced 7 nm FinFET process technologies. Besides the IP itself, Rambus will also offer design, integration, and support services to speed up the development process.
Rambus believes that its PCIe 5.0 solution will be used by developers of processors for AI, HPC, storage, and 400 GbE networking applications. Considering that many of the upcoming accelerator chips will use the CXL interface, it is important that Rambus’ PHY also supports the new technology.
Rambus did not disclose how much its PCIe 5.0 solution will cost its licensees.
Over the second-half of this year, AMD has been gearing up to cascade their latest Radeon graphics architecture to successively cheaper and more mainstream products. Last month we saw the announcement of the mid-tier Radeon RX 5500 and RX 5500M series for desktop and mobile respectively. And now this morning the company is adding a third tier of Radeon mobile discrete graphics to their lineup, with the addition of the Radeon RX 5300M series.
Revealed alongside today’s 16-inch MacBook Pro announcement – with Apple once again getting their own exclusive Radeon Pro SKUs – the 5300M represents the further proliferation of AMD’s Radeon RDNA architecture. Based on AMD’s Navi 14 GPU, AMD is tapping their (currently) smallest Navi chip to offer a lower performing and presumably lower priced graphics adapter for laptop use.
**AMD Radeon RX Series Mobile Specification Comparison**

| | AMD Radeon RX 5300M | AMD Radeon RX 5500M | AMD Radeon Pro Vega 20 | AMD Radeon RX 560X |
|---|---|---|---|---|
| Throughput (FP32) | 4.1 TFLOPS | 4.6 TFLOPS | 3.3 TFLOPS | 2.6 TFLOPS |
| Memory Clock | 14 Gbps GDDR6 | 14 Gbps GDDR6 | 1.5 Gbps HBM2 | 7 Gbps GDDR5 |
| Memory Bus Width | 96-bit | 128-bit | 1024-bit | 128-bit |
| Typical Board Power | ? | 85W | ? | ? |
| Architecture | RDNA (1) | RDNA (1) | Vega | GCN 4 (Polaris) |
| GPU | Navi 14 | Navi 14 | Vega 12 | Polaris 11 |
| Launch Date | Q4 2019 | Q4 2019 | 10/2018 | 04/2018 |
At a high level, the 5300M is a further cut-down version of Navi 14. AMD has held the number of CUs constant at 22 (already a cut-down amount from a full chip), and instead they’ve trimmed the memory bus. By disabling one of the 4 memory partitions, the resulting configuration ends up with a 96-bit GDDR6 memory bus, and a proportional drop in memory bandwidth. The end result is that the 5300M gets a maximum of 168GB/sec of memory bandwidth, down from 224GB/sec in the 5500M. This also limits the card to 3GB of VRAM, further differentiating the 5300M from the 5500M and lowering the total bill of materials cost for OEMs.
Otherwise, while the processing core of the GPU hasn’t been cut back further in terms of hardware, the 5300M does ship with noticeably lower clocks. The game clock for the new adapter is just 1181MHz, 267MHz lower than on the 5500M. The net result is that, on paper, shading, texturing, compute, and ROP performance should all be around 82% of the 5500M’s, not counting the hit from the reduced memory bandwidth.
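For readers who want to check the math, the bandwidth and relative-performance figures fall straight out of the published specs; here's a quick sketch:

```python
# GDDR6 bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte
def mem_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth(14, 96))     # RX 5300M: 96-bit bus  -> 168.0 GB/s
print(mem_bandwidth(14, 128))    # RX 5500M: 128-bit bus -> 224.0 GB/s

# With the same 22 CUs, paper throughput scales with the game clock
print(round(1181 / 1448 * 100))  # 5300M vs. 5500M -> 82 (%)
```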
Meanwhile, AMD unfortunately isn’t disclosing TDPs for the new mobile part. So it’s not clear how much lower (if at all) the power consumption of the 5300M is. The drop in clockspeeds as well as the narrower memory bus should help to reduce power consumption, however there’s a wildcard in how much AMD needs to bin for power for their mobile parts.
At any rate, like the Radeon RX 5500M series, the Radeon RX 5300M is officially launching this quarter. With Apple seemingly having first dibs on the Navi 14 silicon, expect to see it show up in other laptops soon.
Seagate has introduced its first Thunderbolt 3 docking solution designed specifically for laptop gamers. Seagate’s FireCuda Gaming Dock includes a hard drive, an M.2 SSD slot, and a variety of essential connectors, many of which are in short supply on today’s ultra-compact notebooks.
Seagate’s FireCuda Gaming Dock (STJF4000400) is designed to expand the storage capacity of today’s laptops and add ports required to attach various peripherals. The device comes with a built-in 4 TB hard drive, and also has an M.2 PCIe 3.0 x4 slot for an additional SSD to boost performance and add capacity if needed. In addition, the docking station offers a GbE port, five USB 3.1 Gen 2 ports, an extra Thunderbolt 3 port to daisy chain more TB3 devices, a DisplayPort 1.4 output, a headphone output, and an audio-in connector.
In a bid to appeal to gamers, the FireCuda Gaming Dock has integrated RGB LEDs that can be controlled using the company’s own software.
By launching its FireCuda Gaming Dock, Seagate is looking to address the market of modern ultra-thin notebooks from the outside-in, as fewer and fewer laptops come with user-accessible/upgradable storage. By and large, Seagate is banking on the fact that mobile PCs generally lack space to store games, and may not have enough ports for a full suite of peripherals. Therefore, integrating storage into a TB3 docking station makes a lot of sense.
Seagate’s FireCuda Gaming Dock will be available later this month for $349.99.
Alongside today's 16-inch MacBook Pro announcement, Apple has also confirmed that their long-awaited redesign of the Mac Pro, which had been due this fall, will be launching next month.
Apple’s upcoming Mac Pro desktop will be the company’s highest-performing desktop in years and will address the key issues of the cylindrical Mac Pro, namely insufficient graphics performance as well as limited expandability. The Mac Pro systems will be based on Intel’s Xeon W processors with up to 28 cores, paired with up to 1.5 TB of DDR4-2933 memory as well as up to 4 TB of solid-state storage (using two SSDs managed by the T2 controller). To offer its customers whopping compute and graphics performance, Apple will equip the Mac Pro with up to two AMD Radeon Pro Vega II Duo graphics cards in the MPX form-factor, with a total of 16,384 stream processors (4,096 SPs per GPU) and 128 GB of HBM2 memory (32 GB per GPU). Furthermore, the systems may be equipped with the Afterburner ProRes and ProRes RAW FPGA-based accelerator card, or any other accelerator that is compatible with the PCIe 3.0 bus (granted, the system has 64 PCIe lanes). In fact, with a 1.4 kW PSU, the new Mac Pro can accommodate quite a lot of options.
With the new Mac Pro workstation offering massive performance, its owners will naturally benefit from new high-resolution displays, and here Apple has a unique proposition with its Pro Display XDR, which is also due out in December. The 32-inch monitor is based on a 10-bit IPS panel and features a 6016×3384 resolution, 1,000 nits sustained (1,600 nits peak) brightness, and a 1,000,000:1 contrast ratio thanks to its Mini-LED backlighting.
Apple will start taking orders on its new Mac Pro as well as the Pro Display XDR in December. The Mac Pro workstation will start at $5,999 for a version with an eight-core processor. The standard version of the monitor will be priced at $4,999, whereas a model with nano-texture glass will be priced at $5,999. The display will come without a stand or VESA mount adapter; these will have to be acquired separately for $999 and $199, respectively.
With rumors swirling for the last few months about a new high-end MacBook Pro laptop from Apple, the company this morning is making those rumors a reality, announcing and launching the 16-inch MacBook Pro. Replacing Apple’s 15-inch model, the new laptop is a half-step of sorts for Apple to improve their flagship professional laptop, addressing some long-simmering critiques about the laptop, but not radically overhauling the unibody-built, Touch Bar-equipped laptop design that Apple has used since 2016. The end result is a laptop that’s an incremental improvement over the 15-inch models, with Apple making the new laptop a bit larger, a bit more powerful, and significantly overhauling their problematic Butterfly Switch keyboard.
Apple’s MacBook Pro lineup of course needs no introduction. The company pioneered a lot of the design elements that have become common-place across the industry in premium laptops, including the ultrabook-like thin & light design, high-DPI (Retina) displays, and more. However, the most recent generation of models has been received with less enthusiasm, as Apple’s continued focus on thinness and soldered-down components has run headlong into traditional expectations for what a “professional” laptop should entail. And while Apple is not one for mea culpas, I don’t think there’s any doubt that the 16-inch MacBook Pro design is an effort to respond to some of the biggest criticisms about the previous 15-inch design.
Overall then, the new 16-inch MacBook Pro is not a radical departure from the 15-inch in terms of design. There are a few tells – small changes that you’ll spot if you know what to look for – but externally Apple has kept to the same unibody design that we’ve seen since the first wave of Touch Bar notebooks in 2016. So from the outside, the 16-inch laptop looks like an ever-so-slightly larger version of the 15-inch model.
And indeed, the laptop’s larger footprint sounds bigger than it actually is. While Apple called their previous laptop the 15-inch MacBook Pro, the actual screen was 15.4 inches diagonal, while this one is 16 inches flat. So with just a 0.6-inch increase in screen size and some slightly smaller bezels, the footprint of the 16-inch model is only 5% larger than the 15-inch model. It’s just enough to not be the same size as the previous MacBook Pro, but also not substantially larger à la the long-retired 17-inch model. Meanwhile the new model has bulked up just a bit in weight and height as well; at 2 kg, it’s 0.17 kg heavier, and Apple has added another 0.7 mm to the height, bringing it to 16.2 mm. Overall this means that although the new laptop is decidedly not identical to the 15-inch laptop it replaces, it’s very much a similar successor that’s meant to fit into the same role as the earlier model.
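Plugging the published dimensions into a quick calculation bears out the footprint claim:

```python
# Width x depth footprints of the two laptops, from Apple's dimensions (cm)
area_16 = 35.79 * 24.59   # 16-inch MacBook Pro
area_15 = 34.93 * 24.07   # 15-inch MacBook Pro

growth_pct = (area_16 / area_15 - 1) * 100
print(f"{growth_pct:.1f}%")   # ~4.7%, i.e. roughly 5% larger
```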
**MacBook Pro 15 & 16-Inch (Base Models)**

| Model | 2019 (16-inch) | 2019 (15-inch) | 2018 (15-inch) | 2017 (15-inch) |
|---|---|---|---|---|
| CPU | 6 CPU Cores | 6 CPU Cores | 6 CPU Cores | 4 CPU Cores |
| GPU | Intel UHD Graphics 630 + AMD Radeon Pro 5300M (4GB) | Intel UHD Graphics 630 + AMD Radeon Pro 555X (4GB) | Intel UHD Graphics 630 + AMD Radeon Pro 555X (4GB) | Intel HD Graphics 630 + AMD Radeon Pro 555 (2GB) |
| Display | 16" 3072 x 1920 IPS LCD | 15.4" 2880 x 1800 IPS LCD | 15.4" 2880 x 1800 IPS LCD | 15.4" 2880 x 1800 IPS LCD |
| Memory | 16GB DDR4-2666 | 16GB DDR4-2400 | 16GB DDR4-2400 | 16GB LPDDR3-2133 |
| SSD | 512GB PCIe SSD | 256GB PCIe SSD | 256GB PCIe SSD | 256GB PCIe SSD |
| I/O | 4x Thunderbolt 3 (supports DP1.2 & USB 3.1 Gen 2 modes), 3.5mm combo jack | 4x Thunderbolt 3 (supports DP1.2 & USB 3.1 Gen 2 modes), 3.5mm combo jack | 4x Thunderbolt 3 (supports DP1.2 & USB 3.1 Gen 2 modes), 3.5mm combo jack | 4x Thunderbolt 3 (supports DP1.2 & USB 3.1 Gen 2 modes), 3.5mm combo jack |
| Battery Capacity | 100 Wh | 83.6 Wh | 83.6 Wh | 76 Wh |
| Battery Life | 11 Hours | 10 Hours | 10 Hours | 10 Hours |
| Dimensions | 1.62 cm x 35.79 cm x 24.59 cm | 1.55 cm x 34.93 cm x 24.07 cm | 1.55 cm x 34.93 cm x 24.07 cm | 1.55 cm x 34.93 cm x 24.07 cm |
| Weight | 4.3 lbs (2.0 kg) | 4.02 lbs (1.83 kg) | 4.02 lbs (1.83 kg) | 4.02 lbs (1.83 kg) |
Headlining the new laptop is of course its 16.0-inch display. The 0.6-inch longer diagonal nets an 8% gain in total screen real estate, and Apple has scaled up the display resolution accordingly. The resulting 3072 x 1920 panel is just a bit denser than the old 15-inch panel – offering 226 PPI versus 220 PPI – however in practice I don’t expect the difference to be noticeable (if Apple were really looking to increase density, they would have needed to go to 4K or beyond). Otherwise the display is similar to the last generation’s, using an IPS panel with support for the P3 color space and a maximum brightness of 500 nits.
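The pixel density figures are easy to verify from the resolutions and panel sizes:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal resolution over diagonal panel size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(int(ppi(3072, 1920, 16.0)))   # 16-inch panel   -> 226
print(int(ppi(2880, 1800, 15.4)))   # 15.4-inch panel -> 220
```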
Going under the hood, even the small increase in the laptop’s volume is enough to make a big difference throughout, as Apple has essentially rolled back some of the changes they’ve made in previous generations to slim down the laptop. The biggest of these is, of course, the keyboard. While I remain a fan of the butterfly keyboard, there’s no getting around the fact that, despite Apple’s best efforts, it developed long-term reliability concerns, particularly with dust ingress. Even after 3 revisions, the issue apparently wasn’t entirely resolved, and so Apple is rolling back the butterfly mechanism entirely.
Replacing the butterfly is a more traditional switch mechanism – the very type Apple eliminated in the first place. Unfortunately, unlike with the butterfly switch’s launch, the company isn’t providing any handy diagrams of the new switch, so it’s hard to say just how “traditional” it really is. A big part of the reason Apple stopped using this switch style in the first place was that they weren’t happy with the stability of the keys, so I would assume they’ve found another way to address that issue.
At any rate, the new Magic Keyboard offers a whole lot more key travel than the previous butterfly mechanism. According to Apple, the newest keyboard offers a full 1mm of key travel, which is almost double the 0.55mm the older, butterfly-based keyboard offered. So reliability concerns aside, for anyone who wasn’t happy with the shallow key travel of the recent MacBook Pro models, the new keyboard may be more up your alley.
Apple is also using the occasion to (thankfully) make a couple of other key-related changes to their keyboard. First off, the new keyboard marks the return of a physical Escape key, shortening the Touch Bar just slightly to accommodate the key. The lack of a physical key has been one of the longstanding critiques about the Touch Bar MacBook Pro models, as it’s a heavily used key in some application environments. Meanwhile the arrow keys have been harmonized; Apple is now using a more traditional inverted-T setup, making all four keys half-height, rather than having the left and right keys being full-size keys while the up/down keys were half-height.
The other big change under the hood is total battery capacity. For their latest MacBook Pro, Apple is shipping the laptop with a 100 Watt-hour battery, which for practical purposes is the absolute largest battery they can ship in a laptop – any larger, and special permission is required to bring it on airplanes. Prior to this, Apple had shipped batteries as large as 99.5 Whr in the last revision of the 3rd generation MacBook Pro; however Apple cut the battery size beginning with the Touch Bar models, and has slowly been increasing it since then. Overall, the new battery is 16.4 Whr (~20%) larger.
The net result of the new battery is that Apple’s official battery life figures are being bumped up an hour, from 10 hours on the 15-inch MacBook Pro to 11 hours on the 16-inch laptop. Which, as I’m sure some of our readers will have noted by now, is a smaller change than you’d expect for a 20% jump in battery capacity. And to answer why that is, let’s talk about cooling.
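Running the numbers makes the gap plain:

```python
# Battery capacity grew ~20%, but Apple's rated runtime grew only 10%
old_whr, new_whr = 83.6, 100.0
old_hrs, new_hrs = 10, 11

print(round((new_whr - old_whr) / old_whr * 100))  # capacity: +20 (%)
print(round((new_hrs - old_hrs) / old_hrs * 100))  # runtime:  +10 (%)
# The shortfall implies a higher average power draw for the 16-inch model.
```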
Further taking advantage of their extra volume, Apple has refined their cooling design for the new laptop. While Apple’s previous design was arguably no slouch, it was definitely tuned to size and volume over performance; a sensible decision in 2016, but less so in 2019. Without recapping the entire history of Intel CPUs, in the last 4 years Intel has doubled the number of cores in their chips while only making moderate improvements in their power efficiency, and as a result the amount of power required to run the entire chip during heavy workloads has been creeping up. So the MacBook Pro hasn’t entirely kept up with the needs of a now 8 core processor running at full tilt.
To that end, Apple’s revised cooling design incorporates more of everything: more heatsink mass, more heatsink surface area, and more airflow running through those heatsinks. As a result, the 16-inch MacBook Pro can sustain 12 more watts of thermal dissipation, according to Apple. Unfortunately the company doesn’t quote the previous generation figure, so we don’t know what the total is, but 12 more watts is still significant in a 15/16-inch laptop, and should go a long way towards allowing the CPU and GPU to stay at higher turbo clockspeeds for longer.
And for the moment at least, Apple’s CPU cooling needs won’t be changing. The company refreshed their 15-inch MacBook Pro just this summer with Intel’s latest 6 and 8 core chips, ranging from the Core i7-9750H up to the Core i9-9980HK. These are still Intel’s best chips for high-end (45W TDP) laptops, so these are the same CPUs that are going into Apple’s newest hardware.
A much more significant change, however, is on the GPU side of matters. The new laptops continue to use discrete mobile GPUs, and for their new laptop Apple is tapping AMD’s newest Radeon Pro 5300M and Radeon Pro 5500M mobile GPUs. These are based on AMD’s 7nm RDNA architecture, offering significant gains in performance and power efficiency over the Radeon Pro 500 (Polaris) chips that they replace.
**AMD Radeon Mobile Specification Comparison**

| | AMD Radeon Pro 5500M | AMD Radeon Pro 5300M | AMD Radeon RX 5500M | AMD Radeon Pro Vega 20 |
|---|---|---|---|---|
| Throughput (FP32) | 4.0 TFLOPS | 3.2 TFLOPS | 4.6 TFLOPS | 3.3 TFLOPS |
| Memory Clock | 12 Gbps GDDR6 | 12? Gbps GDDR6 | 14 Gbps GDDR6 | 1.5 Gbps HBM2 |
| Memory Bus Width | 128-bit | 128-bit | 128-bit | 1024-bit |
| Architecture | RDNA (1) | RDNA (1) | RDNA (1) | Vega |
| GPU | Navi 14 | Navi 14? | Navi 14 | Vega 12 |
| Launch Date | 11/2019 | 11/2019 | Q4 2019 | 10/2018 |
As these are Apple-specific SKUs, AMD doesn’t offer a whole lot of details about the new chips, but the specifications are similar to AMD’s Radeon RX 5500M, which was announced last month. Notably, Apple’s Radeon Pro 5500M SKU has 2 more active CUs than the open-market Radeon RX 5500M; however overall throughput is lower, as Apple is surely running their SKUs at lower TDPs. Joining this is the Radeon Pro 5300M, which is the base SKU for the new MacBook Pro. On paper, this chip offers around 20% lower performance than the 5500M. Meanwhile Apple is also segmenting their GPU options by VRAM; while the 5300M comes with just 4GB of GDDR6 memory, the 5500M is available with both 4GB and a unique-to-Apple 8GB configuration.
Overall, Apple is claiming that the new GPUs offer a significant improvement in performance over Apple’s previous generation 15-inch laptop. The Radeon Pro 5300M should be 120% faster than the Radeon Pro 555X used in the last-generation base models, while the Radeon Pro 5500M is said to offer 80% more performance than the outgoing Radeon Pro 560X.
Shifting gears, Apple is also further ramping up their memory and storage capacity options for the new notebook. After moving to DDR4 and adding a 32GB option on the last-generation notebook, for the new 16-inch model Apple is doubling the DRAM options again, with the new top-end SKU now offering 64GB of DRAM. Better still, memory speeds are being increased slightly, from DDR4-2400 to DDR4-2666, so the new laptop gets 11% more memory bandwidth. Do note, however, that the base models still only come with 16GB of RAM, so that much hasn’t changed.
Apple is thankfully also increasing their storage options across the board. The base configuration for the MacBook Pro 16-inch includes 512GB of flash, up from 256GB in the last generation. Meanwhile the high-end has increased by the same factor as well, and as a result Apple is now offering laptops with a whopping 8TB of storage. While I don’t expect Apple to be alone here for too long, for practical purposes this is a new record for laptop storage; the only other laptops I know of that come close are large laptops that RAID together two SSDs. This, I suspect, is Apple flexing its muscles on the chip and integration front; producing their own SSD controller (as part of the T2) means they aren’t reliant on component suppliers in the same way that other vendors are.
But the extra RAM and storage options will set your wallet back significantly. Apple is charging $400 to go from 16GB to 32GB of RAM, and another $400 on top of that to make it 64GB. Meanwhile the 1TB storage upgrade runs $200, and 8TB of storage is a $2400 upgrade – the cost of a whole 16-inch MacBook Pro to begin with. So it goes without saying that Apple’s upgrade pricing remains ridiculously steep; Apple is charging around 3x to 4x what the DRAM and NAND cost on the spot market. Unfortunately everything is still soldered down as well, so potential MacBook buyers will have to decide up-front how much RAM and on-board storage they wish to pay for.
Rounding out the package, Apple has also given their audio system an upgrade. The embiggened laptop now incorporates a 6 speaker setup, as well as what Apple is calling “force‑cancelling woofers” to minimize the vibrations caused by their speakers. The platform has also added support for Dolby Atmos audio. Meanwhile, according to the company’s press release, they’ve also improved the triple microphone array to reduce hiss by 40% and improve the overall signal to noise ratio. Apple’s spec sheets also specifically mention beamforming, but it’s not clear how much of this is new and how much of it is simply something Apple wasn’t previously disclosing.
For all of the changes big and small in the 16-inch MacBook Pro, there are also several elements that aren’t changing from the previous 15-inch laptop; or at least aren’t changing enough for Apple to even bother noting the change in their specifications. As I noted towards the start, this laptop is something of a half-step forward, so not everything has received the same focus as the keyboard and cooling system.
The biggest surprise to me is that Apple hasn’t upgraded their wireless capabilities at all; the 16-inch MacBook Pro still ships with 802.11ac (Wi-Fi 5) wireless. For a long time, Apple was on the cutting-edge of wireless support, being among the first vendors to add support for new wireless technologies and standards. So it’s a surprise that a high-end laptop being launched by the company in late 2019 isn’t going to ship with Wi-Fi 6, which Apple already made available in their phones a bit earlier this year.
Meanwhile don’t expect any new wired I/O options either. Apple has retained the same 4 port Thunderbolt 3 setup, with 2 of the TB3-enabled USB-C ports on either side of the laptop. This is joined by a 3.5mm combo jack. Admittedly the internal plumbing of these laptops hasn’t changed – Apple would still need more PCIe lanes for more TB3 ports – but including just 4 USB-C ports has remained a friction point with some Apple users.
Don’t expect a better camera, either. Apple is still shipping the same 720p FaceTime HD camera as they have been for the past several years.
Wrapping things up, the launch of the 16-inch MacBook Pro is a hard launch for Apple. The company is taking orders now, with delivery dates as soon as this week. Meanwhile the laptop will be available at brick & mortar stores a bit later, which judging from Apple’s past launches is usually a week or two of lag time.
Pricing starts at $2399 for the base model, and $2799 for the upgraded model. Further build-to-order options go as high as $6099.
SK Hynix has been in the NAND and SSD business for a long time, but we haven't had the opportunity to review a drive with SK Hynix NAND in years. SK Hynix 3D NAND has been considerably more popular in mobile applications like smartphones and memory cards, and their client OEM SSDs are widespread but not sampled for review.
This year, SK Hynix decided to start competing directly in the retail SSD market by introducing the SK Hynix Gold S31 SATA SSDs. The Gold S31 showcases SK Hynix's vertical integration with the NAND, DRAM, controller and firmware all produced in-house.
ASUS and Google have joined forces to develop a new pair of ‘Tinker Board’ single board computers (SBCs). With a footprint not much larger than a credit card, the systems are designed for building small devices to run AI inference applications like image recognition.
The systems in question are the Tinker Edge T and Tinker Edge R. The former is based on the NXP i.MX8M with a Google Edge TPU chip that accelerates TensorFlow Lite, whereas the Tinker Edge R is powered by the Rockchip RK3399 Pro processor, whose built-in NPU accelerates machine learning workloads. The SBCs officially support the Android and Debian operating systems, though nothing prevents them from running other OSes.
Both Tinker Edge T and Tinker Edge R computers feature active cooling as well as mainstream I/O interfaces, including GbE, USB 3.0, and HDMI.
ASUS and Google position their Tinker Edge T and Tinker Edge R for various edge AI applications that have to be compact and very energy efficient.
ASUS plans to demonstrate its Tinker Edge T and Tinker Edge R SBCs at the IoT Technology 2019 conference in Japan, which kicks off on November 20. Pricing of the devices remains to be seen, but it will depend on volumes and other factors.
Portable SSDs with NVMe-based internal drives and a Thunderbolt 3 interface are the fastest bus-powered storage devices currently available in the market. We have been following this market since inception with a steady review of incoming products, while also experimenting with DIY models. The recent glut in the flash market with low-priced, yet high-performance, 3D TLC memory and the availability of Phison's Thunderbolt 3 external SSD reference platform has enabled vendors to put out relatively cheap high-capacity Thunderbolt 3 SSDs. Today, we take a look at two 2TB Thunderbolt 3 SSDs that do not break the bank - the OWC Envoy Pro EX Thunderbolt 3 (Standard Edition) and the recently launched Plugable TBT3-NVME2TB.
In a bit of offbeat monitor news, MSI has teased via its CES award announcements that it is working on an external display for notebooks that is geared towards gaming. Seemingly externalizing one of MSI's gaming laptop displays, the upcoming Optix MAG161 is a 15.6-inch monitor with a blistering 240 Hz refresh rate.
While few details are available today, it is safe to say that MSI’s Optix MAG161 uses the same 15.6-inch Full-HD 240 Hz LCD panel as the company’s high-end gaming notebooks (such as the GS65 Stealth). The display is said to feature wide viewing angles, so this is not a typical cheap TN-type unit with poor color reproduction.
The industry’s first 240 Hz external monitor for notebooks is said to be 5 mm thick and features HDMI and USB Type-C connectors to maintain compatibility with various laptops. The device will come with a special folio that will protect it during handling and will serve as a stand while in use.
The Optix MAG161 has already won a CES 2020 Innovation Award (ed: these get issued 2 months in advance of the show itself), so expect MSI to formally roll out the external monitor when CES 2020 kicks off in January.
Seagate is refreshing their consumer SSD lineup with 96-layer 3D NAND, and introducing a new flagship model: the FireCuda 520, Seagate's first PCIe 4.0 SSD. As with every other consumer PCIe 4.0 SSD so far, the Seagate FireCuda 520 uses the Phison E16 controller. The FireCuda 520 arrives several months after the first Phison E16-based SSDs, and Seagate has used the time to refine the product a bit. They haven't made any firmware tweaks that affect performance, but the FireCuda 520 does use a Seagate-specific firmware variant that includes some extra security measures to protect against firmware hacks, and it's been through some extra QA.
In terms of hardware, the FireCuda 520 is a pretty standard Phison E16 reference design using Toshiba/Kioxia BiCS4 96-layer 3D TLC NAND flash memory. The drive uses a two-sided black PCB with no heatspreader or heatsink included, since most new motherboards provide their own M.2 cooling solution. Seagate rates the FireCuda 520 for a generous one drive write per day over the five-year warranty period. MSRPs for the FireCuda 520 match its status as a flagship drive, and are almost twice the street prices of the cheapest PCIe 3.0-based NVMe SSDs.
**Seagate FireCuda 520 SSD Specifications**

| | 500 GB | 1 TB | 2 TB |
|---|---|---|---|
| Controller | Phison PS5016-E16 (PCIe 4.0 x4) | | |
| NAND Flash | BiCS4 96L 3D TLC NAND | | |
| Form-Factor, Interface | Double-sided M.2-2280, PCIe 4.0 x4, NVMe 1.3 | | |
| Sequential Read | 5000 MB/s | | |
| Sequential Write | 4400 MB/s | | |
| Random Read IOPS | 760k IOPS | | |
| Random Write IOPS | 700k IOPS | | |
| TCG Opal Encryption | No | | |
| Write Endurance | 900 TB | 1800 TB | 3600 TB |
| MSRP | $124.99 (25¢/GB) | $249.99 (25¢/GB) | $429.99 (21¢/GB) |
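The cents-per-gigabyte figures quoted alongside the MSRPs are simple to reproduce:

```python
# MSRP (dollars) to cents per decimal gigabyte
def cents_per_gb(msrp_usd: float, capacity_gb: int) -> float:
    return msrp_usd * 100 / capacity_gb

for price, cap in [(124.99, 500), (249.99, 1000), (429.99, 2000)]:
    print(f"{cap} GB: {cents_per_gb(price, cap):.1f} cents/GB")
# 25.0, 25.0, and 21.5 cents/GB (the last rounds down to 21)
```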
The rest of Seagate's consumer SSD lineup is also being updated. Their PCIe 3.0 SSD lineup was split evenly between the single-sided BarraCuda 510 (256GB and 512GB) and the double-sided FireCuda 510 (1TB and 2TB). Both models are sticking around under the same names, getting upgraded from 64L to 96L TLC while keeping the same Phison E12 controller. The new editions will also overlap in capacities: the BarraCuda 510 is gaining a 1TB model, and the FireCuda 510 is gaining a 500GB model. The FireCuda 510 is also switching from a blue to black PCB.
Seagate's BarraCuda SATA SSD is getting a more sensible update, gaining a new model designation (BarraCuda 120) to go with the new NAND.
Seagate has already started shipping the new SSDs to retailers, and product listings are already popping up on online retailer websites.
When LG introduced its 43UD79 monitor over two years ago, it quickly gained popularity among both gamers and office workers, mainly due to its combination of size, connectivity options, and image quality. Now the time has come to improve the product, and to that end LG has unveiled its successor, the 43UN700. The new display is positioned both for work and mainstream gaming; it adds support for HDR10, higher brightness levels, and 60 W USB-C power delivery.
Eurocom, the well-known purveyor of ultra-high-end laptops for gamers and professionals, has announced that it has started equipping its Sky X4C and Sky X7C desktop replacement notebooks with Intel’s Core i9-9900KS processor. In fact, the company is so confident in the design of those notebooks that it is even selling SKUs designed for overclocking the already highly-clocked CPU.
Eurocom’s Sky X4C is a 15.6-inch DTR laptop that uses Intel’s socketed desktop processors, NVIDIA’s high-end GeForce RTX/GTX graphics cards in MXM form-factor, two SO-DIMMs, two M.2-2280 PCIe 3.0 x4 slots for SSDs, two 2.5-inch storage devices, and vast connectivity capabilities. When configured appropriately, the Sky X4C can indeed offer the performance of a higher-end desktop, though at 3.4 kilograms (7.48 pounds) it lives up to the desktop replacement name. The Sky X7C is an even more powerful machine with more options that comes with a 17.3-inch display and weighs 3.9 kilograms (8.58 pounds). You can check our own review of that beast here.
Intel’s eight-core Core i9-9900KS processor runs at a base frequency of 4.0 GHz with a 127 W TDP, and can turbo boost all of its cores to 5.0 GHz so long as there's sufficient power and cooling. And if that's not enough, Eurocom will also be offering SKUs tuned for overclocking, which de-lid the CPU, install a more sophisticated cooling system (new thermal compound, new IHS), and unlock the BIOS. Needless to say, the latter will push the hot chip even harder, but Eurocom is certain that its cooling system will cope with that.
Eurocom’s X4C and X7C notebooks start at $2187 and $2166, respectively. When beefed up with something like Intel’s Core i9-9900KS, NVIDIA’s GeForce RTX 2080, multiple storage devices, and 128 GB of DDR4 memory on top of that, the price will push towards five digits.
Sharp and NHK have co-developed a new rollable 30-inch OLED display, with a design emphasis on keeping the screen thin and light. The prototype monitor will be showcased at a trade show in mid-November, but the company isn't yet talking about mass production.
The experimental 30-inch OLED display offers a 3840×2160 resolution and a 60 Hz refresh rate, all in a package that is just 0.5 mm thick and weighs 100 grams. The developers say that the screen can be rolled up into a 4 cm diameter cylinder, with the idea of being able to integrate the display into various appliances like furniture. Meanwhile, the technology itself could simplify production of foldable electronics, such as smartphones and tablets.
The flexible 30-inch OLED display is produced by Sharp at one of its factories in Japan using a vapor deposition method. The screen uses a film substrate and IGZO thin film transistors to drive OLED elements that use separate RGB subpixels. Meanwhile, NHK’s image processing technologies were used to improve brightness uniformity as well as the sharpness of moving objects.
Sharp and NHK will demonstrate their prototype rollable 30-inch OLED display at the Inter BEE 2019 trade show, which will take place in Chiba, Japan, from November 13 to November 15.
Sharp and NHK are not the only companies to develop a rollable OLED screen. Earlier this year LG demonstrated such a TV at CES 2019 and even started to sell its rollable Signature TVs in South Korea.
Microsoft has started sales of its HoloLens 2 mixed reality smart glasses. Aiming to be a significant upgrade over its predecessor in both field of view and overall performance, the second generation of the company's head-mounted computer is geared primarily towards enterprise organizations, where Microsoft and its partners are continuing to experiment with and develop practical applications for augmented reality in the workplace.
From a tech perspective, Microsoft’s HoloLens 2 isn't a radical departure from the original HoloLens in terms of features and basic design goals, but as a second-generation product Microsoft has put a lot of work into improving the technology and the user experience. HoloLens 2's visual system offers a 52º diagonal field-of-view (up from 34º) with a resolution of 47 pixels per degree on its MEMS display. Under the hood, it is based on Qualcomm’s Snapdragon 850 as well as Microsoft’s custom holographic processing unit (HPU) 2.0 chips, which should offer drastically higher performance than the previous-generation hardware. Finally, the product features better ergonomics and a revamped interaction model with full-blown hand tracking and improved visuals.
As noted above, the HoloLens 2 is still focused primarily on businesses and organizations that can take advantage of the device in their workflows. Microsoft will continue to support the original HoloLens, but customers can now deploy the improved version.
Microsoft offers three ways to buy its HoloLens 2. The stand-alone HoloLens 2 device with Windows 10 Holographic is available from select resellers for $3,500. The HoloLens 2 Development Edition, which bundles Windows 10 Holographic, a $500 credit for Azure (including mixed reality services), and pre-installed Unity Pro & PiXYZ Plugins (with a three-month license), can be purchased for $3,500 (or $99 per month). Finally, there is the HoloLens 2 with Remote Assist edition for enterprises, which comes with Windows 10 Holographic and Dynamics 365 Remote Assist and can be had for $125 per user per month.
Nixeus this week took the wraps off its latest curved ultrawide NX-EDG34 gaming display, which blends together a large size, a WQHD resolution, a 144 Hz maximum refresh rate, and AMD’s FreeSync dynamic refresh rate technology. At present, only a few monitors can boast the same combination of features that the EDG34 has to offer, so it will be in a rather unique position when it becomes available.
The Nixeus NX-EDG34 display is built around a curved VA panel with a 3440×1440 resolution and a 21:9 aspect ratio, and features 350 nits typical brightness (400 nits in HDR mode), a 3000:1 contrast ratio, 178°/178° viewing angles, and a 4 ms GtG response time. In terms of refresh rates, the monitor's maximum rate is 144 Hz, and in variable refresh mode it operates in a 48 Hz – 144 Hz range. The LCD can display 16.7 million colors and supports an HDR mode, which suggests a wider-than-sRGB color gamut.
Because the Nixeus NX-EDG34 is designed for gamers, it naturally features multiple inputs to connect a PC (or two) and a couple of game consoles, so it has two DisplayPort 1.4 inputs and two HDMI 2.0 ports. It is also equipped with a headphone jack.
From a design standpoint, the Nixeus NX-EDG34 has very thin bezels on three sides as well as red LEDs on the backside to emphasize the gaming nature of the device. Nixeus will offer two versions of its new monitor: the NX-EDG34S with a stand that adjusts only tilt, and the NX-EDG34 with a stand that adjusts both tilt and height.
|Nixeus NX-EDG34 Displays|
|Native Resolution||3440 × 1440|
|Brightness||350 cd/m² typical, 400 cd/m² HDR|
|Maximum Refresh Rate||144 Hz|
|Variable Refresh Rate||AMD FreeSync, 48 Hz ~ 144 Hz|
|Response Time||4 ms GtG|
|Viewing Angles||178°/178° horizontal/vertical|
|Pixel Pitch||0.233 mm|
|Pixel Density||110 ppi|
|Inputs||2 × DisplayPort 1.4, 2 × HDMI 2.0|
|Stand||NX-EDG34: height and tilt adjustable; NX-EDG34S: tilt adjustable; 75 × 75 VESA mount|
|Launch Price||$500 ~ $550|
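The pixel pitch and density figures in the table can be sanity-checked from the panel's stated 3440×1440 resolution and 34-inch diagonal. A quick sketch (the table's 0.233 mm figure is presumably derived from the exact active-area dimensions, so the result is an approximation):

```python
import math

# Derive pixel density (ppi) and pixel pitch (mm) for a 34-inch
# ultrawide panel with a 3440 x 1440 native resolution.
h_px, v_px = 3440, 1440
diagonal_in = 34.0

diagonal_px = math.hypot(h_px, v_px)  # pixels along the diagonal
ppi = diagonal_px / diagonal_in       # pixels per inch
pitch_mm = 25.4 / ppi                 # center-to-center pixel spacing

print(f"{ppi:.0f} ppi, {pitch_mm:.3f} mm pitch")  # 110 ppi, 0.232 mm pitch
```

The result lands on the table's 110 ppi figure and within about a micron of the quoted pixel pitch.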
Nixeus will start shipments of its NX-EDG34S display in late November or early December, depending on the retailer. At present, the monitor can be pre-ordered for $499.99 from Newegg or for $551.15 from Amazon.
As an aside, while 34-inch ultrawide WQHD displays are gaining traction, there are just a few gaming monitors from popular brands that feature a 3440×1440 resolution with a high refresh rate (i.e., 100 Hz and higher). In fact, the only product that has the same specs as the EDG34 is Xiaomi’s Mi Surface Display, which was introduced back in October, but that is only available in China (and we do not know if and when it will be sold in other countries). Another similar monitor is MSI’s Optix MAG341CQ, which has just a 100 Hz refresh rate and is priced at $428.99. Finally, there is Dell’s latest Alienware 34 LCD with an IPS panel and a 120 Hz refresh rate, but it is priced at a whopping $1499.99. All things considered, the Nixeus NX-EDG34 will be in a unique position in the US, at least for a while, before other makers adopt the same LCD panel.
Dynabook, formerly Toshiba's PC division, has been pretty energetic with announcements recently, marking its return to active product development and an attempt to address certain market segments with unique products. This week the company introduced its new T8 and T9 laptops for multimedia enthusiasts, which feature a rare 16.1-inch display as well as an integrated Blu-ray/DVD burner.
Dynabook’s T8 and T9 notebooks come in a Stylish Blue or Satin Gold chassis made of thick plastic that measures 379×256.5×23.7 mm — similar dimensions to a mainstream 15.6-inch class laptop. The laptop features a 16.1-inch Full-HD IPS display, which offers more screen real estate that the company hopes will provide a slightly more immersive entertainment experience. To further improve the system's multimedia capabilities, Dynabook’s T8 and T9 systems are equipped with stereo speakers by Onkyo that are located under the display and are enhanced with DTS software that adds Dynamic Wide Sound capability.
The innards of Dynabook’s T8/T9 laptops are pretty typical for this class of machines (except for the ODD, of course). The notebooks pack Intel’s quad-core Core i7-8565U processor with UHD 620 Graphics, 8 GB or 16 GB of DDR4-2400 memory (T8 and T9 models, respectively), a 256 GB M.2 PCIe 3.0 x4 SSD, a 1 TB 5400 RPM hard drive, and a Blu-ray drive that is compatible with BD XL discs.
On the connectivity side of things, the Dynabook T8/T9 features Wi-Fi 5, Bluetooth 5, GbE, three USB 3.0 Type-A connectors, a USB Type-C port, an HDMI 1.4 output, an SD card reader, and a 3.5-mm audio connector for headsets. The laptop also offers a 720p webcam with IR sensors for Windows Hello face recognition, a microphone array, and a full-size keyboard with a 19-mm key pitch and a numpad.
When it comes to battery life, Dynabook claims that its T8/T9 laptops will work for up to 9 hours on one charge based on JEITA 2.0 run time measurement method.
|Dynabook's T8 and T9|
|CPU||Intel Core i7-8565U|
|Graphics||UHD Graphics 620 (24 EUs)|
|RAM||8 GB DDR4-2400||16 GB DDR4-2400|
|SSD||256 GB SSD (M.2, PCIe 3.0 x4)|
|HDD||1 TB 5400 RPM HDD|
|ODD||Blu-ray burner, compatible with DVD, BD, BD XL media|
|Wi-Fi||Wi-Fi 5 (802.11ac)|
|USB 3.0||3 × Type-A, 1 × Type-C|
|GbE||1 × GbE|
|Other I/O||HDMI 1.4, 720p webcam with RGB + IR sensors, microphone, stereo speakers by Onkyo, audio jack|
|Battery||up to 9 hours|
|Dimensions||379 × 256.5 × 23.7 mm|
|Weight||Starting at 2.4 kg|
|Price||$1,920 without VAT in Japan|
Dynabook will start sales of its T8 and T9 laptops in Japan some time in mid-December. The T9 model will cost ¥210,000 without tax (about $1,920). It is unclear whether Dynabook will offer the T8 and T9 machines outside of Japan, as the laptop's unique value proposition (an enlarged 16.1-inch Full-HD IPS display and Blu-ray disc playback) makes a great deal of sense in Japan but may get a lackluster welcome in other countries.
Amongst the last big smartphone releases of 2019 is Google’s Pixel 4 series. Google’s own flagship devices come late in the generational product cycle, whose timing is mostly dictated by the SoC release schedule – it’s always hard to make a case for your product knowing that in a few months’ time we’ll be seeing a barrage of new competing products raising the bar again. Google’s forte in this regard is that it promises to augment its products with features beyond what the hardware can provide, yet in a sense, the Pixel 4’s biggest improvements (and weaknesses) this year are actually mostly related to its hardware.
The new Pixel 4 is again a camera-centric phone – it’s the topic that Google talked about the most and dedicated the most time to during its launch event in New York. The Pixel 4 adds for the first time a second camera module that acts as a telephoto unit, and also promises improved capture quality on the new main camera. Whilst Google has a reputation for good cameras, the Pixel 4 faces incredible competition this year, as essentially every other vendor has launched devices with triple cameras and has stepped up in terms of computational photography capabilities.
Other big features of the Pixel 4 include a new 90Hz-capable display panel that allows for an ultra-smooth device experience, a feature that’s still quite rare amongst flagship devices this year, though one that vendors are quickly adopting. Another big change for the Pixel 4 is the dropping of the fingerprint sensor in favour of a new full-blown face unlock feature. This latter feature is augmented by another novelty of the Pixel 4: a radar sensor that’s able to detect movements and gestures pointed at the phone.
We’ll be putting the new Pixel 4 XL through our test benches to determine if Google has managed to create a compelling product worth your money.
Earlier this year, Alienware launched its Area 51m laptop, a high end desktop replacement (DTR) class laptop. Now, living up to the idea of being a proper replacement to a desktop, the company has started selling GeForce RTX 2070/2080 GPU upgrade kits for the laptop. The graphics modules come in Dell’s proprietary Dell Graphics Form-Factor (DGFF) and include a cooler as well as installation by the company’s technician.
Upgrading laptops is always a challenge for many reasons, but upgrading notebooks that use proprietary components has always been particularly tricky, especially due to a lack of standardization. Early this year Alienware introduced its 17-inch Area 51m machine, which can challenge many desktops in terms of performance and upgradeability as it uses a socketed desktop-class CPU, regular SO-DIMMs, and standard storage devices, but it also relies on a proprietary form-factor GeForce RTX graphics module. Fortunately, this week Dell fulfilled its promise and started selling upgrade kits for the DTR notebook.
Right now, Dell offers two upgrade kits based on NVIDIA’s GeForce RTX 2070 and RTX 2080 GPUs, which are aimed at laptops that originally came with GeForce RTX 2070 or RTX 2060. Each kit contains a DGFF module with the GPU and memory, an advanced CryoTech 2.0 cooling system with seven heat pipes, and a matching power brick. Furthermore, the price includes installation service by a Dell technician.
The Alienware Area 51m DGFF upgrade kits are far from cheap as we are dealing with low volume products that require professional labor to install. The GeForce RTX 2070 upgrade is available for $1,038.99, whereas the GeForce RTX 2080 is officially priced at $1,638.99 – though it is currently available for $1,138.99.
A solid state drive is often the most important component for making a PC feel fast and responsive; any PC still using a mechanical hard drive as its primary storage is long overdue for an upgrade. The SSD market is broader than ever, with a wide range of prices, performance levels, and form factors.
SSD prices have started to creep up a bit in some corners of the market, but the upcoming holiday sales should reverse that and bring new price per GB records. The industry is still slowly migrating from 64-layer to 96-layer 3D NAND, but that doesn't have much impact on end-user price, performance or endurance. At the very high end, PCIe 4.0 SSDs have arrived but are still far more expensive than PCIe 3.0-based high-end drives, without offering much in the way of real-world performance improvements. The sweet spot for pricing is usually with 1TB models, but anything from 480GB up to 2TB can come close on a $/GB basis. There are now several 4TB consumer SSDs to choose from, but they're all more than twice the price of their 2TB counterparts.
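One way to make the "sweet spot" observation concrete when shopping is to normalize each listing to a price-per-gigabyte figure. A minimal sketch, using made-up placeholder prices rather than real quotes:

```python
# Compare hypothetical SSD listings on a $/GB basis.
# All prices below are illustrative placeholders, not real quotes.
listings = {
    "480GB drive": (480, 55.00),    # (capacity in GB, price in USD)
    "1TB drive":   (1000, 95.00),
    "2TB drive":   (2000, 200.00),
    "4TB drive":   (4000, 550.00),
}

for name, (capacity_gb, price_usd) in listings.items():
    print(f"{name}: ${price_usd / capacity_gb:.3f}/GB")
```

With numbers like these, the pattern described above falls out immediately: the 1TB tier comes in cheapest per gigabyte, the 480GB and 2TB tiers come close, and a 4TB drive priced above twice its 2TB counterpart carries a clear $/GB premium.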
Razer has announced its new flagship Basilisk Ultimate wireless gaming mouse aimed at esports and FPS gamers. The Basilisk Ultimate uses Razer’s latest sensor, latest optical switches, and features the company's HyperSpeed wireless connectivity technology that promises ultra-low lag. To catch the eye, the mouse features 14 RGB lighting zones that can be programmed using the company’s software.
The Razer Basilisk Ultimate is based on the company’s in-house designed Focus+ optical sensor featuring 20,000 DPI resolution, a 650 IPS maximum tracking speed, and 50 G acceleration. The sensor is paired with an SoC that enables multiple features which improve its accuracy and cut down the response time of this wireless mouse. In particular, the Smart Tracking capability automatically calibrates the sensor across different surfaces; Motion Sync aligns the polling rates of the host PC, receiver, and sensor to reduce input lag; whereas the Asymmetric Cut-off further improves precision by setting an accurate lift-off distance.
Besides the proprietary Focus+ sensor, the Razer Basilisk Ultimate also uses the company’s own HyperSpeed wireless technology, which uses the 2.4 GHz band and a special dongle, and features Adaptive Frequency Hopping that scans channels and switches to the one with the least interference to ensure the lowest lag. Meanwhile, it is possible that HyperSpeed also optimizes lag on the software side of things, though Razer does not say.
When it comes to ergonomics, the Basilisk Ultimate is a right-handed mouse with a scroll wheel as well as 11 programmable controls (with onboard or cloud storage for profiles). The sensitivity of the device can be adjusted on the fly using a special paddle on the left side of the unit, a common feature on contemporary gaming mice.
Razer’s Basilisk Ultimate will be available directly from the company as well as from its retail partners starting from November 6. The mouse itself will cost $149.99/€169.99, the mouse with its dock will be priced at $169.99/€189.99, whereas the dock sold separately will carry a $49.99/€59.99 price tag. Without the dock, the Basilisk Ultimate can be charged using a micro-USB cable that also enables the mouse’s wired mode for the lowest lag possible.
We’re back with another giveaway for the month of November, this time from Patriot Memory. For this latest giveaway, the company has put together a prize pack containing several items from their Viper Gaming product family, including a mechanical keyboard and an audio headset.
Headlining the package is Patriot’s Viper V765 keyboard. The mechanical keyboard is based on Kailh switches and ticks all of the features you’d expect for a gaming-focused keyboard, including programmable macro keys and an aluminum chassis. And of course, the keyboard features copious amounts of RGB lighting to round out its feature set.
RGB lighting can also be found on the Viper V370 headset, which is also a part of the prize package. This is a USB headset with a fold-down boom mic, as well as 7.1 channel virtual surround. And to go with the headset, Patriot is also including their USB hub-equipped headset stand.
Finally, rounding out the package is a 256GB Viper Fang USB flash drive. The USB 3.1 Gen 1 drive is rated for write speeds up to 200 MB/second.
The giveaway is running through November 22nd and is open to all US residents (sorry, ROW!). You can enter below, and you can find more details (and the full discussion) about the giveaway over on the AnandTech Forums.
Intel this week initiated its end-of-life plan for two of its 2nd Generation Xeon Scalable (Cascade Lake) processors, possibly in a bid to reduce the number of SKUs in the new family. The CPUs in question are the Xeon Gold 6222 and the Xeon Gold 6262, and Intel recommends using different versions of these products instead.
Intel’s 20-core Xeon Gold 6222 (1.80/3.60 GHz, 27.5 MB cache) and 24-core Xeon Gold 6262 (1.9/3.6 GHz, 33 MB cache) processors with two UPI links were not officially a part of the Cascade Lake family introduced in April and were probably available to select customers only. Meanwhile, Intel’s lineup did include the Xeon Gold 6222V and Xeon Gold 6262V products, which featured the same specifications in terms of core count, frequency, cache size, TDP, and so on, but had three UPI links to enable more versatile 4P configurations. In fact, according to Intel’s ARK database, the CPUs even carry the same tray pricing as their V counterparts.
|Intel Xeon Scalable 6222 & 6262 Vs. 6222V & 6262V|
By EOLing the Xeon Gold 6222 and Xeon Gold 6262 CPUs, Intel reduces pressure on its manufacturing network as it no longer has to disable an additional UPI link inside these chips or even find silicon that has a broken UPI interface. Ultimately, having fewer SKUs is easier to manage.
Those Intel customers who need the Xeon Gold 6222 and Xeon Gold 6262 processors are advised to place their orders by December 27, 2019. The final CPUs will be shipped by November 6, 2020. Meanwhile, the Xeon Gold 6222V and Xeon Gold 6262V will continue to ship going forward.
Having released multiple docking stations for working in studios and offices, OWC this week introduced a new dock designed for digital imaging technicians who need a vast collection of connection options. The 10-in-1 Thunderbolt 3 Pro Dock has all the ports necessary to attach devices commonly used by creative professionals, including displays, DAS, and high-speed local networks.
According to OWC, their Thunderbolt 3 Pro Dock was designed primarily for digital imaging professionals, so it has a DisplayPort 1.2 connector as well as an additional Thunderbolt 3 port to plug in two 4K monitors (or one 5K display). The docking station also has CFast 2.0 and SD 4.0 card readers to extract data from the appropriate memory cards at speeds of up to 370 MB/s. Furthermore, it can connect to eSATA (with port multiplier support) or Thunderbolt 3 DASes using the appropriate ports.
The dock also comes with a 10 GbE port and three USB 3.2 Gen 2 (10 Gbps) connectors to plug in various peripherals like a mouse, a keyboard, and specialized controllers.
Meanwhile the dock comes with an external 150 W PSU, giving it ample power to drive all of the devices connected. This includes the host notebook, where the dock can provide up to 60W – which is plenty for a 13.3-inch machine, but under load will be right on the knife's edge for 15.6-inch laptops, which typically ship with ~80W adapters. Meanwhile, to ensure stable operation under high workloads, the device has a fan, which can be temporarily turned off using a switch to eliminate unwanted noises during filming.
OWC’s Thunderbolt 3 Pro Dock is available now directly from the company and from Amazon for an MSRP of $339.99.
A couple of months back, Samsung announced a new ultra high resolution 108 MegaPixel sensor for smartphone cameras. At the time we didn't know where this sensor would first show up, but now we have our answer: Xiaomi. This week the company is introducing the industry’s first smartphones with a five-module (penta) camera setup, including the 108 MP sensor from Samsung. Dubbed the Mi Note 10 and Mi Note 10 Pro (and in China, the Mi CC9 Pro/Mi CC9 Pro Premium Edition), these new phones are aimed specifically at photographers, with Xiaomi taking an upper-mid-range platform and giving it one of the most extensive camera arrays on the market.
Without any doubt, the key feature of Xiaomi’s Mi Note 10 and Mi Note 10 Pro devices (and their Chinese equivalents) is their five-module rear camera featuring Samsung’s ISOCELL Bright HMX 108 MP wide-angle 1/1.33-inch sensor with 4-axis OIS, 4-in-1 pixel binning, and a 7-piece lens assembly (an 8-piece assembly in the case of the Pro model). In addition to the gargantuan 108 MP sensor, the camera array also has a 20 MP 117º ultra-wide-angle module, a 12 MP telephoto module, a 5 MP telephoto module for 10x hybrid and 50x digital zoom, and a 2 MP macro camera. The whole camera setup is equipped with a quad-LED dual-tone flash. And along with the main camera, the smartphones come with a 32 MP 'waterdrop' selfie camera.
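The 4-in-1 pixel binning mentioned above is what lets a 108 MP sensor produce brighter 27 MP low-light shots: each 2×2 block of subpixels is combined into one larger effective pixel, trading resolution for light sensitivity. A simplified numpy sketch of the idea (real binning happens on-sensor and is color-filter-aware, so this single-channel version is only an illustration):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a single-channel sensor readout."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy 4x4 "sensor" readout becomes a 2x2 binned image,
# one quarter the pixel count, just as 108 MP bins down to 27 MP.
raw = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_2x2(raw)
print(binned.shape)  # (2, 2)
```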
For viewing those high-quality photos, the camera setup is paired with a 6.47-inch SAMOLED display featuring a 2340×1080 resolution, a 19.5:9 aspect ratio, 600 nits peak brightness, the DCI-P3 color gamut, and HDR10 support.
Meanwhile, at the heart of Xiaomi’s Mi Note 10 and Mi Note 10 Pro smartphones is Qualcomm’s Snapdragon 730G SoC, one of Qualcomm's upper-mid-range SoCs, which features two Kryo 470 Gold cores, six Kryo 470 Silver cores, and Adreno 618 graphics. That SoC is accompanied by 6 GB of RAM and 128 GB of storage in the case of the regular model, and 8 GB of RAM and 256 GB of storage in the case of the Pro model. The handsets are powered by a non-removable 5260 mAh Li-Po battery.
Rounding out the package, the new phones feature 4G/LTE, Wi-Fi 5, Bluetooth 5.0, a USB 2.0 Type-C connector, an under-display fingerprint reader, as well as all the other essential features that we've come to expect in late 2019. However, as these aren't full flagship-level phones, you won't find much in the way of advanced features: the phones aren't listed as dust and water resistant, there's no wireless charging, and they do not have a microSD card slot (which is a bit odd given the target audience).
|The Xiaomi Mi Note 10 Family|
|Mi Note 10||Mi Note 10 Pro|
Corning Gorilla Glass 5
|RAM||6 GB||8 GB|
|Storage||128 GB of NAND flash||256 GB of NAND flash|
|Local Connectivity||Wi-Fi||802.11ac Wi-Fi|
|Data/Charging||USB 2.0 Type-C|
active noise cancellation
|Navigation||A-GPS, GLONASS, BDS, GALILEO|
|Rear Camera||Main||108 MP, f/1.69, 25mm, PDAF, Laser AF, OIS|
|Ultrawide||20 MP, f/2.2, 13mm|
|Telephoto||12 MP, f/2.0, 50mm, Dual Pixel PDAF, Laser AF, 2x optical zoom|
|Telephoto 2||5 MP, f/2.0, PDAF, Laser AF, OIS, 5x optical zoom|
|Macro||2 MP, f/2.4|
|Main Lens Assembly||7 pieces||8 pieces|
|Front Camera||32 MP, f/2.0, 0.8µm|
|SIM Size||Nano-SIM + Nano-SIM|
|Sensors||accelerometer, gyro, proximity, compass|
|Dimensions||Height||157.8 mm | 6.21 inches|
|Width||74.2 mm | 2.92 inches|
|Thickness||9.7 mm | 0.38 inches|
|OS||Google Android 9.0 with MIUI 11|
|Launch Countries||China, Europe initially|
Xiaomi's Mi Note 10 smartphone will be available in Midnight Black, Aurora Green, and Glacier White in Europe starting mid-November for €549. The Mi Note 10 Pro model will be priced at €649.
AMD is set to close out the year on a high note. As promised, the company will be delivering its latest 16-core Ryzen 9 3950X processor, built with two 7nm TSMC chiplets, to the consumer platform for $749. Not only this, but AMD today has lifted the covers on its next generation Threadripper platform, which includes Zen 2-based chiplets, a new socket, and an astounding 4x increase in CPU-to-chipset bandwidth.
Last week NVIDIA introduced its latest GeForce GTX 1660 Super performance mainstream GPU. There are plenty of designs to choose from, and both ASUS and GIGABYTE are now set to offer small form factor designs.
ASUS has two new GeForce GTX 1660 Super boards that are 17.4 centimeters (6.9 inches) long. The ASUS Phoenix PH-GTX1660S-6G and Phoenix PH-GTX1660S-O6G cards are based on NVIDIA’s TU116 GPU with 1408 CUDA cores, carry 6 GB of GDDR6 memory, share the same PCB design with one 8-pin auxiliary PCIe power connector, feature three display outputs (DVI-D, DP 1.4, HDMI 2.0b), and use the same dual-slot cooling system with one dual ball bearing fan. The only difference between the two is their clock speeds, and even those are pretty close: up to 1815 MHz vs 1830 MHz in OC mode.
GIGABYTE has a more 'canonical' GeForce GTX 1660 Super Mini ITX OC 6G (GV-N166SIXOC-6GD) board that is exactly 17 centimeters long. The card has NVIDIA’s TU116 GPU clocked at up to 1800 MHz, 6 GB of 14 Gbps GDDR6 RAM, uses a dual-slot single-fan cooler with a heat pipe that can stop the fan in idle mode, has an 8-pin PCIe power connector, and offers four display outputs (DisplayPort 1.4, HDMI 2.0b).
|NVIDIA GeForce GTX 1660 Super Graphics Cards for Mini-ITX|
|Core Clock||1530 MHz||1530 MHz (?)|
|Boost Clock||1785 MHz||1815 MHz||1830 MHz||1800 MHz|
|Memory Clock||14 Gbps GDDR6|
|Memory Bus Width||192-bit|
|Single Precision Perf.||5 TFLOPS||~5 TFLOPS|
|Manufacturing Process||TSMC 12nm "FFN"|
|Launch Date||10/29/2019||Q4 2019|
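The ~5 TFLOPS single-precision figure in the table follows from the standard throughput estimate for NVIDIA GPUs: 2 FLOPs (one fused multiply-add) per CUDA core per clock, times the core count, times the boost clock. A quick check against the specs above:

```python
# FP32 throughput estimate: 2 FLOPs (one FMA) per CUDA core per cycle.
cuda_cores = 1408          # TU116 in the GTX 1660 Super
boost_clock_ghz = 1.830    # highest OC-mode boost clock from the table

tflops = 2 * cuda_cores * boost_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # 5.15 TFLOPS
```

Note this is peak theoretical throughput at the rated boost clock; sustained figures depend on the clocks the card actually holds, which is why the table rounds to "~5 TFLOPS".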
All three graphics cards are listed on ASUS’ and GIGABYTE’s websites, so expect them to be available shortly. Pricing-wise, they should not be much more expensive than NVIDIA’s $229 MSRP for the GeForce GTX 1660 Super.
Western Digital has introduced its new series of SSDs designed for mission critical applications, including OLTP, OLAP, hyperconverged infrastructure (HCI), and software-defined storage (SDS) workloads. The Ultrastar DC SS540 drives are aimed at mixed and write-intensive workloads and can be configured accordingly. Since the SSDs use a SAS 12 Gbps interface, they are drop-in compatible with existing machines.
The Western Digital Ultrastar DC SS540 is based on the company's sixth-generation dual-port SAS 12 Gbps platform co-developed with Intel as well as 96-layer 3D TLC NAND memory (presumably, also from Intel) and comes in a 2.5-inch/15 mm form-factor. The new SSDs are drop-in compatible with existing servers that support 9, 11, and 14 W per drive power options (SKUs with higher power consumption offer higher random read/write speeds).
As is traditional for SAS SSDs from Western Digital and Intel, the Ultrastar DC SS540 supports extended error correction code (ECC with a 1x10^-17 bit error rate) to ensure high performance and data integrity, exclusive-OR (XOR) parity in case a whole NAND die fails, and parity-checked internal data paths. In addition, the Ultrastar SS540 complies with the T10 Data Integrity Field (DIF) standard, which requires all interconnect buses to have parity protection (on the system level), as well as a special power loss data management feature that does not use supercapacitors. As usual, Western Digital's Ultrastar SS540 will be available in different SKUs with capabilities like instant secure erase and/or TCG+FIPS encryption to conform with various security requirements.
The manufacturer plans to offer the Ultrastar DC SS540 rated for 1 or 3 drive writes per day (DWPD) to target different workloads. The former will offer capacities between 960 GB and 15.36 TB, whereas the latter will feature capacities from 800 GB to 6.4 TB. The new lineup does not include drives rated for 10 DWPD or for less than 1 DWPD, so those who need higher or lower endurance (for extremely write-intensive or extremely read-intensive workloads, respectively) will have to use previous-generation offerings from Western Digital. When it comes to warranty and MTBF, the drives are rated for a 0.35% annual failure rate (AFR) and 2.5 million hours MTBF, and are covered by a five-year limited warranty (or the max PB written, whichever occurs first).
As far as sustained performance is concerned, the Ultrastar DC SS540 is rated for up to 2130 MB/s sequential read/write speeds, up to 470K random read IOPS, and up to 240K random write IOPS, depending on the exact model, which is generally in line with the performance of the Ultrastar DC SS530 SSDs launched last year. Traditionally, higher-capacity SSDs are slightly slower when it comes to writes and mixed workloads, but those who need maximum performance can always use more drives to hit desired speeds.
Western Digital’s Ultrastar DC SS540 SSDs are currently sampling and being qualified by select clients of the company. The manufacturer plans to start commercial shipments of the drives in Q1 2020.
|HGST Ultrastar SS540 Series Specifications|
|3 DWPD||1 DWPD|
|Interface||SAS 6/12 Gb/s, dual port for 12 Gb/s|
3D TLC NAND
|Sequential Read||2116 ~ 2130 MB/s||1985 ~ 2130 MB/s|
|Sequential Write||1008 MB/s ~ 2109 MB/s||1985 MB/s ~ 2130 MB/s|
|Random Read (4 KB) IOPS||237K ~ 470K IOPS||237K ~ 470K IOPS|
|Random Write (4 KB) IOPS||128K ~ 240K IOPS||79K ~ 110K|
|Mixed Random R/W (70:30 R:W, 4KB)
|182K ~ 300K IOPS||143K ~ 200K IOPS|
|Read/Write Latency (average)||140/60 μs ~ 150/80 μs||140/90 μs ~ 150/300 μs|
|Power||Idle||3.7 W (<15 TB) - 4.7 W (>15 TB)|
|Operating||9 W, 11 W, 14 W (configurable)|
|Max. PB||6.4 TB: 36,150 TB
3.2 TB: 17,150 TB
1.6 TB: 9,410 TB
800 GB: 4,700 TB
|15.36 TB: 30,110 TB
7.68 TB: 15,050 TB
3.84 TB: 7,000 TB
1.92 TB: 3,760 TB
960 GB: 1,880 TB
TCG + FIPS
|Power Loss Protection||Yes|
|MTBF||2.5 million hours|
|Warranty||Five years or max PB written (whichever occurs first)|
|Legend for Model Numbers||Example: WUSTR6464ASS201 = 6.4TB, SAS 12Gb/s, TCG
W = Western Digital
U = Ultrastar
S = Standard
TR = NAND type/endurance (TR = TLC/read-intensive)
64 = Full capacity of the family (6.4TB)
64 = Capacity of this model (15 = 15.36TB, 76 = 7.68TB, 38 = 3.84TB, 32 = 3.2TB, 19 = 1.92TB, 16 = 1.6TB, 96 = 960GB, 80 = 800GB, 48 = 480GB, 40 = 400GB)
A = Generation code
S = Small form factor (2.5" SFF)
S2 = Interface, SAS 12Gb/s
1 = Encryption setting (0 = Instant Secure Erase, 1 = TCG Enterprise encryption, 4 = No encryption/Secure Erase, 5 = TCG+FIPS)
Source: Western Digital
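The endurance ratings in the table follow approximately from the DWPD figures and the five-year warranty term. Here's a back-of-the-envelope sketch; Western Digital's official figures run slightly higher, so the exact rating method evidently differs a bit:

```python
# Endurance sketch: approximate total bytes written from a DWPD rating.
# Assumption: rated drive writes per day, sustained over the five-year
# warranty term. Treat this as a sanity check, not WD's exact method.

def endurance_tb(capacity_tb: float, dwpd: float, years: float = 5.0) -> float:
    """Total TB that may be written: capacity x DWPD x days in the term."""
    return capacity_tb * dwpd * 365 * years

# 6.4 TB model at 3 DWPD: ~35,040 TB, in the ballpark of the rated 36,150 TB.
print(round(endurance_tb(6.4, 3)))
# 15.36 TB model at 1 DWPD: ~28,032 TB vs. the rated 30,110 TB.
print(round(endurance_tb(15.36, 1)))
```

This also explains why the 3 DWPD models top out at lower capacities: endurance-optimized drives trade user capacity for overprovisioning.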
Since launching their organization early last year, the MLPerf group has been slowly and steadily building up the scope and the scale of their machine learning benchmarks. Intending to do for ML performance what SPEC has done for CPU and general system performance, the group has brought on board essentially all of the big names in the industry, ranging from Intel and NVIDIA to Google and Baidu. As a result, while the MLPerf benchmarks are still in their early days – technically, they’re not even complete yet – the group’s efforts have attracted a lot of interest, which vendors are quickly turning into momentum.
Back in June the group launched its second – and arguably more interesting – benchmark set, MLPerf Inference v0.5. As laid out in the name, this is the MLPerf group’s machine learning inference benchmark, designed to measure how well and how quickly various accelerators and systems execute trained neural networks. Designed to be as much a competition as it is a common and agreed upon means to test inference performance, MLPerf Inference is intended to eventually become the industry’s gold standard benchmark for measuring inference performance across the spectrum, from low-power NPUs in SoCs to dedicated, high-performance inference accelerators in datacenters. And now, a bit over 4 months after the benchmark was first released, the MLPerf group is releasing the first official results for the inference benchmark.
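The full MLPerf Inference harness is considerably more elaborate (it defines multiple scenarios and a dedicated load generator), but the core measurement an inference benchmark makes can be sketched as timing repeated queries and reporting throughput alongside tail latency. The `run_query` stand-in below is hypothetical; a real harness would invoke a trained network:

```python
import time
from statistics import quantiles

def benchmark(run_query, n_queries: int = 1000):
    """Time n_queries invocations; report throughput and p90 latency."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_queries):
        t0 = time.perf_counter()
        run_query()  # stand-in for executing a trained neural network
        latencies.append(time.perf_counter() - t0)
    wall = time.perf_counter() - start
    p90 = quantiles(latencies, n=10)[-1]  # 90th-percentile latency
    return n_queries / wall, p90

# Dummy workload in place of a real model invocation.
qps, p90 = benchmark(lambda: sum(range(1000)))
print(f"{qps:.0f} queries/s, p90 latency {p90 * 1e6:.1f} us")
```

Reporting both numbers matters: datacenter deployments typically optimize for throughput under a latency bound, while edge devices care about single-query latency.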
Since launching earlier this decade, NVIDIA’s Jetson lineup of embedded system kits has been one of the odder success stories for the company. While NVIDIA’s overall Tegra SoC plans have gone in a very different direction than first planned, they’ve seen a lot of success with their system-level Jetson kits, as customers snatch them up both as dev kits and for use in commercial systems. Now in their third generation of Jetson systems, this afternoon NVIDIA is outlining their plans to diversify the family a bit more, announcing a physically smaller and cheaper version of their flagship Jetson Xavier kit, in the form of the Jetson Xavier NX.
Based on the same Xavier SoC that’s used in the titular Jetson AGX Xavier, the Jetson Xavier NX is designed to fill what NVIDIA sees as a need for both a cheaper Xavier option, as well as one smaller than the current 100mm x 87mm board. In fact the new Nano-sized board is quite literally that: the size of the existing Jetson (TX1) Nano, which was introduced earlier this year. Keeping the same form factor and pin compatibility, the Jetson Xavier NX sports the same 45mm x 70mm dimensions, making it a bit smaller than a credit card.
Compared to the full-sized Jetson AGX Xavier, NVIDIA is aiming the Jetson Xavier NX at customers who need to do edge inference in space-constrained use cases where the big Xavier won’t do. Since it’s based on the same Xavier SoC, the Jetson Xavier NX uses the same Volta GPU, and critically, the same NVDLA accelerator cores as the original. As a result, for inference tasks the Jetson Xavier NX should be significantly faster than the Jetson Nano and various Jetson TX2 products – currently NVIDIA's most widely used embedded Jetson – none of which have hardware comparable to NVIDIA’s dedicated deep learning accelerator cores.
Not that Jetson Xavier NX is a wholesale replacement for Jetson AGX Xavier, however. The smaller Xavier board is taking a shave both in performance and in I/O for a mix of product segmentation, power consumption, and pin compatibility reasons. Notably, the Xavier SoC used in the NX loses 2 CPU cores, 2 GPU SMs, and, perhaps most important to heavy inference users, half of the chip’s memory bandwidth. As a result, the Jetson Xavier NX should still be significantly ahead of Jetson TX1/TX2, but it will definitely trail the full-fledged Jetson AGX Xavier.
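The bandwidth halving follows directly from the narrower bus. As an illustration (assuming LPDDR4X-4266 on both configurations, which is my assumption rather than a figure from NVIDIA's announcement), peak DRAM bandwidth is simply the transfer rate times the bus width in bytes:

```python
def peak_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int) -> float:
    """Peak DRAM bandwidth in GB/s: transfers per second x bytes per transfer."""
    return data_rate_mts * (bus_width_bits / 8) / 1000

# Assumed LPDDR4X-4266 on both parts (illustrative, not NVIDIA's spec sheet):
print(peak_bandwidth_gbs(4266, 256))  # AGX Xavier's 256-bit bus
print(peak_bandwidth_gbs(4266, 128))  # Xavier NX's 128-bit bus: exactly half
```

Whatever the actual memory clocks, cutting the bus from 256 bits to 128 bits halves peak bandwidth at any given data rate, which is why bandwidth-hungry inference workloads will feel the difference most.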
|NVIDIA Jetson Family Specifications|
| ||Jetson Xavier NX||Jetson AGX Xavier||Jetson Nano|
|GPU||Volta, 384 Cores (800 MHz in 10 W mode)||Volta, 512 Cores||Maxwell, 128 Cores|
|Accelerators||2x NVDLA||2x NVDLA||N/A|
|Memory||8GB LPDDR4X, 128-bit bus||16GB LPDDR4X, 256-bit bus||4GB LPDDR4, 64-bit bus|
|Storage||8GB eMMC||32GB eMMC||16GB eMMC|
|AI Perf.||21 TOPS (15 W) / 14 TOPS (10 W)||32 TOPS||N/A|
|Dimensions||45mm x 70mm||100mm x 87mm||45mm x 70mm|
All told, for inference applications NVIDIA is touting 21 TOPS of performance at the card’s full power profile of 15 Watts. Alternatively, at 10 Watts – which happens to be the max power state for Jetson Nano – this drops down to 14 TOPS as clockspeeds are reduced and two more CPU cores are shut off.
Otherwise, the Jetson Xavier NX is designed to slot right in with the rest of the Jetson family, as well as NVIDIA’s hardware and software ecosystem. The embedded system board is being positioned purely for use in high-volume production systems, and accordingly, NVIDIA won’t be putting together a developer kit version of the NX. Since the current Jetson AGX Xavier will be sticking around, it will fill that role, and NVIDIA is offering software patches for developers who need to specifically test against Jetson Xavier NX performance levels.
The Jetson Xavier NX will be shipping from NVIDIA in March of 2020, with NVIDIA pricing the board at $399.
AOC is about to start sales of its new Q27T1 display. It comes with a 'designed by Studio F. A. Porsche' logo, which aims to convey a sense of style, along with decent specifications and a 'reasonable' price. The monitor is targeted at office workers, yet it may also appeal to gamers and multimedia enthusiasts, as it supports AMD’s FreeSync variable refresh rate technology.
AOC’s Porsche Design Q27T1 display comes in a sleek chassis with slim bezels on three sides and an asymmetric stand. The monitor is very thin, yet it has integrated cable management with a special cover.
Characteristics-wise, the AOC Q27T1 is a solid performance-mainstream display: it has a 27-inch IPS panel featuring a 2560×1440 resolution, 350 nits brightness, a 1300:1 contrast ratio, a variable refresh rate between 48 Hz and 75 Hz, and viewing angles of 178º. The display is claimed to cover 107% of the sRGB and 90% of the NTSC color gamut. To make it more comfortable for work, the monitor has an anti-glare coating. As for inputs, the LCD has two HDMI 1.4 connectors, one DisplayPort 1.2, a line in, and a headphone output.
The Q27T1 is not AOC’s first Porsche Design monitor. About two and a half years ago the company introduced its PDS-series 23-inch and 27-inch LCDs designed for the same style-minded audience and offering a Full-HD resolution. One of the peculiarities of those displays was an external PSU with a proprietary connection that integrated an HDMI interface and a power cable into the same wire, which was not particularly practical. The new monitor not only features improved specifications, but also drops the proprietary connections (though it still has an external PSU).
|AOC Porsche Design 27-Inch Display|
|Panel||27" IPS with anti-glare coating|
|Native Resolution||2560 × 1440|
|Maximum Refresh Rate||75 Hz|
|Dynamic Refresh Rate||AMD FreeSync (48 Hz ~ 75 Hz)|
|Response Time||5 ms (gray-to-gray)|
|Viewing Angles||178°/178° horizontal/vertical|
|Color Gamut||107% sRGB, 90% NTSC|
|Pixel Pitch||0.2331×0.2331 mm|
|Inputs||1 × DisplayPort 1.2
2 × HDMI 1.4
1 × Line-In|
|Audio||3.5-mm headphone jack|
|Color||Gray + Silver|
|Power Consumption||Standby||0.3 W|
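As a sanity check on the table, the quoted pixel pitch can be derived from the panel's 27-inch diagonal and 2560×1440 (16:9) resolution:

```python
import math

# Cross-check the quoted 0.2331 mm pixel pitch against the panel's
# 27-inch diagonal and 2560x1440 resolution (16:9 aspect ratio).
diag_in, h_px = 27.0, 2560
width_in = diag_in * 16 / math.hypot(16, 9)   # horizontal size in inches
ppi = h_px / width_in                          # pixels per inch (~109 PPI)
pitch_mm = 25.4 / ppi                          # one pixel in millimetres
print(f"{pitch_mm:.4f} mm")                    # ~0.2335 mm, matching the spec
```

The small difference from the quoted 0.2331 mm comes down to the active area being fractionally smaller than a nominal 27-inch diagonal.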
AOC will start sales of the Q27T1 display later this month, in time for holiday shopping season. The monitor will cost £279 in the UK, so it is reasonable to assume that it will carry a $299 MSRP in the US.
Intel has notified its partners about plans to discontinue its only 10 nm small form factor NUC on the market. The NUC, which went under the code name Crimson Canyon, was Intel's only 10 nm device in this segment: it used Cannon Lake processors made on the company's 10 nm technology, paired with AMD’s Radeon 540 graphics.
Intel’s Cannon Lake processors were, to put it mildly, dead on arrival. Delayed by over a year because of problems with the 10 nm fabrication process, the CPUs suffered from low yields and design choices that resulted in a non-functioning integrated GPU, as well as high power consumption. The Core i3-8121U processor at the heart of Intel's first-generation 10 nm lineup ended up in a few China-only laptops (which we reviewed), and in a small number of Crimson Canyon NUC devices.
Intel advises parties interested in its Crimson Canyon NUC SFF PCs to make their final orders by December 27, 2019, or return them by that date. The final NUCs powered by the Cannon Lake processors will be shipped by February 28, 2020.
The axing of this NUC also coincides with several other 14nm NUCs being given the same treatment:
While the EOL of the Crimson Canyon Mini PCs is not exactly surprising, it will be interesting to see what Intel plans to offer on 10nm in the NUC space in the future. Technically speaking, the Core i3-8121U has not been formally discontinued, which is a real head scratcher.
Source: Intel (Thanks to our reader SH SOTN for the tip)
The best thing about manufacturing Field Programmable Gate Arrays (FPGAs) is that you can make the silicon very big. The nature of the repeatable unit design can absorb issues with a process technology, and as a result FPGAs are often among the largest silicon dies to enter the market for a given manufacturing process. When you get to the limit of how big you can make a piece of silicon (known as the reticle limit), the only way to get bigger is to connect that silicon together. Today Intel is announcing its latest ‘large’ FPGA, and it comes with a pretty big milestone in its connectivity technology.
GlobalFoundries and SiFive announced on Tuesday that they will be co-developing an implementation of HBM2E memory for GloFo's 12LP and 12LP+ FinFET process technologies. The IP package will enable SoC designers to quickly integrate HBM2E support into designs for chips that need significant amounts of bandwidth.
The HBM2E implementation by GlobalFoundries and SiFive includes the 2.5D packaging (interposer) designed by GF, with the HBM2E interface developed by SiFive. In addition to HBM2E technology, licensees of SiFive also gain access to the company’s RISC-V portfolio and DesignShare IP ecosystem for GlobalFoundries’ 12LP/12LP+, which will enable SoC developers to build RISC-V-based devices on GloFo's advanced fab technology.
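The announcement doesn't quote interface speeds, but to put HBM2E's appeal in perspective: per the JEDEC specification, each stack has a 1024-bit interface, so even at 3.2 Gbps per pin a single stack delivers substantial bandwidth. The figures below are illustrative JEDEC-typical numbers, not GlobalFoundries/SiFive specifications:

```python
def hbm_stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s: per-pin rate x pins / 8."""
    return pin_rate_gbps * bus_width_bits / 8

# Illustrative JEDEC-typical figures (not from the GF/SiFive announcement):
print(hbm_stack_bandwidth_gbs(3.2))       # 409.6 GB/s per stack
print(4 * hbm_stack_bandwidth_gbs(3.2))   # ~1.6 TB/s with four stacks
```

That kind of bandwidth in a compact 2.5D package is precisely what the AI training and inference accelerators targeted here tend to be starved for.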
GlobalFoundries and SiFive suggest that the 12LP+ manufacturing process and the HBM2E implementation will be primarily used for artificial intelligence training and inference applications for edge computing, with vendors looking to optimize for TOPS-per-milliwatt performance.
For GlobalFoundries, it is important to land customers who need specialized process technologies and may not be ready for leading-edge processes from TSMC and Samsung Foundry for cost or other reasons. As for SiFive's involvement, this is a bit trickier – RISC-V itself isn't likely to be used for the core logic in deep learning accelerators, but it is a solid architecture to use for the embedded CPU cores needed to control the dataflows within an accelerator.
SiFive’s HBM2E interface and custom IP for GlobalFoundries’ 12LP and 12LP+ technology are being developed at GF’s Fab 8 in Malta, New York. The two companies expect that they'll be able to wrap up their work in the first half of 2020, at which point the IP will become available for licensing.
Seagate last week clarified its high-capacity HDD roadmap during its earnings call with analysts and investors. The company is on track to ship its first commercial HAMR-based hard drives next year, but only in the back half of the year. Before that, Seagate intends to ship its 18 TB HDDs.
It is expected that Seagate’s 18 TB hard drive will be based on the same nine-platter platform that is already used for the company’s Exos 16 TB HDD, which means that it will be relatively easy for the company to kick off mass production of 18 TB hard drives. Overall, Seagate’s HDD roadmap published in September indicates that the company’s 18 TB drive will use conventional magnetic recording (CMR) technology. In addition to this product, Seagate’s plans also include a 20 TB HDD based on shingled magnetic recording (SMR) technology that is due in 2020.
Seagate says that its Exos 16 TB hard drives are very popular among its clients and even expects to ship more than a million such drives in its ongoing quarter, which ends in December. The launch of its 18 TB HDD will maintain Seagate’s capacity leadership in the first half of next year before Western Digital starts volume shipments of its HAMR+CMR-based 18 TB and HAMR+SMR-based 20 TB hard drives.
Seagate itself will be ready with its HAMR-based 20 TB drive late in 2020. Right now, select Seagate customers are qualifying HAMR-based 16 TB HDDs, so they will likely be ready to deploy 20 TB HAMR drives as soon as they are available. It is noteworthy that Seagate is readying HAMR HDDs with both one and two actuators, so as to offer the right performance and capacity for different customers. This would follow Seagate's current dual-actuator MACH.2 drives, which the company started shipping for revenue last quarter.
Dave Mosley, CEO of Seagate, said the following:
“We are preparing to ship 18 TB drives in the first half of calendar year 2020 to maintain our industry capacity leadership. We are also driving areal density leadership with our revolutionary HAMR technology, which enables Seagate to achieve at least 20% areal density CAGR over the next decade. We remain on track to ship 20 TB HAMR drives in late calendar year 2020.
As drive densities increase, multi-actuator technology is required to maintain fast access to data and scale drive capacity without compromising performance. We generated revenue from our MACH.2 dual actuator solutions for the first time in the September quarter. We are working with multiple customers to qualify these drives, including a leading US hyperscale customer, who is qualifying the technology to meet their rigorous service level agreements without having to employ costly hardware upgrades. We expect to see demand for dual actuator technology to increase as customers transition to drive capacities above 20 TB.”
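To put the quoted 20% areal density CAGR in perspective, here's a rough compounding sketch. The ~2 TB/platter baseline is inferred from the nine-platter 18 TB drive; this is illustrative arithmetic, not Seagate's roadmap:

```python
# Illustrative only: compound 20% areal-density growth from today's
# ~2 TB/platter (18 TB across nine platters) as a baseline.
per_platter_tb = 18 / 9
for years in (5, 10):
    grown = per_platter_tb * 1.2 ** years
    print(f"after {years} years: ~{grown:.1f} TB/platter, "
          f"~{grown * 9:.0f} TB in a nine-platter drive")
```

A decade of 20% compounding is a roughly 6x density gain, which is how Seagate can talk about capacities far beyond 20 TB without adding platters.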
Just in time for this week's Adobe MAX conference, Dell has introduced an updated version of its popular 27-inch 4K UltraSharp professional display. The latest iteration of Dell's pro monitor, the UltraSharp 27 4K PremierColor Monitor (UP2720Q) is shaking things up by taking the already factory-calibrated monitor family and integrating a colorimeter for even further calibration options, as well as Thunderbolt 3 support. At the same time, however, Dell is also dropping HDR support, making this (once again) a purely SDR display.
Like its predecessors, the UltraSharp 27 4K PremierColor Monitor UP2720Q is particularly aimed at photographers, designers, and other people with color-critical workloads. The LCD comes factory calibrated to a Delta E < 2 accuracy so that it is ready to work out of the box, and is equipped with a light shielding hood.
Under the hood, the UP2720Q is based on a 10-bit IPS panel featuring a 3840x2160 resolution. The now purely SDR monitor offers a typical brightness of 250 nits, a 1300:1 contrast ratio, a 6 ms GtG response time, 178°/178° viewing angles, and a 3H anti-glare hard coating. Being aimed at graphics and photography professionals, the LCD can display 1.07 billion colors and covers 100% of the Adobe RGB color gamut, 98% of DCI-P3, and 80% of BT.2020. Furthermore, the monitor can display two color gamuts at once when its Picture-by-Picture capability is used.
The key new feature of the UP2720Q is its built-in colorimeter, which is compatible with CalMAN software and allows users to ensure that they use the most accurate colors possible. Typically, monitors used for graphics and photo editing need to be recalibrated every several months, and an integrated colorimeter stands to make the task much easier.
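The 'Delta E < 2' figure refers to the standard color-difference metric, measured in CIE L*a*b* space. A minimal sketch using the original CIE76 formula follows; note that calibration tools such as CalMAN typically use the more sophisticated Delta E 2000 variant, so this is the simplest illustration of the concept rather than the exact metric Dell quotes:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Two nearby colors one unit apart on each axis -> deltaE ~1.73,
# under the < 2 threshold generally treated as near-imperceptible.
print(round(delta_e_76((50, 0, 0), (51, 1, 1)), 2))
```

Calibration works by measuring a set of test patches with the colorimeter, computing the Delta E of each against its reference value, and adjusting the display's LUTs until the errors fall below the target.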
The monitor can connect to host PCs using a DisplayPort 1.4, two HDMI 2.0 inputs, or, new to this latest model, a Thunderbolt 3 connector. The display has an additional TB3 port to daisy chain another TB3 device, and also includes a USB 3.2 Gen 2 hub and a headphone output. The Thunderbolt 3 port can supply its host PC with up to 90 W of power, enough for high-end 15.6-inch laptops.
Just like other professional monitors, the UltraSharp 27 4K PremierColor Monitor UP2720Q has a stand that can adjust height, tilt, and swivel. Besides, the display can be used in portrait mode.
Dell’s PremierColor Monitor UP2720Q will be available starting from January 15, 2020, at a price of $1,999.99.
|The Dell UltraSharp PremierColor 27-inch Monitor Specs|
| ||UP2720Q||Predecessor|
|Panel Type||10-bit IPS||10-bit IPS|
|Refresh Rate||60 Hz||60 Hz|
|Response Time||8 ms typical
6 ms GtG in fast mode
|6 ms GtG|
|Contrast Ratio||1300:1||1000:1 (SDR)|
|Brightness||250 nits||400 nits (SDR)
1000 nits (HDR)
|Color Gamut||100% AdobeRGB
98% DCI-P3
80% BT.2020
|?|
|Stand||Height adjustment (130 mm)
Tilt (-5° to 21°)
Swivel (-45° to 45°)
Pivot (-90° to 90°)
|Height adjustment (145 mm)
Tilt (-5° to 21°)
Swivel (-45° to 45°)
Pivot (-90° to 90°)
|Connectivity||2 × HDMI 2.0
1 × DisplayPort 1.4
1 × Thunderbolt 3 (upstream)
1 × Thunderbolt 3 (downstream)
1 × Headphone output
USB 3.2 Gen 2 hub
|2 × HDMI 2.0
1 × DisplayPort 1.4
1 × Mini DisplayPort 1.4
1 × Headphone output
USB 3.0 Gen 1 hub
|Availability||January 2020||May 2017|
Dell has introduced a new version of its high-end 12-inch Latitude Rugged Extreme tablet, the Latitude 7220 Rugged Extreme. Like other ruggedized PCs, the new 7220 is designed to reliably work in harsh environments, offering protection against scrapes, drops, and material ingress of all kinds. At a high level, Dell's latest ruggedized tablet largely carries over their earlier designs – making it compatible with 'most' of the accessories developed for them – but it has been upgraded with a quad-core 8th Generation Intel Core CPU, a 1000-nit display, and the latest in connectivity technologies.
Dell has a history of offering fully-rugged tablets that goes back to 2015, and with the Latitude 7220 RE they are now on their third-generation tablet. Just like its predecessors, the Latitude 7220 Rugged Extreme comes in a MIL-STD-810G-certified, 24 mm-thick chassis designed to withstand operating drops, thermal extremes (-29°C to 63°C/-20°F to 145°F), dust, sand, humidity, blowing rain, vibration, functional shock, and all other kinds of physical impact. The tablet is also MIL-STD-461F certified, meaning that the Latitude 7220 RE is both designed to avoid leaking electromagnetic interference, as well as being able to resist it.
Because of the significant chassis bulk required to meet the durability requirements for a ruggedized device, the Latitude 7220 Rugged Extreme is neither small nor light; the device weighs in at 1.33 kilograms, which is comparable to a full-blown 13.3-inch notebook. In fact, the new model is 60 grams heavier than the previous-generation Latitude 7212 Rugged Extreme.
From a technology perspective, one of the key improvements with the Latitude 7220 RE is its new display panel, which offers a peak luminance of 1000 nits and should be bright enough to provide decent image quality even under direct sunlight. The brightness of the screen can now be regulated on the front panel of the tablet, which should be rather convenient. Meanwhile, the tablet no longer has a dedicated Windows button, but the latter is still present on Dell’s optional IP65-rated keyboard cover with kickstand.
Under the hood of the new tablet is Intel’s 8th Generation Core i3/i5/i7 (Whiskey Lake) processor, which offers two or four cores along with Intel’s UHD Graphics 630. Depending on the exact tablet SKU, that CPU can be accompanied by 8 GB or 16 GB of LPDDR3 memory and a PCIe 3.0 x4-based Class 35 or Class 40 M.2 SSD, with capacities ranging from 128 GB to 2 TB. The system can be powered by two hot-swappable batteries, each with a 34 Wh capacity (by default, the system includes only one), though Dell isn't promoting specific battery life figures since they expect the tablet's customers to have a pretty varied range of use cases.
Meanwhile, Dell's latest ruggedized tablet has also received a communications upgrade. The tablet not only offers Wi-Fi 5/6 and Bluetooth (which can be hardware disabled for military-bound devices), but also can include an optional Qualcomm Snapdragon X20 4G/LTE modem, as well as a FirstNet module to access networks for first responders.
As for wired I/O, the Latitude 7220 Rugged Extreme includes a USB 3.1 Type-C connector that can be used for charging and external display connectivity, a USB 3.0 Type-A port, an optional micro RS-232 port, a POGO connector for the keyboard, and a 3.5-mm audio jack for headsets. The tough tablet also features rear and front cameras, an SD card reader, an optional contactless smart card reader, as well as a touch fingerprint sensor. Notably, unlike its predecessor, the 7220 no longer includes a GbE port, VGA, HDMI, or other legacy I/O options.
As far as security is concerned, Dell’s Latitude 7220 Rugged Extreme can be configured to cover all the bases. The tablet has a fingerprint reader, Dell’s ControlVault advanced authentication, Intel vPro remote management, a TPM 2.0 module, optional encryption for SSDs, and NIST SP800-147 secure platform.
|Specifications of the Dell Latitude 12 Rugged Extreme Tablets|
| ||Latitude 12 7212||Latitude 7220|
|Display||Outdoor-readable display with gloved multi-touch, AG/AR/AS/polarizer and Gorilla Glass||Brightness: 1000 cd/m²
Outdoor-readable, anti-glare, anti-smudge, polarizer, glove-capable touchscreen|
|CPU||Dual-core 6th Gen Intel Core i5 CPUs (Skylake-U)
Dual-core 7th Gen Intel Core i3/i5/i7 CPUs (Kaby Lake-U)
|Intel Core i7-8665U: 4C/8T, vPro
Intel Core i5-8365U: 4C/8T, vPro
Intel Core i3-8145U: 2C/4T
|Graphics||Intel HD Graphics 520/620
|Intel UHD Graphics 630
|RAM||8 GB or 16 GB LPDDR3||8 GB or 16 GB LPDDR3-2133|
|Storage||128 GB SATA Class 20 SSD
256 GB SATA Class 20 SSD Opal 2.0 SED
256 GB SATA Class 20 SSD
256 GB PCIe NVMe Class 40 SSD Opal 2.0 SED
512 GB SATA Class 20 SSD Opal 2.0 SED
512 GB SATA Class 20 SSD
512 GB PCIe NVMe Class 40 SSD
1 TB SATA Class 20 SSD
1 TB PCIe NVMe Class 40 SSD
|M.2 PCIe 3.0 x4 SSDs:
Class 35: 128 GB;
Class 40: 256 GB, 512 GB, 1 TB, 2 TB;
Class 40 SED: 256 GB, 512 GB, 1 TB.
|Wireless LAN Options:
Intel Dual Band Wireless-AC 8265 with Bluetooth 4.2 + vPro Mobile broadband
Intel Dual Band Wireless-AC 8265 + No Bluetooth 4.2 Wireless Card
Qualcomm QCA61x4A 802.11ac Dual Band (2x2) Wireless Adapter+ Bluetooth 4.1
|Wireless LAN Options:
Intel Wireless-AC 9560, 2x2, 802.11ac with Bluetooth 5.0
Intel Wi-Fi 6 AX200, 2x2, 802.11ax with MU-MIMO, Bluetooth 5.0
Intel Wi-Fi 6 AX200, 2x2, 802.11ax with MU-MIMO, without Bluetooth
|Qualcomm Snapdragon X7 LTE-A for Win 10 (DW5811e Gobi5000) for Worldwide (Windows 7 and 10 options)
Qualcomm Snapdragon X7 LTE-A for Win 10 (DW5811e Gobi5000) for AT&T (Windows 7 and 10 options)
Qualcomm Snapdragon X7 LTE-A for Win 10 (DW5811e Gobi5000) for Verizon (Windows 7 and 10 options)
Qualcomm Snapdragon X7 LTE-A for Win 10 (DW5811e Gobi5000) for Sprint (Windows 7 and 10 options)
Dell Wireless 5816e multi-mode Gobi 5000 4G LTE WAN Card (Japan/ANZ only)
|DW5821E Qualcomm Snapdragon X20 4G/LTE Wireless WAN card for AT&T, Verizon, Sprint|
|GPS||Dedicated u-blox NEO-M8 GPS card|
|Additional||Dual RF-passthough (Wi-Fi and mobile broadband), Near field communication (NFC)||?|
|USB||3.1||1 × USB 3.0 Type-C w/ DP, PD|
|3.0||1 × USB 3.0 Type-A|
|Cameras||Front||Front-facing camera||5 MP RGB + IR FHD webcam with privacy shutter|
|Back||Rear-facing camera with flash LED||8 MP rear camera with flash and dual microphone|
|Security||Optional Security includes:
ControlVault advanced authentication;
Dell Security Tools;
Dell data protection encryption;
Contactless SmartCard reader; Fingerprint reader.
|Steel reinforced cable lock slot
Optional Security includes:
ControlVault advanced authentication;
Dell Security Tools;
Dell data protection encryption
Contactless/Contacted SmartCard reader;
NIST SP800-147 secure platform;
Dell Backup and Recovery.
|Other I/O||TRRS audio jack, micro RS-232 (optional), POGO, SD Card reader, etc.|
|Battery||34 Wh Primary battery||34 Wh Primary
34 Wh Secondary (optional?)
|Dimensions||Width||312 mm | 12.3 inch||312.2 mm | 12.29 inch|
|Height||203 mm | 8 inch||203 mm | 8 inch|
|Thickness||24 mm | 0.96 inch||24.4 mm | 0.96 inch|
|Weight||1270 grams (tablet)||1330 grams (tablet)|
|Operating System||Microsoft Windows 10 Pro 64 Bit
Microsoft Windows 10 Pro with Windows 7 Professional Downgrade (64 bit) - Skylake CPU required
|Microsoft Windows 10 Pro 64 Bit|
|Regulatory and Environmental Compliance||MIL-STD-810G||Transit drop (48”/1.22m; single unit; 26 drops), operating drop (36”/0.91m), blowing
rain, blowing dust, blowing sand, vibration, functional shock, humidity, salt fog, altitude, explosive atmosphere,
thermal extremes, thermal shock, freeze/thaw, tactical standby to operational.
|Operating thermal range||-20°F to 145°F (-29°C to 63°C)|
|Non-operating thermal range||-60°F to 160°F (-51°C to 71°C)|
|IEC 60529 ingress protection||IP-65 (dust-tight, protected against pressurized water)|
|Hazardous locations||ANSI/ISA.12.12.01 certification capable (Class I, Division 2, Groups A, B, C,D),
|ANSI/ISA.12.12.01 certification capable (Class I, Division 2, Groups A, B, C,D)|
|Electromagnetic interference||MIL-STD-461F certified||MIL-STD-461F and MIL-STD-461G|
|Optional Accessories||Dell Desktop Dock for the Rugged Tablet,
Dell Dock WD15,
Dell Power Companions,
Kickstand and Rugged RGB Backlit Keyboard cover,
Soft and Rigid Handle options,
Cross Strap, Active Pen,
Dell Wireless Keyboard and Mouse
|Rugged Tablet Dock
Keyboard with Kickstand
Havis Vehicle Dock
PMT Vehicle Dock
Gamber-Johnson Vehicle Dock
Extended I/O module
Dell monitors (with USB-C or over a USB-C-to-DP adapter)
Dell wireless keyboard and mice
|Price||Starting at $1,899|
The Dell Latitude 7220 Rugged Extreme tablet is now available from Dell starting at $1,899.
Some images are made by Getty Images and distributed by Dell
The Xeon E family from Intel replaced the Xeon E3-1200 parts that were commonplace in a lot of office machines and small servers. The Xeon E parts are almost direct analogues of the current leading consumer processor hardware, except with ECC memory support and support for vPro out of the box. Today’s launch is a secondary launch: Intel released the Xeon E-2200 series some time ago for the cloud market, but this launch marks general availability for consumers and the small-scale server market.
The fate of Samsung's custom CPU development efforts has been making the rounds of the rumour mill for almost a month, and now we finally have confirmation from Samsung that the company has stopped further development work on its custom Arm architecture CPU cores. This public confirmation comes via Samsung’s HR department, which last week filed a mandatory notice letter with the Texas Workforce Commission, warning about upcoming layoffs of Samsung’s Austin R&D Center CPU team and the impending termination of their custom CPU work.
The CPU project, said to currently number around 290 team members, started sometime in 2012 and has produced the custom ARMv8 CPU microarchitectures from the Exynos M1 in the Exynos 8890 up to the latest Exynos M5 in the upcoming Exynos 990.
Over the years, Samsung’s custom CPU microarchitectures had a tough time differentiating themselves from Arm’s own Cortex designs, never being fully competitive in any one metric. The Exynos-M3 Meerkat cores employed in the Exynos 9810 (Galaxy S9), for example, ended up being more of a handicap to the SoC due to their poor energy efficiency. Even the CPU project itself had a rocky start, as originally the custom microarchitecture was meant to power Samsung’s custom Arm server SoCs before the design efforts were redirected towards mobile use.
In a response to Android Authority, Samsung confirmed the choice was based on business and competitive merits. A few years ago, Samsung had told us that custom CPU development was significantly more expensive than licensing Arm’s CPU IP. Indeed, it’s a very large investment to make given the uphill battle of not only designing a core that matches Arm’s IP, but actually beating it.
Beyond the custom CPU’s competitiveness, the cancellation likely is tied to both Samsung’s and Arm’s future CPU roadmaps and timing. Following Deimos (Cortex-A77) and Hercules (Cortex-A78?), Arm is developing a new high-performance CPU on the new ARMv9 architecture, and we expect a major new v9 little core to also accompany the Matterhorn design. It’s likely that Samsung would have had to significantly ramp up R&D to be able to intercept Arm's ambitious design, if even possible at all given the area, performance, and efficiency gaps.
In practice, the end result is bittersweet. On one hand, the switch back to Cortex-A CPUs in future Exynos flagship SoCs should definitely benefit SLSI’s offerings, hopefully helping the division finally achieve SoC design wins beyond Samsung’s own Mobile division – or dare I hope, even fully winning a Samsung Galaxy design instead of only being a second-source alongside Qualcomm.
On the other hand, it means there’s one less custom CPU development team in the industry which is unfortunate. The Exynos 990 with the M5 cores will be the last we’ll see of Samsung’s custom CPU cores in the near future, as we won't be seeing the in-development M6. M6 was an SMT microarchitecture, which frankly quite perplexed me as a mobile targeted CPU – I definitely would have wanted to see how that would have played out, just from an academic standpoint.
The SARC and ACL activities in Austin and San Jose will continue, as Samsung’s SoC, AI, and custom GPU teams are still active; the GPU project seems to be continuing alongside the new AMD GPU collaboration and IP licensing for future Exynos SoCs.
Back in September, LG and NVIDIA teamed up to enable G-Sync variable refresh rate support on select OLED televisions. Starting this week and before the end of the year LG will issue firmware updates that add support for the capability on the company’s latest premium OLED TVs.
LG's 2019 OLED TVs have been making waves throughout the gaming community since their launch earlier this year. The TVs are among the first to support HDMI 2.1's standardized variable refresh rate technology, adding a highly demanded gaming feature to LG's already popular lineup of TVs. This has put LG's latest generation of TVs on the cutting edge, and, along with Microsoft's Xbox One X (the only HDMI-VRR source device up until now), the duo of devices has been serving as a pathfinder for HDMI-VRR in general.
Now, NVIDIA is getting into the game by enabling support for HDMI-VRR on recent video cards, as well as working with LG to get the TVs rolled into the company's G-Sync Compatible program. The two companies have begun rolling out the final pieces needed for variable refresh support this week, with LG releasing a firmware update for their televisions, while NVIDIA has started shipping a new driver with support for the LG TVs.
On the television side of matters, LG and NVIDIA have added support for the 2019 E9 (65 and 55 inches), C9 (77, 65 and 55 inches), and B9 (65 and 55 inches) families of TVs, all of which have been shipping with variable refresh support for some time now.
The more interesting piece of the puzzle is arguably on the video card side of matters, where NVIDIA is enabling support for the TVs on their Turing generation of video cards, which covers the GeForce RTX 20 series as well as the GeForce GTX 16 series of cards. At a high level, NVIDIA and LG are branding this project as adding G-Sync Compatible support for the new TVs. But, as NVIDIA has confirmed, under the hood this is all built on top of HDMI-VRR functionality. Meaning that as of this week, NVIDIA has just added support for HDMI's variable refresh standard to their Turing video cards.
While HDMI-VRR was introduced as part of HDMI 2.1, the feature is an optional extension to HDMI and is not contingent on the latest standard's bandwidth upgrades. This has allowed manufacturers to add support for the tech to HDMI 2.0 devices, which is exactly what has happened with the Xbox One X and now NVIDIA's Turing video cards. Which in the case of NVIDIA's cards came as a bit of a surprise, since prior to the LG announcement NVIDIA never revealed that they could do HDMI-VRR on Turing.
At any rate, the release of this new functionality gives TV gamers another option for smooth gaming on big-screen TVs. Officially, the TVs are part of the G-Sync Compatible program, meaning that on top of the dev work NVIDIA has done to enable HDMI-VRR, they are certifying that the TVs meet the program's standards for image stability (e.g. no artifacting or flickering). Furthermore, as these are HDR-capable OLED TVs, NVIDIA is also supporting HDR gaming as well, covering the full gamut of features available in LG's high-end TVs.
Ultimately, LG is the first TV manufacturer to work with NVIDIA to get the G-Sync Compatible certification, which, going into the holiday shopping season, will almost certainly be a boon for both companies. So it will be interesting to see whether other TV makers end up following suit.
Google on Friday announced that it had reached an agreement to buy Fitbit, a leading maker of advanced fitness trackers. Google stressed that data obtained and processed by Fitbit’s devices will remain in appropriate datacenters and will not go elsewhere.
Under the terms of the agreement, Google will pay $2.1 billion in cash, valuing the company at $7.35 per share. In accordance with the deal, Google will become the sole owner of Fitbit, owning its IP and handling all hardware and software development and distribution.
The takeover of Fitbit is the latest step in Google’s ongoing strategy to make its Android platform more attractive to consumers. Fitbit has more than 28 million active users, and while the company is far from holding the lion's share of the wearables market, it has a significantly bigger presence than the small number of Wear OS devices that Google's partners have been able to sell.
Overall, this is the second major wearables-related acquisition for Google this year. Earlier in 2019, the company also bought technology and R&D personnel from watch maker Fossil.
James Park, co-founder and CEO of Fitbit, said the following:
“Google is an ideal partner to advance our mission. With Google’s resources and global platform, Fitbit will be able to accelerate innovation in the wearables category, scale faster, and make health even more accessible to everyone. I could not be more excited for what lies ahead.”
The transaction is expected to close in 2020, and from there, expect Google to integrate Fitbit’s IP into the Android platform.
Source: Google/Fitbit press release
Western Digital announced this week that it has started shipments of its first products based on 3D QLC NAND memory. The initial devices to use the highly-dense flash memory are retail products (e.g., memory cards, USB flash drives, etc.) as well as external SSDs. Eventually, high-density 3D QLC NAND devices will be used to build high-capacity SSDs that will compete against nearline hard drives.
During Western Digital's quarterly earnings conference call earlier this week, Michael Cordano, president and COO of the company, said that in the third quarter of calendar 2019 (Q1 FY2020) the manufacturer “began shipping 96-layer 3D QLC-based retail products and external SSDs.” The executive did not elaborate on which product lines now use 3D QLC NAND, though typically we see higher-capacity NAND first introduced in products such as high-capacity memory cards and external drives.
Western Digital and its partner Toshiba Memory (now called Kioxia) were among the first companies to develop 64-layer 768 Gb 3D QLC NAND back in mid-2017 and even started sampling those devices back then, but WD/Toshiba opted not to mass produce the NAND. Meanwhile, in mid-2018, Western Digital introduced its 96-layer 1.33 Tb 3D QLC NAND devices, which could either enable storage products with considerably higher capacities, or cut the costs of drives when compared to 3D TLC-based solutions.
At present, Western Digital’s 1.33 Tb 3D QLC NAND devices are the industry’s highest-capacity commercial NAND chips, so from this standpoint the company is ahead of its rivals. But while it makes great sense to use 1.33 Tb 3D QLC NAND for advanced consumer storage devices, these memory chips were developed primarily for ultra-high-capacity SSDs that could rival nearline HDDs for certain applications.
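To put the 1.33 Tb density figure in perspective, here's a quick back-of-the-envelope sketch. The per-die capacity comes from the article; the 16-die stack is a hypothetical example package configuration, not a disclosed Western Digital product:

```python
# Back-of-the-envelope math for 3D QLC NAND die capacities.
# The 1.33 Tb figure is from the article; the 16-die stack is
# a hypothetical example, not a disclosed WD configuration.

TBIT = 1e12  # one terabit (decimal, as NAND densities are quoted)

def die_capacity_gb(tbit_per_die: float) -> float:
    """Raw capacity of one NAND die in decimal gigabytes."""
    return tbit_per_die * TBIT / 8 / 1e9

per_die = die_capacity_gb(1.33)
print(f"1.33 Tb die  : {per_die:.0f} GB")            # ~166 GB per die
print(f"16-die stack : {16 * per_die / 1000:.2f} TB")  # ~2.66 TB raw
```

With a handful of such packages on a PCB, multi-terabyte QLC drives become straightforward, which is why these dies target nearline-HDD-class SSD capacities.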
It is hard to say when Western Digital will commercialize such drives, as the company is only starting to qualify 96-layer 3D QLC NAND for SSDs, but it will definitely be interesting to see which capacity points the company hits with this memory.
On a related note, Western Digital also said that in Q3 2019 (Q1 FY2020), bit production of 96-layer 3D NAND exceeded bit production of 64-layer 3D NAND.
Source: Western Digital
While GDDR6 is currently available at speeds up to 14Gbps, and 16Gbps speeds are right around the corner, if the standard is going to have as long a lifespan as GDDR5, then it can't stop there. To that end, Rambus this week demonstrated operation of its GDDR6 memory subsystem at a data transfer rate of 18 GigaTransfers/second, a new record for the company. Rambus’s controller and PHY can deliver a peak bandwidth of 72 GB/s from a single 32-bit GDDR6 DRAM chip, or a whopping 576 GB/s from a 256-bit memory subsystem, which is what we're commonly seeing on graphics cards today.
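As a sanity check on those figures, peak GDDR6 bandwidth is simply the per-pin data rate multiplied by the bus width:

```python
# Peak GDDR6 bandwidth: GB/s = (GT/s per pin * bus width in bits) / 8.

def gddr6_bandwidth_gbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a GDDR6 interface."""
    return data_rate_gtps * bus_width_bits / 8

# One 32-bit GDDR6 chip at 18 GT/s:
print(gddr6_bandwidth_gbps(18, 32))    # 72.0 GB/s
# A 256-bit graphics-card memory subsystem:
print(gddr6_bandwidth_gbps(18, 256))   # 576.0 GB/s
```

The same formula reproduces today's shipping configurations, e.g. a 14 GT/s, 256-bit card works out to 448 GB/s.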
The Rambus demonstration involved the company’s silicon-proven GDDR6 PHY implemented using one of TSMC’s 7 nm process nodes, accompanied by Northwest Logic’s GDDR6 memory controller and GDDR6 chips from an unknown maker. According to a transmit eye screenshot published by Rambus, the subsystem worked fine and the signals were clean.
Both the GDDR6 controller and the PHY can be licensed from Rambus by developers of SoCs, so the demonstration is both a testament to how well the company’s highly-integrated 7 nm GDDR6 solution works, and a means to promote its IP offerings.
It is noteworthy that Rambus, along with Micron and a number of other companies, has been encouraging the use of GDDR6 memory in products besides GPUs for quite some time. Various accelerators for AI, ML, and HPC workloads as well as networking gear and autonomous driving systems greatly benefit from the technology's high memory bandwidth and are therefore a natural fit for GDDR6. The demonstration is meant to show companies developing SoCs that Rambus has a fine GDDR6 memory solution implemented using a leading-edge process technology that can be easily integrated (with the help of engineers from Rambus) into their designs.
For the graphics crowd, Rambus’ demonstration gives a hint of what to expect from upcoming implementations of GDDR6 memory subsystems and indicates that GDDR6 still has some additional room for growth in terms of data transfer rates.
One of the advantages of having a highly-integrated product stack is the ability to fine-tune the performance of your devices when they work together. On the one hand, this allows a company to extract higher performance while ensuring maximum compatibility, and thus differentiate itself from rivals. On the other hand, it enables the company to sell more products per end user, sometimes at a premium. This is exactly what GIGABYTE is doing with its Aorus motherboards and the Aorus Memory Boost feature of its Aorus RGB Memory modules.
GIGABYTE, which introduced its first DIMMs in mid-2018, is a relative newcomer to the memory market, so its DDR4 product lineup is currently limited to seven SKUs conservatively rated for DDR4-2666, DDR4-3200, and DDR4-3600 operation (the faster kits announced at CES 2019 are yet to be launched commercially). Meanwhile, the company has something of a secret weapon in the form of a special SPD setting called Aorus Memory Boost (AMB) that slightly increases the speed of its top-of-the-range DRAM modules.
When used with select AMD X570 and Intel Z390-based GIGABYTE Aorus motherboards, the Aorus RGB Memory DDR4-3600 with CL18 19-19-39 16 GB dual-channel kits (GP-AR36C18S8K2HU416R and GP-AR36C18S8K2HU416RD) can automatically set themselves to DDR4-3700 or DDR4-3733 (Intel/AMD) mode when their AMB profile is activated.
It is unclear whether such an overclock loosens timings in a bid to slightly increase data transfer rates, or whether GIGABYTE can guarantee stable operation at higher clocks thanks to the superior design of its motherboards, but it is evident that the company’s modules work better with its platforms. The latter would not be particularly surprising, as Aorus-branded motherboards are engineered with overclocking headroom for both CPU and DRAM, so GIGABYTE does not really take many risks here. Meanwhile, we can only wonder whether GIGABYTE’s Aorus Memory Boost will be available on its higher-end DDR4-4000+ modules, which are harder to overclock while guaranteeing long-term stability.
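For reference, the bandwidth uplift from the AMB profile can be estimated from the data rates alone. This is a peak-bandwidth sketch only; real-world gains depend on timings and workload:

```python
# Peak DDR4 bandwidth: each 64-bit channel moves 8 bytes per transfer.

def ddr4_bandwidth_gbps(data_rate_mtps: int, channels: int = 2) -> float:
    """Peak bandwidth in GB/s for a DDR4 configuration (64-bit channels)."""
    return data_rate_mtps * 8 * channels / 1000

stock = ddr4_bandwidth_gbps(3600)   # 57.6 GB/s at stock DDR4-3600
boost = ddr4_bandwidth_gbps(3733)   # ~59.7 GB/s with the AMB profile on AMD boards
print(f"uplift: {100 * (boost / stock - 1):.1f}%")  # ~3.7%
```

In other words, the free overclock is worth roughly 3-4% of peak bandwidth, which is consistent with it being a low-risk tweak rather than an aggressive bin.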
GIGABYTE’s Aorus RGB Memory DDR4-3600 16 GB dual-channel kits are already listed on the company’s website, so expect them to hit retail shortly. Prices are unknown, and it remains to be seen whether the manufacturer decides to capitalize on Aorus Memory Boost and sell the modules at a slight premium.
Sony this week has revealed that the company will be building a new semiconductor fab to boost output of its CMOS sensors, as part of a broader effort to respond to growing demand for these products. The company will build the new fab at its Nagasaki Technology Center and expects it to tangibly increase their production of CMOS wafers.
Being one of the leading suppliers of CMOS camera sensors for smartphones, Sony earns billions of dollars selling them. In the third quarter (Q2 FY2019) Sony’s Imaging and Sensing Solutions (I&SS) division earned $2.871 billion in revenue (up 56.3% year-over-year) and $706 million in profits*. As of late March 2018, Sony’s CMOS production capacity was 100 thousand 300-mm wafer starts per month, and the company is gradually increasing its output by improving efficiency of its fab space utilization and outsourcing part of the production. But that may not be enough.
In the coming years demand for CMOS sensors is going to grow because of several factors: smartphones now use not two (for main and selfie cameras), but three or even more camera modules; smartphone sensors are getting larger; and more devices are going to get computer vision support, requiring more sensors there as well.
In order to satisfy demand for such products, the company constantly improves its fabs and expects to boost their total output capacity to around 138 thousand wafer starts per month by late March 2021. Furthermore, Sony plans to invest billions of dollars (PDF, page 152) in fab upgrades as well as building an additional fab (or even fabs) at its Nagasaki Technology Center. The new manufacturing facility (or facilities) is expected to start production sometime during the company's 2021 fiscal year, which starts on April 1, 2021. That being said, it is reasonable to expect that Sony is aiming to start construction of the facility in the coming months.
It is noteworthy that Sony’s semiconductor division (which is now called I&SS) reportedly has not invested anything in brand-new production facilities for 12 years. The company did acquire a semiconductor fab from Toshiba and then re-purposed it to make sensors in 2016, but this was not a new fab. Apparently, Sony now forecasts such high demand for sensors in the coming years that it has decided to invest in all-new production lines.
Sony’s statement reads as follows:
“We expect demand for our image sensors to continue to increase from next fiscal year as well due to the adoption of multi-sensor cameras and larger-sized sensors by smartphone makers.
In order to respond to this strong demand, we have further improved the efficiency of space utilization in our existing factories and have raised our production capacity target for the end of March 2021 from 130,000 wafers per month to 138,000 wafers per month.
Moreover, we have decided to move forward in stages with the investment we had been considering to build new fabs at our Nagasaki facility to accommodate demand from the fiscal year beginning April 1, 2021.
Through this action, we are working to continue growing the I&SS business so as to achieve the mid-range targets we established at the IR Day this year: 60% revenue share of the image sensor market and 20-25% ROIC in the fiscal year ending March 31, 2026.”
*For the sake of clarity, it is necessary to note that Sony’s I&SS division still produces some chips for Sony’s needs, so not 100% of its revenue comes from image sensors
Western Digital has introduced its new WD Red SA500 family of specialized SSDs, which are designed for caching data in NAS devices. The drives are available in four different capacities from 500 GB to 4 TB to satisfy demands of different customers. To maximize their compatibility, the SSDs feature a SATA 6 Gbps interface and come in M.2-2280 or 2.5-inch/7-mm form-factors.
Now that many desktop PCs have either been replaced by laptops or are so small that they cannot house a decent number of capacious hard drives, NAS use is gaining traction among individuals and small businesses who need to store fairly large amounts of data. To provide such customers with high performance (comparable to that of internal storage), many NAS units these days feature a 10GbE network adapter as well as a special bay (or bays) for a caching SSD. However, the vast majority of client SSDs on the market were not designed for pure caching workloads, which are more write-heavy than typical consumer workloads. Seagate, with its IronWolf 110, was the first company to launch an SSD architected for NAS caching early this year, and now Western Digital follows suit with its WD Red SA500 family, which is broader than its rival's lineup.
While it's not being disclosed by the company, Western Digital’s WD Red SA500 SSDs are based on Marvell's proven 88SS1074 controller, and paired with the company’s 3D TLC NAND memory. When it comes to capacities, the new WD Red SA500 drives are available in two form-factors: M.2-2280 models offer 500 GB, 1 TB, and 2 TB capacities, whereas 2.5-inch/7-mm SKUs can store 500 GB, 1 TB, 2 TB and 4 TB of data.
Performance-wise, the WD Red SA500 offers up to 560 MB/s sequential read speeds, up to 530 MB/s sequential write speeds, and up to 95K/85K random read/write IOPS, which is in line with advanced client SATA SSDs. But the key difference between typical client drives equipped with the same controller and the WD Red SA500 is a special firmware optimized for more evenly mixed workloads and engineered to ensure longevity. By contrast, client SSDs are tailored mostly for fast reads.
As far as endurance is concerned, the WD Red SA500 SSDs are rated for 0.32 – 0.38 DWPD over a five-year warranty period, which is in line with modern desktop drives. This is admittedly not especially high for a drive that can fill itself in around an hour, but presumably Western Digital is confident that the caching algorithms in modern NASes are not so aggressive that the drives will be extensively rewritten. Moreover, at the end of the day we are talking about consumer and SMB-class NASes, where the expected workloads are lower than with enterprise systems.
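For those curious how DWPD translates into total writes, here is a quick sketch. The DWPD figures come from the article; the resulting terabytes-written numbers are illustrative, and Western Digital's official TBW ratings may differ slightly:

```python
# Convert a DWPD (drive writes per day) rating over a warranty
# period into total terabytes written (TBW).
# DWPD values are from the article; results are illustrative only.

def tbw_from_dwpd(capacity_gb: float, dwpd: float, years: int = 5) -> float:
    """Total TB that may be written over the warranty at the rated DWPD."""
    return capacity_gb * dwpd * 365 * years / 1000

print(f"500 GB @ 0.38 DWPD: {tbw_from_dwpd(500, 0.38):.0f} TBW")   # ~347 TB
print(f"4 TB   @ 0.32 DWPD: {tbw_from_dwpd(4000, 0.32):.0f} TBW")  # ~2336 TB
```

So even at the low end of the DWPD range, a cache drive can absorb hundreds of terabytes of writes before exhausting its rating.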
The WD Red SA500 Caching SSDs for NAS

| Spec | Value |
|---|---|
| Capacities | 500 GB, 1 TB, 2 TB, 4 TB |
| NAND Flash | 3D TLC NAND |
| Form Factors | M.2-2280 (500 GB – 2 TB), 2.5-inch/7-mm (all capacities) |
| Interface | SATA 6 Gbps |
| Sequential Read | 560 MB/s |
| Sequential Write | 530 MB/s |
| Random Read IOPS | 95K |
| Random Write IOPS | 85K (82K for 4 TB) |
| DRAM Buffer | Yes, capacity unknown |
| TCG Opal Encryption | ? |
| Avg Active Power | 52 – 60 mW |
| Max Read Power | 2050 – 3000 mW |
| Max Write Power | 3350 – 3800 mW |
| Slumber Power | 56 mW |
| DEVSLP Power | 5 – 12 mW |
| MTBF | 2 million hours |
Western Digital’s WD Red SA500 SSDs are currently available directly from the company, with broader availability expected in November. The cheapest 500 GB model costs $72 – $75 depending on the form-factor, the top-of-the-range M.2 2 TB SKU is priced at $297, whereas the highest-capacity 4 TB 2.5-inch model carries a $600 price tag.
Source: Western Digital
Among several items at its developers conference this week, Samsung revealed that it is working on a version of its always-connected Galaxy Book S laptop powered by Intel’s Lakefield processor. When it becomes available in 2020, the notebook is expected to be the first mobile PC powered by Intel’s hybrid SoC, which contains a mix of high-performance and energy-efficient cores.
There are many laptop users nowadays who want their PCs to be very sleek, offer decent performance, be always connected to the Internet, and last a long time on a charge. Modern premium x86-based notebooks are very compact and can be equipped with a 4G/LTE modem, but even when configured properly, the extra radio brings a hit to battery life compared to a non-modem model. The immediate solution is of course to use Intel’s low-power Atom SoCs or Qualcomm's Snapdragon processors tailored for notebooks, but this will have an impact on performance.
To offer both performance and energy efficiency for always-connected notebooks, Intel has developed its Lakefield SoC that features one high-performance Ice Lake core, four energy-efficient Tremont cores, as well as Gen 11 graphics & media cores. Internally, Intel’s Lakefield consists of two dies — a 10 nm Compute die and a 14 nm Base die — integrated into one chip using the company’s Foveros 3D packaging technology to minimize its footprint. Courtesy of Foveros, the chip measures 12×12 mm and can be integrated into a variety of emerging always-connected devices.
As it turns out, Samsung’s upcoming version of the 13.3-inch Galaxy Book S will be the first to use Intel’s Lakefield, where it will be paired with Intel’s 4G/LTE modem to offer Internet connectivity everywhere.
Samsung is not disclosing pricing or availability details for its Lakefield-powered Galaxy Book S; but since Intel plans to start production of the SoC this quarter, expect the machine to launch in 2020.
Razer this year has been on a tear of new product announcements, bringing dozens of devices to market and entering new categories. Adding to the company's lineup, this week the company introduced its Hammerhead wireless earbuds, which promise to reduce the audio lag that Bluetooth headphones are typically known for. The headset is compatible with all Bluetooth devices and supports touch gestures to manage calls, music, and virtual assistants.
Razer’s Hammerhead True Wireless earbuds use a 13-mm driver with a 20 Hz – 20 kHz frequency response, an impedance of 32 Ω ± 15%, and a sensitivity of 91 ± 3 dB @ 1 kHz, as well as an omnidirectional MEMS microphone with a -42 ± 3 dB sensitivity, a ≥55 dB signal-to-noise ratio, and a 300 Hz – 5 kHz frequency response. Each earbud is equipped with a 275 mAh rechargeable Li-Po battery, with Razer touting a battery life of up to three hours.
The headset connects to smartphones (or other devices) using ‘a customized ultra-low latency’ Bluetooth 5.0 connection that reduces lag down to 60 ms in Gaming Mode (enabled in a companion app). Low latency is particularly useful for playing games and watching movies, as audio that lags behind video is clearly annoying. Razer does not say whether the fast Bluetooth connection is enabled by a special piece of hardware, though it looks like the manufacturer has customized its Bluetooth chip and radio using firmware tweaks.
Form-factor wise, the Hammerhead are black in-ear earbuds with silicone ear sleeves that are IPX4 rated for sweat and splash protection. The earbuds are not designed to block out all environmental noise, nor do they feature active noise cancellation, so in this regard they perform like the majority of headsets available today. As for controls, the earbuds can detect single-press, double-tap, triple-tap, triple-tap-and-hold, and two-second-hold gestures to control various aspects of their operation. The gestures can be customized in a companion app for Apple’s iOS and Google’s Android operating systems.
The Hammerhead True Wireless headset comes with a charging case that can recharge the earbuds up to four times, enabling up to 16 hours of total battery life, according to the manufacturer. The case connects to its power brick using a USB Type-C cable.
Razer’s Hammerhead wireless earbuds are now available directly from the company for $99.99 in the US and €119.99 in Europe.
Intel likes 5.0 GHz processors. The one area where it claims a clear advantage over AMD is in its ability to drive the frequency of its popular 14nm process. Earlier this week, we reviewed the Core i9-9990XE, a rare, auction-only CPU with 14 cores at 5.0 GHz, built for the high-end desktop and high frequency trading market. Today we are looking at its smaller sibling, the Core i9-9900KS, built in numbers for the consumer market: eight cores at 5.0 GHz. But you’ll have to be quick, as Intel isn’t keeping this one around forever. Read on for the full review.
Intel has quietly added two new inexpensive processors into its Comet Lake-U lineup. The Pentium Gold 6405U and Celeron 5205U CPUs will be used for entry-level thin-and-light laptops that need one of the latest-generation processors, but are not designed for performance-demanding workloads.
Intel’s Pentium Gold 6405U and Celeron 5205U are dual-core processors that run at 2.4 GHz and 1.9 GHz, respectively. Both CPUs have TDPs of 15 Watts – the same as the rest of the Comet Lake-U family – and include 2 MB of L3 cache, Intel UHD Graphics, a dual-channel DDR4/LPDDR3 memory controller, and 12 PCIe 2.0 lanes for expansion. Both SKUs are considerably cheaper than the rest of the models in the Comet Lake-U series (which start at $281): the Pentium Gold 6405U carries a $161 recommended customer price, whereas the Celeron 5205U costs $107, both when purchased in 1,000-unit quantities.
Intel Comet Lake-U SKUs

| SKU | Cores / Threads | Base (GHz) | 1C Turbo | All-Core Turbo | L3 Cache | TDP | GPU | GPU Freq (MHz) | DDR4 | LPDDR3 | Price (1K units) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Pentium Gold 6405U | 2C/4T | 2.4 | - | - | 2 MB | 15 W | UHD 610? | 950 | 2400 | ? | $161 |
| Celeron 5205U | 2C/2T | 1.9 | - | - | 2 MB | 15 W | UHD 610? | 900 | 2400 | ? | $107 |
Up until now, Intel’s Comet Lake-U family included only four CPUs, three of which were aimed at premium laptops. The addition of considerably cheaper processors allows Intel to address more market segments with its Comet Lake products by equipping its partners to build cheaper systems using the latest motherboard designs.
Otherwise, as is almost always the case for low-end Core SKUs, these are presumably salvaged chips from Intel's production lines. The new Pentium and Celeron chips are clocked lower than the Core i3-10110U, allowing Intel to put to work silicon that otherwise wouldn't have been usable as a Core i3. This is particularly important for Intel at a time when demand for inexpensive U-series mobile CPUs is running high, helping the company please partners who have suffered from tight supply of Intel’s 14 nm processors in recent quarters.
Source: Intel ARK (via SH SOTN)