Today, September 20, 2019 (AnandTech)

ADATA Releases the XPG SX8100 SSD: Make It Fast & Hold the Bling

By Anton Shilov

While many gaming-branded components come adorned in RGB LEDs, there is thankfully still a market for plainer and saner products. To that end, ADATA has introduced its new family of high-end SSDs — the XPG SX8100 — that promises leading-edge performance without any unnecessary bling.

Intending its XPG SX8100 SSDs as high-end parts aimed at performance-demanding consumers, ADATA will offer them in 512 GB, 1 TB, and 2 TB configurations. The drives are based on Realtek’s RTS5762 controller (8 NAND channels, PCIe 3.0 x4, NVMe 1.3, LDPC, etc.) and 3D TLC NAND, and like virtually all mainstream NVMe drives, the SX8100 comes in the M.2-2280 form factor. The new family of SSDs is ADATA’s second lineup of drives (after the XPG Spectrix S40G) to use Realtek's top-of-the-range controller.

As far as performance is concerned, ADATA rates the drives for up to 3.5 GB/s sequential read speeds and up to 3 GB/s sequential write speeds when SLC caching is used (figures based on the CrystalDiskMark benchmark; other benchmarks show lower numbers). As for random performance, the SX8100 drives can hit up to 300K/240K random read/write 4K IOPS, which is a bit lower than the XPG Spectrix S40G.

One of the possible reasons why ADATA rates random performance of the XPG SX8100 below that of the blingy XPG Spectrix S40G could be because the new drives are not equipped with a heat spreader. While these are not necessary for moment-to-moment usage, they can help to sustain performance under high loads when these high-end controllers get hot. The upside to forgoing a heatsink however is that it allows the XPG SX8100 to be used with laptops, as well as any other devices that can't fit an M.2 drive with a heatsink.

When it comes to endurance and reliability, ADATA’s XPG SX8100 drives are covered by a five-year warranty and are rated for 320 TB, 640 TB, or 1280 TB written, depending on the drive's capacity. Overall, the drives are good for around 0.3 DWPD over a five-year period, which is in line with other modern consumer-grade SSDs.
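For those curious where that figure comes from, the conversion from rated TBW to DWPD is straightforward; a minimal sketch (assuming the full five-year warranty period is used as the divisor):

```python
# Convert rated TBW to drive writes per day (DWPD) over the warranty period.
# DWPD = TBW / (usable capacity in TB * warranty days).

WARRANTY_DAYS = 5 * 365

for capacity_tb, tbw in [(0.512, 320), (1.024, 640), (2.048, 1280)]:
    dwpd = tbw / (capacity_tb * WARRANTY_DAYS)
    print(f"{capacity_tb:.3f} TB model, {tbw} TBW -> {dwpd:.2f} DWPD")

# Each capacity works out to ~0.34 DWPD, matching the "around 0.3 DWPD" claim.
```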

ADATA XPG SX8100 Specifications
Capacity: 512 GB | 1 TB | 2 TB
Model Number: ASX8100NP-512GT-C | ASX8100NP-1TT-C | ASX8100NP-2TT-C
Controller: Realtek RTS5762
NAND Flash: 3D TLC NAND
Form Factor, Interface: M.2-2280, PCIe 3.0 x4, NVMe 1.3
Sequential Read: 3500 MB/s (all capacities)
Sequential Write: 2400 MB/s | 3000 MB/s | 3000 MB/s
Random Read IOPS: 300K | 290K | 290K
Random Write IOPS: 240K | 240K | 240K
Pseudo-SLC Caching: Supported
DRAM Buffer: Yes, using Realtek's Partial DRAM Firmware Architecture (actual capacity is unknown)
TCG Opal Encryption: No
Power Management: DevSleep, Slumber (0.14 W)
Warranty: 5 years
MTBF: 2,000,000 hours
TBW: 320 TB | 640 TB | 1280 TB
MSRP: $89.99 | $159.99 | $329.99

ADATA will start sales of its XPG SX8100 SSDs in the near future for $89.99 - $329.99, depending on capacity. Expect real-world prices of these drives to be below those of the XPG Spectrix S40G (which uses the same controller) and more or less in line with those of the XPG SX8200 Pro (which offers similar performance).


Source: ADATA

Western Digital to Exit Storage Systems: Sells Off IntelliFlash Division

By Anton Shilov

Western Digital this week announced that it has made a strategic decision to leave the market for dedicated storage systems, as further development of its IntelliFlash and ActiveScale businesses would require additional investments and management focus. The company will sell off its IntelliFlash business to DDN (a specialist in storage systems, AI, and big data) and will explore various strategic options for ActiveScale.

The storage systems market is rather lucrative, but extremely competitive. Over the years, both Western Digital (as well as its HGST division) and SanDisk acquired numerous companies that specialized in hardware and software for datacenter storage, as well as in all-flash storage arrays, in order to build highly-competitive storage systems (more details in our coverage of the Western Digital - SanDisk acquisition). Because many product families overlapped each other when Western Digital took over SanDisk in 2016, numerous lineups were divested.

At present, Western Digital only offers IntelliFlash all-flash and hybrid storage systems as well as ActiveScale cloud storage systems. While both product lines look solid in general, they have to compete against broad families of storage systems designed by such giants as Dell EMC, HPE, IBM, NetApp, and Hitachi, which control over 50% of the market (according to IDC). Competing against multi-billion-dollar enterprises is tough. Moreover, Western Digital supplies its products to many developers of storage systems, and the latter certainly do not appreciate it when their suppliers compete against them.

After closing out its storage systems business, Western Digital will continue to offer its storage servers (including JBOF, JBOD, hybrid, and specialized machines) for customers with their own software and infrastructure. Furthermore, the company will keep developing its scalable and flexible OpenFlex NVMe-over-Fabrics composable architecture. Essentially, Western Digital will refocus from storage systems to storage platforms, which is a more hardware-centric business.

Here is what Mike Cordano, president and chief operating officer of Western Digital, had to say:

“As we look to the future, scaling and accelerating growth opportunities for IntelliFlash and ActiveScale will require additional management focus and investment to ensure long-term success. By refocusing our Data Center Systems resources on our Storage Platforms business, we are confident that the Western Digital portfolio will be better positioned to capture significant opportunities ahead and drive long-term value creation.”

Under the terms of the agreement with DDN (DataDirect Networks), the latter will buy out the entire IntelliFlash business unit for an undisclosed sum. Furthermore, the two companies will expand their current collaboration through a multi-year strategic sourcing contract, under which DDN will increase its purchases of Western Digital’s HDDs and SSDs.

Western Digital and DDN expect the deal to close later this year.


Source: Western Digital

AMD: Next Gen Threadripper and Ryzen 9 3950X, Coming November

By Dr. Ian Cutress

In a shock email late on Friday, AMD released a statement to clarify the situation with its latest Ryzen processors. The positive: the next generation of Threadripper processors will enter the market in November. The negative: AMD is delaying the release of the 16-core Ryzen 9 3950X until November as well, citing high demand for these parts and the time needed to ensure that sufficient stock is available.

The statement from AMD says:

We are focusing on meeting the strong demand for our 3rd generation AMD Ryzen processors in the market and now plan to launch both the AMD Ryzen 9 3950X and initial members of the 3rd Gen AMD Ryzen Threadripper processor family in volume this November. We are confident that when enthusiasts get their hands on the world’s first 16-core mainstream desktop processor and our next-generation of high-end desktop processors, the wait will be well worth it.

As far as we understand, this has nothing to do with recent reports of TSMC requiring 6 months for new 7nm orders: the silicon for these processors would have been ordered months ago, with the only real factors being binning and meeting demand. It will be interesting to see how the intersection of the 16-core chip and the next-gen Threadripper parts will play out.


Western Digital Launches iNAND IX EM132: eMMC SSDs For Embedded Industrial Applications

By Anton Shilov

Western Digital this week has introduced its first family of embedded eMMC storage devices for industrial and IoT applications. Based on the company's 64-layer BiCS3 3D TLC NAND memory, the new iNAND IX EM132 drives offer up to 310 MB/s read speeds as well as enhanced endurance and reliability by supporting various features designed specifically for embedded, commercial, industrial, and similar environments.

Western Digital’s iNAND IX EM132 embedded flash drives are based around an in-house controller that supports the eMMC 5.1 HS400 interface along with advanced ECC, wear leveling, bad block management, and RPMB (replay protected memory block). The eMMC drives also support smart partitioning (multiple partitions with different features and purposes, to provide device makers some additional flexibility), auto/manual data refresh (automatically rewriting all the information to ensure that even rarely accessed data is available when needed), as well as all the usual management and monitoring interfaces you'd expect from a contemporary SSD.

Available in an industry-standard BGA package that measures 11.5×13×1.3 mm, Western Digital is offering capacities between 16 GB and 256 GB. When it comes to performance, the eMMC drives are rated for up to 310 MB/s sequential read speeds, up to 150 MB/s sequential write speeds, and up to 20K/12.5K random read/write IOPS. As for endurance, the drives are rated for up to 693 TB to be written, though that rating is likely based on the high-capacity SKUs.

Western Digital will offer Commercial, Industrial Wide, and Industrial Extended versions of its iNAND IX EM132 eMMC drives. The Industrial Wide devices feature an operating temperature rating between -25°C and 85°C, whereas the Industrial Extended can operate in the most extreme environments, with temperature ranges between -40°C and 85°C.

Western Digital’s iNAND IX EM132 Embedded Flash Drives
JEDEC Specification: v5.1, HS400
Flash Type: 64-layer BiCS3 3D TLC NAND
Density: 16 GB, 32 GB, 64 GB, 128 GB, 256 GB
Sequential Read/Write: 310 MB/s / 150 MB/s
Random Read/Write: 20K / 12.5K IOPS
Operating Temperature: Industrial Wide: -25°C to 85°C; Industrial Extended: -40°C to 85°C
TBW: up to 693 TB
Core Voltage: 2.7 V - 3.6 V
I/O Voltage: 1.7 V - 1.95 V or 2.7 V - 3.6 V
Package: 153-ball FBGA (11.5 × 13.0 × 1.3 mm)


Source: Western Digital

HP’s E344c: A 34-Inch Curved Ultra-Wide Productivity Monitor

By Anton Shilov

Having launched a variety of curved ultrawide displays for gamers in recent years, HP is rolling out similar monitors for business and professional users from many industries who want to boost their productivity. This week, along with its flagship S430c curved LCD, HP introduced the more mainstream E344c Curved Monitor, which brings numerous contemporary features to the table and is aimed at a much broader commercial audience.

Offering a 21:9 ultrawide aspect ratio, the HP E344c Curved Monitor relies on a 34-inch SVA panel with a 3440×1440 resolution, 400 nits brightness, a 3000:1 contrast ratio, a 16 ms GtG response time, a 60 Hz refresh rate, and 178°/178° viewing angles. HP has designed this price-friendly monitor as a day-to-day workhorse, so there is no factory calibration to speak of or support for wide color gamuts; in fact, the company doesn't even officially disclose the monitor's sRGB gamut coverage. As such, it's not unreasonable to guess that it may not reach the 99% sRGB coverage of some other models.

When it comes to connectivity, the E344c Curved Monitor resembles contemporary displays designed for professionals, offering one DisplayPort 1.2 input, one HDMI 2.0 port, and one USB Type-C input (DP alt mode). The LCD also has a dual-port USB 3.0 hub that is fed by a USB Type-B upstream port.

Since we are dealing with a display designed purely for work, HP did not equip it with speakers or even a headphone output. For those who do not want to use an external speaker system, HP offers its S101 Sound Bar, which attaches to the bottom of the monitor and uses a USB Type-A connector.

As far as ergonomics is concerned, like many other displays for office/home office environments, the HP E344c features a stand that can adjust height and tilt.

HP's 34-Inch Curved Display: E344c Curved Monitor
Panel: 34" SVA
Native Resolution: 3440 × 1440
Brightness: 400 cd/m²
Contrast: 3000:1
Maximum Refresh Rate: 60 Hz
Response Time: 16 ms GtG
Viewing Angles: 178°/178° horizontal/vertical
Curvature: ?
Pixel Pitch: 0.233 mm
Pixel Density: 109 ppi
Anti-Glare Coating: ?
Inputs: 1 × DisplayPort 1.2, 1 × HDMI 2.0, 1 × USB Type-C (with up to 22.5 W PD)
USB Hub: 2-port USB 3.0 hub
Stand: Height: ±150 mm; Tilt: -5° to +20°; Swivel: ?
Audio: none
Launch Price: $599

HP will start sales of its E344c Curved Monitor on October 7. As expected for a 'workhorse' display, the LCD will not be too expensive, carrying a $599 price tag.


Source: HP

CXL Consortium Formally Incorporated, Gets New Board Members & CXL 1.1 Specification

By Anton Shilov

Over four years ago, Intel started to develop what is now known as Compute Express Link (CXL), an interface to coherently connect CPUs to all types of other compute resources. Over time, Intel collaborated with other industry behemoths, and early this year nine companies organized the CXL Consortium to jointly develop the technology as a new open standard. Over the past few months, dozens of additional companies have joined the consortium, and this week the consortium itself was formally incorporated, marking a major step in the development of CXL as an industry standard.

While incorporation itself doesn't change matters for CXL from a technical perspective, incorporating a group like the CXL Consortium is a fairly big deal, because this typically only happens when an industry standards group gets large enough and gains enough traction that its members are very confident the technology is soon to go into widespread use. This means that the CXL Consortium has been elevated to the same level as the USB-IF, VESA, and other standards groups. Which is to say, all signs point to CXL eventually winning the war of cache-coherent interconnects, and becoming a major, long-term industry standard.

Meanwhile, wasting no time, the newly-incorporated organization has named five additional members of its board of directors, and it has released version 1.1 of the CXL specification.

Support Growth & New BOD Members

Being a CPU-to-everything cache-coherent interconnect protocol, CXL competes in one way or another against such technologies as CCIX, Gen-Z, Infinity Fabric, NVLink, and OpenCAPI, so broad industry support is tremendously important for the technology. Originally founded by Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft, the CXL Consortium has gained over 50 additional members over the past few months. The consortium now counts nearly 70 companies and organizations in its ranks, from developers of CPUs, GPUs, FPGAs, SSDs, interconnects, servers, and other hardware as well as from software developers and cloud service providers.

Among the companies that recently joined the CXL Consortium are AMD, Arm, IBM, and Xilinx. To that end, the organization appointed five new members to its board of directors from AMD, Arm, IBM, Microchip, and Xilinx. The expanded board of directors now includes 13 members and looks as follows.

CXL Consortium: Members of the Board
Company | Person | Position
Alibaba | Di Xu | ?
AMD | Nathan Kalyanasundharam | Senior Fellow
Arm | Dong Wei | Standards Architect and Fellow
Cisco | Sagar Borikar | Principal Engineer, Data Center Systems Engineering
Dell EMC | Kurtis Bowman | Director of Technology and Architecture, Dell Server CTO Office
Facebook | Chris Petersen | Hardware Systems Technologist
Google | Rob Sprinkle | Technical Lead, Platforms Infrastructure
HPE | Barry McAuliffe | ?
IBM | Steve Fields | Fellow and Chief Engineer of Power Systems
Intel | Jim Pappas | Director of Technology Initiatives, Data Center Group
Microchip | Larrie Carr | Fellow, Technical Strategy and Architecture, Data Center Solutions
Microsoft | Leendert van Doorn | Distinguished Engineer
Xilinx | Gaurav Singh | Corporate Vice President

CXL 1.1 Published

Back in March, the nine founding members of the CXL Consortium published version 1.0 of the specification. Several refinements have been made since, so this week the organization published version 1.1 of the spec. Unfortunately, the organization does not publicly disclose what changes the new revision brings; though coming this soon after 1.0, it's likely little more than minor tweaks to address underdefined behavior and satisfy the needs of some of the new members.

As a refresher, CXL is designed to enable heterogeneous processing (by using accelerators) and memory systems (think memory expansion devices). The low-latency interconnect runs on the PCIe 5.0 PHY stack at 32 GT/s and natively supports x16, x8, and x4 link widths. CXL encompasses three protocols: the mandatory CXL.io, as well as CXL.cache for cache coherency and CXL.memory for memory coherency, which are needed to effectively manage latencies. When it comes to performance, a CXL-compliant device will enjoy 64 GB/s of bandwidth in each direction when installed into a PCIe 5.0 x16 slot. In addition, the protocol also supports degraded modes at 16.0 GT/s and 8.0 GT/s data rates, as well as x2 and x1 links.
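As a sanity check on that 64 GB/s figure, the arithmetic follows directly from the link parameters; a quick sketch (raw line rate with 128b/130b encoding; real throughput loses a bit more to packet and protocol overhead):

```python
# Per-direction bandwidth of a PCIe-style serial link.
def link_bandwidth_gbps(gt_per_s, lanes, encoding=128 / 130):
    return gt_per_s * lanes * encoding / 8  # gigatransfers/s * bits -> GB/s

print(link_bandwidth_gbps(32, 16))  # ~63.0 GB/s after 128b/130b encoding
print(32 * 16 / 8)                  # 64.0 GB/s raw, the commonly quoted figure
```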


Source: CXL Consortium

The TeamGroup L5 LITE 3D (480GB) SATA SSD Review: Entry-Level Price With Mainstream Performance

By Billy Tallis

The TeamGroup L5 LITE 3D is an older SATA drive that has consistently been one of the cheapest drives on the retail market. Since it doesn't cut corners with a DRAMless design, it is a step up from entry-level drives and is still a reasonable alternative to mainstream SATA SSDs from the top-tier brands.

Samsung’s PCIe Gen 4 Enterprise SSDs Get Reliability & Performance Boost

By Billy Tallis & Anton Shilov

Almost a year after outlining their first roadmap for PCIe 4.0 SSDs, Samsung's first two models are in mass production: the PM1733 and PM1735 high-end datacenter SSDs. Details about these new models have been slow to come out, but Samsung is now talking about three major improvements they bring over earlier SSDs in addition to the raw performance increases enabled by PCIe 4.0. The list of improvements includes fail-in-place (FIP) technology to boost reliability of drives, SSD virtualization technology to guarantee consistent performance for VDI and similar use cases, as well as V-NAND machine learning technology to predict and verify characteristics of NAND cells.

Fail-In-Place

Samsung’s fail-in-place (FIP) technology promises to allow the SSD to robustly handle hardware failures that would otherwise be fatal to the SSD, up to the failure of an entire NAND die. For the highest-capacity 30.72TB PM1733, the drive can keep running more or less normally even with the loss of any one of its 512 NAND flash dies. The drive will scan for corrupted or lost data, reconstruct it and relocate it to a still-working flash chip, and continue to operate with high throughput and QoS. In essence, this is like a RAID-5/6 array running in degraded mode instead of the whole array going offline. It's still wise to eventually replace an SSD after it suffers such a severe malfunction, but Samsung's FIP technology means that replacement can be done at the operator's convenience instead of the problem causing immediate downtime.

The addition of fail-in-place doesn't change the fact that the PM1733 and PM1735 have write endurance ratings of 1 and 3 drive writes per day, respectively. The overall lifespan is still comparable to the previous generation of drives, but the chance of a premature death due to causes other than normal NAND wear has been greatly reduced.

Virtualization

Next up, Samsung has added virtualization technology to the PM1733 and PM1735 SSDs. Samsung has implemented the optional NVMe virtualization features based on Single-Root I/O Virtualization (SR-IOV), allowing a single NVMe SSD controller to provide numerous virtual controllers (up to 64 in the case of Samsung's drives). Each virtual controller can be assigned to a different VM running on the host system, and provide storage to that VM with no CPU overhead—the same as if the entire drive had been assigned to a single VM with PCIe passthrough. Storage capacity on each SSD can be flexibly allocated to different namespaces that can in turn be attached to the relevant virtual controller.
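To illustrate how such virtual controllers are typically surfaced on the host side, the sketch below uses Linux's standard sysfs interface for SR-IOV devices. This is generic SR-IOV plumbing rather than anything Samsung-specific, and the PCI address is hypothetical:

```python
# Enable SR-IOV virtual functions (VFs) on an NVMe controller under Linux.
from pathlib import Path

DEV = Path("/sys/bus/pci/devices/0000:3b:00.0")  # hypothetical NVMe controller

# The device advertises how many VFs it can expose (up to 64 on these drives,
# per Samsung). Each enabled VF shows up as its own PCI function that can be
# passed through to a VM (e.g. via VFIO) and attached to its own namespace.
total_vfs = int((DEV / "sriov_totalvfs").read_text())
print(f"controller supports up to {total_vfs} virtual functions")

(DEV / "sriov_numvfs").write_text("8")  # expose 8 virtual controllers
```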

Machine Learning

The third technology introduced by Samsung is V-NAND machine learning. The company does not disclose precise details about how it is making use of machine learning, saying only that it is used to predict and analyze characteristics of flash cells, including by detecting variations among circuit patterns. With 3D NAND, it is increasingly difficult to get by with one-size-fits-all strategies for cell programming, reading, and error correction. Even tracking the P/E cycles each block has been through isn't enough; there can be significant variation between layers near the top and bottom of the 3D stack, and from one die to another. Samsung is hardly alone in turning to machine learning strategies to tackle these complexities. The new capability should ensure consistent performance and improved reliability of today’s drives powered by TLC V-NAND, but its importance will grow dramatically in the case of QLC V-NAND-based drives.

The first drives that can take advantage of the new features are already shipping to interested parties. The PM1733 and PM1735 are based on a common hardware platform. The PM1733 is rated for 1 DWPD and offers capacities up to 30.72 TB, while the PM1735 has more overprovisioning and lower usable capacities to reach 3 DWPD. Both models are available in either U.2 or PCIe add-in card form factors. The U.2 form factor gives a few more capacity options, while the add-in card versions have a PCIe 4.0 x8 interface to enable 25% higher sequential read performance (for other workloads, PCIe 4.0 x4 is fast enough to not be the bottleneck).
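The x8 interface on the add-in card follows straight from link arithmetic; a rough sketch (using 128b/130b-encoded line rates and ignoring higher-level protocol overhead):

```python
# Effective per-direction bandwidth of a PCIe 4.0 link (16 GT/s per lane).
def pcie4_effective_gbps(lanes):
    return 16 * lanes * (128 / 130) / 8

print(pcie4_effective_gbps(4))  # ~7.9 GB/s: the x4 ceiling
print(pcie4_effective_gbps(8))  # ~15.8 GB/s: headroom for faster sequential reads
```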

 


Source: Samsung

GIGABYTE’s Aorus Gen4 AIC SSD 8 TB Launched: Up to 15 GB/s

By Anton Shilov

After originally showcasing it at Computex a bit earlier this year, GIGABYTE has officially introduced its quad-SSD PCIe 4.0 adapter card, the AORUS Gen4 AIC. Designed to house up to four NVMe SSDs, the card is essentially a multi-way M.2 adapter, allowing a PCIe 4.0 x16 slot to be used to drive four x4 SSDs. Fittingly, with so many high-end SSDs on a single board, the card also features an active cooling system to ensure that the drives run at consistent speeds even under high loads. Fully populated with PCIe 4.0 SSDs, the card is rated to provide up to a staggering 15 GB/s of throughput – at least, if you can come up with a workload that can saturate such a setup.

GIGABYTE’s Aorus Gen4 AIC SSD 8 TB is a PCIe 4.0 x16 board with eight PCIe Gen 4 re-drivers. The card in turn carries four 2 TB M.2-2280 SSDs based on Phison’s PS5016-E16 controller, which remains the only client SSD controller with a PCIe 4.0 x4 interface. The card also features a sophisticated cooling system comprising a large copper heatsink, a 5-cm ball bearing fan, and a baseplate, along with eight thermal sensors to monitor everything. That monitoring, in turn, is provided by the Aorus Storage Manager software, which can also configure the cooling on the card and supports three fan operating modes: Silent, Balanced, and Performance.

When running in RAID 0 mode, the Aorus Gen4 AIC SSD 8 TB offers up to 15 GB/s sequential read/write speeds, as well as 430K/440K read/write IOPS. It goes without saying that this is a throughput-focused card, as outside of the difficulty in even coming up with that many IOPS in a client workload, RAID modes don't really improve IOPS.

While the sequential performance of the Aorus Gen4 AIC SSD 8 TB looks extremely attractive, there is a caveat. The only enthusiast-class PCIe Gen 4-supporting platform today is AMD’s Ryzen 3000, and these CPUs only support 24 PCIe 4.0 lanes: x16 for an add-in-card, x4 for an NVMe SSD, and x4 to connect to the chipset. As a result, to make full use of the card you have to give up a board's sole PCIe 4.0 x16 slot for the SSD, which makes this a niche product for systems that don't need a powerful dGPU. Otherwise, installing the drive into a PCIe x16 slot controlled by AMD’s X570 chipset would cause it to be bottlenecked by the PCIe 4.0 x4 link between the chipset and the CPU.
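A quick sketch of the lane budget makes the trade-off concrete (raw link rates; encoding and protocol overhead shave these numbers a little further):

```python
# PCIe 4.0 raw per-direction bandwidth: 16 GT/s per lane.
def pcie4_raw_gbps(lanes):
    return 16 * lanes / 8

card_rating = 15.0                  # GB/s, GIGABYTE's RAID 0 figure
cpu_slot = pcie4_raw_gbps(16)       # 32.0 GB/s: plenty of headroom
chipset_uplink = pcie4_raw_gbps(4)  # 8.0 GB/s: the link behind X570

print(f"CPU x16 slot: {cpu_slot} GB/s, chipset uplink: {chipset_uplink} GB/s")
print(f"card rated at {card_rating} GB/s -> the chipset route caps it at "
      f"~{chipset_uplink} GB/s, roughly half its rating")
```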

That said, the Aorus Gen4 AIC SSD 8 TB can show itself in all its glory either in a workstation based on AMD’s EPYC 7000-series processors with up to 128 PCIe Gen 4 lanes, or, presumably, in a future high-end desktop based on a next-generation AMD Threadripper CPU.


Source: GIGABYTE (via Hermitage Akihabara)

Yesterday, September 19, 2019 (AnandTech)

Sony’s Micro LED-Based Ultra-HD TVs Available to Consumers: 2K to 16K Resolutions, Up to 790 Inches

By Anton Shilov

Sony this month started to offer its Micro LED-based displays to well-funded consumers. Officially branded as Crystal LED direct view display systems (aka CLEDIS), these ultra high-end products were previously only available for commercial installations. Designed to offer superior contrasts, brightness levels, and viewing angles, Sony’s Crystal LED TVs are designed to replace projector-enabled home theaters and will be available in 2K, 4K, 8K, and 16K versions with sizes of up to 790 inches.

Sony’s Crystal LED display systems rely on bezel-less Micro LED modules that are built using 0.003-mm² individually-controlled LEDs. The modules offer up to 1000 nits peak brightness, around 1,000,000:1 contrast ratio, up to a 120 Hz refresh rate, as well as nearly 180° viewing angles. According to Sony, such a display can cover 140% of the sRGB color space or around 100% of the DCI-P3 color gamut.

Since the micro LED modules are rather large – even though the LEDs themselves are a fraction of the size of normal LEDs, the large number of micro LEDs adds up – the size of a Full-HD Crystal LED display system is around 110 inches in diagonal. Meanwhile the 4K unit doubles that, to 220 inches. Since we are dealing with devices that are designed to replace projection-powered home theaters, such sizes are well justified, but they are naturally too large for an average home.

Sony's Consumer Crystal LED Display Systems
  | Full HD | 4K | 8K | 16K
Number of CLED Modules: 18 | 72 | 288 | 576
Diagonal: 110 inches | 220 inches | 440 inches | 790 inches
Dimensions (W×H): 8 ft × 4 ft (2.43 m × 1.22 m) | 16 ft × 9 ft (4.87 m × 2.74 m) | 32 ft × 18 ft (9.75 m × 5.48 m) | 63 ft × 18 ft (19.2 m × 5.48 m)
Approximate Price (at $10,000 per module): $180,000 | $720,000 | $2,880,000 | $5,760,000

Sony’s Crystal LED-based display systems for residential installation will be available through a select group of individually trained and certified Sony dealers. The devices will be supported by Sony’s technicians, who will be able to remotely monitor displays after their installations to provide ongoing service.

Sony is not publicly quoting prices for its consumer Crystal LED products, but there are estimates that each module costs around $10,000 per unit. This would mean that a Full-HD version, which consists of 18 modules, costs over $180,000, whereas a 4K system will be priced at over $720,000.
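The module counts in the table are consistent with the tile size Sony uses in its commercial CLEDIS walls, where each bezel-less module is 320 × 360 pixels; treating the consumer modules as the same size (an assumption on our part), the counts and price estimates fall out directly:

```python
# Module counts and estimated prices, assuming 320 x 360-pixel CLEDIS tiles.
MODULE_W, MODULE_H = 320, 360
COST_PER_MODULE = 10_000  # USD, per the estimates cited above

walls = {"Full HD": (1920, 1080), "4K": (3840, 2160),
         "8K": (7680, 4320), "16K": (15360, 4320)}

for name, (w, h) in walls.items():
    modules = (w // MODULE_W) * (h // MODULE_H)
    print(f"{name}: {modules} modules, ~${modules * COST_PER_MODULE:,}")

# Full HD: 18 modules (~$180,000) up through 16K: 576 modules (~$5,760,000),
# matching the table. Note the "16K" wall is 16K wide but 4K tall.
```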


Sources: Sony, TechHive

Huawei Launches Mate 30 & Mate 30 Pro 4G and 5G Variants: First Step Away From Google

By Andrei Frumusanu

Today at Huawei’s global launch event in Munich, the company detailed its new Mate 30 and Mate 30 Pro flagship devices. The two new phones continue Huawei’s focus on innovating in the photo capture department, with the new Mate 30 Pro introducing innovative camera features and hardware. Naturally, as is tradition with the Mate series, it represents Huawei’s pioneer line-up in which it introduces its newest technologies, such as the brand-new Kirin 990 and Kirin 990 5G. The new Mate 30 series also introduces new designs and hardware builds, increasing battery life and minimising the weight of the phones.

The Mate 30 Pro in particular introduces a new true edge-to-edge display that curves to the sides at up to 90°, representing a brand-new form-factor and new ergonomics as Huawei does away with physical buttons. Beyond all the hardware, the biggest news about Huawei’s newest devices is the fact that they will not come out of the box with Google Play or Google services preinstalled, representing a tectonic shift in the industry that’s bound to have reverberations for the next several years.

Huawei Mate 30 Series
  | Mate 30 | Mate 30 Pro (Mate 30 Pro 5G)
SoC: HiSilicon Kirin 990: 2x Cortex-A76 @ 2.86 GHz, 2x Cortex-A76 @ 2.09 GHz, 4x Cortex-A55 @ 1.86 GHz
(Kirin 990 5G: 2x Cortex-A76 @ 2.86 GHz, 2x Cortex-A76 @ 2.36 GHz, 4x Cortex-A55 @ 1.95 GHz)
GPU: Mali-G76MP16 @ 600 MHz (Kirin 990 5G: Mali-G76MP16 @ 700 MHz)
DRAM: 8 GB LPDDR4X
Display: 6.62" OLED, 2340 × 1080 (19.5:9) | 6.53" OLED, 2400 × 1176 (18.4:9), edge-to-edge
Height: 160.8 mm | 158.1 mm
Width: 76.1 mm | 73.1 mm
Depth: 8.4 mm (5G: 9.2 mm) | 8.8 mm (5G: 9.5 mm)
Weight: 196 grams | 198 grams
Battery Capacity: 4100 mAh rated / 4200 mAh typical, 40 W charging | 4400 mAh rated / 4500 mAh typical, 40 W charging
Wireless Charging: 27 W charging + reverse charging
Rear Camera (Main): 40 MP f/1.8, RYYB sensor, 27 mm equiv. FL | 40 MP f/1.6 OIS, RYYB sensor, 27 mm equiv. FL
Rear Camera (Telephoto): 8 MP f/2.4 OIS, 3x optical zoom, 80 mm equiv. FL
Rear Camera (Wide): 16 MP f/2.2, ultra-wide angle, 17 mm equiv. FL | 40 MP f/1.8, RGGB sensor, ultra-wide angle, 18 mm equiv. FL, 720p 7680 fps video capture
Rear Camera (Extra): - | 3D depth camera
Front Camera: 24 MP f/2.0 | 32 MP f/2.0
Storage: 128 / 256 GB + proprietary "nanoSD" card
I/O: USB-C, 3.5mm headphone jack | USB-C
Wireless (local): 802.11ac (Wi-Fi 5), Bluetooth 5.1
Cellular: 4G LTE (5G models: 4G + 5G NR NSA+SA Sub-6GHz)
Splash, Water, Dust Resistance: IP53 (no water resistance) | IP68 (water resistant up to 1 m)
Dual-SIM: 2x nano-SIM
Launch OS: Android 10 w/ EMUI 10, without Google services
Launch Price: 8+128 GB: 799€ | 8+256 GB: 1099€ (5G 8+256 GB: 1199€)

Starting off with the heart of the phones, both the Mate 30 and Mate 30 Pro are powered by the new Kirin 990 chipsets. As we covered the silicon in more detail at its launch a few weeks ago, this year we’re actually talking about two distinct new chips: the regular Kirin 990, and the Kirin 990 5G. As the name suggests, the 5G variant of the chip includes a new integrated modem with support for Sub-6GHz 5G NR connectivity.

The company was conservative in terms of the IP of the new Kirin chipset, as once again the release timing of the silicon wasn’t in sync with the newest generation designs from Arm. Thus the chip again makes use of the existing Cortex-A76 CPU core, but this time around it bumps the frequency up to 2.86GHz on the two fastest CPU cores. Depending on whether you get the regular or the 5G variant, you’ll end up with a further two A76 efficiency cores at either 2.09GHz or 2.36GHz, and the same differences are found in the A55 small cores, which come in a quad-core configuration at 1.86GHz or 1.95GHz. The GPU configuration is a Mali-G76MP16 at either 600MHz or 700MHz. HiSilicon is able to use the higher frequencies on the 5G model as it’s manufactured on TSMC’s new N7+ node, which makes use of EUV, whereas the regular variant remains on the existing N7 node.

Huawei was able to increase the battery sizes of the phones to up to 4200mAh for the Mate 30 and 4500mAh for the Mate 30 Pro by increasing the density of the battery cells. This generation, Huawei has also paid more attention to the resulting weight of the phones, managing to remain under 200g at 196g and 198g for the non-Pro and Pro models, respectively.



Mate 30 Pro

The Mate 30 Pro is certainly the more interesting of the two new devices when it comes to design. The Mate 30 Pro employs a new true edge-to-edge OLED screen which curves to 90° around the edges – essentially making this the first actually bezel-less phone out there, as you’ll be seeing pure screen when viewing the phone from the front. Unfortunately, it seems Huawei this year has opted to go back to 1080p-class screens for the Pro model, reducing the resolution from the 1440p that was uniquely shipped last year with the Mate 20 Pro. Either the Mate 30 Pro will have outstanding battery life, or Huawei still hasn’t figured out how to efficiently implement 1440p in its phones.



Mate 30

The Mate 30 Pro comes at a slightly weird resolution of 2400 x 1176, which results in an aspect ratio of 18.4:9. The reason for this is that some of those pixels are actually part of the wrap-around portion of the screen. The regular Mate 30 has a more traditional 2340 x 1080, 19.5:9 resolution and aspect ratio.

Even though the Mate 30 Pro lists a 6.53” diagonal screen size, because of the wrap-around aspect of it, the phone is actually only 73.1mm wide, which is slightly smaller than the usual “large” form-factor we’re used to; it's close to last year's Mate 20 Pro, which was 0.8mm narrower.

Both phones still come with display notches, however Huawei was able to reduce their size notably, and rationalises their presence with the full plethora of sensors housed there, including a 3D depth camera, the usual ambient & proximity sensors, the 32MP or 24MP front-facing camera, as well as a new gesture sensor.

The big stars, and a big part of the presentation today, were the phones' cameras. The Mate 30 was more conservative in this regard; essentially, we’re seeing the same camera setup as on the P30, with the exception of the addition of OIS on the main sensor. The main camera sensor for both phones is again the 40MP RYYB sensor employed in the P30 series, however the Mate 30’s pictures should notably improve thanks to the newer generation ISP in the Kirin processors. The aperture on the Pro unit is a larger f/1.6, while the regular Mate 30 makes do with f/1.8 optics.

The telephoto module on both phones is the tried and trusted 3x telephoto lens with an 8MP sensor and an f/2.4 aperture. Personally I wasn’t too convinced by the periscope 5x module of the P30 Pro, so I’m glad Huawei stuck with the more traditional module in the Mate 30s.

It’s in the wide-angle lens where the two phones drastically differ. While the Mate 30 has seemingly the same 16MP f/2.2 module as on the P30, the new Mate 30 Pro introduces a brand-new, industry-first sensor of its type. The Mate 30 Pro’s wide-angle module makes use of a new, equally large 1/1.54” sensor with a 40MP resolution. The sensor is a regular Bayer RGGB layout, unlike the RYYB 40MP main camera sensor. It employs a wide f/1.8 aperture with a wide-angle view equivalent to an 18mm focal length.

The new sensor however not only serves as the wide-angle eyes for the phone, but also has unique video recording capabilities. Huawei has now finally introduced 4K60 recording, and also supports HDR+ video formats. The most eyebrow-raising feature of the new module however is its quoted 7680fps slow-motion capture. Huawei demoed 2000fps samples at the presentation, and lists the 7680fps mode as recording at 720p.
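To put that headline number in perspective, the raw data rate alone explains the 720p cap; a rough sketch (assuming 4:2:0-subsampled output at 12 bits per pixel, and noting that such modes almost certainly burst into an on-sensor or DRAM buffer rather than streaming to storage):

```python
# Raw frame data rate for 720p capture at 7680 fps.
width, height, fps = 1280, 720, 7680
bits_per_pixel = 12  # assumed 4:2:0 chroma subsampling (NV12-style)

rate_gb_s = width * height * bits_per_pixel * fps / 8 / 1e9
print(f"~{rate_gb_s:.1f} GB/s of raw frame data")  # ~10.6 GB/s
```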

No Google Play Services or Play Store

Probably the biggest and most important announcement today wasn’t the Mate 30 or Mate 30 Pro as devices, but the fact that the new units will not be released with Google’s services, such as the Play Store or Play Services, preinstalled on the phone. Unfortunately, this is the end result of the ongoing trade war between the US and China, with Huawei considering itself used as a bargaining chip and pawn in the US’s decision to block the company from all commercial interaction with US companies, something the company describes as having nothing to do with security or even 5G infrastructure concerns.

The implications here are huge, both for Huawei and for the overall industry; there will be no clear winners on either side, and in the long term it seems Google and the Android ecosystem have more to lose. Huawei is pushing forward with a full replacement of Google’s services, with the alternative being called “Huawei Mobile Services”, or HMS, in order to offer the same functionality that was offered by Google’s GMS.

In a follow-up interview, Richard Yu clarified one important question in regards to how much control users will have over the software they’ll be able to install on the Mate 30 and Mate 30 Pro; beyond offering its own app store, called the “Huawei App Gallery”, Yu said that it might be possible for users to install the GMS core onto the phones through either third-party app stores or websites. Huawei is likely to impose as few limitations as possible on what users can do with their phones, and Yu also confirmed that users will be able to unlock the bootloaders of the Mate 30 and 30 Pro.

Should the trade sanctions imposed by the US be lifted, Richard Yu explained, Huawei would be ready to “immediately” reintegrate the Google services and applications into its firmware and push out updates to its phones.

In the meantime, Huawei is planning long-term and investing in its own HMS ecosystem. In order to attract developers and gain traction, the company is making available a $1bn fund for developers, the ecosystem, and marketing, to offer alternatives for its users. One way to attract developers, Yu explained, is to adopt a 15/85 revenue share on app purchases, giving developers a larger piece of the pie than the 30/70 share that is currently in place for the Play Store and the iOS App Store. Huawei says that it doesn’t want to do this, but given the circumstances, it is forced to.

Availability & Pricing

The Mate 30 and Mate 30 Pro will be available in October in China and select European markets, with availability in the remaining European countries coming a bit later. The Mate 30 comes in an 8+128GB configuration for 799€, the Mate 30 Pro comes in an 8+256GB configuration for 1099€, and finally the 5G variant of the Mate 30 Pro comes in at 1199€.


The Huawei Mate 30 Launch Event Live Blog (Starts at 8am ET/12:00 UTC)

By Dr. Ian Cutress

The busy fall period for smartphone launches continues. Today in Munich, Germany, Huawei is holding the launch event for its Mate 30 family of smartphones, the latest generation of flagship phones from the company. The underlying Kirin 990 SoC was already announced a couple of weeks back at IFA, and now we'll get to see the rest of what Huawei has in store for its next generation of smartphones.

Western Digital Reveals 18 TB DC HC550 'EAMR' Hard Drive

By Anton Shilov

Marking an important step in the development of next-generation hard drive technology, Western Digital has formally announced the company’s first hard drives based on energy-assisted magnetic recording. Starting things off with capacities of 16 TB and 18 TB, the Ultrastar DC HC550 HDDs are designed to offer consistent performance at the highest (non-SMR) capacities yet. And, with commercial sales expected to start in 2020, WD is now in a position to become the first vendor in the industry to ship a next-generation EAMR hard drive.

18 TB Sans SMR

The Western Digital Ultrastar DC HC550 3.5-inch hard drive relies on the company’s 6th Generation helium-filled HelioSeal platform with two key improvements: the platform features nine platters (for both the 16 TB and 18 TB versions), and they use what WD is calling energy-assisted magnetic recording (EAMR) technology. The latter has enabled Western Digital to build 2 TB platters without using shingled magnetic recording (SMR).

Since we are dealing with a brand-new platform, the Ultrastar DC HC550 also includes several other innovations, such as a new mechanical design. Being enterprise hard drives, the new platform features a top and bottom attached motor (with a 7200 RPM spindle speed), top and bottom attached disk clamps, RVFF sensors, humidity sensors, and other ways to boost reliability and ensure consistent performance. Like other datacenter-grade hard drives, the Ultrastar DC HC550 HDDs are rated for a 550 TB/year workload and a 2.5-million-hour MTBF, and are covered by a five-year limited warranty.

MAMR? HAMR? EAMR!

The research and development efforts of the hard drive manufacturers to produce ever-denser storage technology have been well documented. Western Digital, Seagate, and others have been looking at technologies based around temporarily altering the coercivity of the recording media, which is accomplished by applying (additional) energy to a platter while writing. The end result of these efforts has been the development of techniques such as heating the platters (HAMR) or using microwaves on them (MAMR), both of which allow a hard drive head to write smaller sectors. With their similar-yet-different underpinnings, this has led to the catch-all term Energy-Assisted Magnetic Recording (EAMR) to describe these techniques.

Being a large corporation, Western Digital does not put all of its eggs into one basket, and as a result has been researching several EAMR technologies. This includes HAMR, MAMR, bit-patterned media (BPM), heated-dot magnetic recording (HDMR, BPM+HAMR) and even more advanced technologies.

At some point in 2017, the company seemed to settle on MAMR, announcing a plan to produce MAMR-based HDDs for high-capacity applications. Still, while the company focused on MAMR and, presumably for competitive reasons, publicly downplayed HAMR for a while, WD did not really stop investing in it.

Ultimately, having designed at least two EAMR technologies, Western Digital can now use either of them. Unfortunately, for those competitive (and to some degree political) reasons as before, the manufacturer also doesn't really want to disclose which of those technologies it's using. So while the new Ultrastar DC HC550 HDDs are using some form of an EAMR technology, WD isn't saying whether it's HAMR or MAMR.

As things stand, the only thing that the company has said on the matter is telling ComputerBase.de that the new drives do not use a spin-torque oscillator, which is a key element of Western Digital's MAMR technology.

Here is an official statement from Western Digital:

“The 18 TB Ultrastar DC HC550 is the first HDD in the industry using energy assisted recording technology. As part of our MAMR development, we have discovered a variety of energy assisted techniques that boost areal density. For competitive reasons, we are not disclosing specific details about which energy assist technologies are used in each drive.”

With MAMR apparently eliminated, it would seem that WD is using a form of HAMR for the new drives. However at least for the time being, it's not something the company is willing to disclose.

IOPS-per-TB Challenge

Ultimately, whether HAMR or MAMR, the end result is that WD's EAMR tech has allowed them to increase their drive platter density and resulting drive capacities. Density improvements are always particularly welcome, as it should allow the HC550 to offer higher sequential performance than existing 7200 RPM hard drives. However, since the new storage devices feature a single actuator that enables around 80 IOPS random reads, IOPS-per-TB performance of the new units will be lower when compared to currently available high-capacity 10 – 14 TB HDDs (think 4 IOPS-per-TB vs. 5.7 – 8 IOPS-per-TB) and will require operators of large datacenters to tune their hardware and software to guarantee their customers appropriate QoS.
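The IOPS-per-TB figures quoted above follow from the fact that a single actuator delivers a roughly fixed number of random read IOPS regardless of how much capacity sits behind it:

```python
# IOPS-per-TB dilution as capacity grows behind a single actuator.
ACTUATOR_RANDOM_READ_IOPS = 80

for capacity_tb in (10, 14, 18):
    per_tb = ACTUATOR_RANDOM_READ_IOPS / capacity_tb
    print(f"{capacity_tb} TB drive: {per_tb:.1f} IOPS per TB")

# 10 TB: 8.0, 14 TB: 5.7, 18 TB: 4.4 -- the 4 vs. 5.7-8 range quoted above.
```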

Unlike Western Digital’s flagship 20 TB shingled magnetic recording (SMR) hard drive for cold storage applications, the company’s 16 TB and 18 TB nearline HDDs use energy-assisted conventional magnetic recording (CMR), which ensures predictable performance both for random read and write operations. As a result, while the Ultrastar DC HC650 SMR HDD will be available only to select customers that can mitigate the peculiarities of SMR, the Ultrastar DC HC550 hard drives will be available to all clients that are satisfied with their IOPS-per-TB performance and have qualified them in their datacenters.

Samples & Availability

Western Digital will ship samples of its EAMR-based Ultrastar DC HC550 16 TB and 18 TB hard drives to clients late this year and plans to initiate their volume ramp in 2020. One additional thing to note about the 16 TB EAMR-enabled HDDs is that these drives will likely be used primarily for technology validation, as there are commercial 16 TB CMR+TDMR drives available today that do not need extensive testing by datacenter operators.


Source: Western Digital

Logitech Unveils G604 Lightspeed Wireless Gaming Mouse: 15 Programmable Controls

By Anton Shilov

Ever the purveyor of peripherals, Logitech is once again expanding its G series of mice with a new high-end wireless mouse for gamers. The Logitech G604 Lightspeed features the company’s latest high-precision sensor as well as 15 fully programmable controls that make the mouse particularly useful for enthusiasts who play games that benefit from macros.

The Logitech G604 Lightspeed is based on the company’s Hero sensor, which has a tracking resolution of up to 16,000 DPI. That sensor is paired with a 32-bit Arm Cortex-M-powered SoC, and on the communications side of matters the wireless mouse supports both Bluetooth and Logitech's proprietary Lightspeed wireless technology. The latter is designed to offer more performance and lower latency than standard Bluetooth, with Logitech offering much greater polling rates – up to 1000 Hz – when using Lightspeed.
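Polling rate translates directly into the interval between position reports; a quick sketch (the 125 Hz figure is a typical rate for Bluetooth mice, not a Logitech-quoted number):

```python
# Report-to-report interval implied by a given polling rate.
for name, polling_hz in [("Lightspeed", 1000), ("typical Bluetooth", 125)]:
    print(f"{name}: {polling_hz} Hz -> {1000 / polling_hz:.0f} ms between reports")

# Lightspeed: 1 ms between reports; typical Bluetooth: 8 ms.
```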

In fact, this same platform is used for other mice from Logitech, and as a result the G604 supports the usual Logitech G-series features, such as automatic surface tuning. And that extends to battery life as well; Logitech is promising a very long battery life for the mouse, rating it to run for up to 240 hours on a single AA battery.

From an ergonomics point of view, the G604 Lightspeed is a successor to Logitech’s G602, launched several years ago. The new mouse features a similar shape, however Logitech says that it has refined the design to make it more comfortable and provide a better grip. Logitech’s G604 Lightspeed has 15 controls (including six thumb buttons for high-demand actions), each of which can be reprogrammed.

Logitech will start sales of the G604 Lightspeed this fall at a price of $99.99.


Source: Logitech

The day before yesterday and earlier (AnandTech)

HP Launches Their S430c 43.4-Inch Ultrawide Curved Display

By Anton Shilov

Along with their new Elite Dragonfly notebook, today HP is also rolling out its first ultra-wide curved display, which is being aimed at replacing dual-display setups used by business customers. The S430c Curved Ultrawide Monitor boasts a sizable 43.4-inch diagonal, which is laid out in a 32:10 aspect ratio with an ultra-wide 4K resolution. Meanwhile, with its roots firmly in the business side of HP's lineup, the company is also outfitting the monitor with a bevy of business-focused features, such as docking capabilities and a pop-up webcam with IR sensors.

Internally, the HP S430c curved ultrawide monitor uses a 43.4-inch VA panel, which offers a 3840×1200 resolution framed in an 1800R curve. The monitor offers a max brightness of 350 nits, a 3000:1 contrast ratio, a 5 ms GtG response time, 178º/178º vertical/horizontal viewing angles, and a 60 Hz refresh rate, and to top things off, the screen has an antiglare coating. Seeing as this isn't a video-focused monitor, HP is sticking with just covering the sRGB color gamut (99%), which is the primary color space used by office and productivity applications.

Moving on, for connectivity the display has a DisplayPort 1.2 input, an HDMI 2.0 port, and two USB Type-C (DP alt-mode) inputs, allowing the monitor to be connected to virtually any PC. Both USB-C ports can deliver up to 85 W of power to their host laptops (with a total limit of 100 W), meaning the monitor can charge even higher-performance 15.6-inch machines. Those USB-C ports also feed the monitor's built-in USB hub, giving the monitor four downstream USB Type-A ports.

Meanwhile for extra features, the S430c includes a pop-up Full HD webcam with IR sensors for Windows Hello, as well as two microphones. The display also supports HP’s Device Bridge technology, which allows the user to control two PCs at the same time on a split screen without a dedicated KVM.

Like other monitors for professionals, the HP S430c comes with a stand that can adjust height and tilt. Meanwhile, HP will also offer a VESA mount adapter for those who need it.

Sales of the HP S430c Curved Ultrawide Monitor will start on November 9. The monitor will retail for $999.

The S430c Curved Ultrawide Monitor: General Specifications
Panel: 43.4" VA
Native Resolution: 3840 × 1200
Maximum Refresh Rate: 60 Hz
Response Time: 5 ms GtG
Brightness: 350 cd/m²
Contrast: 3000:1
Backlighting: LED
Viewing Angles: 178°/178° horizontal/vertical
Curvature: 1800R
Aspect Ratio: 32:10
Color Gamut: sRGB: 99%
Dynamic Refresh Rate Tech: -
Pixel Pitch: 0.274 mm
Pixel Density: 92.7 PPI
Inputs: 1 × DisplayPort 1.2, 1 × HDMI 2.0, 2 × USB 3.1 Type-C (w/ 85 W PD)
Audio: speakers (?), 3.5-mm audio jack
Webcam: Full-HD IR webcam with microphones
USB Hub: 4 × USB 3.0 Type-A connectors
Stand: Height adjustment; Tilt: -5° to +20°
Power: Standby: 0.5 W; Typical: 80 W; Maximum: 220 W
MSRP: $999 (US)


Source: HP

The OPPO Reno 10x Zoom Review: Bezelless Zoom

By Andrei Frumusanu

The Oppo Reno 10x Zoom is another Snapdragon 855-based phone that was released earlier in the year, and while we did a quick hands-on test of the device back in May, we never really got around to fully reviewing the unit until now. Beyond putting the Reno 10x through our usual testing suite, what’s interesting is that in the interim Oppo has had the opportunity to refine the software, and we’ve seen particular improvements on the camera side with the introduction of a new low-light photography mode.

The device has two key characteristics: a full-screen, minimal-bezel display, which is enabled by housing the front camera in a motorised slide-out mechanism, and a triple-camera setup among which we find a “periscope” zoom camera module. Neither of these features on its own is unique to the Reno 10x; their combination, however, is unique to Oppo.

Intel Core i9-9900KS TDP Details: ASUS Maximus XI Apex Support

By Anton Shilov

Intel announced plans to launch its eight-core Core i9-9900KS processor along with its performance specifications quite a while ago, but the company did not disclose the TDP. As the processor will have an all-core base frequency of 4.0 GHz and an all-core turbo of 5.0 GHz, this number is vitally important for motherboard support. This week ASUS released a new BIOS version for some of its motherboards that adds support for the Core i9-9900KS and revealed the number. 

The Intel Core i9-9900KS processor has a base frequency of 4.0 GHz as well as an all-core turbo frequency of 5.0 GHz, essentially making it eight-core Coffee Lake Refresh silicon binned to hit higher clocks when cooling is good enough. As it turns out, in a bid to enable higher frequencies, Intel has increased the TDP all the way to 127 W (according to a listing at ASUS.com), which is considerably higher than any existing (or historical) Intel CPU for mainstream platforms.

One thing that should be noted is that Intel only guarantees base frequency at a rated TDP (e.g., 4.0 GHz at 127 W), so everything above base (i.e., turbo clocks) means a higher power consumption. As a result, not only will the Core i9-9900KS require a motherboard that can supply 127 W of power and a cooling system that will dissipate 127 W of power, but it will need an advanced platform to hit the turbo clocks. Fortunately, there are plenty of high-end motherboards and coolers around to support the Core i9-9900KS. 

Intel 9th Gen Core 8-Core Desktop CPUs
AnandTech | Cores | Base Freq | All-Core Turbo | Single-Core Turbo | IGP | DDR4 | TDP | Price (1ku)
i9-9900KS | 8 / 16 | 4.0 GHz | 5.0 GHz | 5.0 GHz | UHD 630 | 2666 | 127 W | ?
i9-9900K | 8 / 16 | 3.6 GHz | 4.7 GHz | 5.0 GHz | UHD 630 | 2666 | 95 W | $488
i9-9900KF | 8 / 16 | 3.6 GHz | 4.7 GHz | 5.0 GHz | - | 2666 | 95 W | $488
i7-9700K | 8 / 8 | 3.6 GHz | 4.6 GHz | 4.9 GHz | UHD 630 | 2666 | 95 W | $374
i7-9700KF | 8 / 8 | 3.6 GHz | 4.6 GHz | 4.9 GHz | - | 2666 | 95 W | $374

One thing to keep in mind is that the information about the TDP of the Core i9-9900KS comes from a third party (albeit a very reliable one), not from Intel. Intel has confirmed that the new Core i9-9900KS will be released in October.


Source: ASUS

AMD’s New 280W 64-Core Rome CPU: The EPYC 7H12

By Dr. Ian Cutress

If there’s something that gets everyone excited, it is more performance. On the enterprise side, AMD has made big strides with its latest EPYC processor stack, featuring up to 64 cores per socket with 128 PCIe 4.0 lanes and 8-channel memory, and offering very high performance per dollar in the marketplace. To coincide with the European launch of the processor line-up today, AMD is unveiling a new chip to act as its new halo product: the EPYC 7H12.

HP Unveils Elite Dragonfly Laptop: 13.3-Inch Convertible With a 24.5 Hour Battery Life

By Anton Shilov

HP this morning is introducing its new flagship 13.3-inch convertible laptop, which the company is calling the Elite Dragonfly. The Project Athena-class laptop is designed to check all of the boxes for a high-end, compact laptop, offering premium features, a very low weight, and most interesting of all, an optional high-capacity battery that HP claims will run the laptop for over 24 hours.

The HP Elite Dragonfly notebook comes in a CNC-machined magnesium alloy chassis, which has allowed HP to reduce its weight to around 990 grams (in the case of the low-weight SKU with a 38 Wh battery) and maintain a 1.61 cm z-height. According to HP, the chassis also meets the durability requirements of the MIL-STD 810G standard (including spill resistance), so it looks like HP has been able to cut down on weight without compromising the durability of the laptop. Meanwhile, the entire chassis is covered with an oleophobic coating to make the entire laptop resistant to fingerprints and smudges.

Front and center of the convertible notebook is the 13.3-inch touch-enabled display, which is available in Full HD (1080p) or Ultra HD (4K) resolutions, and options include a version of the FHD panel that incorporates Intel's 1 Watt panel tech. The display panel itself is protected by Corning’s Gorilla Glass 5, and for the privacy-minded, HP is also offering their SureView privacy screen as an option.

HP says that it took a long time to engineer a laptop that could include all of the Elite Dragonfly's features, and to that end it had to stick with Intel’s proven low-power 8th Gen Core i3/i5/i7 processors (Whiskey Lake). Despite using a previous-generation CPU, the Elite Dragonfly is compliant with Intel's Project Athena requirements, so the overall experience should be in line with other laptops designed for that program. The CPU is accompanied by up to 16 GB of soldered-down dual-channel LPDDR3-2133 memory as well as an SSD with capacities going up to 2 TB. Higher-end SKUs will use a PCIe 3.0 x4 drive, whereas cheaper or specialized models will come with a SATA drive, allowing HP to offer a FIPS 140-2-certified drive to customers who need it.

Communications are critical for business these days, so this is where the Elite Dragonfly excels. The convertible laptop comes with Intel’s Wi-Fi 6 + Bluetooth 5 adapter, an optional Intel XMM 7360/7560 4G/LTE modem with 4x4 antennas, and an optional GbE adapter. Meanwhile, when it comes to wired connectivity, the laptop includes a Thunderbolt 3-enabled USB-C port, a stand-alone USB-C port, a USB Type-A port, a full-size HDMI port, and a 3.5-mm audio connector. Speaking of audio, when not using headphones the PC has four Bang & Olufsen-badged speakers as well as a microphone array at its disposal.

Being an Elite-branded laptop, the HP Dragonfly supports all the key security features that the manufacturer has to offer. In addition to HP SureView privacy screen as well as a 720p Privacy Camera (with or without IR sensor), the convertible supports HP’s Sure Sense, Sure Recover, and Embedded Reimaging technologies, a TPM 2.0 module, and an Absolute persistence module.

Meanwhile, when it comes to battery life, HP is making some bold claims, stating that an Elite Dragonfly equipped with a Core i5 processor, 8 GB of RAM, a 128 GB SSD, a 1-Watt Full-HD display, and a 56.2 Wh battery can last for up to 24 hours and 30 minutes on a single charge. These results are based on MobileMark 2014, a relatively light workload, so results will vary with the workload used. Meanwhile, machines with other configurations (e.g. a smaller battery) will last for a shorter amount of time.
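Dividing battery energy by the claimed runtime gives a feel for how aggressive that claim is; a rough check (ignoring discharge losses and assuming the quoted configuration):

```python
# Average system power draw implied by HP's battery life claim.
battery_wh = 56.2
runtime_h = 24.5

print(f"~{battery_wh / runtime_h:.1f} W average draw")  # ~2.3 W

# ~2.3 W is plausible for a mostly-idle MobileMark-style workload with a
# 1-Watt display panel, which is why the panel choice matters so much here.
```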

HP intends to start sales of its Dragonfly laptops on October 25. Prices for the entry-level Dragonfly convertibles will start at $1,549, but higher-performance SKUs will cost significantly more. In addition to the PC itself, the company will offer a travel mouse as well as a leather sleeve.

Related Reading:

Source: HP

GIGABYTE’s Aorus CV27Q Curved ‘Tactical’ Monitor: 165 Hz QHD With FreeSync 2

Par Anton Shilov

GIGABYTE has introduced a new display aimed at hardcore gamers, incorporating a multitude of capabilities tailored to that audience. Dubbed the ‘Tactical Monitor’, the Aorus CV27Q is a QHD curved LCD that's able to run at up to 165Hz, and includes support for AMD’s FreeSync 2 refresh rate technology. The gaming-focused monitor also includes active noise canceling, GameAssist OSD functions, and RGB stripes that can be controlled using the company’s software.

The GIGABYTE Aorus CV27Q is based on an 8-bit 27-inch curved VA panel featuring a 2560×1440 resolution, 400 nits peak brightness, a 3000:1 static contrast ratio, a maximum refresh rate of 165Hz, a 1 ms MPRT response time, and 178°/178° viewing angles. The panel also sports a 1500R curvature, which means that it provides a wider field of view than most 27-inch LCDs available today.

As mentioned previously, the Aorus CV27Q is an AMD FreeSync 2-certified monitor, meaning that the display meets AMD's minimum requirements for HDR contrast ratios and color gamuts, as well as supporting direct-to-display tonemapping and a low framerate compensation (LFC) mode. Officially, the monitor is able to hit 90% of the DCI-P3 color gamut, and while it meets the requirements for HDR, it only hits the minimum, with an HDR brightness of 400 nits (and a matching DisplayHDR 400 certification). Judging from GIGABYTE's specifications, it looks like this is an edge-lit monitor – the company doesn't list how many zones it has – which would be consistent with that performance level. As for the FreeSync 2 range, the manufacturer says it is between 48 Hz and 165 Hz.
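As a quick illustration of why that range matters: LFC generally requires the maximum refresh rate to be at least twice the minimum, so that below-range framerates can be frame-doubled back into the VRR window. Here's a minimal sketch of that check (the 2x threshold is the commonly cited LFC requirement, not something GIGABYTE specifies):

```python
# LFC needs max/min >= 2 so that a too-slow frame (e.g. 40 FPS) can be
# shown twice (80 Hz) and land back inside the variable refresh window.
vrr_min_hz, vrr_max_hz = 48, 165

ratio = vrr_max_hz / vrr_min_hz
print(f"VRR ratio: {ratio:.2f}")         # ~3.44
print(f"LFC workable: {ratio >= 2.0}")   # True for the CV27Q's 48-165 Hz range
```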

Meanwhile, GIGABYTE has informed us that they have also submitted the device to NVIDIA for G-Sync Compatible certification, so that the monitor's variable refresh modes can be used with GeForce cards. Whether this happens is ultimately up to NVIDIA – which is why GIGABYTE isn't advertising it as a feature quite yet – but as the company already has other monitors that have been certified by NVIDIA, GIGABYTE should have the expertise to pass certification here as well.

Moving on to gaming-specific features of the Aorus CV27Q, one of the capabilities that GIGABYTE is especially proud of is its 2nd Generation active noise canceling (ANC) technology. Here, ANC uses a special chip along with a dual mic setup to remove ambient noises from the background of the microphone feed. Meanwhile on the output side of matters, GIGABYTE claims that the monitor offers a 120 dB signal-to-noise-ratio (SNR), with the monitor able to support high impedance headphones up to 600 Ohm.

Another interesting capability is Black Stabilizer 2.0, which promises to improve detail in dark parts of a scene without affecting other areas. This sounds vaguely like local dimming; however, with an edge-lit monitor it's not clear that this display will have enough zones to use it effectively. Other firmware-driven features include a crosshair, an aim stabilizer (which reduces motion blur in fast-paced scenes, though GIGABYTE does not disclose how), a timer & counter, as well as OSD Sidekick, which allows users to tune the monitor for a particular game or situation.

To connect the GIGABYTE Aorus CV27Q to PCs and consoles, the monitor has one DisplayPort 1.4 and two HDMI 2.0 connectors. Furthermore, the LCD has a dual-port USB 3.0 hub as well as 3.5-mm audio jacks for headphones and a mic. As far as ergonomics are concerned, the display comes with a stand that can adjust height, tilt, and swivel.

The GIGABYTE Aorus CV27Q
  General Specifications
Panel 27" 8-bit VA
Native Resolution 2560 × 1440
Maximum Refresh Rate 165 Hz
Response Time 1 ms MPRT
Brightness 400 cd/m² (peak)
Contrast 3000:1
Backlighting ELED (Edge-Lit LED)
Viewing Angles 178°/178° horizontal/vertical
Curvature 1500R
Aspect Ratio 16:9
Color Gamut >?% sRGB/BT.709
90% DCI-P3
16.7 million colors
DisplayHDR Tier 400
Dynamic Refresh Rate Tech AMD FreeSync 2
NVIDIA G-Sync Compatible (applied for official certification which is yet to be received)
Pixel Pitch 0.2335 mm
Pixel Density 109 PPI
Inputs 1 × DP 1.4
2 × HDMI 2.0
Audio 3.5 mm input and output
USB Hub 2 × USB 3.0 Type-A connectors
1 × USB 3.0 Type-B input
Stand Adjustments Tilt: -5° ~ +21°
Swivel: -20° ~ +20°
Height: +/- 130 mm
MSRP $459.99

Set to be available shortly, the GIGABYTE Aorus CV27Q will cost $459.99, a tad higher than other mid-range FreeSync 2 curved displays, but extra features tend to come at a premium.

Related Reading:

Source: GIGABYTE

NVIDIA Announces Call of Duty: Modern Warfare Game Bundle for GeForce RTX 20 Cards

Par Ryan Smith

With the arrival of Fall also comes the biggest quarter of the year for new game releases, and to that end NVIDIA is updating their hardware game bundles. This morning the company is announcing a new bundle for their GeForce RTX cards, which will see the latest Call of Duty game, Modern Warfare, included with the cards as well as systems containing them. This latest bundle is currently scheduled to run through mid-November, or until NVIDIA updates it once more.

Like previous NVIDIA GeForce RTX game bundles, the Call of Duty: Modern Warfare bundle is focused on including a flagship game that showcases the features of NVIDIA’s newest cards. In this case, Modern Warfare checks all of the boxes; along with being a high-profile game in and of itself, the game is receiving (practically obligatory) support for ray tracing via DXR, as well as adaptive shading support.

Digging into the bundle itself, as this is a single game bundle, NVIDIA’s deal is pretty straightforward. The company will be including the game with all of their GeForce RTX cards, from the RTX 2060 up to the RTX 2080 Ti. This offer also applies to many desktop and laptop systems including these cards as well, so long as the vendor is a participating NVIDIA partner.

NVIDIA Current Game Bundles
(September 2019)
Video Card Bundle
GeForce RTX 20 Series (All) Call of Duty: Modern Warfare (2019)
GeForce GTX 16 Series (All) None

Meanwhile, the fact that this is an RTX-only bundle means that NVIDIA’s GTX 16 series cards are being left out. The company has not launched a bundle for those cards, so at least for the time being, only the RTX 20 cards are getting a game bundle.

Finally, as always, codes must be redeemed via the NVIDIA Redemption portal on a system with a qualifying graphics card installed. More information and details can be found in the terms and conditions. Be sure to verify that the vendor you're purchasing from is a participating partner, as NVIDIA will not give codes for purchases made from non-participating sellers.

Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics

Par Dr. Ian Cutress

For those that keep a close eye on consumer hardware, AMD has recently been involved in a minor uproar with some of its most vocal advocates over the newest Ryzen 3000 processors. Some users are reporting turbo frequencies much lower than advertised, and a number of conflicting AMD partner posts have generated a good deal of confusion. AMD has since posted an update identifying an issue and offering a fix, but part of all of this comes down to what turbo means and how AMD processors differ from Intel's. We’ve been living with Intel’s definitions of perceived standards for over a decade, so it’s a hard nut to crack if everyone assumes there can be no deviation from what we’re used to. In this article, we’re diving into those perceived norms, to shed some light on how these processors work.

CEVA Announces NeuPro-S Second-Generation NN IP

Par Andrei Frumusanu

It’s been a few years since machine learning and neural networks first became the hot new news topic. Ever since then, the market has transformed a lot, and the industry as a whole has shifted from a notion of “what can we do with this” to a narrative of “this is useful, we should really have it”. Although the market is very far from being mature, it’s no longer in the early wild-west stages that we saw a few years ago.

A notable development in the industry is that a whole lot of silicon vendors have chosen to develop their own IP instead of licensing it in – in a sense, IP vendors were a bit behind the curve in terms of actually offering solutions, forcing in-house developments in order for vendors' products not to fall behind in competitiveness.

Today, CEVA is announcing the second generation of its NeuPro neural network accelerators, the new NeuPro-S. The new offering improves on the capabilities seen in the first generation, with CEVA also improving vendor flexibility through a new product offering that embraces the fact that a wide range of vendors now have their own in-house IP.

The NeuPro-S is a direct successor to last year’s first-generation NeuPro IP, improving on the architecture and microarchitecture. The core improvements of the new generation lie in the way the block handles memory, including new compression and decompression of data. CEVA quotes figures such as a 40% reduction in memory footprint and bandwidth, all while enabling energy efficiency savings of up to 30%. Naturally, this also enables an increase in performance, with CEVA claiming up to 50% higher peak performance in a similar hardware configuration versus the first generation.

Diving deeper into the microarchitectural changes, the innovations of the new generation include new weight compression as well as network sparsity optimisations. The weight data is retrained and compressed offline via CDNN, CEVA's compiler, and remains in a compressed form in the machine’s main memory – with the NeuPro-S decompressing it in real time via hardware.

In essence, the new compression and sparsity optimisations sound similar to what Arm is doing in their ML Processor with zero-weight pruning in the models. CEVA further goes on to showcase the compression factors that can be achieved – with the factor depending on the percentage of zero weights as well as the weight-sharing bit-depth. Weight sharing is a further optimisation of the offline compression of the model, which reduces the actual footprint of the weight data by finding commonalities between weights and sharing them. The compression factors here range from 1.3-2.7x in the worst cases with few sparsity improvements, to upwards of 5.3x in models with a significant amount of zero weights.
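To make those two knobs concrete, here is a minimal back-of-envelope model – my own illustration, not CEVA's actual encoding scheme – of how the zero-weight fraction and the weight-sharing bit-depth combine into a compression factor:

```python
# Back-of-envelope model of weight compression from sparsity + weight sharing.
# This is NOT CEVA's actual algorithm, just an illustration of why the two
# knobs (zero-weight fraction, shared-weight bit depth) multiply together.

def est_compression_factor(zero_frac: float, shared_bits: int, orig_bits: int = 8) -> float:
    """Estimate compression vs. storing every weight at orig_bits.

    Assumes zero weights cost ~0 bits after pruning/encoding, and surviving
    weights are stored as shared_bits-wide indices into a small codebook
    (codebook overhead ignored).
    """
    stored_bits_per_weight = (1.0 - zero_frac) * shared_bits
    return orig_bits / stored_bits_per_weight

# A dense-ish model with 20% zeros and 6-bit shared weights ...
print(f"{est_compression_factor(0.20, 6):.1f}x")   # ~1.7x
# ... versus a very sparse model with 70% zeros and 4-bit shared weights.
print(f"{est_compression_factor(0.70, 4):.1f}x")   # ~6.7x
```

Under these toy assumptions, the output lands in the same ballpark as CEVA's quoted range, which is the intuition worth keeping: sparsity and bit-depth savings compound.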

Further optimisations at the memory subsystem level include a doubling of the internal interfaces from 128-bit AXI to 256-bit interfaces, enabling more raw bandwidth between the system, the CEVA XM processor, and the NeuPro-S processing engine. We’ve also seen an improvement of the internal caches, and CEVA describes L2 memory utilisation as having been optimised through better software handling.

In terms of overall scaling of the architecture, the NeuPro-S doesn’t fundamentally change compared to its predecessor. CEVA doesn’t have any fundamental limit in terms of the implementation of the product, and will build the RTL based on a customer’s needs. What is important here is that there’s a notion of clusters, and of processing units within the clusters. Clusters are independent of each other and cannot work on the same software task – customers would implement more clusters only if they have a lot of parallel workloads on their target system. For example, this would make sense in an automotive implementation with many camera streams, but wouldn’t necessarily be a benefit in a mobile system. The cluster definition is a bit odd, and it wasn't quite clear whether it represents any kind of hardware delimitation, or – more likely – the software operation of different coherent interconnect blocks (as it’s all still connected via AXI).

Within a cluster, the mandatory block is CEVA’s XM6 vision and general-purpose vector processor. This serves as the control processor of the system and takes care of tasks such as control flow and the processing of fully-connected layers. CEVA notes that ML models can be processed fully independently by the NeuPro-S system, whereas other IPs may still need to rely on the CPU for the processing of some layers.

The NeuPro-S engines are naturally the MAC processing engines that add the raw horsepower for wider parallel processing, getting the design to its high TOPS figures. A vendor needs at minimum a 1:1 ratio of XM to NeuPro-S engines; however, it may choose to employ more XM processors, which may be handling separate computer vision tasks.

CEVA also allows scaling of the MAC engine size inside a single NeuPro-S block, ranging from 1024 8x8 MACs up to 4096 MACs. The company also allows for different processing bit-depths, for example 16x16, as it still sees use cases that take advantage of the higher-precision 16-bit formats. There are also mixed-format configurations such as 16x8 or 8x16, where the data and weight precision can vary.

In total, a single NeuPro-S engine in its maximum configuration (NPS4000, 4096 MACs) is quoted as reaching up to 12.5 TOPS on a reference clock of 1.5GHz. Naturally the frequency will vary based on the implementation and process node that the customer will deploy.
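That peak figure is straightforward to sanity-check, assuming the usual convention of counting each multiply-accumulate as two operations:

```python
# Sanity-checking the quoted peak figure: each MAC is conventionally
# counted as two operations (one multiply + one accumulate).
macs = 4096           # NPS4000 configuration
ops_per_mac = 2
clock_hz = 1.5e9      # reference clock quoted by CEVA

tops = macs * ops_per_mac * clock_hz / 1e12
print(f"{tops:.1f} TOPS")   # ~12.3 TOPS, in line with the quoted 'up to 12.5'
```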

As some will have noted in the block diagram earlier, CEVA also now allows the integration of third-party AI engines into their CDNN software stack and to interoperate with them. CEVA calls this “CDNN-Invite”, and essentially the company here is acknowledging the existence of a wide-range of custom AI accelerators that have been developed by various silicon vendors.

CEVA wants to make its existing, comprehensive compiler and software stack available to vendors and enable them to plug in their own NN accelerators. Many vendors who chose to go their own route likely don’t have quite as much software experience, or don't have as many resources for developing software, and CEVA wants to enable such clients with the new offering.

While the NeuPro-S would remain a fantastic choice for generic NN capabilities, CEVA admits that there may be custom accelerators out there which are hyper-optimised for certain specific tasks, reaching either higher performance or efficiency. Vendors could thus have the best of both worlds, with a high degree of flexibility in both software and hardware: one could choose to use the NeuPro-S as the accelerator engine, use just one's own IP, or create a system with both units. The only requirement here is that an XM processor be implemented at a minimum.

CEVA claims the NeuPro-S is available today and has been licensed to lead customers in automotive camera applications. As always, silicon products are likely 2 years away.

Related Reading:

Wi-Fi 6 Is Officially Here: Certification Program Begins

Par Anton Shilov

Wi-Fi Alliance on Monday officially started its Wi-Fi 6 certification program, informally kicking off the widescale adoption of the new Wi-Fi standard. As with the group's previous certification programs, the Wi-Fi 6 certification program is focused on verifying the interoperability and feature sets of IEEE 802.11ax devices, ensuring that they work well with each other and that the devices feature all of the required performance and security capabilities of the new standard.

Wi-Fi Alliance's certification comes as device manufacturers have already been shipping Wi-Fi 6 products for the last several months – essentially seeding the hardware ecosystem to get to this point. So the first task for the group's members and test labs will be to certify existing Wi-Fi 6 devices. This includes existing access points, routers, and client devices, including Samsung’s Galaxy Note 10, which has become the first smartphone to receive certification.

Under the hood, the new standard takes a bit of a departure from past Wi-Fi iterations by focusing more on improving performance in shared environments, as opposed to solely boosting peak device transfer rates. To that end, while the maximum throughput supported by Wi-Fi 6 is 2.4 Gbps, the crucial improvement of the Wi-Fi 6/802.11ax technology is the standard's enhanced spectral efficiency. Among other things, the technology adds OFDMA (Orthogonal Frequency-Division Multiple Access) to allow different devices to be served by one channel, by dedicating different sub-carriers to individual client devices. Wi-Fi 6 also adds mandatory support for MU-MIMO – a feature first added in 802.11ac Wave 2 – as well as transmit beamforming for better reaching individual clients.
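To illustrate the OFDMA idea in the abstract, the toy sketch below partitions one 20 MHz channel's usable tones into resource units for several clients at once. The RU tone counts are the standard 802.11ax sizes, but the greedy allocator and the client demands are purely hypothetical, and real scheduling (null tones, pilots, per-user signaling) is far more involved:

```python
# Toy OFDMA split: 802.11ax resource units (RUs) come in fixed tone counts;
# a 20 MHz channel (~242 usable tones) can be split so that several clients
# are served within a single transmission.
RU_SIZES = (26, 52, 106, 242)   # standard 11ax RU tone counts (up to 20 MHz)

def assign(clients: dict) -> dict:
    """Greedily hand each client the smallest RU covering its demand (in tones)."""
    plan, used = {}, 0
    for name, demand in clients.items():
        ru = next((s for s in RU_SIZES if s >= demand), RU_SIZES[-1])
        if used + ru > 242:
            break                # channel full; remaining clients wait their turn
        plan[name], used = ru, used + ru
    return plan

print(assign({"phone": 20, "laptop": 100, "iot-sensor": 10}))
# {'phone': 26, 'laptop': 106, 'iot-sensor': 26} -- three devices, one channel
```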

In fact, even existing Wi-Fi 5 (802.11ac) client devices can benefit from a Wi-Fi 6 (802.11ax) AP, though Wi-Fi 6 Certified devices will deliver the best results.

Meanwhile, the Wi-Fi Alliance mandates that Wi-Fi 6 certified devices support WPA3 security, 1024-QAM, 160 MHz channels, and target wake time (a battery-saving tech that minimizes device check-ins).

Finally, along with the launch of the certification program itself, the Wi-Fi Alliance has already certified its first dozen devices. The following network adapters, chipsets, and access points have all been Wi-Fi 6 certified:

  • Broadcom BCM4375
  • Broadcom BCM43698
  • Broadcom BCM43684
  • Cypress CYW 89650 Auto-Grade Wi-Fi 6 Certified
  • Intel Wi-Fi 6 (Gig+) AX200 (for PCs)
  • Intel Home Wi-Fi Chipset WAV600 Series (for routers and gateways)
  • Marvell 88W9064 (4x4) Wi-Fi 6 Dual-Band STA
  • Marvell 88W9064 (4x4) + 88W9068 (8x8) Wi-Fi 6 Concurrent Dual-Band AP
  • Qualcomm Networking Pro 1200 Platform
  • Qualcomm FastConnect 6800 Wi-Fi 6 Mobile Connectivity Subsystem
  • Ruckus R750 Wi-Fi 6 Access Point
Wi-Fi Names and Peak Performance
New Name | IEEE Standard | 1x1 Configuration | 2x2 Configuration | 3x3 Configuration
Wi-Fi 4 | 802.11n | 150 Mbps | 300 Mbps | 450 Mbps
Wi-Fi 5 | 802.11ac | 433 Mbps (80 MHz) / 867 Mbps (160 MHz) | 867 Mbps (80 MHz) / 1.69 Gbps (160 MHz) | 1.27 Gbps (80 MHz) / 2.54 Gbps (160 MHz)
Wi-Fi 6 | 802.11ax | 867 Mbps (160 MHz) | 1.69 Gbps (160 MHz) | 2.54 Gbps (160 MHz) – depends on network configuration

Related Reading:

Source: Wi-Fi Alliance

The Corsair K63 Wireless Mechanical Keyboard Review: PC Gaming Untethered

Par E. Fylladitakis

Today we are taking a look at a wireless mechanical keyboard from Corsair, the K63. Designed with living room gaming in mind, the K63 seeks to combine the benefits of mechanical keyboards with the convenience of wireless communication, with a battery life lengthy enough for long gaming sessions.

TCL Shows Off 132-Inch Micro LED 4K UHDTV: 24,000,000 Micro LEDs

Par Anton Shilov

Direct view Micro LED displays are a relatively new display technology that so far has been publicly demonstrated only by Samsung and Sony, two companies which tend to experiment with a variety of technologies in general. At IFA last week TCL, a major maker of televisions, threw its hat into the ring by demonstrating its own ultra-large Micro LED-based Ultra-HD TV.

Dubbed the Cinema Wall 132-Inch 4K, TCL’s Micro LED television uses 24,000,000 individually controlled LEDs as RGB subpixels, and features a 1,500 nits max brightness level as well as a 2,500,000:1 contrast ratio (good enough to compete against OLEDs). The manufacturer claims that the TV can display a wide color gamut, but does not disclose whether they're using DCI-P3 or BT.2020.

As with other early-generation display products, TCL is not revealing if and when it plans to release its 132-inch 4K Micro LED TV commercially, but the fact that it has a device that is good enough to be shown in public (see the video by the Quantum OLED channel here) is an important step. Just like other makers of Micro LED televisions, TCL might want to increase the peak brightness supported by these devices, as many modern titles are post-produced using Dolby’s Pulsar reference monitor for Dolby Vision HDR, which has a peak brightness level of 4000 nits.

Numerous TV makers are currently investigating Micro LED technology as a viable alternative to OLED-based screens. While OLEDs tend to offer a superior contrast ratio when compared to LCDs, they have a number of trade-offs, including off-axis color shifting, ghosting, burn-in, etc. WOLED has mitigated some of these issues, but it has also introduced others due to the inherent limitations of using color filters.

By contrast Micro LED TVs are expected to be free of such drawbacks, while still retaining the advantages of individual LEDs like brightness, contrast, fast response time, and wide viewing angles. As an added bonus, Micro LED TVs will not need any bezels and can be made very thin.

Related Reading:

Sources: Quantum OLED, MicroLED.info, LEDs Inside

Corsair Reveals Vengeance LPX DDR4-4866 Memory Kit

Par Anton Shilov

Corsair on Thursday released its fastest memory kit to date, the Vengeance LPX DDR4-4866, aimed at the most performance-hungry enthusiasts. The modules are specifically tested for compatibility with AMD’s Ryzen 3000/X570 platforms, though they can work with Intel-based PCs too.

Corsair’s Vengeance LPX DDR4-4866 memory kit consists of two 8 GB memory modules (CMK16GX4M2Z4866C18) featuring CL18 26-26-46 timings and a 1.5 V voltage. The unbuffered DIMMs rely on Micron’s cherry-picked DRAM devices as well as Corsair’s custom 10-layer PCB. The modules are traditionally equipped with aluminum heat spreaders, and are compatible with Corsair’s Vengeance Airflow fan to improve cooling.

The manufacturer claims that it has tested its Vengeance LPX DDR4-4866 modules with AMD’s Ryzen 3000-series processors paired with ASUS's ROG Crosshair VIII Formula, the MSI MEG X570 Godlike, and the MSI Prestige X570 Creation motherboards. Meanwhile, since the UDIMMs feature an XMP 2.0 SPD profile, they will also be able to work with Intel Z390-based platforms at DDR4-4800.

For those who need high-end performance and RGB LEDs as well, Corsair will also offer a Vengeance RGB Pro DDR4-4700 16 GB kit. The RGB Pro kit cannot be equipped with a fan, but it still features the same DRAM chips, a custom PCB, an XMP 2.0 profile, and aluminum heat spreaders.

Being a true flagship offering, Corsair’s 16 GB Vengeance LPX DDR4-4866 memory kit is expensive to say the least: in the US the kit costs $984, whereas in Europe it is priced at €1,064.99.

There is one thing to note about Corsair’s Vengeance LPX DDR4-4866 and Vengeance RGB Pro DDR4-4700 memory kits. AMD, as well as third-party observers, says that the Ryzen 3000 processors show the highest memory subsystem performance when the frequencies of the Infinity Fabric (fClk), memory controller (uClk), and DRAM (mClk) are equal (i.e., the fClk to mClk ratio is set at 1:1). This can be an issue, as few Ryzen CPUs can support such high fClk clocks; so using exceptionally fast DDR4 memory modules (e.g., DDR4-4000+) may be unfavorable in many cases. That said, it remains to be seen what kind of advantages Corsair’s DDR4-4700 and DDR4-4866 kits will bring.
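To put rough numbers on that: DDR4's data rate is twice the actual memory clock, so a quick sketch shows what a 1:1 ratio would demand of these kits (the ~1900 MHz fClk ceiling below is a ballpark from common enthusiast reports, not an AMD specification):

```python
# Why DDR4-4866 is awkward on Ryzen 3000: memory clock (mClk) is half the
# DDR data rate, and best performance wants fClk == mClk (1:1 mode).
FCLK_CEILING_MHZ = 1900   # rough ceiling from enthusiast reports, not AMD spec

def required_fclk_mhz(ddr_rating: int) -> float:
    return ddr_rating / 2   # DDR = double data rate

for kit in (3600, 4700, 4866):
    fclk = required_fclk_mhz(kit)
    verdict = "fits typical silicon" if fclk <= FCLK_CEILING_MHZ \
              else "forces a 2:1 divider on most CPUs"
    print(f"DDR4-{kit}: needs fClk ~{fclk:.0f} MHz for 1:1 -> {verdict}")
```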

Related Reading:

Source: Corsair

HTC Unveils Final Specs & Availability Date of Cosmos VR Headset for PCs

Par Anton Shilov

HTC this week announced the final specifications as well as the availability date of the Vive Cosmos, its next-generation tethered VR headset, which promises numerous improvements along with modularity for further upgrades. Among the key features of the Vive Cosmos are higher resolution displays, full 6DoF inside-out tracking built around six cameras and integrated sensors, a lower weight, as well as all-new knuckles-style controllers. The headset is available for pre-order now and will ship in early October.

Originally unveiled at CES early this year, the Vive Cosmos head mounted display (HMD) is equipped with two 3.4-inch RGB LCD screens, each offering a per eye resolution of 1440×1700 (2880×1700 combined resolution), a 90 Hz refresh rate, and a 110-degree field of view (officially, this is the same as the original Vive). On which note, HTC has been relatively mum on the optics used, though it has been confirmed that they're continuing to use Fresnel lenses.

As this is a tethered headset by default, in its standard configuration the HMD connects to a host PC via DisplayPort 1.2 and USB 3.0. Alternatively, the VR HMD can be equipped with a WiGig-based wireless adapter from HTC.

The VR headset comes with built-in inside-out 6-degree-of-freedom (6DoF) positional tracking enabled by six cameras, a G-sensor, and a gyroscope. This is an important distinction from the original Vive (and Vive Pro), as it does not require any external sensors for tracking. This greatly simplifies the setup process and removes some of the friction from using the device, though as a realistic assessment it's unlikely to be quite as stable as using external sensors. Like all HTC Vive HMDs, the Cosmos has its own spatial audio-supporting stereo headphones as well as microphones.

The new headset also comes with brand-new knuckle-style controllers, which are tracked by the HMD as part of its inside-out tracking. The controllers feature touch sensitivity, two application buttons, a trigger, a joystick, a bumper, and a grip button; all the common controls found on current-generation VR controllers. There is one notable caveat about the Cosmos controllers though: they are powered by two AA batteries rather than a rechargeable pack, which means that users will need to swap batteries once they run down.

A unique capability of the Vive Cosmos is the modular design of its front panel, which can be detached and replaced with another one, allowing upgrades and new features to be added. Fittingly, the very first ‘mod’ is the Vive Cosmos External Tracking Mod, and it is designed to allow the headset to be tracked externally using the SteamVR ecosystem's existing Lighthouse base stations (though this also means the Cosmos controllers cannot be used). This one will be available in Q1 2020 for under $200.

The new HTC Vive Cosmos is lighter than its predecessors, though the manufacturer doesn't say by how much. Meanwhile, the HMD continues to use a headstrap similar to that of the Vive Pro, with a sizing dial and enhanced ergonomics that balance the weight for added comfort.

HTC’s Vive Cosmos VR headset will be launching on October 3rd for $699. However for anyone looking to get started right away, HTC has already started taking pre-orders this week.

Related Reading:

Source: HTC

Arm Joins CXL Consortium

Par Anton Shilov

Arm has officially joined the Compute Express Link (CXL) Consortium in a bid to enable its customers to implement the new CPU-to-device interconnect, and to contribute to the specification. Arm was among the few major technology companies yet to join the CXL Consortium, and given the number of chips that use Arm’s IP, its support is hard to overestimate.

Arm is not completely new to CXL. The company has been participating in CXL workgroups and has provided technological and promotional resources to support development of the technology. The formal joining of the CXL Consortium indicates the company’s commitment to providing its customers with a full software framework for CXL, though the company does not say anything about plans to add the appropriate logic to its upcoming AMBA PCIe Gen 5 PHY implementations.

Arm is a board member of the PCI SIG and the Gen-Z Consortium. Besides these, the company supports its own CCIX interface for inter-package chip-to-chip connectivity. By supporting CXL, Arm will enable its clients to build CPUs or accelerators that support low-latency cache coherency as well as memory semantics between processors and accelerators.

Arm says that CCIX, which supports full cache coherency, will be used as an inter-package chip-to-chip interface for heterogeneous system-on-packages. Meanwhile, since this functionality is not in the scope of CXL at present, CXL will not compete against Arm's use of CCIX.

Related Reading:

Source: Arm

The Xiaomi Mi9 Review: Flagship Performance At a Mid-Range Price

Par Andrei Frumusanu

We’re edging towards the latter half of 2019 and the next and last upcoming wave of device releases; however, among the many device releases of the year, one device we missed reviewing was the new Xiaomi Mi9. The phone was amongst the earliest releases of the year, representing one of the first Snapdragon 855 devices announced back in February.

Xiaomi has always been an interesting vendor, standing out alongside Huawei as one of the bigger Chinese vendors with a larger presence in the west. Particularly last year, and especially this year, Xiaomi has made a lot of progress in its push into European markets by officially releasing and offering its flagship devices in different markets. The Mi9, as opposed to past iterations, is thus no longer a special case or an import device, but rather a simple official Amazon purchase.

Today the Mi9 can be had for even less than its original 445€ launch price, being available for less than 400€, whilst still offering flagship performance, a triple camera setup, and a great screen, all in a compact and attractive package. We’ll go over the device and investigate exactly how Xiaomi is able to offer such hardware at a low price, whether there are compromises, and where they lie.

Giveaway: QNAP TS-932X NAS & Seagate IronWolf Drive Bundle

Par Ryan Smith

We’re back this week with another giveaway, this time courtesy of Seagate. After giving away some of their new IronWolf 110 SSDs a couple of months back, this month the company has decided to up the ante. Rather than just giving away SSDs, this time the company will be giving away a complete NAS setup, comprised of a QNAP TS-932X-2G 9-bay NAS, as well as one each of Seagate’s IronWolf Pro 16TB HDD and IronWolf 110 240GB SSD.

Starting things off, we have QNAP’s TS-932X-2G, a business-class NAS. This is one of the company’s compact 9-bay NASes, sporting five 3.5-inch SATA drive bays along with another four 2.5-inch SATA bays. The NAS is designed particularly for tiered storage, with the 3.5-inch bays being ideal for HDDs, while the 2.5-inch bays can hold SSDs (or in a pinch, 2.5-inch HDDs). Under the hood, the 932X is based on a quad-core ARM Cortex-A57-based SoC, the Alpine AL-324, which runs at 1.7GHz. This specific model comes with 2GB of DDR4 pre-installed in the single SO-DIMM slot, though it can be upgraded.

In terms of I/O, the NAS comes with a trio of USB 3.0 Type-A ports, among other things. But perhaps the most interesting feature here is the NAS’s Ethernet support: a pair of GigE RJ45 ports, along with a pair of 10GigE SFP+ ports. Owing to its business-focused design, QNAP has opted for SFP+ ports, which means that the NAS can be equipped with any of several different flavors of 10GigE depending on what kind of cabling you’d like to use. The one downside to this is that it means the ports aren’t usable without buying a transceiver, so there’s an additional cost (10GBASE-T transceivers are ~$50) before 10GigE is actually usable.

QNAP TS-932X NAS
  TS-932X-2G
CPU Model Alpine AL-324 (Cortex-A57)
Cores 4C
Freq. 1.7 GHz
Encryption Acceleration 256-bit AES
Memory Speed DDR4, one SO-DIMM slot
Capacity 2 GB, single-channel
Bays 5 × 3.5"
4 × 2.5"
Storage interface SATA 6 Gbps
Ethernet 2 × GbE
2 × 10 GbE SFP+
Audio 1 speaker
1 × 3.5mm audio out
USB 3 × USB 3.0 Type-A  
Other I/O Copy button, buzzer, LED notifications, etc.
Dimensions Height 183 mm | 7.19"
Width 225 mm | 8.85"
Depth 224 mm | 8.8"
Power Consumption Standby 21.66 W
Operating 42.15 W
OS QNAP QTS 4.3
MSRP $599

Seagate IronWolf HDD & SSD

Meanwhile from Seagate, we have a pair of IronWolf drives. For mass storage, the company is including its top-capacity 16TB IronWolf HDD. A recently launched product, the 16TB IronWolf is a helium-based 7200 RPM drive, and the highest-capacity IronWolf drive from the company to date. As part of the IronWolf family it’s specifically designed for use in NASes, incorporating the necessary sensors and a low-vibration design to best handle being packed in tight with a number of other actively running HDDs.

Seagate is also including one of its IronWolf SSDs, the 240GB version of the IronWolf 110. The drives, based on 3D TLC NAND with sustained performance numbers of 560 / 535 MBps sequential reads / writes, support a relatively hearty 1 DWPD of endurance, despite the usually read-heavy workloads that NASes drive. This makes them well suited for use as cache drives, which is exactly what Seagate is going for in this giveaway with the QNAP NAS. (A quick sanity check of the endurance math follows the spec table below.)

Seagate IronWolf 110 Series Specifications
Capacity 240 GB 480 GB 960 GB 1920 GB 3840 GB
Form Factor 2.5" 7mm SATA
NAND Flash 3D TLC
Sequential Read 560 MB/s
Sequential Write 345 MB/s (240 GB) | 535 MB/s (all other capacities)
Random Read 55k IOPS 75k IOPS 90k IOPS 90k IOPS 85k IOPS
Random Write 30k IOPS 50k IOPS 55k IOPS 50k IOPS 45k IOPS
Idle Power 1.2 W
Active Power 2.3 W 2.7 W 3.2 W 3.4 W 3.5 W
Warranty 5 years
Write Endurance 435 TB / 875 TB / 1750 TB / 3500 TB / 7000 TB (1 DWPD across the range)
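As a quick sanity check on the table above, TBW, capacity, and DWPD are tied together by the warranty period, and the rated figures do indeed work out to roughly one full drive write per day over five years:

```python
# Check that Seagate's TBW ratings line up with the quoted 1 DWPD over the
# 5-year warranty: DWPD = TBW / (capacity x 365 days x 5 years).
rated_tbw = {240: 435, 480: 875, 960: 1750, 1920: 3500, 3840: 7000}  # GB -> TB written

for cap_gb, tbw in rated_tbw.items():
    dwpd = tbw * 1000 / (cap_gb * 365 * 5)   # TB -> GB, then per-day writes
    print(f"{cap_gb} GB: {dwpd:.2f} DWPD")   # every model lands at ~1.0
```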

The giveaway is running through September 27th and is open to all US residents (sorry, ROW!). You can enter below, and you can find more details (and the full discussion) about the giveaway over on the AnandTech Forums.

AnandTech Seagate IronWolf + QNAP NAS Giveaway

Patriot Launches Viper VP4100 PCIe Gen 4 SSDs: Up to 5 GB/s

Par Anton Shilov

Patriot’s Viper Gaming division this week officially introduced its first PCIe 4.0 SSDs, several weeks ahead of schedule. The Viper VP4100 drives use Phison’s PS5016-E16 controller and generally resemble competing products. However, because of custom firmware, the SSDs may differ a bit from other E16 drives.

Available in 1 TB as well as 2 TB configurations and equipped with Phison’s PS5016-E16 controller as well as 3D TLC NAND memory, the Patriot Viper VP4100 is rated for up to 5000 MB/s sequential read speeds, up to 4400 MB/s sequential write speeds, as well as 800K peak read/write random IOPS. While the rated sequential write speed of the VP4100 is 100 MB/s lower than other drives based on the same controller, its rated random read/write performance is 50K IOPS higher, which looks like a reasonable tradeoff because random speeds usually have a more significant impact on the end-user experience.

Patriot's Viper VP4100 SSDs
Capacity 1 TB 2TB
Model Number VP4100-1TBM28H VP4100-2TBM28H
Controller Phison PS5016-E16 (PCIe 4.0 x4)
NAND Flash 3D TLC NAND
Form-Factor, Interface M.2-2280, PCIe 4.0 x4, NVMe 1.3
Sequential Read 5000 MB/s
Sequential Write 4400 MB/s
Random Read IOPS 800K IOPS
Random Write IOPS 800K IOPS
Pseudo-SLC Caching Supported
DRAM Buffer 1 GB 2 GB
TCG Opal Encryption No
Power Management ?
Warranty 5 years
MTBF ? hours
TBW 1800 TB 3600 TB
MSRP $399.99 $599.99

To make sure that performance of the Patriot Viper VP4100 SSD is consistent under high loads, the manufacturer equipped the drives with an external thermal sensor as well as an aluminum heat spreader.

Patriot’s Viper VP4100 SSD will be covered by a 5-year warranty and will be available in the near future. The 1 TB model will carry a recommended price tag of $399.99, whereas the 2 TB version will be priced at $599.99.

Related Reading:

Source: Patriot

MSI’s Prestige PC341WU 5K 34-Inch Professional Monitor Now Available

Par Anton Shilov

MSI entered the display market just a couple of years ago, and, relatively rare for the commodity-driven monitor market, MSI has opted to spend a good deal of effort putting together monitors that address niche markets. One such monitor is the Prestige PC341WU, a 5K LCD designed for professional/prosumer users who require high color accuracy.

MSI’s Prestige PC341WU uses LG’s 34-inch Nano-IPS panel, which is a 21:9 aspect ratio panel with a 5120×2160 resolution. The monitor sports a 450 nits typical brightness, 600 nits peak brightness, a 1200:1 contrast ratio, an 8 ms response time, and a 60 Hz refresh rate. Being a professional monitor, the LCD can display 1.07 billion colors (8-bit + FRC) and reproduce 100% of the sRGB and 98% of the DCI-P3 color spaces. Furthermore, the monitor carries VESA’s DisplayHDR 600 badge, so it has to support at least HDR10. Unfortunately, MSI doesn't list anything about factory calibration for the display.
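The 1.07 billion figure follows directly from the 8-bit + FRC scheme: FRC dithering emulates 10 bits per color channel, and the per-channel levels multiply across the three channels:

```python
# Where '1.07 billion colors' comes from: 8-bit + FRC emulates 10 bits per
# channel, and the per-channel levels multiply across R, G, and B.
levels_per_channel = 2 ** 10
print(f"{levels_per_channel ** 3:,}")   # 1,073,741,824 -> marketed as 1.07 billion
```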

The ultra-wide 5K monitor offers DisplayPort, HDMI, and USB Type-C inputs. This allows it to be compatible with all PCs available today, including those that only feature Thunderbolt 3 or USB-C ports. In addition, the LCD has a dual-port USB 3.0 hub, an SD card reader, and audio connectors.

Like other professional-grade monitors, the MSI Prestige PC341WU supports Picture-in-Picture (PIP) and Picture-by-Picture (PBP) capabilities, which are required by those who connect more than one PC to a single display. Also, it features a special Creator OSD to enable professionals to fine-tune the monitor for their needs. Last but not least, the LCD comes with an adjustable stand that can regulate height, tilt, and swivel.

The Prestige PC341WU will be available in the US starting from September 16, 2019, at an MSRP of $1,199.99. For a limited time, before the end of this month, B&H will offer the display with a $200 gift card.

The MSI Prestige 5K Display
  Prestige PC341WU
Panel 34-inch Nano IPS
Native Resolution 5120 × 2160
Maximum Refresh Rate 60 Hz
Response Time 8 ms GtG
Brightness 450 cd/m² (typical)
600 cd/m² (peak)
Contrast 1200:1
Backlighting LED
Viewing Angles 178°/178° horizontal/vertical
Curvature -
Aspect Ratio 21:9
Color Gamut 100% sRGB/BT.709
98% DCI-P3
DisplayHDR Tier 600
Dynamic Refresh Rate Tech -
Pixel Pitch 0.1554 mm
Pixel Density 163 PPI
Inputs DisplayPort
HDMI
USB Type-C
Audio 3.5 mm output
3.5 mm input
USB Hub 2 × USB 3.0 Type-A connectors
1 × USB 3.0 Type-B input
Card Reader SD Card Reader
Stand Adjustments Height: ? mm
Tilt: -?˚ -?˚
Swivel: -?˚ - ?˚
MSRP $1199.99

Related Reading:

Source: MSI

Need for Speed: The LG UltraGear (27GN750) 240 Hz IPS Monitor with G-Sync

Par Anton Shilov

LG has expanded its family of UltraGear displays aimed at hardcore and esports gamers. The newest model, the UltraGear 27GN750, supports a 240 Hz maximum refresh rate as well as NVIDIA’s G-Sync variable refresh rate technology. The LG UltraGear 27GN750 is the industry’s first IPS monitor featuring such a high refresh rate along with the G-Sync technology.

Based on a so-called ‘fast IPS’ 27-inch panel, the LG UltraGear 27GN750 has a 1920×1080 resolution, 400 nits brightness, a 1000:1 contrast ratio, 178°/178° viewing angles, a 1 ms GtG response time, and a variable refresh rate of up to 240 Hz supported by NVIDIA’s G-Sync technology. Unfortunately, LG does not disclose the VRR range supported by the LCD.

The 27-inch gaming monitor can display 16.78 million colors and can reproduce 99% of the sRGB color space. Furthermore, it also carries VESA’s DisplayHDR 400 badge and therefore supports HDR10 transport.

Because LG’s UltraGear monitors are designed predominantly for gamers, they support numerous features aimed at this audience, including LG’s Dynamic Action Sync mode, Black Stabilizer, and Crosshair.

As far as connectivity is concerned, the LG UltraGear 27GN750 has one DisplayPort, two HDMI inputs, as well as a dual-port USB hub.

The LG UltraGear Display with a 240 Hz Refresh Rate
  UltraGear 27GN750
Panel 27-inch class IPS
Native Resolution 1920 × 1080
Maximum Refresh Rate 240 Hz
Dynamic Refresh Technology NVIDIA G-Sync
Range ?
Brightness 400 cd/m²
Contrast 1000:1
Viewing Angles 178°/178° horizontal/vertical
Response Time 1 ms GtG
Pixel Pitch ~0.27675 mm
Pixel Density ~82 PPI
Color Gamut Support 99% sRGB
Inputs 1×DP 1.2
2×HDMI 2.0
Audio headphone out
Stand ?
Warranty ? years
MSRP ?

Being one of the leading makers of high-end displays, offering hundreds of models, LG introduced its separate UltraGear brand targeted at demanding gamers only in mid-2019, somewhat later than its competitors. The addition of the rather unique (as of today) UltraGear 27GN750 featuring a 240 Hz refresh rate enables the company to address a new market segment of gamers that require maximum performance yet demand the quality of an IPS panel. In fact, this is the world’s second IPS LCD featuring a 240 Hz refresh rate, and its only competitor is Dell's Alienware 27 model AW2720HF.

Related Reading:

Source: LG

Matrox Acquired by Co-Founder

Par Anton Shilov

Matrox on Monday announced that Lorne Trottier, a co-founder of Matrox, has acquired 100% ownership of the Matrox group of companies, which includes three divisions: Matrox Imaging, Matrox Graphics, and Matrox Video.

Founded in 1976 by Lorne Trottier and Branko Matić, Matrox may not be a widely-known name among the PC crowd these days as it has been years since the company released its own GPU and essentially quit the market of consumer graphics cards. Back in the day, Matrox’s Parhelia and Millennium G400/G450/G550 graphics cards provided superior 2D image quality (something that was very important back in the CRT era), but failed to offer competitive performance in 3D games. This failure led the company to leave the market of consumer graphics cards and focus on niche markets instead. Back in 2014 Matrox officially ceased to design its own graphics processor IP and has been using AMD’s Radeon GPUs coupled with its renowned software since then.

In fact, when it comes to multi-display graphics cards and other graphics solutions for various purposes as well as for specialized niche solutions for video and imaging applications, Matrox has rather unique offerings. Serving aerospace, broadcast, financial, cinematography, digital signage, and other industries, Matrox almost certainly earns good profit margins.

It is hard to say how the change of ownership will affect Matrox's product development and roadmap, but such changes usually focus a company on its key products, which enables growth.

Since Matrox has always been a privately held company, financial terms of the deal were not disclosed.

Here is what Lorne Trottier had to say:

“This next phase represents a renewed commitment to our valued customers, suppliers, and business partners, as well as to our 700 dedicated employees worldwide. At Matrox, our culture is defined by our passion for technological innovation and product development. We maintain the highest degree of corporate responsibility vis-a-vis production quality and industry standards. I am extremely proud of our accomplishments over our 40-plus-year history and would like to thank my co-founder for his contributions.”

He added:

“I look forward to championing a corporate culture defined by forward-thinking business practices, transparency, and teamwork. I am excited to lead this great organization as we implement growth initiatives. Matrox is a great Canadian success story. We owe this success and our bright prospects to the talented and dedicated people at all levels of this organization.” 

Related Reading:

Source: Matrox

Western Digital 20 TB HDD: Crazy Capacity for Cold Storage

Par Anton Shilov

As operators of cloud datacenters – and data hoarders alike – need ever more storage capacity, higher-capacity HDDs are continually being developed. Last week Western Digital introduced its new Ultrastar DC HC650 20 TB drives, breaking a new barrier in rotating storage.

The drives feature shingled magnetic recording (SMR) technology, which layers data tracks on top of one another much like a shingled roof, and is therefore designed primarily for write once, read many (WORM) applications (e.g., content delivery services). Western Digital’s SMR hard drives are host-managed, so they will be available only to customers with the appropriate software.
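For a sense of what 'host managed' means in practice, here is a minimal sketch of the constraint the host software has to respect: writes within an SMR zone must land at the zone's write pointer, and rewriting data means resetting the whole zone. This is a generic illustration of host-managed zoned behavior, not Western Digital's actual interface:

```python
class SMRZone:
    """Toy model of one host-managed SMR zone."""
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_ptr = 0          # next block that may legally be written

    def write(self, lba: int, nblocks: int) -> None:
        if lba != self.write_ptr:   # shingled tracks forbid in-place overwrites
            raise ValueError("host-managed SMR: writes must be sequential")
        if lba + nblocks > self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_ptr += nblocks

    def reset(self) -> None:        # reclaiming space means rewinding the zone
        self.write_ptr = 0

zone = SMRZone(size_blocks=65536)
zone.write(0, 128)      # OK: starts at the write pointer
zone.write(128, 128)    # OK: still strictly sequential
# zone.write(0, 1)      # would raise -- exactly what host software must avoid
```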

Western Digital’s Ultrastar DC HC650 20 TB is based on the company’s all-new nine-platter helium-sealed enterprise-class platform, a first for the company. The new 3.5-inch hard drives feature a 7200 RPM spindle speed and will be available with a SATA 6 Gbps or SAS 12 Gbps interface depending on the SKU. Since the product is not expected to be available immediately, the manufacturer is not disclosing all of its specifications just yet, but has stated that key customers are already in the loop.

Featuring a very high per-platter capacity of around 2.2 TB, the Ultrastar DC HC650 20 TB HDDs offer higher sequential read performance than their predecessors, but their IOPS-per-TB performance is lower than that of older HDDs. As such, Western Digital’s clients who use the 20 TB SMR HDDs will need to manage the physical limitations of SMR by maximizing sequential writes.

As far as availability is concerned, the 20 TB version of the Ultrastar DC HC650 SMR drives will be available as samples by the end of the year. Actual shipments will start once the drives are qualified by customers. Because the HDDs will be available to select customers only, Western Digital does not publish per-unit pricing.

Related Reading

Source: Western Digital

Apple Announces 10.2-Inch, A10-Powered 7th Gen iPad: Launching Sept. 30th for $329

Par Ryan Smith

As part of today’s fall keynote presentation for mobile devices, Apple took the wraps off of the latest iteration of their entry-level iPad. Now entering its 7th generation, the new iPad continues to retain most of the classic tablet’s design elements and features, however strictly speaking, Apple has finally moved past the tablet’s classic 9.7-inch size. As part of an effort to align the entry-level iPad with Apple’s higher-end iPad Air, the company has ever so slightly enlarged the tablet, with the latest model filling out to 10.2 inches diagonal.

Size increases aside, however, the latest iPad still takes up the same spot within Apple’s lineup as the previous iPad model. With Apple holding to the $329 retail price for the base 32 GB model ($299 education), this is Apple’s entry-level iPad, optimized for content consumption and some very light content creation. The latter, in turn, actually gets a small boost in this generation, with the addition of Apple’s Smart Connector, allowing the tablet to be used with Apple’s matching Smart Keyboard.

Apple iPad Comparison
  | iPad Air (2019) | iPad 7th Gen (2019) | iPad 6th Gen (2018)
SoC | Apple A12 Bionic (2× Vortex + 4× Tempest, 4-core "G11P" GPU) | Apple A10 (2× Hurricane + 4× Zephyr, 6-core PowerVR GPU) | Apple A10 (2× Hurricane + 4× Zephyr, 6-core PowerVR GPU)
Display | 10.5-inch 2224×1668 IPS LCD, DCI-P3/True Tone, 500 nits, fully laminated | 10.2-inch 2160×1620 IPS LCD, 500 nits | 9.7-inch 2048×1536 IPS LCD
Height | 250.6 mm | 250.6 mm | 240 mm
Width | 174.1 mm | 174.1 mm | 169.5 mm
Depth | 6.1 mm | 7.5 mm | 7.5 mm
Weight (Wi-Fi) | 456 grams | 483 grams | 469 grams
RAM | 3GB LPDDR4X | 2GB? LPDDR4 | 2GB LPDDR4
NAND | 64GB / 256GB | 32GB / 128GB | 32GB / 128GB
Battery | 30.2 Wh | 32.4 Wh | 32.4 Wh
Front Camera | 7MP, f/2.2, HDR, WCG, Retina Flash | 1.2MP, f/2.2, HDR, Retina Flash | 1.2MP, f/2.2, HDR, Retina Flash
Rear Camera | 8MP, f/2.4, AF, HDR, WCG | 8MP, f/2.4, AF, HDR | 8MP, f/2.4, AF, HDR
Cellular | Gigabit-class LTE-A | Gigabit-class LTE-A | 2G / 3G / 4G LTE
SIM Size | NanoSIM + eSIM | NanoSIM + eSIM | NanoSIM
Wireless | 802.11a/b/g/n/ac 2x2 MIMO, BT 5.0 | 802.11a/b/g/n/ac 2x2 MIMO, BT 4.2 | 802.11a/b/g/n/ac 2x2 MIMO, BT 4.2
Connectivity | USB-C, Apple Smart Connector | Lightning, Apple Smart Connector | Lightning
Launch OS | iOS 12 | iOS 13 | iOS 11
Launch Price (Wi-Fi / Cellular) | $499/$629 (64GB), $649/$779 (256GB) | $329/$459 (32GB), $429/$559 (128GB) | $329/$459 (32GB), $429/$559 (128GB)
In the case of the 7th generation iPad, taking a quick look at the specs actually tells us most of what we need to know about Apple’s new tablet. In short, Apple has made the tablet a bit larger than its predecessor, but little else. Based upon the same A10 SoC, same 32.4 Watt-hour battery, and the same camera modules, there’s not a whole lot new for the new iPad beyond its size. So by and large, the 7th generation iPad is pretty much a side-grade to the previous iPad.

The important part for Apple here – besides the tablet being a bit larger overall, with size continuing to be important to consumers – is that this aligns the design of the iPad with the new iPad Air (again). Specifically, the 7th generation iPad gets the same 250.6mm x 174.1mm footprint as Apple’s higher-end tablet. This means that the two tablets can share a lot of accessories that are designed to match the size of a tablet – case in point, Apple’s Smart Keyboard, which fits both the iPad and the iPad Air. The iPad is still a good 23% thicker, so cases and the like will still need to take this into account, but it means the iPad Air is no longer alone with its slightly enlarged footprint.

Blowing up the tablet also means that Apple has moved on to a slightly larger display panel. Owing to its thicker bezels, the 7th gen iPad doesn’t get the same 10.5-inch screen as the iPad Air, but rather it gets a 10.2-inch IPS LCD. Apple has opted to retain their same “retina” PPI of 264, so as a result the resolution on the new iPad is just a bit higher, shifting up to 2160x1620, still following the classic 4:3 aspect ratio. Meanwhile, the new iPad is also the first time Apple’s entry-level tablet is getting an official brightness rating, with Apple rating it for 500 nits, the same as the iPad Air. It should be noted, however, that this is as far as the iPad goes; the Air retains other advantages such as the laminated panel and wide color gamut.
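Apple's "retina" math checks out with simple geometry, since pixels per inch is just the diagonal pixel count divided by the diagonal size:

```python
import math

# PPI = diagonal resolution / diagonal size, for the 7th gen iPad's new panel.
width_px, height_px, diagonal_in = 2160, 1620, 10.2
ppi = math.hypot(width_px, height_px) / diagonal_in
print(f"{ppi:.1f} PPI")   # ~264.7, matching the quoted 264 PPI within rounding
```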

As this is an entry-level iPad, there isn’t much in terms of frills to talk about from a feature perspective. Apple has retained the use of the Touch ID-equipped home button, and the 3.5mm jack has thankfully not been excised from this model. Meanwhile Apple’s technical specifications do note that the tablet now includes a dual microphone setup to improve audio pickup, which is something that’s been restricted to the Air models (classic and modern) up until now. Apple Pencil support has also returned, with the tablet continuing to support the first-gen pencil, but not the more intricate second-gen pencil used with the current iPad Pro models.

Curiously, however, Apple hasn’t unified the tablet lineups in terms of I/O ports: the 7th generation iPad is still using Apple’s Lightning connector, rather than the USB-C connector of the iPad Air and iPad Pro. So while many Air/Pro accessories will work with the new entry-level iPad, anything expecting that USB-C port will not. In that respect the new iPad is closer to being backwards compatible with the now legacy iPads than it is being unified with the newer models.

Rounding out the package, Apple has interestingly opted not to scale up the battery at all for the new iPad, even with its larger size. As with its predecessor, the 7th generation iPad packs in a 32.4 Wh battery, which even with the slightly larger screen, Apple is still rating as being capable of driving the tablet for up to 10 hours. Consequently, while this is the sort of technical minutiae that Apple will never get into, I’m curious whether Apple has even changed parts here, or if they’re still using the exact same battery as the 6th gen iPad as a means to keep down costs.

Unfortunately, the newest iPad isn’t going to do anything about improving the tablet’s performance, as Apple is once again using the A10 SoC, first introduced for the iPhone 7. Though by no means a slouch, A10 is among Apple’s older SoCs – the company only supports iPads and iPhones going back to the A9 – and as a result it comes with the same basic image processing and Wi-Fi capabilities as the earlier iPad. And, while Apple doesn’t disclose memory capacity, because of the Package-on-Package nature of the A10’s memory, the SoC is almost certainly still the same 2GB version as before.

Last but not least, however, the new iPad does get a small boost to its cellular capabilities. The 7th generation iPad seems to be borrowing from the iPad Air here once again, incorporating a similar “Gigabit-class” LTE radio, which will allow for faster transfer speeds than the older iPad’s sub-Gigabit radio. And on a technical note, like the iPad Air, Apple has done away with CDMA support for the new iPad; it's now solely GSM/UMTS/LTE, meaning that in the unlikely event it falls back from 4G LTE, the iPad can’t use Verizon's or Sprint’s 3G CDMA networks.

Wrapping things up, the 7th generation iPad will come in Apple’s usual mix of colors, capacities, and Wi-Fi/Cellular feature sets. The lineup will continue to start at $329 for the base-model 32GB Wi-Fi version, while an upgrade to 128GB of storage will cost another $100, and adding cellular is a $130 upgrade. Apple will begin selling the tablets shortly after the new iPhone 11 series goes on sale, with the iPad set to begin shipping on September 30th.

Apple Announces New iPhone 11, iPhone 11 Pro, & iPhone 11 Pro Max

Par Andrei Frumusanu

Apple’s new iPhone Special Event just finished up at the Steve Jobs theatre in Cupertino – and as expected we saw the launch of a new generation of iPhones – the new iPhone 11 series. The new iPhone 11 is the successor to the iPhone XR of last year and is projected to again be Apple’s most successful device for the year, upgrading the camera system with new photography experiences as well as introducing the new A13 chipset.

The new iPhone 11 Pro and 11 Pro Max are Apple’s first iPhones with the “Pro” designation, and are the successors to last year’s XS and XS Max. They also bring the new back glass design, but this time include three camera modules, and this year Apple also upgrades the display panel to make it much brighter and much more efficient.

The Apple 2019 iPhone Event Live Blog (10am PT)

Par Andrei Frumusanu

The fall season is approaching yet again, and it’s time for another round of iPhone updates, representing Apple’s newest 2019 mobile hardware. The event should be starting at 10am PT, with the show once again taking place in the Steve Jobs theatre on the Apple Park campus.

This year we’re expecting a refresh of last year’s iPhone XS, XS Max and XR models. We’re still not quite sure what Apple is going to be calling the new phones, but if the numerous leaks prove to be true, we’ll be seeing incremental design updates, with a new in-vogue triple-camera setup as the key new feature of the phones, as well as, naturally, new internal hardware such as the Apple A13 SoC, which might bring some surprises to the table this year.

Nokia 7.2 Launched: 6.3-Inch PureDisplay, 48MP Camera, Snapdragon 660

Par Anton Shilov

HMD Global has announced its new ‘performance mainstream’ smartphone, the Nokia 7.2. It is aimed at the mass market, yet features premium capabilities, like a large HDR10-capable display with PureDisplay enhancements and a triple-module camera with a 48 MP sensor. Compared to its predecessor in the same price segment, the Nokia 7.2 upgrades itself in every important aspect: screen size, performance, and imaging capabilities.

The design language of the Nokia 7.2 is somewhat different from that of its predecessor, the Nokia 7.1, as well as other advanced Nokia handsets available today. The chassis is symmetric with very smooth edges to ensure a pleasant grip; the handset no longer has the sharp, diamond-cut edges that were meant to ensure a firm grip and give a somewhat special feeling. There is a reason for that: the enclosure no longer uses an aluminum unibody frame, but instead features a frame made of a polymer composite along with Corning Gorilla Glass 3 on both sides. Nokia says that the polymer composite it uses is twice as strong as polycarbonate at half the weight of aluminum. Using the polymer instead of metal enabled Nokia to install a 6.3-inch LCD and boost battery capacity while maintaining the weight of the phone at around 160 grams (the same as its predecessor).

Speaking of the display, the Nokia 7.2 features a 6.3-inch IPS LCD with a 2280×1080 resolution as well as Nokia’s PureDisplay hardware and software technology, enabled by a PixelWorks chip that can process HDR10 content, upscale SDR content to HDR, and adjust brightness and contrast dynamically to provide the best possible image quality both indoors and outdoors.

Inside the Nokia 7.2 is the Qualcomm Snapdragon 660 that integrates eight Kryo cores (so, four semi-custom Cortex-A73 and four semi-custom Cortex-A53 cores) as well as an Adreno 512 graphics core, and an X12 LTE modem. The application processor is paired with 4 or 6 GB of LPDDR4 memory as well as 64 GB or 128 GB of NAND flash storage. Meanwhile, the device is equipped with a 3,500 mAh battery that can be fast charged.

As noted above, the Nokia 7.2 gets significant upgrades when it comes to imaging. The main camera module is a 48 MP unit equipped with Zeiss optics, accompanied by an ultrawide 8 MP sensor, a 5 MP depth module, and an LED flash. Also, there is a 20 MP camera for selfies on the front of the phone. To take advantage of the new triple-module camera as well as the new front sensor, Nokia developed new camera software that supports an ‘AI-powered night mode’ (probably Nokia's name for a night mode along the lines of those seen elsewhere on Android). There is also a Pro Camera mode that enables precise control of white balance, ISO, aperture, and shutter speed. Obviously, there are various refinements when it comes to the selfie camera too.

Physical interfaces of the Nokia 7.2 include a fingerprint reader on the back; power, volume, and Google Assistant buttons; and a USB Type-C port for data and power. For those who care, there is still a 3.5-mm audio jack for headsets.

General Specifications of the Nokia 7.2
  Nokia 7.2 (Good)  Nokia 7.2 (Better)
Display Size 6.3" IPS
Resolution 2280×1080 (19:9)
PPI 400 PPI
Cover Gorilla Glass 3
Display Processor PixelWorks
SoC Snapdragon 660:
4 × Kryo 260 Gold (semi-custom Cortex-A73 cores) @ 2.2 GHz
4 × Kryo 260 Silver (semi-custom Cortex-A53 cores) @ 1.84 GHz
GPU Adreno 512
RAM 4 GB LPDDR4  6 GB LPDDR4
Storage 64 GB + microSD  128 GB + microSD
Networks GSM GPRS (2G), UMTS HSPA (3G), LTE (4G)
SIM Size Nano SIM
SIM Options Dual SIM, second SIM slot shared with the microSD card
Local Connectivity 802.11ac Wi-Fi, BT 5.0, NFC,
3.5mm jack,
USB 2.0 Type-C
Front Camera 20 MP
Rear Camera Main: 48 MP, f/1.8, 0.8µm, Quad-Pixel, PDAF
Ultrawide: 8 MP, f/2.2
Depth: 5 MP (f/2.4, 1.12µm ?)
Flash: LED
Battery 3,500 mAh
Dimensions Height 159.9 mm | 6.3 inches
Width 71.8 mm | 2.8 inches
Thickness 8 mm | 0.31 inches
Weight 160 grams | 5.63 ounces
Launch OS Android 9.0

The Nokia 7.2 smartphone will be available in Cyan Green, Charcoal, and Ice finishes later this month. The 4 GB + 64 GB model will cost €299, whereas the more advanced 6 GB + 128 GB SKU will be priced at €349.

Related Reading:

Sources: Nokia, GSMArena

GOODRAM Reveals IRDM Ultimate X: A Lineup of PCIe 4.0 x4 SSDs

By Anton Shilov

GOODRAM has introduced its first SSDs featuring a PCIe 4.0 x4 interface designed for new-generation high-end PCs. Set to be available in 500 GB, 1 TB and 2 TB configurations, the drives are based on Phison’s PS5016-E16 controller.

Just like other PCIe 4.0 x4 SSDs powered by the E16, GOODRAM’s IRDM Ultimate X SSDs use 3D TLC NAND memory. From a performance point of view, the manufacturer promises up to 5000 MB/s sequential read speeds, up to 4500 MB/s sequential write speeds, and up to 750K random read/write IOPS for the 1 TB and 2 TB drives, which is in line with other products based on Phison’s PS5016-E16 controller. Meanwhile, the cheapest 500 GB version offers lower write speeds and lower random performance.

In a bid to ensure consistent performance under high loads, the GOODRAM IRDM Ultimate X SSDs are equipped with an aluminum heat spreader, which as with other drives suggests a compatibility focus on desktop PCs.

GOODRAM's IRDM Ultimate X Specifications
Capacity 500 GB 1 TB 2 TB
Model Number ? ? ?
Controller Phison PS5016-E16
NAND Flash 3D TLC NAND
Form-Factor, Interface M.2-2280, PCIe 4.0 x4, NVMe 1.3
Sequential Read 5000 MB/s
Sequential Write 2500 MB/s 4500 MB/s
Random Read IOPS 550K IOPS 750K IOPS
Random Write IOPS 400K IOPS 750K IOPS
Pseudo-SLC Caching Supported
DRAM Buffer Yes, capacity unknown
TCG Opal Encryption No
Power Management ?
Warranty 5 years
MTBF ? hours
TBW ? ? ?
MSRP ? ? ?

One interesting feature of GOODRAM’s IRDM Ultimate X SSDs mentioned by PCLab.pl is their five-year warranty, a relatively rare perk for consumer drives these days. As for availability, expect the Ultimate X SSDs to arrive this November, with prices obviously depending on capacity.

Related Reading

Source: GOODRAM (via PCLab.pl)

Intel Documents Show Driver Support for Unannounced 400-Series Chipsets

By Anton Shilov

Intel does not often disclose its chipset names in advance, but from time to time an accidental publication gives us a glimpse of what is coming. This week, driver documents from the company show software support for unannounced 400-series and 495 chipsets, which we are led to believe are destined for future generations of products, following on from the 300-series parts.

As it turns out, Intel’s chipset drivers have supported the company’s 400-series and 495 chipsets since mid-August. Software support may indicate that the launch of Intel’s new platforms is imminent; meanwhile, we can only guess at their specifications and capabilities.

Another interesting addition to Intel’s family of chipsets is the H310D PCH, found in the same document. Based on its name, we suspect that this is yet another version of the entry-level H310, but we have no idea about its peculiarities. The original H310 was built on 14 nm and the H310C was built on 22 nm, so who knows what the H310D will be.

Related Reading

Source: Intel (via Twitter/momomo_us)

Acer’s ConceptD 9 Pro: A 17.3-Inch Convertible w/ Core i9 & Quadro RTX 5000

By Anton Shilov

In recent years, leading makers of gaming PCs have been experimenting with unorthodox form-factors in an attempt to maximize performance and improve the overall experience. Having learnt from its Predator Triton laptops, Acer has applied that expertise to mobile workstations, and this week it introduced one of the industry’s first convertible notebooks featuring Intel’s Core i9 CPU and NVIDIA’s Quadro RTX 5000 GPU.

The Acer ConceptD 9 Pro is a 17.3-inch convertible PC that uses the chassis originally developed for the Predator Triton 900 gaming PC. The chassis features Acer’s CNC-machined Ezel Aero Hinge, which can flip, extend, or recline the display to offer the optimal position for creative work. The notebook also places its mechanical keyboard at its front side to improve cooling for the high-TDP components while retaining a relatively low z-height. Speaking of cooling, the PC uses Acer’s 4th Generation cooling system featuring metallic AeroBlade 3D fans.

To comply with the requirements of graphics professionals, the ConceptD 9 Pro is equipped with a Pantone Validated 4K Ultra-HD display that covers 100% of the Adobe RGB color space and is factory calibrated to a Delta E <1 color accuracy. Furthermore, the convertible workstation comes with a Wacom EMR stylus with 4,096 levels of pressure sensitivity that attaches magnetically to the machine.

When it comes to the insides, the ConceptD 9 Pro packs up to an Intel 9th Gen Core i9 CPU with eight cores, NVIDIA’s Quadro RTX 5000 GPU, 32 GB of DDR4 RAM, and two PCIe 3.0 x4 M.2 SSDs operating in RAID for either reliability or performance.

Being a 17.3-inch powerhouse, Acer’s ConceptD 9 Pro is certainly not an ultraportable machine. The system weighs around 4.1 kilograms and is around 2.4 cm (0.94 inches) thick. Considering that we are dealing with an extremely capable machine in a unique form-factor, the weight and thickness are quite justified for those who actually need them.

Acer intends to start sales of its flagship ConceptD 9 Pro convertible workstation in EMEA sometime in November at prices starting at €5,499.

Related Reading:

Source: Acer

Samsung’s 8K QLED TV 55-Inch: A More Affordable 8K Ultra-HD TV

By Anton Shilov

As the flagship televisions available today, 8K Ultra-HD TVs not only feature a resolution of 7680×4320 pixels, but also pack all the latest technologies that manufacturers have to offer, and can therefore provide the ultimate experience even with 4K or 2K content. Samsung’s Q900 family of 8K TVs does exactly that, but because of their premium positioning, the company has offered them only in large sizes, which has meant price tags excessive for most. Up until this week.

At IFA, Samsung introduced its smallest 8K UHDTV to date: the Q900R 55-inch model QN55Q900RBFXZA, which costs significantly less than the rest of the SKUs in the lineup.

The television uses Samsung’s VA-class 7680×4320 panel backed by a quantum dot-enhanced LED backlight with full-array local dimming, which Samsung dubs Direct Full Array 16X in the case of the 55-inch model. The TV features a peak brightness of 4,000 nits, the maximum luminance at which HDR content is mastered these days. Speaking of HDR, the Q900-series officially supports the HDR10, HDR10+, and HLG formats, but not Dolby Vision (at least for now). As far as color gamut is concerned, the Q900-series can reproduce 100% of the DCI-P3 space.

Just like its bigger brothers, the Samsung Q900R 55-inch uses the company’s Quantum Processor 8K as its brain. The SoC is responsible for all decoding, upscaling, and other operations. Among the capabilities of the chip that Samsung is particularly proud of is its proprietary 8K AI Upscaling technology, which is designed to enhance digital content to the panel’s native resolution (it does not work with PCs, games, analogue content, etc.). Furthermore, the SoC can interpolate content to 240 FPS and supports AMD’s FreeSync/HDMI Variable Refresh Rate technologies.

Last but not least, the UHDTV comes with a 60-W 4.2-channel audio subsystem.

While the technological excellence of Samsung’s Q900-series Ultra-HD televisions is well known, the key feature of the 55-inch model is its price. The 8K television carries a $2,499 price tag, which is in line with higher-end 4K TVs. Considering that retail prices tend to fall below MSRPs, the 55-inch Q900R will likely become considerably more widespread than its larger counterparts.

Related Reading:

Source: Samsung

Dynabook Reveals Tecra X50: A Lightweight 15.6-Inch Laptop with 10+ Hours of Battery Life

By Anton Shilov

Dynabook, formerly the PC division of Toshiba, today introduced its flagship Tecra laptop aimed at corporate, business, and education users. The Tecra X50 comes with a 15.6-inch IGZO display, weighs around 1.4 kilograms, and can work for over 10 hours on one charge depending on the workload.

15.6-inch notebooks are generally considered workhorses that spend most of their life on a desk, so very few companies try to make them truly lightweight and friendly to road warriors. Dynabook appears to be one such company: at 1.42 kilograms, the Tecra X50 is among the lightest laptops featuring a 15.6-inch Full-HD IGZO screen on the market. The mobile PC uses an Onyx Blue magnesium alloy chassis with a 17.6 mm z-height, which explains how Dynabook has managed to bring the weight of the Tecra X50 into the ballpark of a 13.3-inch class laptop. Magnesium alloy is of course stronger than the plastic used for some ultra-low-weight 15.6-inch machines, so while the Tecra X50 is not the lightest 15.6-incher available today, it offers a combination of sturdy design and relatively low weight.

Inside the Tecra X50 is up to an Intel 8th Generation Core i7-8665U (Whiskey Lake) processor with Intel UHD Graphics 620, accompanied by up to 32 GB of dual-channel DDR4-2400 memory and up to a 1 TB PCIe 3.0 x4 NVMe SSD. On the connectivity side, the Tecra X50 features Intel’s Wi-Fi 5 or Wi-Fi 6 with Bluetooth 5.0, two Thunderbolt 3 ports, two USB 3.0 connectors, an HDMI output, a microSD card reader, and a 3.5-mm connector for headsets.

Since the Tecra X50 is designed for corporate and business users, Dynabook put a lot of emphasis on manageability and security. The system can be powered by a vPro-enabled CPU, and it has a TPM 2.0 chip, a Synaptics fingerprint reader, and an HD webcam with IR sensors for Windows Hello as well as a privacy shutter.

Other features of the Dynabook Tecra X50 worth mentioning include an AccuPoint joystick-like pointing device, a spill-resistant keyboard, stereo speakers with a harman/kardon badge, and a microphone array.

One of the key selling points of the Tecra X50 is its battery life. The machine comes with a built-in 48 Wh battery that, according to Dynabook, can power it for over 10 hours on one charge depending on the configuration and workload. Since the notebook uses an IGZO display, which consumes less power than traditional LCDs, it is logical to expect the Tecra X50 to last longer than its competitors. Meanwhile, the actual configuration matters a lot: higher-end Tecra X50 notebooks with Intel’s Core i7, dual-channel memory, a touch screen, and an advanced SSD will last for about 10 hours 45 minutes on one charge, which is rather good, whereas lower-end Core i3-based configs with 4 GB of RAM and a non-touch display can last for 17+ hours, according to Dynabook. Keep in mind that these results were achieved in the lab using MobileMark 2014, so real-world battery life will depend on tasks, the exact system specification, and other factors.
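
As a quick sanity check on those numbers (our own arithmetic, not Dynabook’s test methodology), the quoted runtimes imply the following average power draw from the 48 Wh battery:

```python
# Average system power implied by battery capacity and quoted runtime.
battery_wh = 48.0
configs = {
    "Core i7, touch, fast SSD": 10.75,           # 10 hours 45 minutes
    "Core i3, 4 GB RAM, non-touch (lab)": 17.0,
}
for name, hours in configs.items():
    print(f"{name}: ~{battery_wh / hours:.1f} W average draw")
# ~4.5 W and ~2.8 W: plausible for a low-power IGZO panel and a U-series CPU.
```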

Dynabook's Tecra X50 General Specifications
  PLR33U-0KP004 / PLR33U-0KQ004  Long-Lasting Version
Display 15.6" 1920×1080 IGZO (optionally with 10-point multitouch)  15.6" 1920×1080 IGZO
CPU Up to Intel Core i7-8665U  Intel Core i3-8xxxU
Graphics Intel UHD Graphics 620 (24 EUs)
RAM Up to 32 GB dual-channel DDR4  4 GB DDR4
Storage Up to 1 TB PCIe SSD  256 GB SSD
Wi-Fi PLR33U-0KP004: Intel Wireless-AC 9560 (802.11ac); PLR33U-0KQ004: Intel Wi-Fi 6 AX200 (802.11ax); Long-Lasting Version: ?
Bluetooth Bluetooth 5 (Long-Lasting Version: ?)
USB 3.0 2 × USB 3.0 Type-A
TB3 2 × Type-C TB3/USB 3.1 ports (also used for charging, external displays, etc.)
Card Reader microSD
Fingerprint Sensor Yes
Other I/O Webcam with RGB + IR sensors and privacy shutter, microphone, stereo speakers, audio jack, spill-resistant keyboard, AccuPoint joystick
Battery 48 Wh, up to 10 hours 45 minutes  48 Wh, up to 17+ hours
Dimensions Thickness 17.6 mm | 0.69 inches
Width 359 mm | 14.1 inches
Depth 250 mm | 9.8 inches
Weight Starting at 1.42 kg (3.13 lbs)
Price ?  ?

Dynabook intends to start sales of the Tecra X50 in the near future at prices starting from $1,544.

Related Reading:

Source: Dynabook

New Uses for Smartphone AI: A Short Commentary on Recording History and Privacy

By Dr. Ian Cutress

This opinion piece is a reaction to recent announcements.

Having just attended the Huawei keynote here at the IFA trade show, I saw a couple of new AI-enabled features presented on stage that made the hair on the back of my neck stand on end. Part of it is just how quickly AI in handheld devices is progressing, but the other part of it makes me think about how it can be misused.

Let me cover the two features.

 

"Real-Time Multi-Instance Segmentation"

Firstly, AI detection in photos is not new. Identifying objects isn’t new. But Huawei showed a use case where several people were playing musical instruments, and the smartphone camera could separate both the people from the background and the people from each other. This allowed the software to change the background, from an indoor scene to an outdoor one, for example. It also meant that individuals could be deleted, moved, or resized. Compare the title image to this one, where people are deleted and the background moved.

What does this mean? People can be removed from photos. Old lovers can be removed from those holiday photographs. Individuals can easily be removed from (or added to) the historical record. The software automatically generates the background behind them (an approximation of the original background), and the size of people can even be changed. This applies not only to photographs, but to video. The image below shows one person increased in size, but it could just as easily be something more significant.

Now, I know these algorithms already exist in photo editing software on a PC, if you know how to use it. I also know that the demo Huawei showed on stage was more of a representative showcase of AI on a smartphone. But I can imagine something similar coming to a smartphone, being performed on the smartphone, with the goal of making it as easy to use as possible. How we might in future interpret the actions of our past selves (or of others) may have to take into account how easy it has become to modify images and video.
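
To give a sense of how accessible this already is, below is a minimal sketch of multi-instance person segmentation using an off-the-shelf, COCO-pretrained Mask R-CNN from torchvision. To be clear, this is an assumption for illustration (Huawei has not disclosed its on-device model), and the file names are hypothetical:

```python
# Minimal multi-instance person segmentation sketch (not Huawei's model).
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("band_photo.jpg").convert("RGB")  # hypothetical input
with torch.no_grad():
    out = model([to_tensor(image)])[0]

# COCO label 1 is "person". Each detection carries its own soft mask,
# which is what lets an editor delete/move/resize people independently.
people = [m[0] for l, s, m in zip(out["labels"], out["scores"], out["masks"])
          if l.item() == 1 and s.item() > 0.8]
print(f"Detected {len(people)} people")

# Crude 'delete person 0': blank the masked pixels. A real editor would
# inpaint a plausible background here instead of filling with white.
if people:
    pixels = np.array(image)
    pixels[people[0].numpy() > 0.5] = 255
    Image.fromarray(pixels).save("person_removed.jpg")
```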

 

Detecting Heart Rate with Cameras

The second feature is related to health and AR. Using a pre-trained algorithm, Huawei showed a smartphone detecting the user’s heart rate simply through the front-facing camera (and presumably the rear-facing camera too). It does this by looking at small facial movements between video frames, combining the values it predicts per pixel into an overall estimate.
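
To illustrate the general idea, here is a minimal sketch of camera-based heart-rate estimation. One hedge up front: Huawei’s demo reportedly analyzes tiny facial movements, while this sketch uses the related, simpler remote-photoplethysmography (rPPG) signal, i.e. the subtle frame-to-frame color change of facial skin. The video filename is hypothetical, and this is in no way a medical tool:

```python
# Minimal rPPG sketch: estimate heart rate from subtle color changes in
# the face. A well-published approach, not necessarily Huawei's method.
import cv2
import numpy as np

cap = cv2.VideoCapture("face.mp4")  # hypothetical clip of a static, well-lit face
fps = cap.get(cv2.CAP_PROP_FPS)
faces = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found = faces.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(found):
        x, y, w, h = found[0]
        signal.append(frame[y:y + h, x:x + w, 1].mean())  # mean green channel

# The heartbeat shows up as the dominant frequency in the 0.7-4 Hz band
# (42-240 BPM) of the de-meaned signal.
sig = np.asarray(signal) - np.mean(signal)
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
power = np.abs(np.fft.rfft(sig)) ** 2
band = (freqs > 0.7) & (freqs < 4.0)
print(f"Estimated heart rate: {60 * freqs[band][np.argmax(power[band])]:.0f} BPM")
```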

Obviously, it isn’t meant to be used as a diagnostic tool (at least, I hope not). I could imagine similar technology being used with IP cameras for a home security system, perhaps, so that when it detects an elderly relative in distress it can take the appropriate action. But it lends itself to abuse if it can be used on other people without their knowledge. Does that constitute an invasion of privacy? Does it work on these smartphones with 10x zoom? I’m not sure I’m qualified to answer those questions.

 

A big part of me wants to see technology moving forward, with development and progression from generation to generation. But in seeing these two features today, there’s a tiny part of me that isn’t at ease unless the correct safeguards are in place, such as edited images/videos carrying a signature marker, or heartbeat measurement working only on people pre-registered on the smartphone. Hopefully my initial fears aren’t as serious as they first appear.

 

Huawei Announces Kirin 990 and Kirin 990 5G: Dual SoC Approach, Integrated 5G Modem

By Dr. Ian Cutress

For the last three years, Huawei has announced its next-generation SoC at the IFA technology show here in Berlin. On every occasion, the company promotes its hardware: the latest process technologies, the latest core designs, and its latest connectivity options. The flagship Kirin processor it announces ends up in every major Huawei and Honor smartphone for the following year, and the Kirin 990 family announced today is no different. With the Mate 30 launch happening on September 19th, Huawei lifted the lid on its new flagship chipset, with a couple of twists.

Netgear Expands 802.11ax Portfolio with Orbi Wi-Fi 6 Mesh System and Nighthawk EAX80 Extender

By Ganesh T S

As part of IFA 2019, Netgear has a number of new announcements across different product lines. The wireless networking products are of particular interest to us. We had attended Qualcomm's Wi-Fi 6 Day last month, and I had tweeted about Netgear's Orbi Wi-Fi 6 (RBK850) that was showcased at the event. Things are being made official today, with additional details becoming available.

Netgear's Orbi systems need little introduction, given their wide retail reach and popularity. At CES 2019, the company had divulged some details about the meshing together of Orbi and Wi-Fi 6. The key to the great performance of the Orbi RBK50 (802.11ac) was the dedicated 4x4 wireless backhaul between the router and the satellites. This left two 2x2 streams (one in 5 GHz and one in 2.4 GHz) available for the client devices connected to either member of the kit. The Orbi RBK850 (the kit carries the RBK852 designation) retains the same 4x4 backhaul, but makes the move from Wi-Fi 5 to Wi-Fi 6. In theoretical terms, the wireless backhaul is now 2.4 Gbps (4x4:4 / 80MHz 802.11ax) compared to 1.73 Gbps in the RBK50. The clients also get 4x4:4 streams from the satellite or the router, with one set of spatial streams dedicated to 2.4 GHz duties / 1.2 Gbps, and another to 5 GHz duties / 2.4 Gbps. Wired backhaul is also supported (the dedicated wireless backhaul spatial streams are disabled in that case), just like the Orbi RBK50.
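
Those backhaul figures fall straight out of the 802.11 PHY arithmetic. As a back-of-the-envelope check (our own calculation from standard OFDM parameters, not Netgear’s spec sheet):

```python
# Peak PHY rate = streams * data_subcarriers * bits_per_symbol
#                 * coding_rate / symbol_duration.
def phy_gbps(streams, subcarriers, bits, coding, symbol_us):
    return streams * subcarriers * bits * coding / symbol_us / 1e3  # Mbps -> Gbps

# 802.11ac, 80 MHz, 256-QAM 5/6, 3.6 us symbol (short guard interval)
print(f"RBK50 backhaul:  {phy_gbps(4, 234, 8, 5/6, 3.6):.2f} Gbps")   # ~1.73
# 802.11ax, 80 MHz, 1024-QAM 5/6, 13.6 us symbol (0.8 us guard interval)
print(f"RBK852 backhaul: {phy_gbps(4, 980, 10, 5/6, 13.6):.2f} Gbps") # ~2.40
```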

As announced at Qualcomm's Wi-Fi 6 Day, the Orbi RBK852 is based on Qualcomm's Networking Pro 1200 platform. It will be available next month and the kit (a single router and satellite) will be priced at $700.

In other Orbi news, Netgear is announcing that the Orbi Voice and Outdoor Orbi satellites for the original Orbi (802.11ac) are getting a 'Universal Mode' update, enabling them to act as extenders for any router (even non-Netgear ones). This is a welcome addition to the Orbi family's feature set, and will help the company draw more people into the Orbi ecosystem.

Netgear is also announcing the Nighthawk EAX80 Wi-Fi 6 wireless extender today. It is based on a Broadcom chipset and meant to complement the Wi-Fi 6 routers already in the market.

Netgear is aiming to promote ease of extender use with an app-based configuration flow. The EAX80 will be available later this month for $250.

Based on reader feedback from previous Wi-Fi 6 articles, I brought up two questions with Netgear: one on the pricing of the Orbi RBK852 at $700 (a tad too high?), and another on consumer appetite for Wi-Fi 6 equipment given the current draft nature of the 802.11ax standard.

On the cost aspect, Netgear noted that the premium Wi-Fi 6 Nighthawk routers priced around the $300 - $400 range have been selling relatively well. Given that a mesh system is essentially the hardware for at least two wireless routers in one kit, the pricing is justified. Regarding the consumers' ability to stomach a $700 expense for a Wi-Fi system, Netgear pointed to internal surveys that showed consumers treating Orbi-like Wi-Fi systems as long-term investments (3-5 years). Given that these are folks who have invested in the latest premium notebooks and phones (Wi-Fi 6 clients), Netgear believes that the target market would not be put off by the price tag of the Orbi Wi-Fi 6 kit.

Apropos the Wi-Fi 6 standard's pending ratification, Netgear believes that the issues currently holding back Wi-Fi 6 in the draft stage are all addressable via firmware and will not require any hardware fixes. Since ongoing firmware updates have pretty much become the norm for most electronic products nowadays, any changes in the standard between now and eventual ratification can also make it to units already deployed in the field. It must also be noted that a final standard is needed to ensure maximum interoperability between Wi-Fi 6 clients and APs from different vendors. Given that Netgear has systems based on silicon from all three major chipset vendors (Qualcomm, Broadcom, and Intel), interoperability issues should not be much of a concern for its customers.

Overall, we see that the Wi-Fi 6 market is poised to take off with the ongoing launch of multiple Wi-Fi 6 client systems and phones. The rollout of DOCSIS 3.1 as well as FTTH ISPs has brought gigabit Internet to many households, and consumers' appetite for practical gigabit Wi-Fi has been whetted. Netgear's 802.11ax portfolio expansion is happening at the right time for the company to take advantage of the current state of the market.

Lenovo’s ThinkVision S28u-10: A 4K Business Display

By Anton Shilov

Lenovo has introduced a new business- and prosumer-oriented display that brings together an ultra-high-definition resolution, accurate color reproduction, and reduced blue-light emission for improved eye comfort.

The Lenovo ThinkVision S28u-10 monitor is based on a 28-inch IPS panel with a 3840×2160 resolution that can display 1.07 billion colors and reproduce 99% of the sRGB color space as well as 90% of the DCI-P3 color gamut. For some reason, Lenovo says nothing about support for the Adobe RGB color space, which is often required by designers and photographers. Since we are dealing with an IPS display, it is reasonable to expect it to exhibit the usual IPS characteristics.

As is standard for Lenovo’s business- and prosumer-oriented monitors, the ThinkVision S28u-10 comes on a tilt- and height-adjustable stand, and for those who need additional flexibility it has VESA mounts. As for connectivity, the LCD has a DisplayPort and an HDMI input.

One of the key selling points of the ThinkVision S28u-10 is TÜV Rheinland’s Eye Comfort certification, which, as the name suggests, is designed to ensure that the monitor is suitable for prolonged use. The certification requires a display to reduce blue-light content, flicker, and reflections, as well as to provide consistent image quality from different viewing angles. Specialists from TÜV Rheinland test displays in accordance with safety and health requirements set in Europe, the US, the UK, and Hong Kong.

Brief Specifications of the Lenovo ThinkVision S28u-10
  S28u-10
Panel 28" IPS
Native Resolution 3840 × 2160
Maximum Refresh Rate 60 Hz (?)
Response Time ? ms
Brightness ? cd/m²
Contrast 1,000:1
Viewing Angles 178°/178° horizontal/vertical
Pixel Pitch 0.1614 mm
Pixel Density 157 ppi
Display Colors 1.07 billion (?)
Color Gamut Support DCI-P3: 90%
sRGB/Rec. 709: 99%
Adobe RGB: ?
Stand Tilt and height adjustable
Inputs 1 × DisplayPort 1.2
1 × HDMI 2.0
PSU External (?)
Launch Date October 2019
Launch Price ?

Lenovo’s ThinkVision S28u-10 monitor will be available in October. Pricing should follow shortly.

Related Reading:

Source: Lenovo

Lenovo’s Yoga C940 15.6-Inch: Eight Cores and GTX 1650

By Anton Shilov

Lenovo today introduced its brand-new Yoga C940 convertible laptop with a 15.6-inch display, aimed at performance-driven consumers and creative professionals. In addition to a large screen, the new hybrid notebook gains a discrete GeForce GTX GPU, a first for the product family.

At first glance, the Lenovo Yoga C940 15.6 looks like an extension of the Yoga 9-series family: a product featuring a 15.6-inch Full-HD or Ultra-HD HDR-capable display and better graphics. It comes in an all-metal Iron Grey CNC-milled chassis featuring a 360° watchband hinge that looks very similar to the chassis used by the Yoga C940 14. Meanwhile, the addition of a discrete GPU, the use of Intel’s 9th Generation Core processor with up to eight cores, and some other touches (like a numpad) somewhat change the positioning of the system, enabling Lenovo to address demanding consumers and creative professionals who need CPU and GPU horsepower more than they need other features (more on that later).

The Lenovo Yoga C940 15.6-inch is based on Intel’s 9th Generation Core i7 or Core i9 processor with six or eight cores, accompanied by NVIDIA’s GeForce GTX 1650, a combination that guarantees rather decent performance. The system can be equipped with up to 16 GB of DDR4 memory as well as up to a 2 TB SSD (see the general specifications in the table below).

Other features of the Yoga C940 15.6-inch are generally similar to those of its smaller brother, the C940 14-inch. The system offers Wi-Fi 5 or Wi-Fi 6, two Thunderbolt 3 ports, and one USB 3.1 (Gen 1) Type-A connector. In addition, the convertible comes with a Dolby Atmos-supporting rotating soundbar, a far-field microphone array supporting Alexa, Wake on Voice, and similar functionality, a fingerprint reader, and a webcam with Lenovo’s TrueBlock privacy shutter.

The 15.6-inch version of the Yoga C940 is 17.5 mm thick and weighs around 1.9 kilograms, which is generally in line with contemporary convertible machines of this size. The Full-HD version is rated for 12 hours of operation on one charge, whereas the Ultra-HD models are expected to work for 9 hours.

Lenovo's Yoga C940 15.6-Inch
  Yoga C940 15.6-Inch FHD (C940-15IRH)  Yoga C940 15.6-Inch UHD (C940-15IRH)
Display Type IPS  IPS
Resolution 1920×1080  3840×2160
Brightness 500 cd/m²  500 cd/m²
Color Gamut 72% NTSC  72% NTSC
Touch Yes  Yes
HDR DisplayHDR 400  DisplayHDR 400
CPU Intel's 9th Generation Core i7/i9
Graphics Intel UHD 630 + NVIDIA GeForce GTX 1650
RAM Core i9: 16 GB DDR4
Core i7: 12 GB or 16 GB DDR4
Storage PCIe 3.0 x4 SSD: 256 GB, 512 GB, or 2 TB
Optane Memory H10: 32 GB 3D XPoint + 512 GB QLC
Optane Memory H10: 32 GB 3D XPoint + 1 TB QLC
Wi-Fi Wi-Fi 5 or Wi-Fi 6
Bluetooth Bluetooth 5
Thunderbolt 2 × USB Type-C TB3 ports
USB 1 × USB 3.1 Gen 1 Type-A
Fingerprint Sensor Yes
Webcam HD camera with IR and TrueBlock shutter
Other I/O Far-field microphone, Dolby Atmos soundbar, TRRS audio jack, trackpad, etc.
Battery Capacity ? Wh
Battery Life Up to 12 hours (FHD)  Up to 9 hours (UHD)
Dimensions Thickness 17.5 mm | 0.69 inches
Width 355.5 mm | 14.0 inches
Depth 238.5 mm | 9.39 inches
Weight 1.9 kilograms | 4.19 lbs
Operating System Windows 10

Lenovo plans to start sales of its Yoga C940 15.6-inch hybrid laptops this October at prices starting at $1709.99.

Now, a couple of words about the positioning of the Yoga C940 15.6. The evolution of Lenovo’s high-end convertible laptops is an interesting story in itself. Historically, Lenovo had two advanced 13/14-inch class convertibles: the ThinkPad X1 Yoga with a decent Intel Iris-branded integrated GPU in a carbon fiber chassis, and the Yoga 9-series with a regular integrated GPU in an all-metal chassis. Last year, the company adopted an aluminum chassis for its ThinkPad X1 Yoga, but removed the superior Iris graphics. Effectively, Lenovo left the market for convertibles with decent graphics to competitors like HP and Dell, which offer the rather advanced Envy x360 2-in-1 and XPS 2-in-1 systems with discrete GPUs. With the Yoga C940 15.6 featuring an up to eight-core CPU along with NVIDIA’s GeForce GTX 1650 graphics processor, Lenovo is returning to the market of high-end convertibles aimed at those who value performance above all else.

Related Reading:

Source: Lenovo
