NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus. The acronym NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCIe add-in cards and other forms such as M.2 cards. NVM Express, as a logical device interface, has been designed from the ground up to capitalize on the low latency and internal parallelism of solid-state storage devices.
By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical device interfaces, including multiple long command queues and reduced latency. (The previous interface protocols were developed for use with far slower hard disk drives (HDDs), where a very lengthy delay relative to CPU operations exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.)
NVM Express devices exist both in the form of standard-sized PCI Express expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639). SATA Express storage devices and the M.2 specification for internally mounted computer expansion cards also support NVM Express as the logical device interface.
Specifications
Specifications for NVMe released to date include:
- 1.0e (January 2013)
- 1.1b (July 2014)
- 1.2 (November 2014)
- 1.2a (October 2015)
- 1.2b (June 2016)
- 1.2.1 (June 2016)
- 1.3 (May 2017)
- 1.3a (October 2017)
- 1.3b (May 2018)
The main changes between versions 1.2 and 1.3, and the new features introduced, are:
- Identify Namespace now returns a list of namespace identifiers: a list of Namespace Identification Descriptor structures is returned to the host for the specified namespace.
A number of optional features have also been added, including Device Self-Test, Sanitize, Directives, Boot Partition, Telemetry, Virtualization Enhancements, NVMe-MI Management Enhancements, Host Controlled Thermal Management, Timestamp and Emulated Controller Performance Enhancement, along with a number of changes from past behaviour; see the NVMe 1.3 Changes Overview (PDF) for details.
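As an illustration of how a host might retrieve the Namespace Identification Descriptor list introduced in NVMe 1.3, the following minimal sketch issues an Identify admin command with CNS value 03h through the Linux NVMe passthrough ioctl. It is a hedged example rather than specification text: the controller device path /dev/nvme0, the buffer size and the error handling are assumptions, and a real deployment would normally use a tool such as nvme-cli instead.

```c
/* Sketch: fetch the NVMe 1.3 Namespace Identification Descriptor list
 * (Identify, CNS 03h) for namespace 1 via the Linux passthrough ioctl.
 * The device path is an assumption; run with sufficient privileges. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDWR);      /* hypothetical controller node */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char buf[4096];                   /* holds the descriptor list */
    memset(buf, 0, sizeof(buf));

    struct nvme_admin_cmd cmd;
    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode   = 0x06;                       /* Identify */
    cmd.nsid     = 1;                          /* namespace to describe */
    cmd.addr     = (unsigned long long)(uintptr_t)buf;
    cmd.data_len = sizeof(buf);
    cmd.cdw10    = 0x03;                       /* CNS 03h: descriptor list */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    /* Each descriptor begins with a type (NIDT) and length (NIDL) byte. */
    printf("first descriptor: NIDT=%u NIDL=%u\n", buf[0], buf[1]);
    close(fd);
    return 0;
}
```

A complete program would walk the whole buffer, stopping at the first zero-length descriptor; nvme-cli exposes an equivalent query as a subcommand.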
Background
Historically, most SSDs used buses such as SATA, SAS or Fibre Channel for interfacing with the rest of a computer system. Since SSDs became available in mass markets, SATA has become the most typical way of connecting SSDs in personal computers; however, SATA was designed primarily for interfacing with mechanical hard disk drives (HDDs), and it became increasingly inadequate for SSDs, which improved in speed over time. For example, within about five years of mainstream mass-market adoption (2005-2010), many SSDs were already held back by the comparatively slow data rates designed for hard drives: unlike HDDs, some SSDs are limited by the maximum throughput of SATA.
High-end SSDs had been made using the PCI Express bus before NVMe, but they used non-standard interfaces. With a standardized SSD interface, an operating system needs only one driver to work with all SSDs adhering to the specification, and each SSD manufacturer does not have to spend additional resources designing specific interface drivers. This is similar to how USB mass storage devices are built to follow the USB mass-storage device class specification and work with all computers, with no per-device drivers needed.
History
The first details of a new standard for accessing non-volatile memory emerged at the Intel Developer Forum 2007, when NVMHCI was shown as the host-side protocol of a proposed architectural design that had the Open NAND Flash Interface Working Group (ONFI) on the memory (flash) chips side. An NVMHCI working group led by Intel was formed that year. The NVMHCI 1.0 specification was completed in April 2008 and released on Intel's web site.
Technical work on NVMe began in the second half of 2009. The NVMe specifications were developed by the NVM Express Workgroup, which consists of more than 90 companies; Amber Huffman of Intel was the working group's chair. Version 1.0 of the specification was released on 1 March 2011, while version 1.1 of the specification was released on 11 October 2012. Major features added in version 1.1 are multi-path I/O (with namespace sharing) and arbitrary-length scatter-gather I/O. It is expected that future revisions will significantly enhance namespace management. Because of its feature focus, NVMe 1.1 was initially called "Enterprise NVMHCI". An update for the base NVMe specification, called version 1.0e, was released in January 2013. In June 2011, a Promoter Group led by seven companies was formed.
The first commercially available NVMe chipsets were released by Integrated Device Technology (89HF16P04AG3 and 89HF32P08AG3) in August 2012. The first NVMe drive, Samsung's XS1715 enterprise drive, was announced in July 2013; according to Samsung, this drive supported 3 GB/s read speeds, six times faster than their previous enterprise offerings. The LSI SandForce SF3700 controller family, released in November 2013, also supports NVMe. Sample engineering boards with the PCI Express 2.0 ×4 model of this controller demonstrated 1,800 MB/s sequential read/write speeds and 150K/80K random read/write IOPS. A Kingston HyperX "prosumer" product using this controller was showcased at the Consumer Electronics Show 2014 and promised similar performance. In June 2014, Intel announced their first NVM Express products, the Intel SSD data center family that interfaces with the host through the PCI Express bus, which includes the DC P3700 series, the DC P3600 series, and the DC P3500 series. As of November 2014, NVMe drives are commercially available.
In March 2014, the group incorporated to become NVM Express, Inc., which as of November 2014 consists of more than 65 companies from across the industry. NVM Express specifications are owned and maintained by NVM Express, Inc., which also promotes industry awareness of NVM Express as an industry-wide standard. NVM Express, Inc. is directed by a thirteen-member board of directors selected from the Promoter Group, which includes Cisco, Dell, EMC, HGST, Intel, Micron, Microsoft, NetApp, Oracle, PMC, Samsung, SanDisk and Seagate.
In September 2016, the CompactFlash Association announced that it would be releasing a new memory card specification, CFexpress, which uses NVMe.
NVMeOF
In September 2014, a standard for using NVMe over Fibre Channel (FC) was proposed. NVM Express over Fabrics (NVMeOF) is a communication protocol that allows one computer to access block-level storage devices attached to another computer via remote direct memory access (RDMA), Fibre Channel or TCP/IP. The standard for this protocol was published by NVM Express, Inc. in 2016.
The following drivers implement the NVMeOF protocol:
- the Linux NVMeOF initiator and target drivers
- the Storage Performance Development Kit (SPDK) NVMeOF initiator and target drivers
- an NVMeOF initiator driver for Microsoft Windows
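To make the host side more concrete, the following minimal sketch shows how the Linux initiator driver listed above is typically asked to connect to a remote target: userspace (normally the nvme-cli connect command) writes a comma-separated options string to the /dev/nvme-fabrics character device. The transport address, port and NQN below are placeholders, and the accepted options vary with kernel version, so treat this as an assumption-laden illustration rather than a reference.

```c
/* Sketch: request an NVMe-oF connection over TCP by writing an options
 * string to the Linux nvme-fabrics device. Address, port and NQN are
 * placeholders, not real endpoints. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=tcp,"
        "traddr=192.0.2.10,"                               /* example address */
        "trsvcid=4420,"                                    /* common NVMe/TCP port */
        "nqn=nqn.2014-08.org.example:storage.target01";    /* example target NQN */

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) { perror("open /dev/nvme-fabrics"); return 1; }

    /* The kernel parses the options, creates a new controller and exposes
     * it as a regular NVMe device (for example /dev/nvme1). */
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}
```

This only shows what tools such as nvme-cli or SPDK do on the host's behalf; in practice those tools, not hand-written option strings, are used to establish and manage fabric connections.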
Comparison with AHCI
The Advanced Host Controller Interface (AHCI) has the benefit of wide software compatibility, but has the downside of not delivering optimal performance when used with SSDs connected via the PCI Express bus. As a logical interface, AHCI was developed when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media. As a result, AHCI introduces certain inefficiencies when used with SSD devices, which behave much more like DRAM than like spinning media.
The NVMe device interface has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the basic advantages of NVMe over AHCI relate to its ability to exploit parallelism in host hardware and software, manifested by the differences in command queue depths, efficiency of interrupt processing, the number of uncacheable register accesses, etc., resulting in various performance improvements.
The most important high-level differences between the NVMe and AHCI logical device interfaces are summarized below:
- Command queues: AHCI provides a single command queue holding up to 32 outstanding commands, whereas NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep.
- Uncacheable register accesses: AHCI requires as many as nine per command, whereas NVMe requires only two (the submission and completion doorbells).
- Interrupts: AHCI uses a single interrupt with no steering, whereas NVMe supports up to 2,048 MSI-X vectors that can be steered to individual cores.
- Parallelism: AHCI requires a synchronization lock to issue a command, whereas NVMe queues can be driven by separate cores without locking.
- Efficiency for 4 KB commands: AHCI command parameters require two serialized host DRAM fetches, whereas NVMe retrieves them in a single 64-byte fetch.
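To make the queue model concrete, the sketch below shows roughly what an NVMe submission queue entry, completion queue entry and host-side queue pair look like in C. It is a simplified illustration of the fixed 64-byte command and 16-byte completion formats rather than a copy of the specification's layout; the field names and the submit helper are chosen for readability.

```c
#include <stdint.h>

/* Simplified 64-byte NVMe submission queue entry (one command). Because
 * commands are fixed-size, the controller can fetch each one in a single
 * 64-byte read. */
struct nvme_sqe {
    uint8_t  opcode;        /* command opcode (e.g. read, write) */
    uint8_t  flags;
    uint16_t cid;           /* command identifier, echoed back on completion */
    uint32_t nsid;          /* target namespace */
    uint64_t reserved;
    uint64_t metadata_ptr;
    uint64_t prp1;          /* data pointers (PRP entries) */
    uint64_t prp2;
    uint32_t cdw10_15[6];   /* command-specific dwords 10-15 */
};

/* Simplified 16-byte completion queue entry. */
struct nvme_cqe {
    uint32_t result;
    uint32_t reserved;
    uint16_t sq_head;       /* how far the controller has consumed the SQ */
    uint16_t sq_id;
    uint16_t cid;           /* matches the submitted command */
    uint16_t status;        /* status code plus phase bit */
};

/* A host-side queue pair: a deep submission ring and its completion ring.
 * NVMe allows up to 65,535 such I/O queue pairs, typically one per CPU
 * core, so cores can submit commands without sharing a lock. */
struct nvme_queue_pair {
    struct nvme_sqe   *sq;          /* submission ring in host memory */
    struct nvme_cqe   *cq;          /* completion ring in host memory */
    uint16_t           sq_tail;     /* next free submission slot */
    uint16_t           cq_head;     /* next completion to consume */
    uint16_t           depth;       /* ring size */
    volatile uint32_t *sq_doorbell; /* MMIO doorbell register */
};

/* Illustrative submit path: copy the command into the ring, then perform a
 * single uncacheable write to the doorbell to notify the controller. */
static inline void nvme_submit(struct nvme_queue_pair *qp,
                               const struct nvme_sqe *cmd)
{
    qp->sq[qp->sq_tail] = *cmd;
    qp->sq_tail = (uint16_t)((qp->sq_tail + 1) % qp->depth);
    *qp->sq_doorbell = qp->sq_tail;
}
```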
Operating system support
- Chrome OS
- On February 24, 2015, support for booting from NVM Express devices was added to Chrome OS.
- DragonFly BSD
- The first release of DragonFly BSD with NVMe support is version 4.6.
- FreeBSD
- Intel sponsored an NVM Express driver for FreeBSD's head and stable/9 branches. The nvd(4) and nvme(4) drivers are included in the GENERIC kernel configuration by default since FreeBSD version 10.2.
- Haiku
- Haiku support for NVMe is planned; however, no work has been completed yet.
- illumos
- illumos received support for NVMe on October 15, 2014.
- iOS
- With the release of the iPhone 6S and 6S Plus, Apple introduced the first mobile deployment of NVMe over PCIe in smartphones. Apple followed with the iPad Pro and iPhone SE, which also use NVMe over PCIe.
- Linux
- Intel published an NVM Express driver for Linux, which was merged into the Linux kernel mainline on 19 March 2012, with the release of version 3.3 of the Linux kernel.
- A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. This leverages the performance offered by SSDs and NVM Express, by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization.
- As of version 4.0 of the Linux kernel, released on 12 April 2015, the VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), the loop device driver, the unsorted block images (UBI) driver (which implements an erase block management layer for flash memory devices) and the RBD driver (which exports Ceph RADOS objects as block devices) have been modified to use this new interface; other drivers will be ported in following releases.
- NetBSD
- NetBSD has support for NVMe in the development version (NetBSD-current). The implementation is derived from OpenBSD 6.0.
- OpenBSD
- Development work required to support NVMe in OpenBSD was started in April 2014 by a senior developer formerly responsible for USB 2.0 and AHCI support. Support for NVMe has been enabled in the OpenBSD 6.0 release.
- OS X/macOS
- In the 10.10.3 update for OS X Yosemite, Apple introduced support for NVM Express. The Retina MacBook and 2016 MacBook Pro use NVMe over PCIe as the logical device interface.
- Solaris
- Solaris received support for NVMe in Oracle Solaris 11.2.
- VMware
- Intel has provided an NVMe driver for VMware, which is included in vSphere 6.0 and later builds, supporting various NVMe devices. As of vSphere 6 update 1, VMware's VSAN software-defined storage subsystem also supports NVMe devices.
- Windows
- Microsoft added native support for NVMe to Windows 8.1 and Windows Server 2012 R2. Native drivers for Windows 7 and Windows Server 2008 R2 have been added in updates.
- The OpenFabrics Alliance maintains an open-source NVMe Windows Driver for Windows 7/8/8.1 and Windows Server 2008R2/2012/2012R2, developed from the baseline code submitted by several promoter companies in the NVMe workgroup, specifically IDT, Intel, and LSI. The current release is 1.5 from December 2016.
Software support
- QEMU
- NVMe is supported by QEMU since version 1.6 released on August 15, 2013.
- UEFI
- An open source NVMe driver for UEFI is available on SourceForge.
External links
- Official website
- LFCS: Preparing Linux for nonvolatile memory devices, LWN.net, April 19, 2013, by Jonathan Corbet
- Multipathing PCI Express Storage, Linux Foundation, March 12, 2015, by Keith Busch
Source of article: Wikipedia