ConnectX® Ethernet Adapter Cards for OCP Spec 3.0
High Performance 10/25/40/50/100/200 GbE Ethernet Adapter
Cards in the Open Compute Project Spec 3.0 Form Factor


Mellanox® Ethernet adapter cards in the OCP 3.0 form factor support speeds from 10 to 200 GbE. Combining leading
features with best-in-class efficiency, Mellanox OCP cards enable the highest data center performance.
World-Class Performance and Scale
Mellanox 10, 25, 40, 50, 100 and 200 GbE adapter cards deliver industry-leading connectivity for performance-driven server and storage applications. Offering high bandwidth coupled with ultra-low latency, ConnectX adapter cards enable faster access and real-time responses.
Complementing its OCP 2.0 offering, Mellanox offers a variety of OCP 3.0-compliant adapter cards, providing best-in-class performance and efficient computing through advanced acceleration and offload capabilities. These capabilities free up valuable CPU cycles for other tasks while increasing data center performance, scalability and efficiency, and include:
• RDMA over Converged Ethernet (RoCE)
• NVMe-over-Fabrics (NVMe-oF)
• Virtual switch offloads (e.g., OVS offload) leveraging ASAP2 - Accelerated Switch and Packet Processing®
• GPUDirect® communication acceleration
• Mellanox Multi-Host® for connecting multiple compute or storage hosts to a single interconnect adapter
• Mellanox Socket Direct® technology for improving the performance of multi-socket servers
• Enhanced security solutions

Complete End-to-End Networking
ConnectX OCP 3.0 adapter cards are part of Mellanox’s 10, 25, 40, 50, 100 and 200 GbE end-to-end portfolio for data centers, which also includes switches, application acceleration packages, and cabling, delivering a unique price-performance value proposition for network and storage solutions. With Mellanox, IT managers can be assured of the highest performance, reliability and most efficient network fabric at the lowest cost, for the best return on investment.
In addition, Mellanox NEO®-Host management software greatly simplifies host network provisioning, monitoring and diagnostics with ConnectX OCP 3.0 cards, providing the agility and efficiency for scalability and future growth. Featuring an intuitive graphical user interface (GUI), NEO-Host provides in-depth visibility and host networking control. NEO-Host also integrates with Mellanox NEO, Mellanox’s end-to-end data-center orchestration and management platform.

Open Compute Project Spec 3.0
The OCP NIC 3.0 specification extends the capabilities of the OCP NIC 2.0 design specification. OCP 3.0 defines a different form factor and connector style than OCP 2.0. The specification defines two basic card sizes: Small Form Factor (SFF) and Large Form Factor (LFF). Mellanox OCP NICs are currently supported in SFF.*
* Future designs may utilize LFF to allow for additional PCIe lanes and/or Ethernet ports.

OCP 3.0 also provides additional board real estate, thermal capacity, electrical interfaces, network interfaces, host configuration and management. OCP 3.0 also introduces a new mating technique that simplifies FRU installation and removal, and reduces overall downtime.

The table below shows key comparisons between OCP Spec 2.0 and OCP Spec 3.0.

Feature                     | OCP Spec 2.0                                                     | OCP Spec 3.0
Card Dimensions             | Non-rectangular (8000 mm2)                                       | SFF: 76 x 115 mm (8740 mm2)
PCIe Lanes                  | Up to x16                                                        | SFF: Up to x16
Maximum Power Capability    | Up to 67.2W for a PCIe x8 card; up to 86.4W for a PCIe x16 card  | SFF: Up to 80W
Baseboard Connector Type    | Mezzanine (B2B)                                                  | Edge (0.6mm pitch)
Network Interfaces          | Up to 2 SFP side-by-side or 2 QSFP belly-to-belly                | Up to 2 QSFP side-by-side in SFF
Expansion Direction         | N/A                                                              | Side
Installation in Chassis     | Parallel to front or rear panel                                  | Perpendicular to front/rear panel
Hot Swap                    | No                                                               | Yes (pending server support)
Mellanox Multi-Host         | Up to 4 hosts                                                    | Up to 4 hosts in SFF or 8 hosts in LFF
Host Management Interfaces  | RBT, SMBus                                                       | RBT, SMBus, PCIe
Host Management Protocols   | Not standard                                                     | DSP0267, DSP0248

ConnectX OCP 3.0 Ethernet Adapter Benefits
• Open Data Center Committee (ODCC) compatible
• Supports the latest OCP 3.0 NIC specifications
• All platforms: x86, Power, Arm, compute and storage
• Industry-leading performance
• TCP/IP and RDMA for I/O consolidation
• SR-IOV virtualization technology: VM protection and QoS
• Cutting-edge performance in virtualized overlay networks
• Increased Virtual Machine (VM) count per server ratio

TARGET APPLICATIONS
• Data center virtualization
• Compute and storage platforms for public & private clouds
• HPC, Machine Learning, AI, Big Data, and more
• Clustered databases and high-throughput data warehousing
• Latency-sensitive financial analysis and high-frequency trading
• Media & Entertainment
• Telco platforms

For more details, please refer to the Open Compute Project (OCP) Specifications.
Ethernet OCP 3.0 Adapter Cards
Specs & Part Numbers
Max Network Speed | Interface Type | PCIe       | ConnectX®     | Mellanox Multi-Host / Socket Direct (a) | Crypto Option (b) | Default OPN (c)
2x 25GbE          | SFP28          | Gen3.0 x8  | ConnectX-4 Lx |                                         |                   | MCX4621A-ACAB
2x 25GbE          | SFP28          | Gen3.0 x16 | ConnectX-5    |                                         |                   | MCX562A-ACAB
2x 25GbE          | SFP28          | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | MCX623432AC-ADAB
2x 50GbE          | QSFP28         | Gen3.0 x16 | ConnectX-5    |                                         |                   | Contact Mellanox
2x 50GbE          | SFP56          | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | MCX623432AC-GDAB
2x 50GbE          | QSFP28         | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | Contact Mellanox
1x 100GbE         | QSFP28         | Gen4.0 x16 | ConnectX-5 Ex |                                         |                   | MCX565M-CDAB
1x 100GbE         | QSFP56         | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | Contact Mellanox
2x 100GbE         | QSFP28         | Gen3.0 x16 | ConnectX-5    |                                         |                   | MCX566A-CCAI
2x 100GbE         | QSFP28         | Gen4.0 x16 | ConnectX-5 Ex |                                         |                   | MCX566A-CDAB
2x 100GbE         | QSFP56         | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | MCX623436AC-CDAB
1x 200GbE         | QSFP56         | Gen4.0 x16 | ConnectX-6 Dx |                                         |                   | MCX623435AC-VDAB
2x 200GbE         | QSFP56         | Gen4.0 x16 | ConnectX-6    |                                         |                   | MCX613436A-VDAI
(a) When using Mellanox Multi-Host or Mellanox Socket Direct in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.
(b) Crypto-enabled cards also support secure boot and secure firmware update.
(c) The last digit of each OPN suffix indicates the OPN’s default bracket option: B = Pull-tab thumbscrew; I = Internal lock; E = Ejector latch. For other bracket types, contact Mellanox.
Note: ConnectX-5 Ex is an enhanced performance version that supports PCIe Gen4 and higher throughput.
Note: Additional flavors with Mellanox Multi-Host, Mellanox Socket Direct, or Crypto-disabled/enabled are available; contact Mellanox for details.

I/O Virtualization and Virtual Switching
Mellanox ConnectX Ethernet adapters provide comprehensive support for virtualized data centers with Single Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization gives data center managers better server utilization and LAN and SAN unification while reducing cost, power and cable complexity.
Moreover, virtual machines running in a server traditionally use multilayer virtual switch capabilities, such as Open vSwitch (OVS). Mellanox’s ASAP2 - Accelerated Switch and Packet Processing® technology allows offloading any implementation of a virtual switch or virtual router by handling the data plane in the NIC hardware while leaving the control plane unmodified. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

RDMA over Converged Ethernet (RoCE)
Mellanox RoCE does not require any special network configuration, allowing seamless deployment and efficient data transfers with very low latencies over Ethernet networks, a key factor in maximizing a cluster’s ability to process data instantaneously. With the increasing use of fast and distributed storage, data centers have reached the point of yet another disruptive change, making RoCE a must in today’s data centers.

Flexible Multi-Host® Technology
Innovative Mellanox Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Mellanox Multi-Host connects multiple compute or storage hosts to a single interconnect adapter, separating the adapter PCIe interface into multiple independent PCIe interfaces, without any performance degradation.

Security From Zero Trust to Hero Trust
In an era where privacy of information is key and zero trust is the rule, Mellanox ConnectX OCP 3.0 adapters offer a range of advanced built-in capabilities that bring security down to the endpoints with unprecedented performance and scalability.
Mellanox offers options for AES-XTS block-level data-at-rest encryption/decryption offload starting from ConnectX-6. Additionally, ConnectX-6 Dx includes IPsec and TLS data-in-motion inline encryption/decryption offload. ConnectX-6 Dx also enables a hardware-based L4 firewall, which offloads stateful connection-tracking protection.
All Mellanox ConnectX OCP 3.0 adapters support secure firmware update, ensuring that only authentic images produced by Mellanox can be installed, regardless of whether the installation happens from the host, the network, or a BMC. For an added level of security, ConnectX-6 Dx uses an embedded Hardware Root-of-Trust (RoT) to implement secure boot.
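As a companion to the I/O Virtualization and Virtual Switching section above, the following is a minimal Linux sketch, not a definitive procedure, of provisioning SR-IOV virtual functions and enabling OVS hardware offload on a ConnectX port. The interface name, PCI address, and VF count are placeholders, and the exact sequence (for example, unbinding VF drivers before changing the e-switch mode) varies by driver and OS version.

```python
#!/usr/bin/env python3
"""Sketch: provision SR-IOV VFs and enable OVS hardware offload on a
ConnectX port. The interface name, PCI address, and VF count below are
placeholders; adjust them for the actual system and run as root."""
import pathlib
import subprocess

IFACE = "enp3s0f0"         # hypothetical netdev name of the ConnectX port
PCI_ADDR = "0000:03:00.0"  # hypothetical PCI address of the same port
NUM_VFS = 4                # number of virtual functions to create

def create_vfs(iface: str, num_vfs: int) -> None:
    dev = pathlib.Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # Writing to sriov_numvfs asks the kernel to instantiate the VFs.
    (dev / "sriov_numvfs").write_text(str(num_vfs))

def enable_switchdev_and_ovs_offload(pci_addr: str) -> None:
    # Put the embedded switch (e-switch) into switchdev mode so that VF
    # representor netdevs appear and flows can be offloaded to the NIC.
    subprocess.run(
        ["devlink", "dev", "eswitch", "set", f"pci/{pci_addr}",
         "mode", "switchdev"],
        check=True,
    )
    # Tell Open vSwitch to push datapath flows down to the hardware.
    # (The OVS daemon usually needs a restart for this to take effect.)
    subprocess.run(
        ["ovs-vsctl", "set", "Open_vSwitch", ".",
         "other_config:hw-offload=true"],
        check=True,
    )

if __name__ == "__main__":
    create_vfs(IFACE, NUM_VFS)
    enable_switchdev_and_ovs_offload(PCI_ADDR)
```

Once the e-switch is in switchdev mode, each VF gets a representor netdev that can be added to an OVS bridge, and matching flows are then processed in the NIC rather than by the host CPU.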
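Relating to the RDMA over Converged Ethernet section above, here is a small sketch, assuming only the stock Linux RDMA sysfs interface, for confirming that RoCE-capable devices are visible to the host; no Mellanox-specific tooling is assumed.

```python
#!/usr/bin/env python3
"""Sketch: enumerate the RDMA devices a host exposes, as a quick check that
a RoCE-capable ConnectX port is visible to the RDMA stack."""
import pathlib

def list_rdma_devices() -> None:
    ib_class = pathlib.Path("/sys/class/infiniband")
    if not ib_class.exists():
        print("No RDMA devices found (is the RDMA core/driver loaded?)")
        return
    for dev in sorted(ib_class.iterdir()):
        # node_guid identifies the adapter; fw_ver reports its firmware.
        guid = (dev / "node_guid").read_text().strip()
        fw_file = dev / "fw_ver"
        fw = fw_file.read_text().strip() if fw_file.exists() else "n/a"
        print(f"{dev.name}: node_guid={guid} fw_ver={fw}")

if __name__ == "__main__":
    list_rdma_devices()
```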

Accelerated Storage
Mellanox adapters support a rich variety of storage protocols and enable partners to build hyperconverged platforms where the compute and storage nodes are co-located and share the same infrastructure. Leveraging RDMA, Mellanox adapters enhance numerous storage protocols, such as iSCSI over RDMA (iSER), NFS over RDMA, and SMB Direct, to name a few. Moreover, ConnectX adapters also offer NVMe-oF protocol support and offloads, enhancing the utilization of NVMe-based storage appliances.
Another storage-related hardware offload is the Signature Handover mechanism, based on an advanced T10-DIF implementation.

Host Management
Mellanox host management sideband implementations enable remote monitoring and control capabilities over RBT, MCTP over SMBus, and MCTP over PCIe to a Baseboard Management Controller (BMC), supporting both the NC-SI and PLDM management protocols over these interfaces. Mellanox OCP 3.0 adapters use these protocols to offer numerous host management features, such as PLDM for firmware update, network boot in the UEFI driver, UEFI secure boot, and more.

Enhancing Machine Learning Application Performance
Mellanox adapters with built-in advanced acceleration and RDMA capabilities deliver best-in-class latency, bandwidth and message rates, and lower CPU utilization. Mellanox PeerDirect® technology with NVIDIA GPUDirect™ RDMA enables direct peer-to-peer communication between the adapter and GPU memory, without any interruption to CPU operations. Mellanox adapters also deliver the highest scalability, efficiency, and performance for a wide variety of applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, and weather research and forecasting, as well as oil and gas industry modeling. Thus, Mellanox adapters are the best NICs for machine learning applications.

Mellanox Socket Direct®
Mellanox Socket Direct technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this type of configuration, each CPU connects directly to the network; this enables the interconnect to bypass the inter-socket QPI (UPI) link and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic and not the traffic from the other CPU. Mellanox’s OCP 3.0 cards include native support for Socket Direct technology in multi-socket servers and can support up to 8 CPUs.
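As an illustration of the Accelerated Storage section above, the sketch below shows one common way, not necessarily the storage vendor's documented procedure, to attach an NVMe-oF namespace over RDMA using the standard nvme-cli utility; the target address, port, and subsystem NQN are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Sketch: connect to an NVMe-over-Fabrics target over RDMA using nvme-cli.
Replace the placeholders with values exported by the actual storage
appliance, and run as root."""
import subprocess

TARGET_ADDR = "192.168.1.100"                  # hypothetical target IP
TARGET_PORT = "4420"                           # conventional NVMe-oF RDMA port
SUBSYS_NQN = "nqn.2020-01.io.example:subsys1"  # hypothetical subsystem NQN

def nvmeof_connect(addr: str, port: str, nqn: str) -> None:
    # `nvme connect` creates a controller for the remote subsystem; once it
    # succeeds, the remote namespaces appear as local /dev/nvmeXnY devices.
    subprocess.run(
        ["nvme", "connect",
         "--transport", "rdma",
         "--traddr", addr,
         "--trsvcid", port,
         "--nqn", nqn],
        check=True,
    )

if __name__ == "__main__":
    nvmeof_connect(TARGET_ADDR, TARGET_PORT, SUBSYS_NQN)
```

Once the connect call returns, the remote namespaces enumerate as ordinary /dev/nvme block devices and can be formatted or mounted like local NVMe drives.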

Broad Software Support
All Mellanox adapter cards are supported by a full suite of drivers for major Linux distributions, Microsoft® Windows®, VMware vSphere® and FreeBSD®. Inbox drivers are also available in major Linux distributions, Windows and VMware.

Multiple Form Factors
In addition to the OCP Spec 3.0 cards, Mellanox adapter cards are available in other form factors to meet data centers’ specific needs, including:
• OCP Specification 2.0 Type 1 & Type 2 mezzanine adapter form factors, designed to mate into OCP servers.
• Standard PCI Express (PCIe) Gen3 and Gen4 adapter cards.
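As a practical footnote to the Broad Software Support section, here is a minimal sketch, assuming the standard ethtool utility and a placeholder interface name, of checking which driver and firmware a port is running before consulting the driver release notes.

```python
#!/usr/bin/env python3
"""Sketch: query which driver and firmware a ConnectX port is running,
using standard `ethtool -i` output. The interface name is a placeholder."""
import subprocess

IFACE = "eth0"  # hypothetical interface name; substitute your ConnectX port

def driver_info(iface: str) -> dict:
    # `ethtool -i` reports the bound driver, its version, and the NIC
    # firmware version, which is what the release notes key off.
    out = subprocess.run(
        ["ethtool", "-i", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = (line.split(":", 1) for line in out.splitlines() if ":" in line)
    return {key.strip(): value.strip() for key, value in pairs}

if __name__ == "__main__":
    info = driver_info(IFACE)
    print(info.get("driver"), info.get("version"), info.get("firmware-version"))
```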

[Product images: OCP 3.0 Adapter Card, OCP 2.0 Adapter Card, Standard PCI Express Adapter Card] †

NOTES:
(1) This brochure describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability.
(2) Product images may not include the heat sink assembly; actual product may differ.
† For illustration only. Actual products may vary.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2020. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, ConnectX, GPUDirect, Mellanox PeerDirect, Mellanox Multi-Host, Mellanox Socket Direct and ASAP2 - Accelerated Switch and Packet Processing are registered trademarks of Mellanox Technologies, Ltd. Mellanox NEO-Host is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
060275BR Rev 1.4