Hamburgnet - Storage, Server, Unix, Cluster & Consulting

InfiniBand technology from your systems house in Hamburg.
Hamburgnet has been a Mellanox partner since 2006 and holds the highest partner level.
Sales, consulting, and integration of the products into your infrastructure.

InfiniBand Host Channel Adapters

InfiniBand VPI Cards

Advanced Levels of Data Center IT Performance & Efficiency

InfiniBand/VPI Cards

ConnectX-5 VPI Cards

ConnectX-5
Model      MCX555A-ECAT | MCX556A-ECAT | MCX556A-EDAT
Ports      1 | 2 | 2
Connector  QSFP28 | QSFP28 | QSFP28
Host Bus   PCIe 3.0 | PCIe 3.0 | PCIe 4.0
Speed      EDR (100Gb/s), 100GbE | EDR (100Gb/s), 100GbE | EDR (100Gb/s), 100GbE
Lanes      x16 | x16 | x16
ConnectX®-5

ConnectX-4 VPI Cards

ConnectX-4
Model      MCX455A-FCAT | MCX456A-FCAT | MCX455A-ECAT | MCX456A-ECAT
Ports      1 | 2 | 1 | 2
Connector  QSFP28 | QSFP28 | QSFP28 | QSFP28
Host Bus   PCIe 3.0 | PCIe 3.0 | PCIe 3.0 | PCIe 3.0
Speed      FDR (56Gb/s), 40/56GbE | FDR (56Gb/s), 40/56GbE | EDR (100Gb/s), 100GbE | EDR (100Gb/s), 100GbE
Lanes      x16 | x16 | x16 | x16

ConnectX-4
Model      MCX453A-FCAT | MCX454A-FCAT
Ports      1 | 2
Connector  QSFP28 | QSFP28
Host Bus   PCIe 3.0 | PCIe 3.0
Speed      FDR (56Gb/s), 40/56GbE | FDR (56Gb/s), 40/56GbE
Lanes      x8 | x8
ConnectX®-4

Connect-IB® InfiniBand Host Channel Adapters

Connect-IB

Model      MCB191A-FCAT | MCB192A-FCAT | MCB193A-FCAT | MCB194A-FCAT
Ports      1 | 2 | 1 | 2
Connector  QSFP | QSFP | QSFP | QSFP
Host Bus   PCIe 3.0 | PCIe 3.0 | PCIe 3.0 | PCIe 3.0
Speed      FDR IB (56Gb/s) | FDR IB (56Gb/s) | FDR IB (56Gb/s) | FDR IB (56Gb/s)
Lanes      x8 | x8 | x16 | x16
Connect-IB®

ConnectX-3 VPI Cards

Single/Dual-Port Adapters with Virtual Protocol Interconnect®

ConnectX-3
Model              MCX353A-QCBT*1, MCX353A-TCBT*2, MCX353A-FCBT*3 | MCX354A-QCBT*1, MCX354A-TCBT*2, MCX354A-FCBT*3
Number of Ports    1x (Single) | 2x (Dual)
VPI Ports          *1 = QDR IB (40Gb/s) + 10GbE; *2 = FDR10 IB (40Gb/s) + 10GbE; *3 = FDR IB (56Gb/s) + 40GbE
Connector          QSFP | QSFP
ASIC & PCI Dev ID  ConnectX®-3 4099 | ConnectX®-3 4099
Host Bus           PCIe 3.0 | PCIe 3.0
Speed              8.0 GT/s | 8.0 GT/s
Lanes              x8 | x8
OS Support Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions. Microsoft Windows Server 2008/CCS 2003, HPC Server 2008. OpenFabrics Enterprise Distribution (OFED). OpenFabrics Windows Distribution (WinOF). VMware ESX Server 3.5, vSphere 4.0/4.1
Features VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload; Precision Time Protocol
ConnectX®-3

ConnectX-3 Pro VPI Cards

Single/Dual-Port Adapters with Virtual Protocol Interconnect®

ConnectX-3 Pro
Model            MCX311A-XCCT*1, MCX313A-BCCT*2, MCX353A-FCCT*3, MCX353A-TCCT*4 | MCX312B-XCCT*1, MCX314A-BCCT*2, MCX354A-FCCT*3, MCX354A-TCCT*4
Number of Ports  1x (Single) | 2x (Dual)
VPI Ports        *1 = 10GbE; *2 = FDR IB (56Gb/s) + 40GbE; *3 = FDR IB (56Gb/s) + 40GbE; *4 = FDR10 IB (40Gb/s) + 10GbE
Connector        QSFP | QSFP
Host Bus         PCIe 3.0 | PCIe 3.0
Speed            8.0 GT/s | 8.0 GT/s
Lanes            x8 | x8
OS Support Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions. Microsoft Windows Server 2008/CCS 2003, HPC Server 2008. OpenFabrics Enterprise Distribution (OFED). OpenFabrics Windows Distribution (WinOF). VMware ESX Server 3.5, vSphere 4.0/4.1
Features VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload; Precision Time Protocol

ConnectX-3 Pro for Open Compute Project (OCP)
Model            MCX345A-FCPN | MCX346A-FCPN
Number of Ports  1x (Single) | 2x (Dual)
VPI Ports        FDR (56Gb/s), 40/56GbE | FDR (56Gb/s), 40/56GbE
Connector        QSFP | QSFP
Host Bus         PCIe 3.0 | PCIe 3.0
Speed            8.0 GT/s | 8.0 GT/s
Lanes            x8 | x8
OS Support Citrix XenServer 6.1; Novell SLES; Red Hat Enterprise Linux (RHEL); Ubuntu and other Linux distributions; Microsoft Windows Server 2008/2012/2012 R2; OpenFabrics Enterprise Distribution (OFED); OpenFabrics Windows Distribution (WinOF); VMware ESXi
Features Virtual Protocol Interconnect, Up to FDR 56Gb/s InfiniBand or 40/56 Gigabit Ethernet per port, Single- and Dual-Port options available, PCI Express 3.0 (up to 8GT/s), OCP Specification 2.0, 1us MPI ping latency, Hardware Offloads for NVGRE and VXLAN encapsulated traffic, Application offload, Data Center Bridging support, GPU communication acceleration, Precision Clock Synchronization, Traffic steering across multiple cores, Hardware-based I/O virtualization, End-to-end QoS and congestion control, Intelligent interrupt coalescence, Advanced Quality of Service, RoHS-R6
ConnectX®-3 OCP

ConnectX-2 VPI Cards

Single/Dual-Port Adapters with Virtual Protocol Interconnect®
This series is End of Life (EOL)

ConnectX-2
Model            MHRH29C-XTR*1, MHQH29C-XTR*2 | MHRH19B-XTR*1, MHQH19B-XTR*2 | MHRH29B-XTR | MHZH29B-XTR
Number of Ports  2x (Dual) | 1x (Single) | 2x (Dual) | 1x and 1x
Ports            4X 20Gb/s IB*1, 4X 40Gb/s IB*2 | 4X 20Gb/s IB*1, 4X 40Gb/s IB*2 | 4X 20Gb/s IB | 4X 40Gb/s IB, 10GigE SFP+
Connector        QSFP | QSFP | QSFP | QSFP and SFP+
Host Bus         PCIe 2.0 | PCIe 2.0 | PCIe 2.0 | PCIe 2.0
Speed            5.0 GT/s | 5.0 GT/s | 5.0 GT/s | 5.0 GT/s
Lanes            x8 | x8 | x8 | x8
OS Support Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions. Microsoft Windows Server 2008/CCS 2003, HPC Server 2008. OpenFabrics Enterprise Distribution (OFED). OpenFabrics Windows Distribution (WinOF). VMware ESX Server 3.5, vSphere 4.0/4.1
Features VPI, Hardware-based Transport, RDMA, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload
ConnectX®-2

ConnectX VPI Cards

Single/Dual-Port Adapter Cards supporting up to 40Gb/s InfiniBand
This series is End of Life (EOL)

ConnectX
Model      MHEH28-XTC, MHGH28-XTC | MHGH19-XTC, MHJH19-XTC | MHGH29-XTC, MHJH29-XTC | MHRH19-XTC, MHQH19-XTC | MHRH29-XTC, MHQH29-XTC
Ports      2 x 10 Gb/s, 2 x 20 Gb/s | 1 x 20 Gb/s, 1 x 40 Gb/s | 2 x 20 Gb/s, 2 x 40 Gb/s | 1 x 20 Gb/s, 1 x 40 Gb/s | 2 x 20 Gb/s, 2 x 40 Gb/s
Connector  CX4 | CX4 | CX4 | QSFP | QSFP
Host Bus   PCIe 2.0 | PCIe 2.0 | PCIe 2.0 | PCIe 2.0 | PCIe 2.0
Speed      2.5 GT/s | 5.0 GT/s | 5.0 GT/s | 5.0 GT/s | 5.0 GT/s
Lanes      x8 | x8 | x8 | x8 | x8
OS Support RHEL, SLES, Fedora (& other Linux distributions), Windows, OFED, WinOF, ESX
Features VPI, Hardware-based Transport, RDMA, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload
ConnectX®

InfiniHost III Cards

Single/Dual-Port InfiniBand HCA Cards with PCI Express

InfiniHost III
Model         MHEA28-XTC, MHGA28-XTC | MHEA28-1TC, MHGA28-1TC | MHEA28-2TC, MHGA28-2TC | MHES14-XTC | MHES18-XTC, MHGS18-XTC
Series        InfiniHost III Ex | InfiniHost III Ex | InfiniHost III Ex | InfiniHost III Lx | InfiniHost III Lx
Local Memory  MemFree | 128 MB | 256 MB | MemFree | MemFree
Ports         2 x 10 Gb/s, 2 x 20 Gb/s | 2 x 10 Gb/s, 2 x 20 Gb/s | 2 x 10 Gb/s, 2 x 20 Gb/s | 1 x 10 Gb/s | 1 x 10 Gb/s, 1 x 20 Gb/s
Connector     CX4 | CX4 | CX4 | CX4 | CX4
Host Bus      PCIe 1.1 | PCIe 1.1 | PCIe 1.1 | PCIe 1.1 | PCIe 1.1
Speed         2.5 GT/s | 2.5 GT/s | 2.5 GT/s | 2.5 GT/s | 2.5 GT/s
Lanes         x8 | x8 | x8 | x4 | x8
OS Support Linux, Windows, HPUX, AIX, OS X, Solaris, and VxWorks
Features Hardware-based Transport, RDMA, I/O Virtualization
InfiniHost® III Ex and InfiniHost® III Lx
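
The QDR/FDR/EDR ratings and lane counts in the tables above can also be read back from a running system. The sketch below is an illustrative example only, assuming libibverbs from OFED is installed (compile with -libverbs) and simply picking the first HCA and its port 1: it multiplies the per-lane signalling rate reported by the driver by the lane count to reproduce the raw link rate, e.g. 4x lanes at 25 Gb/s on an EDR link gives 100 Gb/s.

#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* active_width: 1 = 1x, 2 = 4x, 4 = 8x, 8 = 12x */
static int width_lanes(uint8_t w)
{
    switch (w) {
    case 1: return 1;
    case 2: return 4;
    case 4: return 8;
    case 8: return 12;
    default: return 0;
    }
}

/* active_speed, Gb/s per lane: 1 = SDR 2.5, 2 = DDR 5, 4 = QDR 10,
 * 8 = FDR10 ~10.3, 16 = FDR 14, 32 = EDR 25 */
static double lane_gbps(uint8_t s)
{
    switch (s) {
    case 1:  return 2.5;
    case 2:  return 5.0;
    case 4:  return 10.0;
    case 8:  return 10.3;
    case 16: return 14.0;
    case 32: return 25.0;
    default: return 0.0;
    }
}

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no HCA found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first HCA (assumption) */
    struct ibv_port_attr attr;
    if (ctx && ibv_query_port(ctx, 1, &attr) == 0) {      /* port 1 (assumption) */
        int lanes = width_lanes(attr.active_width);
        double per_lane = lane_gbps(attr.active_speed);
        printf("%s port 1: %dx lanes at %.1f Gb/s => %.0f Gb/s raw link rate\n",
               ibv_get_device_name(devs[0]), lanes, per_lane, lanes * per_lane);
    }
    if (ctx)
        ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}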


Mellanox HCA cards are among the most widely used InfiniBand adapters on the market.
They provide the highest-performing interconnect solution for enterprise data centers, Web 2.0, cloud computing, high-performance computing, and embedded environments.
  • Mellanox HCAs deliver the highest bandwidth and lowest latency of any standard interconnect, with CPU efficiency above 95%.
  • Data centers and cloud computing require I/O services such as bandwidth, consolidation, unification, and flexibility. Mellanox HCAs support consolidation of LAN and SAN traffic and provide hardware acceleration for server virtualization.
  • The flexibility of Virtual Protocol Interconnect® (VPI) provides InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB, and FCoE connectivity (see the sketch after this list).
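
On the software side, the following is a minimal sketch of what VPI looks like to an application. It is an illustrative example, not vendor code, and assumes the standard libibverbs API shipped with OFED is installed (compile with -libverbs): it lists the installed HCAs and reports, per port, whether the link is currently running as InfiniBand or as Ethernet.

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("%s: %d port(s)\n",
                   ibv_get_device_name(devs[i]), dev_attr.phys_port_cnt);

            for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr))
                    continue;
                /* link_layer shows which personality the VPI port is using */
                const char *ll = (port_attr.link_layer == IBV_LINK_LAYER_ETHERNET)
                                     ? "Ethernet" : "InfiniBand";
                printf("  port %d: %s, state %s\n",
                       p, ll, ibv_port_state_str(port_attr.state));
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}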

Benefits:

  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Guaranteed-bandwidth and low-latency services
  • Reliable data transport (see the sketch after this list)
  • I/O unification
  • Virtualization acceleration
  • Scalable to thousands of nodes
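
The reliable data transport listed above is provided by the HCA's RDMA engine. As a minimal sketch (assuming libibverbs from OFED, the first HCA found, and illustrative queue and buffer sizes; compile with -libverbs), the following allocates the basic resources an RDMA application needs: a protection domain, a memory buffer registered so the adapter can access it directly, a completion queue, and a Reliable Connection (RC) queue pair. Exchanging connection parameters with a remote host and moving the QP to a connected state are omitted.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* first HCA (assumption) */
    if (!ctx) {
        ibv_free_device_list(devs);
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    /* Register a buffer so the HCA can read/write it directly (zero-copy). */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue plus a Reliable Connection (RC) queue pair:
     * the hardware transport that provides guaranteed, in-order delivery. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr qp_attr;
    memset(&qp_attr, 0, sizeof(qp_attr));
    qp_attr.send_cq = cq;
    qp_attr.recv_cq = cq;
    qp_attr.qp_type = IBV_QPT_RC;
    qp_attr.cap.max_send_wr  = 16;
    qp_attr.cap.max_recv_wr  = 16;
    qp_attr.cap.max_send_sge = 1;
    qp_attr.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);

    if (mr && cq && qp)
        printf("created RC QP 0x%x on %s (lkey 0x%x)\n",
               qp->qp_num, ibv_get_device_name(devs[0]), mr->lkey);

    /* Connection setup (exchanging QP numbers/LIDs and driving the QP
     * through INIT/RTR/RTS) is omitted in this sketch. */
    if (qp) ibv_destroy_qp(qp);
    if (cq) ibv_destroy_cq(cq);
    if (mr) ibv_dereg_mr(mr);
    free(buf);
    if (pd) ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}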

Applications:

  • Parallelized high-performance computing
  • Data center virtualization
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency-sensitive applications such as financial analysis and trading
  • Web 2.0, cloud, and grid computing data centers
  • Performance storage applications such as backup, restore, mirroring, etc.


Further information, prices, and reference installations are available on request.


Mellanox

Mellanox Partner First

VMWare Enterprise Partner
Quotes & Questions
Contact us
We are happy to provide non-binding advice and prepare an individual quote for you.
Send an e-mail
or call us:
040 / 881 44 99 70



Mellanox ConnectX InfiniBand



