Type: | Wired |
---|---|
Bandwidth: | 100Gb/s per port |
Interface: | PCI Express |
The Mellanox MCX653106A-ECAT is a high-performance dual-port network adapter based on the powerful ConnectX-6 architecture. Offering seamless support for both 100Gb/s InfiniBand (HDR100, EDR) and 100GbE Ethernet, this adapter delivers ultra-low latency, high throughput, and flexible protocol support. It's ideal for high-performance computing (HPC), artificial intelligence (AI), and next-generation data center environments.
Dual-Port QSFP56 Interfaces
Two QSFP56 ports supporting up to 100Gb/s per port for scalable performance.
InfiniBand & Ethernet Support
Supports HDR100 and EDR InfiniBand, along with 100GbE Ethernet for hybrid networking.
PCIe Gen 3.0 / Gen 4.0 x16 Interface
High-bandwidth interface compatible with both legacy and cutting-edge server platforms.
Based on ConnectX-6 Silicon
Offers acceleration for MPI, NVMe over Fabrics (NVMe-oF), RDMA, RoCE v2, and GPUDirect.
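As a ConnectX-6 device, the adapter is exposed to applications through the standard libibverbs (verbs) API. Below is a minimal sketch, assuming the rdma-core/OFED stack is installed, that enumerates RDMA-capable devices and prints a few basic capabilities; the build command and output format are illustrative only.

```c
/* Minimal sketch: enumerate RDMA-capable devices (such as a ConnectX-6
 * adapter) with libibverbs and print basic capabilities.
 * Assumes rdma-core/OFED is installed; build with: gcc list_devices.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0) {
            printf("%s: %d port(s), max_qp=%d, max_cq=%d\n",
                   ibv_get_device_name(devices[i]),
                   attr.phys_port_cnt, attr.max_qp, attr.max_cq);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```
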
Specification | Details |
---|---|
Product | ConnectX-6 InfiniBand/Ethernet Adapter |
Model Number | MCX653106A-ECAT |
Ports | 2x QSFP56 |
Max Data Rate | 100Gb/s per port (200Gb/s aggregate) |
Interface | PCI Express Gen 3.0/4.0 x16 |
Network Protocols | HDR100, EDR InfiniBand, 100Gb Ethernet |
Controller | Mellanox ConnectX-6 |
Bracket Type | Full Height (Tall) |
Form Factor | Add-in Card |
Boot Support | PXE, UEFI |
Operating Modes | InfiniBand or Ethernet |
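
Because each port can operate in either InfiniBand or Ethernet (RoCE) mode, software can check the currently active link layer through the verbs port attributes. A minimal sketch, again assuming rdma-core/OFED and using the first device found purely for illustration:

```c
/* Minimal sketch: report whether each port of the first RDMA device is
 * currently running in InfiniBand or Ethernet (RoCE) mode, based on the
 * link layer reported by libibverbs.
 * Build with: gcc port_mode.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        ibv_free_device_list(devs);
        return 1;
    }

    struct ibv_device_attr dev_attr;
    if (ibv_query_device(ctx, &dev_attr) == 0) {
        for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
            struct ibv_port_attr pattr;
            if (ibv_query_port(ctx, port, &pattr) != 0)
                continue;
            printf("%s port %u: %s, state=%d, active_mtu=%d\n",
                   ibv_get_device_name(devs[0]), (unsigned)port,
                   pattr.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE)" : "InfiniBand",
                   pattr.state, pattr.active_mtu);
        }
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```
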
Deliver high bandwidth with two QSFP56 ports supporting up to 200Gb/s combined throughput for data-intensive workloads.
Maximize network flexibility with support for both high-speed Ethernet and low-latency InfiniBand protocols.
Harness the full power of the PCIe Gen4 x16 interface for faster data flow and reduced latency.
Includes support for RDMA, NVMe-oF, RoCE v2, GPUDirect, SR-IOV, and virtualization offloads; a short SR-IOV example follows below.
Engineered for modern HPC and AI infrastructures with support for secure boot, scalable clustering, and low-power operation.
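
SR-IOV virtual functions are typically created through the standard Linux sysfs interface once SR-IOV is enabled in the adapter firmware and the system BIOS. The sketch below writes the requested VF count to sriov_numvfs; the interface name ens1f0 and the VF count of 4 are illustrative assumptions, and root privileges are required.

```c
/* Minimal sketch: request SR-IOV virtual functions for the adapter via the
 * standard Linux sysfs attribute sriov_numvfs.
 * The interface name and VF count are assumptions for illustration.
 */
#include <stdio.h>

int main(void)
{
    const char *iface = "ens1f0";   /* assumed interface name */
    int num_vfs = 4;                /* assumed VF count */

    char path[256];
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", iface);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Writing N creates N virtual functions; writing 0 removes them. */
    fprintf(f, "%d\n", num_vfs);
    if (fclose(f) != 0) {
        perror("fclose");
        return 1;
    }
    printf("requested %d VFs on %s\n", num_vfs, iface);
    return 0;
}
```
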
High Bandwidth & Low Latency
Perfect for compute and storage-intensive environments like HPC, AI, and real-time analytics.
Versatile Deployment Options
Operate in InfiniBand, Ethernet, or converged network modes depending on workload needs.
Offload-Driven Performance
Free up CPU cycles with offloading technologies and network acceleration features.
Enterprise and Cloud-Ready
Ideal for hyperscale data centers, research labs, and enterprise clusters requiring ultra-fast interconnects.
Accelerate scientific simulations, modeling, and big data processing using HDR InfiniBand interconnects.
Support massive data transfers between GPU clusters with low latency and high throughput.
Enable high-speed, low-latency storage networking between nodes with NVMe acceleration (see the RDMA resource sketch below).
Deploy a flexible, future-ready networking stack supporting both Ethernet and InfiniBand.
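
RDMA-based transports such as MPI, NVMe-oF, and GPUDirect ultimately build on a small set of verbs objects: a protection domain, a completion queue, and a queue pair. The sketch below, assuming rdma-core/OFED, allocates these resources on the first device found; connection establishment and data transfer are omitted, and all queue sizes are illustrative.

```c
/* Minimal sketch: allocate the basic RDMA resources (protection domain,
 * completion queue, reliable-connected queue pair) that higher-level RDMA
 * transports build on. Build with: gcc qp_setup.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        ibv_free_device_list(devs);
        return 1;
    }

    /* Protection domain and a completion queue shared by send and receive. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 256 /* CQ depth */, NULL, NULL, 0);
    if (!pd || !cq) {
        fprintf(stderr, "failed to allocate PD or CQ\n");
        return 1;
    }

    /* Reliable-connected queue pair; queue sizes are illustrative. */
    struct ibv_qp_init_attr qp_attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = IBV_QPT_RC,
        .cap = {
            .max_send_wr  = 128,
            .max_recv_wr  = 128,
            .max_send_sge = 1,
            .max_recv_sge = 1,
        },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);
    if (!qp) {
        perror("ibv_create_qp");
        return 1;
    }
    printf("created RC queue pair %u on %s\n",
           qp->qp_num, ibv_get_device_name(devs[0]));

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```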