Type: | Wired |
---|---|
Bandwidth: | 100Gbps |
Interface: | PCI Express |
The Mellanox MCX556A-ECAT ConnectX-5 VPI is a high-performance, dual-port adapter card supporting both 100Gb/s EDR InfiniBand and 100GbE Ethernet. Designed for high-performance computing (HPC), artificial intelligence (AI), deep learning, cloud environments, and storage networking, this PCIe 3.0 x16 adapter delivers ultra-low latency and high throughput for demanding workloads.
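After installation, you can confirm the adapter negotiated its full PCIe 3.0 x16 link from Linux sysfs. The sketch below is illustrative only: it assumes a Linux host and filters on Mellanox's PCI vendor ID 0x15b3; expected link values are noted in the comments.

```python
from pathlib import Path

MELLANOX_VENDOR = "0x15b3"  # Mellanox Technologies PCI vendor ID

# Walk every PCI device exposed by sysfs and report Mellanox adapters.
for dev in Path("/sys/bus/pci/devices").iterdir():
    try:
        vendor = (dev / "vendor").read_text().strip()
    except OSError:
        continue
    if vendor != MELLANOX_VENDOR:
        continue
    device = (dev / "device").read_text().strip()
    speed = (dev / "current_link_speed").read_text().strip()
    width = (dev / "current_link_width").read_text().strip()
    # For PCIe 3.0 x16, expect a speed around "8.0 GT/s" and width "16".
    print(f"{dev.name}: device={device} link={speed} x{width}")
```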
Dual-Port Versatility - Supports two QSFP28 ports for 100Gb/s EDR InfiniBand and 100GbE Ethernet connectivity.
PCIe 3.0 x16 Interface - Ensures high-speed data transmission with minimal bottlenecks.
RDMA Support (RoCE v2 & InfiniBand) - Reduces CPU overhead, improving performance for HPC and AI workloads (a device-query sketch follows this list).
Adaptive Routing & Congestion Control - Enhances network efficiency and scalability.
Advanced Virtualization - Supports SR-IOV, NVGRE, VXLAN, and Geneve for virtualized and cloud environments (see the SR-IOV sketch after the specification table).
NVMe-over-Fabrics (NVMe-oF) Support - Optimized for high-speed storage applications.
Security & Protection - Features Secure Boot and Hardware Root-of-Trust for data integrity.
Tall Bracket Included - Designed for standard server racks (low-profile bracket available separately).
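As a quick illustration of the RDMA capability called out above, the verbs devices the driver registers for the card can be enumerated from Python. This is a minimal sketch, not vendor documentation: it assumes rdma-core (or MLNX_OFED) with the optional pyverbs bindings installed, and the mlx5 device names shown are typical, not guaranteed.

```python
from pyverbs.device import get_device_list, Context

# List the RDMA-capable devices the driver registered
# (e.g. mlx5_0 and mlx5_1 for the two QSFP28 ports).
for dev in get_device_list():
    name = dev.name.decode()
    ctx = Context(name=name)
    attr = ctx.query_device()  # device capability limits
    print(f"{name}: max_qp={attr.max_qp} max_cq={attr.max_cq}")
    ctx.close()
```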
Seamless support for both InfiniBand and Ethernet networks.
Low-latency, high-throughput performance for scientific computing, AI, and machine learning.
Ideal for cloud-scale data centers and multi-node HPC clusters.
InfiniBand congestion control and adaptive routing improve network efficiency.
Efficient CPU offloading reduces data center costs and power consumption.
Scalable networking solution for large-scale infrastructures.
Parameter | Specification |
---|---|
Model | MCX556A-ECAT |
Network Speed | 100Gb/s EDR InfiniBand / 100GbE Ethernet |
Ports | 2 × QSFP28 |
Interface | PCIe 3.0 x16 |
RDMA Support | RoCE v2, InfiniBand |
Virtualization | SR-IOV, VXLAN, NVGRE, Geneve |
Congestion Control | Adaptive Routing, InfiniBand Congestion Control |
Hardware Offloading | Yes |
Security Features | Secure Boot, Hardware Root-of-Trust |
Form Factor | Standard height, tall bracket included (low-profile option available) |
Power Consumption | Low-power design |
Operating Temperature | 0°C to 55°C |
Compliance | RoHS, IEEE 802.3 |
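The SR-IOV entry in the table above maps to the standard Linux sysfs interface for creating virtual functions. A minimal sketch follows, assuming the port shows up as a hypothetical netdev named enp1s0f0 and that SR-IOV is enabled in the card's firmware; it must run as root.

```python
from pathlib import Path

IFACE = "enp1s0f0"  # hypothetical port name; check `ip link` on your host
NUM_VFS = 4         # virtual functions to create

sriov = Path(f"/sys/class/net/{IFACE}/device")
total = int((sriov / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{IFACE} supports at most {total} VFs")

# Writing to sriov_numvfs asks the driver to spawn that many VFs.
# Requires root; write 0 first if VFs already exist.
(sriov / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Requested {NUM_VFS} VFs on {IFACE}")
```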
High-Performance Computing (HPC) - Optimized for AI, deep learning, and scientific research.
Cloud Data Centers & Hyperscale Computing - Provides ultra-fast networking for multi-node deployments.
Virtualized & Cloud Environments - Enhances network performance for virtualized infrastructures.
InfiniBand-Based Clusters - Ideal for HPC clusters, AI workloads, and storage solutions.
NVMe-over-Fabrics (NVMe-oF) Storage - Supports high-speed storage networking.
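For NVMe-oF deployments, hosts usually connect with nvme-cli; under the hood that is a single write of an option string to the kernel's /dev/nvme-fabrics device. The sketch below shows that path with placeholder values throughout (the target address, port, and NQN are hypothetical); it assumes the nvme-rdma kernel module is loaded and root privileges.

```python
# Roughly equivalent to: nvme connect -t rdma -a 192.168.1.10 -s 4420 -n <nqn>
options = ",".join([
    "transport=rdma",                       # RoCE v2 transport over the adapter
    "traddr=192.168.1.10",                  # placeholder target address
    "trsvcid=4420",                         # standard NVMe-oF service port
    "nqn=nqn.2024-01.com.example:storage",  # hypothetical subsystem NQN
])
with open("/dev/nvme-fabrics", "w") as fabrics:
    fabrics.write(options)
```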