AMBER Archive (2009)
Subject: RE: [AMBER] XServe cluster
From: Richard Owczarzy (rowczarzy_at_idtdna.com)
What is the latency per request? It is not just the bandwidth that matters.
Also note that InfiniBand is marketed by "Signal Rate," not by "Data Rate" as Ethernet is.
Cheers!
Richard Owczarzy
From the presentations:
Gigabit Ethernet Bandwidth/Latency
Theoretical Bandwidth: 1 Gbps = 1,000 Mbps; 1,000 Mbps / 8 bits/byte = 125 MB/s
Demonstrated Bandwidth: 112 MB/s (~90% of theoretical)
Discrepancies: Overhead in TCP stack
Latency (Pallas): 30 microseconds
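A quick sanity check of the Gigabit Ethernet figures above (a minimal sketch using only the numbers quoted in this post):

```python
# Gigabit Ethernet figures quoted above.
gbps = 1.0                            # link speed in gigabits per second
theoretical_MBps = gbps * 1000 / 8    # 1,000 Mb/s over 8 bits per byte = 125 MB/s
demonstrated_MBps = 112               # measured figure from the post

efficiency = demonstrated_MBps / theoretical_MBps
print(f"theoretical:  {theoretical_MBps:.0f} MB/s")
print(f"efficiency:   {efficiency:.0%}")   # ~90%; the rest is TCP stack overhead
```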
InfiniBand - DDR
Signal Rate: 20Gb/s
Data Rate: 16Gb/s
Theoretical Bandwidth (divide by 8): 2GB/s
Observed Bandwidth (ping-pong): 1.5 GB/s
Observed Latency: 1.2 microsec - 1.6 microsec
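To see why latency, not just bandwidth, drives MD scaling, a rough transfer-time model (time = latency + size / bandwidth) using the worst-case figures quoted above; the message sizes are illustrative assumptions:

```python
# Rough message-transfer model: time = latency + size / bandwidth.
# Figures from the post: GigE 30 us / 112 MB/s; InfiniBand DDR 1.6 us / 1500 MB/s.
def transfer_us(size_bytes, latency_us, bw_MBps):
    """Estimated transfer time in microseconds for one message."""
    return latency_us + size_bytes / (bw_MBps * 1e6) * 1e6

for size in (1_000, 100_000, 10_000_000):   # 1 kB, 100 kB, 10 MB messages
    gige = transfer_us(size, 30.0, 112)
    ib = transfer_us(size, 1.6, 1500)
    print(f"{size:>10} B   GigE {gige:10.1f} us   IB-DDR {ib:10.1f} us")
```

For a 1 kB message the GigE time is dominated by the 30 us latency, while InfiniBand finishes in a few microseconds; that gap is what limits small-message-heavy MD codes on Ethernet.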
-----Original Message-----
Hi,
I have been tasked with building a 4 or 8 node Xserve cluster for use
with Amber and Gaussian. The Xserve's are quad core Intel servers. My
concern is the interconnect between the servers.
Would a QLogic Fibre Channel switch running at 4.25 Gb/s be suitable for
a cluster this size?
Infiniband is probably going to be too expensive. Are there any other
recommendations?
Thanks,
Abdul
_______________________________________________
AMBER mailing list
http://lists.ambermd.org/mailman/listinfo/amber
_______________________________________________