A Photographic Tour of Gordon and Trestles

I took a bunch of photos of Gordon and Trestles, the supercomputers at the San Diego Supercomputer Center, in case I needed material for background graphics or visual flash for any of the presentations and activities I'll be participating in surrounding SC'13.  Upon reviewing the photos, I realized that they capture a lot of technological detail that others, whether professionals or enthusiasts, might enjoy seeing.

To contribute what I can, I uploaded a selection of photos and annotated them as best I could in a gallery I call "A Photographic Tour of Gordon and Trestles." Here are the thumbnails:

SDSC's Gordon Supercomputer; A Gordon IO Node; Rack of Gordon IO Nodes; Rack of Gordon Compute Nodes; Gordon Compute Subrack, Front; Fan Modules in Gordon Subrack; Rear of Gordon Compute Subrack; Cabling in Gordon Compute Rack; Rear of Gordon Compute Rack; Networking for Gordon Compute Nodes; A QDR InfiniBand Switch in Gordon; Gordon Compute Node; Gordon Compute Nodes; Gordon Viewed from Rear; Gordon Compute Rack Ethernet Switches; Inter-row Cabling in Gordon; Flash Photo of Gordon Compute Subrack; SDSC's Trestles Supercomputer; Trestles's InfiniBand Switch; A Rack of Trestles Compute Nodes; Trestles Nodes Closeup; Data Oasis OSS

Gordon and Trestles, a set on Flickr.

For those who may not be familiar, SDSC is home to two "national systems," or supercomputers that are available for anyone in the U.S. to use through the National Science Foundation's XSEDE program.  These two systems, both integrated by Cray (formerly Appro), are:
  • Trestles, a 324-node, 10,368-core, 100 TF AMD Opteron (Magny-Cours) cluster outfitted with a fat-tree QDR InfiniBand interconnect and designed to accommodate the computational needs of the common man.  This machine was deployed in 2011.
  • Gordon, a 1024-node, 16,384-core, 341 TF Intel Xeon (Sandy Bridge) cluster outfitted with a dual-rail hybrid 3D torus QDR InfiniBand fabric and designed to tackle data-intensive problems with an architecture very rich in IO capabilities.  This machine entered production in 2012.  (A quick sanity check of both peak figures follows below.)
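Those peak numbers line up with simple back-of-the-envelope arithmetic: theoretical peak is just nodes × cores per node × clock rate × flops per cycle per core.  The little sketch below is my own check, and it assumes 2.6 GHz Sandy Bridge cores doing 8 double-precision flops per cycle (AVX) for Gordon and 2.4 GHz Magny-Cours cores doing 4 flops per cycle for Trestles; those clock rates and issue widths are my assumptions, not something stated above.

    # Back-of-the-envelope peak FLOPS check; clock rates and flops/cycle are assumed.
    def peak_tflops(nodes, cores_per_node, ghz, dp_flops_per_cycle):
        """Theoretical peak in teraflops: total cores x clock x flops per cycle."""
        return nodes * cores_per_node * ghz * dp_flops_per_cycle / 1000.0

    # Gordon: 1024 nodes x 16 Sandy Bridge cores, assuming 2.6 GHz, 8 DP flops/cycle (AVX)
    print("Gordon:   %.0f TF" % peak_tflops(1024, 16, 2.6, 8))   # ~341 TF

    # Trestles: 324 nodes x 32 Magny-Cours cores, assuming 2.4 GHz, 4 DP flops/cycle (SSE)
    print("Trestles: %.0f TF" % peak_tflops(324, 32, 2.4, 4))    # ~100 TF

Under those assumptions, both systems come out within rounding of the quoted 341 TF and 100 TF.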
Most of the gallery details Gordon, since it has a rather unusual architecture built around a relatively complex basic building block:


Sixteen compute nodes in an 8U subrack share a common 36-port QDR InfiniBand switch, and an IO node with sixteen SSDs also hangs off that switch.  The IO node connects directly to Lustre via a bonded 2×10 GbE link, and it also exports its SSDs to the compute nodes over InfiniBand.  This entire building block represents one torus node on Gordon's overall 4×4×4 torus, and since the whole fabric is actually dual-rail, each node (compute and IO) is connected to two of these 36-port switches.
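To make the topology a bit more concrete, here is a toy Python sketch of that 4×4×4 torus of building blocks.  The coordinate scheme, switch names, and node names are all invented for illustration; only the counts (16 compute nodes plus one IO node per block, two rails, six torus neighbors) come from the description above.

    # Toy model of Gordon's 4x4x4 torus of building blocks (all names are invented).
    # Each torus node = 16 compute nodes + 1 IO node hanging off a pair of 36-port
    # QDR switches (one switch per rail).

    DIM = 4  # 4 x 4 x 4 = 64 building blocks

    def torus_neighbors(x, y, z, dim=DIM):
        """Return the six neighboring torus coordinates, wrapping around each axis."""
        return [
            ((x + 1) % dim, y, z), ((x - 1) % dim, y, z),
            (x, (y + 1) % dim, z), (x, (y - 1) % dim, z),
            (x, y, (z + 1) % dim), (x, y, (z - 1) % dim),
        ]

    def building_block(x, y, z, compute_per_block=16):
        """Invented inventory of the building block at torus coordinate (x, y, z)."""
        block_id = (x * DIM + y) * DIM + z
        return {
            "torus_coord": (x, y, z),
            "switches": ["rail0-sw%d" % block_id, "rail1-sw%d" % block_id],  # dual rail
            "compute_nodes": ["gcn-%d-%d" % (block_id, i) for i in range(compute_per_block)],
            "io_node": "gion-%d" % block_id,  # 16 SSDs, bonded 2x10 GbE uplink to Lustre
        }

    # Example: the block at (0, 0, 0), its two switches, and its six torus neighbors
    print(building_block(0, 0, 0)["switches"])
    print(torus_neighbors(0, 0, 0))

In other words, each building block sees the same six torus neighbors on both rails, which is why every compute and IO node plugs into two of those 36-port switches.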

This all makes for a very pretty machine.