National Computational Infrastructure
NCI National Facility
Sun Constellation cluster, vayu: System Details
[Image: the vayu Sun Constellation cluster]

Oracle/Sun Constellation Cluster Hardware

Specifications of the NCI NF Oracle/Sun Constellation Cluster:
  • a high-density integrated system

  • 1492 nodes in Sun X6275 blades, each containing:
    • two quad-core 2.93GHz Intel Nehalem CPUs with a 6.4GT/s QPI bus
    • L1 cache (on chip): 32KB (I) + 32KB (D)
    • L2 cache (on chip): 256KB
    • L3 cache (on chip): 8MB per quad-core CPU
    • 24GB DDR3-1333 memory (48 nodes have 48GB DDR3-1333 and 4 nodes have 96GB DDR3-1066 memory)
    • 24GB Flash DIMM for swap and some job scratch
    • on-board QDR InfiniBand adapter

  • Aggregate SPECfp_rate_base2006 (compute nodes only) of 250000 (the previous system, AC, was around 20000).
    Peak theoretical performance of approximately 140TFlops (see the consistency check after this list).

  • Total of 37TB of RAM on compute nodes

  • 30 dual-socket, quad-core Sun X4270 servers for Lustre file serving

  • Approximately 800TB of usable global storage from 52 Sun J4400 JBOD trays, each with 24 x 1TB Seagate Enterprise SATA drives

  • Four independent Sun DS648 InfiniBand switches, each with 432 QDR IB ports, carrying both MPI and Lustre filesystem traffic:
    • Measured MPI latency: < 2.0us
    • Measured MPI bandwidth: > 2800MB/s per direction per node
    (a minimal ping-pong sketch of how such figures are measured follows this list)
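
As a quick consistency check, the headline figures above multiply out as expected (assuming Nehalem's 4 double-precision flops per core per cycle):

    Peak performance: 1492 nodes x 8 cores x 2.93GHz x 4 flops/cycle ≈ 140TFlops
    Total memory:     1492 x 24GB, plus 48 x 24GB and 4 x 72GB extra = 37248GB ≈ 37TB
    Raw disk:         52 trays x 24 drives x 1TB = 1248TB, reduced to approx 800TB usable by RAID overhead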
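
The latency and bandwidth figures above are the kind reported by a simple two-rank ping-pong benchmark. The sketch below illustrates the technique in C/MPI; it is not the benchmark the NF actually ran (which is not specified here), and the message sizes and iteration counts are arbitrary choices:

    /*
     * pingpong.c: measure one-way MPI latency (small messages) and
     * per-direction bandwidth (large messages) between two ranks.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Return the one-way message time for `bytes`-sized messages,
     * averaged over `iters` round trips between ranks 0 and 1. */
    static double pingpong(int rank, char *buf, int bytes, int iters)
    {
        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        /* One round trip is two one-way transfers. */
        return (MPI_Wtime() - start) / (2.0 * iters);
    }

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        if (nprocs != 2) {
            if (rank == 0)
                fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int large = 1 << 20;              /* 1MB messages for bandwidth */
        char *buf = calloc(large, 1);

        pingpong(rank, buf, 8, 100);            /* warm up the connection */
        double lat = pingpong(rank, buf, 8, 1000);        /* 8-byte latency */
        double oneway = pingpong(rank, buf, large, 100);  /* 1MB one-way time */

        if (rank == 0)
            printf("latency %.2fus, bandwidth %.0fMB/s per direction\n",
                   lat * 1e6, large / oneway / 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper and run with one rank on each of two nodes (e.g. mpicc pingpong.c -o pingpong; mpirun -np 2 ./pingpong), it should report figures comparable to those quoted above.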
Sun Constellation Cluster Software

The system software used on the vayu cluster includes:
  • CentOS 5.6 Linux distribution (based on RHEL5.6)

  • the oneSIS cluster software management system

  • the Lustre cluster file system (aggregate capacities are tallied after this list):
    • 104 x (8+2 RAID6 8TB) OSTs for /short
    • 104 x (1+1 RAID1 520GB) OSTs for /home
    • 104 x (1+1 RAID1 140GB) OSTs for /apps
    • root filesystem also on Lustre

  • the National Facility variant of the OpenPBS batch queuing system
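
For scale, simple arithmetic on the Lustre OST layout above gives approximate aggregate capacities (before filesystem overhead):

    /short: 104 x 8TB   ≈ 832TB
    /home:  104 x 520GB ≈ 54TB
    /apps:  104 x 140GB ≈ 15TB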

[Image: vayu InfiniBand switch]

[Image: vayu disk storage]
