Carter

Overview of Carter

Carter was launched through an ITaP partnership with Intel in November 2011 and is a member of Purdue's Community Cluster Program. Carter consists primarily of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and between 32 GB and 256 GB of memory. A few NVIDIA GPU-accelerated nodes are also available. All nodes have 56 Gbps FDR InfiniBand connections and a 5-year warranty. Carter is scheduled to be decommissioned on April 30, 2017.

To purchase access to Carter today, go to the Carter Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed about the latest purchasing developments, or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

Namesake

Carter is named in honor of Dennis Lee Carter. More information about his life and impact on Purdue is available in an ITaP Biography of Dennis Lee Carter.

Detailed Hardware Specification

Most Carter nodes consist of identical hardware. All Carter nodes have 16 processor cores, between 32 GB and 256 GB of RAM, and 56 Gbps InfiniBand interconnects. Carter-G nodes are each also equipped with three NVIDIA Tesla GPUs that can further accelerate work tailored to them.

Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | TeraFLOPS
Carter-A | 556 | Two 8-core Intel Xeon-E5 | 16 | 32 GB | 56 Gbps FDR InfiniBand | 165.6
Carter-B | 80 | Two 8-core Intel Xeon-E5 | 16 | 64 GB | 56 Gbps FDR InfiniBand | 20.1
Carter-C | 12 | Two 8-core Intel Xeon-E5 | 16 | 256 GB | 56 Gbps FDR InfiniBand | 0.6
Carter-G | 12 | Two 8-core Intel Xeon-E5 + three NVIDIA Tesla M2090 GPUs | 16 | 128 GB | 56 Gbps FDR InfiniBand | n/a

Carter nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 7 with TORQUE Resource Manager 4 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
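As an illustration, below is a minimal TORQUE/PBS submission script for a single 16-core Carter node. It is a sketch only: the job name, walltime, and program name (my_program) are placeholders to replace with your own values, and it assumes the recommended devel module set described later on this page.

#!/bin/bash
#PBS -N myjob                # job name (placeholder)
#PBS -l nodes=1:ppn=16       # request one node and all 16 of its cores
#PBS -l walltime=00:30:00    # adjust to your expected run time

cd $PBS_O_WORKDIR            # start in the directory from which qsub was run
module load devel            # load the recommended compiler/MPI/MKL set
mpirun -np 16 ./my_program   # launch 16 MPI ranks (my_program is a placeholder)

Save the script as, for example, myjob.sub, submit it with qsub myjob.sub, and check its status with qstat -u $USER.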

On Carter, ITaP recommends the following combination of compiler, math library, and message-passing (MPI) library for building parallel code:

  • Intel 13.1.1.163
  • MKL
  • OpenMPI 1.6.3

To load the recommended set:

$ module load devel

To verify what you loaded:

$ module list
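
With the devel set loaded, MPI code can be compiled through the OpenMPI wrapper compilers, which invoke the underlying Intel compiler. The example below is a sketch: hello_mpi.c is a placeholder source file, and the -mkl flag (which links Intel MKL) assumes the Intel compiler sits behind the wrapper.

$ mpicc -O2 -mkl hello_mpi.c -o hello_mpi

The resulting executable should then be run inside a batch job, for example with mpirun -np 16 ./hello_mpi as in the submission script sketched earlier.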