Carter is the newest of Purdue's Community Clusters and was launched through a partnership with Intel in November 2011. Carter primarily consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node), between 32 GB and 256 GB of memory, and a 500 GB system disk. A few NVIDIA GPU-accelerated nodes are also available. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty.
To purchase access to Carter today, go to the Carter Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed of the latest purchasing developments, or contact us via email at firstname.lastname@example.org if you have any questions.
Carter is named in honor of Dennis Lee Carter, Purdue alumnus and creator of the "Intel Inside" campaign. More information about his life and impact on Purdue is available in an ITaP Biography of Dennis Lee Carter.
Most Carter nodes consist of identical hardware. All Carter nodes have 16 processor cores, between 32 GB and 256 GB of RAM, and 56 Gbps FDR Infiniband interconnects. Carter-G nodes are each also equipped with three NVIDIA Tesla GPUs that can further accelerate codes tailored to them.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Disk | TeraFLOPS |
| ----------- | --------------- | ------------------- | -------------- | --------------- | ------------ | ---- | --------- |
| Carter-A | 576 | Two 8-Core Intel Xeon-E5 | 16 | 32 GB | 56 Gbps FDR Infiniband | 500 GB | 165.6 |
| Carter-B | 70 | Two 8-Core Intel Xeon-E5 | 16 | 64 GB | 56 Gbps FDR Infiniband | 500 GB | 20.1 |
| Carter-C | 2 | Two 8-Core Intel Xeon-E5 | 16 | 256 GB | 56 Gbps FDR Infiniband | 500 GB | 0.6 |
| Carter-G | 12 | Two 8-Core Intel Xeon-E5 + Three NVIDIA Tesla M2090 GPUs | 16 | 128 GB | 56 Gbps FDR Infiniband | 500 GB | n/a |
Carter nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 6 and TORQUE Resource Manager 3 as the portable batch system (PBS) for resource and job management. Carter also runs jobs for BoilerGrid whenever its processor cores would otherwise be idle. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
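As a sketch of how work is submitted under this setup, the script below requests a single Carter node through TORQUE and launches an MPI program across its 16 cores. The queue name myqueue, the script name myjob.sub, and the program my_program are illustrative placeholders, not values taken from Carter's documentation:

#!/bin/bash
#PBS -l nodes=1:ppn=16        # one node, all 16 cores
#PBS -l walltime=01:00:00     # one hour of wall-clock time
#PBS -q myqueue               # placeholder queue name
cd $PBS_O_WORKDIR             # run from the directory where qsub was invoked
module load devel             # load the recommended development modules
mpiexec -n 16 ./my_program    # launch a 16-way MPI run (my_program is hypothetical)

Submit it with the standard TORQUE command:

$ qsub myjob.sub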
For more information, see the TORQUE Resource Manager documentation.
On Carter, ITaP recommends a standard set of compiler, math library, and message-passing (MPI) library modules for building parallel code.
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
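Assuming the devel module set provides MPI compiler wrappers such as mpicc (a common convention on systems like this, though not confirmed by this page), compiling and test-running a small MPI program might look like the following; hello.c is a hypothetical source file:

$ mpicc -O2 hello.c -o hello
$ mpiexec -n 4 ./hello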