MIT Campus-Wide Resources

Campus-wide resources are enabled and supported through ORCD. These resources have base capacity that is available for research and teaching use by anyone in the MIT community. They also provide PI-group priority resources that are available for general opportunistic use when they are not needed for priority work. We are passionate about continuing to grow an MIT-shared, cost-effective research computing capability with PI priority that works well for PIs across all of MIT. Please don't hesitate to get in touch at orcd-help@mit.edu to explore ways we can help you.

Getting-started information and documentation for the ORCD systems can be found at https://orcd-docs.mit.edu.

The text below describes general features of the ORCD systems.  

Engaging

The Engaging cluster is open to everyone on campus. Further information and support are available from orcd-help-engaging@mit.edu.

Features: 

  • Compute power: 80,000 x86 CPU cores and 300 GPU cards, ranging from the K80 generation to recent Voltas. Additional compute and storage resources can be purchased by PIs.
  • Hardware access: Hardware is accessed through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations; a minimal batch-job sketch appears below.
  • Portal: A standard, open-source, web-based portal supporting Jupyter notebooks, RStudio, Mathematica, and X graphics is available at https://engaging-ood.mit.edu.
  • Software:
    • A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported.
    • Custom software stacks maintained by PI groups are also available through the widely adopted environment modules toolkit.
    • User-level tools such as Anaconda for Python, R libraries, and Julia packages are all supported.

The cluster has a large shared file system for working datasets.
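
As an illustration of the batch workflow, the sketch below shows a small Python script that a Slurm batch job might run. It sizes a worker pool from the standard Slurm environment variables SLURM_CPUS_PER_TASK and SLURM_JOB_ID; the task itself is a placeholder, not part of any Engaging-specific setup.

    import os
    from multiprocessing import Pool

    def work(i):
        # Placeholder task; replace with real analysis code.
        return i * i

    if __name__ == "__main__":
        # Slurm exports SLURM_CPUS_PER_TASK when --cpus-per-task is requested;
        # fall back to 1 when run outside a job (e.g., on a login node).
        n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))
        job_id = os.environ.get("SLURM_JOB_ID", "interactive")

        with Pool(processes=n_cpus) as pool:
            results = pool.map(work, range(100))

        print(f"job {job_id}: used {n_cpus} workers, sum = {sum(results)}")

A batch submission would typically wrap a script like this in a job file with #SBATCH resource requests; consult the ORCD documentation for current partition names and limits.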

OpenMind

OpenMind is a shared cluster with a large number of A100 GPUs. The system was developed to meet the needs of Brain and Cognitive Sciences researchers and was added as an ORCD-managed resource in July 2023. The computing resources on OpenMind are being migrated to Engaging in 2024, after which a new policy will govern institute-wide access to these resources on Engaging.

  • Compute power: Over 300 GPU cards and around 3500 CPU cores.
  • Hardware access: The cluster resources are managed by the Slurm scheduler which provides support for batch, interactive, and reservation-based use.
  • Software: Python is supported through Anaconda environments or Singularity containers, and MATLAB is available; a short GPU-availability check is sketched below.
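
As a quick sanity check inside a GPU job, a script along the following lines can confirm that the scheduler actually granted GPUs. It inspects the CUDA_VISIBLE_DEVICES variable that Slurm sets for GPU allocations and calls nvidia-smi, which is assumed, not guaranteed, to be on the compute node's path.

    import os
    import subprocess

    # Slurm restricts visible GPUs via CUDA_VISIBLE_DEVICES when GPUs are
    # requested (e.g., with --gres=gpu:2); a missing value usually means none.
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    print("CUDA_VISIBLE_DEVICES =", visible or "(not set)")

    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        print(out.stdout.strip())
    except (FileNotFoundError, subprocess.CalledProcessError) as err:
        print("No usable GPU detected:", err)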

Further information and support are available from orcd-help-openmind@mit.edu.

SuperCloud

The SuperCloud system is a shared facility run in collaboration with MIT Lincoln Laboratory and optimized for streamlining open research collaborations with the Laboratory (e.g., AIA, BW, CQE, Haystack, HPEC, ISN). The facility is open to everyone on campus. Further information and support are available from supercloud@mit.edu.

Features:

  • Compute power: The latest SuperCloud system has more than 16,000 x86 CPU cores and more than 850 NVIDIA Volta GPUs in total.
  • Hardware access: Hardware is accessed through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations; a job-array sketch follows this list.
  • Portal: A custom, web-based portal supporting Jupyter notebooks is available.
  • Software: A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported.
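
Batch work on a system like this is often split across a Slurm job array. The sketch below shows how a Python script might pick its share of input files using the SLURM_ARRAY_TASK_ID and SLURM_ARRAY_TASK_COUNT variables that Slurm sets for array jobs; the data directory is a placeholder.

    import os
    from pathlib import Path

    # Slurm sets these for jobs submitted as arrays (e.g., sbatch --array=0-9);
    # default to a single-task layout when run outside an array job.
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
    task_count = int(os.environ.get("SLURM_ARRAY_TASK_COUNT", "1"))

    # Hypothetical input directory; replace with a real dataset location.
    inputs = sorted(Path("data").glob("*.csv"))

    # Each array task handles every task_count-th file, starting at task_id.
    for path in inputs[task_id::task_count]:
        print(f"task {task_id}/{task_count} processing {path}")
        # ... real processing would go here ...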

Satori

Satori is an IBM Power 9 large-memory-node system. It is open to everyone on campus and has optimized software stacks for machine learning and for image-stack post-processing for the MIT.nano Cryo-EM facilities. Further information and support are available from orcd-help-satori@mit.edu.

Features:

  • Compute power: 256 NVIDIA Volta GPU cards, attached in groups of four to 1 TB memory nodes, and a total of 2,560 Power 9 CPU cores. Additional compute and storage resources can be purchased by PIs and integrated into the system.
  • Hardware access: Hardware is accessed through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations.
  • Portal: A standard web-based portal with Jupyter notebook support is available.
  • Software: A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported; because Satori is a Power 9 (ppc64le) system, a quick architecture check is sketched below.
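
Because Satori's Power 9 nodes use the ppc64le architecture rather than x86, prebuilt x86 binaries and many PyPI wheels will not run on them. A small check along these lines, using only the Python standard library, reports the architecture and memory a node sees; the memory query relies on os.sysconf, which is available on Linux.

    import os
    import platform

    # Satori nodes are IBM Power 9, so this should report "ppc64le" rather
    # than "x86_64"; software built for x86 will not run on these nodes.
    arch = platform.machine()
    print("CPU architecture:", arch)

    # Total physical memory, queried via POSIX sysconf (available on Linux).
    page_size = os.sysconf("SC_PAGE_SIZE")
    phys_pages = os.sysconf("SC_PHYS_PAGES")
    print(f"Total memory: {page_size * phys_pages / 2**30:.0f} GiB")

    if arch != "ppc64le":
        print("Warning: not a Power 9 (ppc64le) node.")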