Atlas is a Cray CS500 Linux Cluster with 11,520 2.40GHz Xeon Platinum 8260 processor cores, 101 terabytes of RAM, 8 NVIDIA V100 GPUs, and a Mellanox HDR100 InfiniBand interconnect. Atlas has a peak performance of 565 TeraFLOPS.

Node Specifications

Atlas is composed of 240 compute nodes, plus two login nodes and two data transfer nodes. All nodes contain two 2.40GHz Intel Xeon Platinum 8260 (2nd Generation Intel Xeon Scalable) processors with 24 cores each, for a total of 48 cores per node. All Atlas memory is DDR4-2933 2R RDIMM. The 228 standard compute nodes and the login nodes have 384 GB of RAM each, the two data transfer nodes have 192 GB each, the eight big-memory nodes have 1536 GB each, and the four GPU nodes have 384 GB of RAM and two NVIDIA V100 GPUs each.

Node Name(s)        Node Type                 Memory    GPUs
atlas-login         Login Nodes               384 GB    N/A
atlas-dtn           Data Transfer Nodes       192 GB    N/A
atlas-[0001-0228]   Compute Nodes             384 GB    N/A
atlas-[0229-0236]   Compute Nodes (Big Mem)   1536 GB   N/A
atlas-[0237-0240]   Compute Nodes (GPU)       384 GB    NVIDIA V100 x 2

Accessing Atlas

Currently, only four machines in the Atlas cluster can be accessed from outside of the Mississippi State HPC2 network: the two login nodes and the two DTN nodes, which can be reached by connecting via ssh:

 ssh <SCINet UserID> 
 ssh <SCINet UserID>

For older Microsoft Windows machines, we recommend using PuTTY or OpenSSH (see the SCINet Quick Start Guide). When you log in, you will be on an Atlas login node. The login node is a shared resource among all SCINet users currently logged in to the system, so please do NOT run computationally or memory-intensive tasks on it, as this will degrade performance for all other users. See the Slurm section for instructions on running such tasks on the compute nodes.

File Transfers

Globus Online can be used to transfer data to and from the Atlas cluster. The two DTN nodes on Atlas can also be accessed from outside of the Mississippi State HPC2 network via a single Globus endpoint:


For small amounts of data that need to be transferred to user home directories, the scp command can be used. This command copies files between hosts on a network, using ssh for data transfer with the same authentication and security that ssh provides. scp will prompt for your password as well as your two-factor authentication code.

To copy a file from a remote host to local host:

 $ scp  <username>@<remotehost>:/path/to/file.txt  /local/directory/

To copy a file from a local host to a remote host:

 $ scp  /path/to/file.txt  <username>@<remotehost>:/remote/directory/

To copy a directory from a remote host to local host:

 $ scp  -r  <username>@<remotehost>:/remote/directory  /local/directory

To copy a directory from a local host to a remote host:

 $ scp  -r  /local/directory  <username>@<remotehost>:/remote/directory

Internet Connectivity

On this cluster, only certain nodes are reachable from the internet. Any software packages, libraries, or datasets needed for jobs or software development can be downloaded on the login nodes, devel nodes, or dtn nodes. The compute nodes of the cluster are on a private network, and they are unreachable from the internet.
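As an illustration, a large download can be wrapped in a batch job on the service partition, which runs on the internet-facing DTN nodes; the account name, URL, and paths below are placeholders, not values from this guide:

 #!/bin/bash
 #SBATCH --job-name=download
 #SBATCH --partition=service
 #SBATCH --account=projectname
 #SBATCH --time=01:00:00

 # fetch the dataset into project space (URL and destination are placeholders)
 wget -P /project/projectname/data https://example.com/dataset.tar.gz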


Modules

Atlas uses Lmod as its environment module system. For a guide on using Lmod to set up the programming environment, please refer to the official Lmod User Guide.

Atlas uses a module hierarchy based on compilers and MPI implementations. Software in the Core tree is built with the default system compilers. Software built against a specific compiler is available only after that compiler's module has been loaded, and software built against a specific MPI implementation is available only after that MPI module has been loaded. Information on available modules can be found with the "module avail" and "module spider" commands:

 $ module spider mesa

   mesa: mesa/20.1.6
    This module can be loaded directly: module load mesa/20.1.6

    Additional variants of this module can also be loaded after loading the following modules:

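As a sketch of how the hierarchy behaves (the module names and versions here are illustrative, not necessarily those installed on Atlas), loading a compiler and then an MPI implementation makes additional software trees visible:

 $ module load gcc         # software built with this compiler becomes visible
 $ module load openmpi     # software built against this MPI becomes visible
 $ module avail            # now lists Core, compiler, and MPI software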

Quotas

Each user has a home directory with a 5 GB quota. Once usage exceeds 5 GB, the quota is enforced and the user will not be able to create any more files until enough disk space has been cleared to get back under the quota. To see quota usage in human-readable format:

 $ quota -s

Each project directory under /project also has a quota. The quota applies to the entire project space and is not set per user. The default storage allocation for each project on Atlas is 1 TB. The usage and quota of all projects can be checked by running the following script:

 $ /apps/bin/reportFSUsage

Specific projects can also be passed to the script: the -p flag accepts a single project name or a comma-separated list of project names:

 $ /apps/bin/reportFSUsage -p proj1,proj2,proj3

 Directory/Group             Usage(GB)   Quota(GB)   Limit(GB)      Files  Percentage
 proj1                          41417       92160      102400    2211354        44.9
 proj2                          18287       23040       25600    1816769        79.4
 proj3                              0         922        1024          1         0.0

Project Space

On Atlas, the parallel filesystem is mounted as /project. Each project has its own directory at /project/<projectname>/, with a quota set by the VRSC. This filesystem is considered a 'temporary' or 'scratch' filesystem.


It is important that users submit and run jobs from their respective /project directories rather than from their home directories. The /home filesystem is not designed or configured for high-performance use, nor does it have much space: home directories run out of space quickly during parallel jobs and will cause jobs to fail. After useful data is generated by compute jobs, users should transfer it to a more long-term storage location.
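For example, a typical workflow is to change into the project directory before submitting a job (projectname and myjob.sh below are placeholders):

 $ cd /project/projectname
 $ sbatch myjob.sh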

/project/reference/data is the location for shared reference datasets. It is synced nightly from

Local Scratch Space

Atlas compute nodes provide up to 2 TB of local disk space at /local/scratch that may be used for temporary storage. Data stored in this location is not backed up and is susceptible to loss from disk failure or corruption. Each job is given a unique local directory, available within the job script via the $TMPDIR environment variable:


You can use this space for any scratch disk needs. If you plan to compute on an existing large data set (such as a sequence assembly job), it may be beneficial to copy all of your input data to this space at the beginning of the job and then do all computation in $TMPDIR. You must copy any output data you need to keep back to permanent storage before the job ends, since $TMPDIR is erased upon job exit. The following example shows how to copy data in and then run from $TMPDIR:


 #!/bin/bash
 #SBATCH --job-name="TMPDIR example"
 #SBATCH --partition=atlas
 #SBATCH --account=projectname
 #SBATCH --nodes=1
 #SBATCH --ntasks=48
 #SBATCH --time=08:00:00

 # MYDIR is a placeholder for the directory holding your input data
 MYDIR=/project/projectname/mydata

 # stage the input data into the job's temporary directory in $TMPDIR
 /bin/cp -r $MYDIR $TMPDIR/
 cd $TMPDIR

 # add regular job commands like module load
 # and commands to launch scientific software

 # copy output data off of local scratch before the job ends
 /bin/cp -r output $MYDIR/output

$TMPDIR is set to this per-job directory at the beginning of every job, before the job script executes. Users must overwrite this definition inside the batch script itself if TMPDIR needs to point to a different location.
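A minimal sketch of overriding it inside a batch script, assuming a placeholder target path:

 # point TMPDIR at a different location (placeholder path; must exist)
 mkdir -p /project/projectname/tmp
 export TMPDIR=/project/projectname/tmp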


Arbiter

On each login node, we run a utility named Arbiter, which regulates activity by monitoring and limiting resource consumption via cgroups. Users are limited to 10 CPU cores and 50 GB of memory at a time while their status is "normal". When a user exceeds half of their cap for 10 minutes, they are sent a warning email and penalized by having their usage caps reduced. Each violation of this usage policy results in an occurrence, which raises the penalty level. A user's occurrence level drops by 1 for every 3 hours they go without triggering another usage violation. The following table outlines the penalty/status levels that we currently have defined.

Status     CPU Cap    Memory Cap   Penalty Timeout
Normal     10 Cores   50 GB        N/A
Penalty1   8 Cores    40 GB        30 Minutes
Penalty2   5 Cores    25 GB        1 Hour
Penalty3   3 Cores    15 GB        3 Hours

Certain programs, such as compilers and build utilities, are whitelisted; these will not cause the user to be penalized, so the login nodes can still be freely used to build and test software. The purpose of Arbiter is to identify and limit computationally intensive work that should be run on the compute nodes instead of the login and development nodes.


Slurm

Atlas uses the Slurm Workload Manager as its scheduler and resource manager. For a guide on using Slurm to submit and run jobs on this cluster, please refer to the official Slurm Quickstart Guide.

Slurm has three primary job allocation commands which accept almost identical options:

sbatch   Submits a job script for later execution (batch mode)
salloc   Creates a job allocation and starts a shell to use it (interactive mode)
srun     Creates a job allocation and launches the job step (typically an MPI job)

The salloc command retains its default behavior on Atlas: it allocates resources for the job but spawns the shell on the login node, with various Slurm environment variables set. Job steps can then be launched from the salloc shell with the srun command.

Example salloc usage:

 jake.frulla@Atlas-login-1 ~$ salloc -A admin
 salloc: Pending job allocation 527990
 salloc: job 527990 queued and waiting for resources
 salloc: Granted job allocation 527990
 salloc: Waiting for resource configuration
 salloc: Nodes Atlas-0029 are ready for job

 jake.frulla@Atlas-login-1 ~$ srun hostname

The srun command can be used to launch an interactive shell on an allocated node or set of nodes. Simply specify the --pty option while launching a shell (such as bash) with srun. It is also recommended to set the wallclock limit along with the number of nodes and processors needed for the interactive shell.

Example interactive shell:

 jake.frulla@Atlas-login-1 ~$ srun -A admin --pty --preserve-env bash
 srun: job 527987 queued and waiting for resources
 srun: job 527987 has been allocated resources

 jake.frulla@Atlas-0029 ~$  hostname
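Following the recommendation above, an interactive shell request that also sets the wallclock limit, node count, and task count might look like this (the account name is a placeholder):

 $ srun -A projectname -N 1 -n 4 -t 01:00:00 --pty --preserve-env bash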

When running batch jobs, it is necessary to interact with the job queue, and it is usually helpful to see information about the system, the queue, the nodes, and your jobs. This can be accomplished with a set of commands:

squeue          Displays information about jobs in the scheduling queue
sjstat          Displays a short summary of running jobs and scheduling pool data
showuserjobs    Displays a short summary of jobs by user and account, along with a summary of node state
showpartitions  Displays a short summary and the current state of the available partitions
sstat           Displays information about specific jobs
sinfo           Reports system status (nodes, queues, etc.)
sacct           Displays accounting information from the Slurm database

Each of these commands has a variety of functions, options, and filters that refine the information returned and displayed. Users can customize filtering, sorting, and output format using command line options or environment variables. Please consult the man page of each command or the Slurm Documentation for more information on using these commands.
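For example, to list your own queued and running jobs, or to check the state of the nodes in the default partition:

 $ squeue -u $USER
 $ sinfo -p atlas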

To see which accounts and QOS's are associated with your user, run:

 $ sacctmgr show associations where user=$USER format=account%20,qos%50

Available Atlas QOS's

QOS        Priority   Max Nodes (Per QOS)   Max Walltime   Notes
normal     20         30                    14 Days        Default QOS
debug      30         6                     30 Minutes     Max of 2 jobs per user, 3 nodes per job
special    20         N/A                   N/A            Must be requested and approved
priority   100        N/A                   N/A            Must be requested and approved

Available Atlas Partitions

Partition   Available Nodes   Memory    Notes
atlas*      228               384 GB    N/A
bigmem      8                 1536 GB   N/A
gpu         4                 384 GB    2 x NVIDIA V100 GPUs per node
service     2                 192 GB    DTN nodes, for submitting data transfer jobs

* Default Partition


For example, to run ten tasks spread across two nodes with a per-node memory request, or ten tasks on a single node with exclusive access to it:

 $ srun -n 10 -N 2 --mem=150G ./example_program
 $ srun -n 10 -N 1 --exclusive ./example_program

Job Dependencies and Pipelines

Job dependencies are used to defer the start of a job until the specified dependencies have been satisfied. They are specified with the --dependency option to sbatch or swarm in the format:

 sbatch --dependency=<type:job_id[:job_id][,type:job_id[:job_id]]> ...

Dependency types:

after:jobid[:jobid...] job can begin after the specified jobs have started
afterany:jobid[:jobid...] job can begin after the specified jobs have terminated
afternotok:jobid[:jobid...] job can begin after the specified jobs have failed
afterok:jobid[:jobid...] job can begin after the specified jobs have run to completion with an exit code of zero
singleton jobs can begin execution after all previously launched jobs with the same name and user have ended

Job dependencies are useful for setting up job pipelines. If a particular job needs a dataset downloaded before it can run, this must be submitted as two jobs: the first downloads the dataset on the DTN nodes in the service partition, and the second operates on the dataset retrieved by the first. The most useful dependency types for pipelines are afterany, afterok, and singleton. The simplest approach is to use the afterok dependency for single consecutive jobs. For example (job1.sh and job2.sh are placeholder script names):

 $ sbatch job1.sh
 Submitted batch job 11254323

 $ sbatch --dependency=afterok:11254323 job2.sh

Now, when job1 ends with an exit code of zero, job2 becomes eligible for scheduling. However, if job1 fails (ends with a non-zero exit code), job2 will not be scheduled; it will remain in the queue and must be canceled manually. As an alternative, the afterany dependency can be used, with the job script itself checking that the prerequisite jobs actually succeeded.
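When building pipelines in a shell script, the job ID of the first submission can be captured with sbatch's --parsable flag and passed into the dependency of the next job; a minimal sketch, assuming the same placeholder script names as above:

 # submit the first job and capture its job ID (--parsable prints only the ID)
 jobid=$(sbatch --parsable job1.sh)

 # submit the second job so it becomes eligible only if the first succeeds
 sbatch --dependency=afterok:$jobid job2.sh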

Open OnDemand

OnDemand is a user-friendly web front end for accessing Atlas cluster resources. With OnDemand, users can upload and download files; create, edit, submit, and monitor jobs; run GUI applications; open a remote desktop; and connect to a terminal on a login node, all via a web browser and with no client software to install or configure. To connect to Atlas's OnDemand interface, visit the Atlas OnDemand site in a web browser. For the best experience, use the latest version of Google Chrome, Mozilla Firefox, or Microsoft Edge, or any modern browser that supports ECMAScript 2016.

To log in, users must use their SCINet credentials. The Google Authenticator token used for Open OnDemand is separate from the token used to log in to Ceres and Atlas, so users must register their Open OnDemand account with Authenticator. New users will be prompted to register after logging in with their SCINet credentials.

After logging in, the user is greeted with the dashboard page, which links to all other OnDemand utilities at the top of the page and displays our Terms of Service and any announcements related to the OnDemand server:

OnDemand Dashboard page

Virtual Desktops

You can request a desktop to run GUI applications without the issues of X11 forwarding. To access a virtual desktop on Atlas, click the "Interactive Apps" dropdown menu and select "Atlas Desktop" from the Desktops section of the menu:

OnDemand Navigation bar interactive app dropdown menu

On the Desktop parameters page, you need to specify what Slurm Account, Partition, and QOS to run the desktop under, as this will run as a job through the resource manager and scheduler. There will also be fields for users to select how long they need the desktop, as well as how many nodes and processors should be allocated for the session:

OnDemand Virtual Desktop Launch Parameter Screen

After entering the necessary parameters for the batch job, click "Launch" to place the session in the scheduling queue. You will be taken to the "Interactive Sessions" page, which lists all of your queued and running sessions. The virtual desktop that was just launched will remain in the queued state until a spot frees up and the scheduler is able to start the job:

OnDemand Interactive App queue

Once the job has been allocated and launched on the compute nodes, you will be able to connect to the virtual desktop session via VNC in the web browser by clicking the "Launch Atlas Desktop" button on the Interactive App entry. There are also sliders to adjust the remote session's compression and image quality prior to launching the desktop:

OnDemand Launch Interactive App

OnDemand Virtual Desktop

If the window is closed without cancelling the job, the job will continue to run on the cluster. Users can access their virtual desktop, or any other active session, by navigating to the "My Interactive Sessions" page, reached by clicking the Sessions icon in the navigation bar at the top of the dashboard. From this page, users can re-launch their interactive sessions. To adjust the Compression or Image Quality of a session, simply close the tab the session is in and relaunch the desktop; it is not necessary to create a new desktop session.

RStudio Server and Jupyter Notebooks

In addition to the virtual desktop, other standalone GUI applications can be launched. The GUIs and Servers section of the Interactive Apps dropdown menu offers various graphical applications that can be launched on their own, such as Jupyter and RStudio.

The advantage of running these applications separately from the virtual desktop is that they run on top of a more lightweight window manager without the overhead of a full desktop environment. They also have application-specific startup scripts that are often optimized for the way they are launched on OnDemand. Otherwise, these applications are launched exactly the same way as the virtual desktops:

OnDemand Matlab application parameters selection

OnDemand Matlab application launch entry

OnDemand Matlab Interactive Application