Running parallel MX software at an interactive compute node

We recommend using the PReSTO menu to launch applications; however, the following terminal-window alternatives are available:

How to use a terminal to launch the default XDSAPP3, XDSGUI or hkl2map software at an interactive compute node:

interactive --nodes=1 --exclusive -t 01:00:00 -A naiss2024-22-1114
module load XDSAPP3
xdsapp

interactive --nodes=1 --exclusive -t 01:00:00 -A naiss2024-22-1114
module load XDSGUI
xdsgui

interactive --nodes=1 --exclusive -t 01:00:00 -A naiss2024-22-1114
module load hkl2map
hkl2map

Login node for graphics, non-parallel software, and GUIs configured for SLURM

Letter case matters when loading modules

Graphics software:
module load CCP4            # load CCP4 module i.e. all CCP4 software
    coot                    # launch coot
    ccp4mg                  # launch ccp4mg
module load PyMOL           # load PyMOL module
    pymol                   # launch PyMOL
module load ChimeraX        # load ChimeraX module
    chimerax                # launch ChimeraX

Diffraction data viewers:
module load XDS-Viewer      # load XDS-Viewer default module
    xds-viewer              # launch XDS-Viewer
module load ALBULA          # load ALBULA default module
    albula                  # launch ALBULA GUI
module load adxv            # load adxv default module
    adxv                    # launch adxv GUI
    
Non-parallel software:
module load USF             # load all Uppsala Software Factory software
    moleman                 # launch moleman
module load CNS             # load CNS default module
    cns_solve               # usually part of a CNS sbatch script launched from the command line

GUIs with built-in SLURM schedulers
module load CrystFEL        # load CrystFEL module
    crystfel                # launch CrystFEL GUI
module load Phenix          # load Phenix default module
    phenix                  # launch Phenix GUI

Thinlinc documentation

Information on using the ThinLinc desktop environment is available at the LUNARC support pages and the NSC pages on running graphical applications.

Software modules available in PReSTO

Sometimes researchers want to use the same version of a particular program that they have used in the past. This works for the majority of software, but not for XDS, whose time-bombed binaries make old versions useless. Multiple software versions are also useful when new versions are added to PReSTO, because the new versions can then be tested without overwriting the current default (D) version. The very latest versions are sometimes installed but not yet released to users, so asking support for the latest version will sometimes result in a link to a version still being tested by the PReSTO team.

A `module avail XXX` command shows all available versions of module XXX.
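For example, to see which PyMOL versions are available before loading one of them:

module avail PyMOL                # show all available PyMOL modules and versions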

module load PyMOL                 # load PyMOL default module
module load PyMOL/2.5.0-1-PReSTO  # load PyMOL version 2.5.0

A `module load XXX` or `module load XXX/version` command is required in sbatch scripts and prior to launching software from the terminal window. 
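A minimal sbatch script could therefore look like the sketch below (the project ID matches the examples above; xds_par, the parallel XDS executable assumed to be provided via the XDS dependency of XDSAPP, stands in for whatever program the job actually runs):

#!/bin/sh
#SBATCH -t 00:30:00                 # maximum runtime 30 min
#SBATCH --nodes=1 --exclusive       # one full node
#SBATCH -A naiss2024-22-1114        # project to charge

module load XDSAPP                  # load XDSAPP and its dependencies
xds_par                             # placeholder: run parallel XDS in the current directory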

HPC terminology

HPC terminology regarding nodes, cores, processors and tasks, taken from the LUNARC Cosmos documentation on the subject.

| Term | Explanation | Tetralith | Cosmos |
| --- | --- | --- | --- |
| node | a physical computer | 1892 | 182 |
| processor | a multi-core processor housing many processing elements | 2 per node | 2 per node |
| socket | plug where the processor is placed; synonym for the processor | 2 per node | 2 per node |
| core | an individual processing element | 32 per node | 48 per node |
| task | a software process with its own data and instructions, forking multiple threads | specified in sbatch script | specified in sbatch script |
| thread | an instruction stream sharing data with other threads from the task | specified in sbatch script | specified in sbatch script |

Table 1. Terminology regarding resources as defined in the HPC community
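As an illustration of how tasks and threads map onto SLURM directives (a sketch only; the PReSTO examples on this page instead request whole nodes with --nodes and --exclusive), one full 32-core Tetralith node could also be requested as:

#SBATCH --ntasks=4                  # four tasks (processes)
#SBATCH --cpus-per-task=8           # eight threads per task; 4 x 8 = 32 cores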

HPC vs XDS terminology

| Term | XDS terminology |
| --- | --- |
| tasks | JOBS (i.e. MAXIMUM_NUMBER_OF_JOBS) |
| threads | PROCESSORS (i.e. MAXIMUM_NUMBER_OF_PROCESSORS) |

Table 2. HPC vs XDS terminology
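For example, to fill one 32-core Tetralith node, the work could be split in XDS.INP as follows (a sketch; the optimal split depends on the data set):

MAXIMUM_NUMBER_OF_JOBS=4            ! XDS JOBS = HPC tasks (independent processes)
MAXIMUM_NUMBER_OF_PROCESSORS=8      ! XDS PROCESSORS = HPC threads; 4 x 8 = 32 cores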

Useful HPC commands

| HPC command | Consequence |
| --- | --- |
| interactive -N 1 --exclusive -t 00:30:00 -A naiss2024-22-1114 | get a 30 min terminal window at a compute node, charged to the project |
| exit | leave the terminal window and stop charging compute time to the project |
| storagequota | check your disk space in /home and the /proj/xray folder |
| storagequota -a | check disk space for all users in /proj/xray |
| squeue -u x_user | check my jobs running or waiting in the SLURM queue |
| top | see all jobs running on the current node |
| top -U username | see my jobs running on the current node |
| scancel JOBID | kill my job with JOBID |
| module load XDSAPP | load XDSAPP and its dependencies CCP4, PHENIX, XDS, XDSSTAT… |
| module avail xxx | list the available xxx modules |
| module purge | unload all modules |

Table 3. Basic HPC commands

Tetralith and Cosmos differences

| Command | Outcome |
| --- | --- |
| jobsh n1024 | access a compute node while your job is running on it at NSC Tetralith |
| ssh au118 | access a compute node while your job is running on it at LUNARC Cosmos |

Table 4. Command-line commands differing between LUNARC Cosmos and NSC Tetralith

SBATCH and checking the queue

Running sbatch scripts is the most efficient way of using HPC compute time, since the clock counting compute time stops as soon as the job finishes. Every sbatch script requires a maximum runtime, e.g. #SBATCH -t 00:30:00, before the job can be scheduled into the queue. To check the status of jobs submitted by sbatch, use squeue -u username (Figure 1).

Figure 1. Output of squeue -u username. The first column gives the job ID, the second the partition (or queue) where the job was submitted, the third the name of the job (specified by the user in the submission script) and the fourth the owner of the job. The fifth is the status of the job (R=running, PD=pending, CA=cancelled, CF=configuring, CG=completing, CD=completed, F=failed). The sixth column gives the elapsed time for each particular job. Finally come the number of nodes requested and the nodelist where the job is running (or the reason it is not running).
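A typical submit-and-check sequence might look like this (myjob.sh is a hypothetical script name; replace 1234567 with the job ID reported by sbatch):

sbatch myjob.sh                     # submit the job script; SLURM replies with a job ID
squeue -u x_user                    # list your running and pending jobs
scancel 1234567                     # cancel the job if it is no longer needed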

Now it is possible to access the compute nodes and check job status in more detail using top or top -U username. This is done in two different ways at NSC Tetralith and LUNARC Cosmos:

  • At LUNARC Cosmos use: ssh au118
  • At NSC Tetralith use: jobsh n1024

Once your terminal window is at the compute node, you can check the job status with top

Figure 2. Result of top given at a compute node. In top, the status of each process is indicated by (D=uninterruptible sleep, R=running, S=sleeping, T=traced or stopped, Z=zombie)

or by top -U username

Figure 3. Result of top -U username given at a compute node

The interactive command accepts the same parameters as the SBATCH script lines below.

| SBATCH script line | Consequence |
| --- | --- |
| #!/bin/sh | use sh to interpret the script |
| #SBATCH -t 0:30:00 | run the sh script for a maximum of 30 min |
| #SBATCH --nodes=2 --exclusive | allocate two full nodes for this sh script |
| #SBATCH -A naiss2024-22-1114 | count compute time on project naiss2024-22-1114 |
| #SBATCH --mail-type=ALL | send email when the job starts and stops |
| #SBATCH --mail-user=name.surname@lu.se | send email to name.surname@lu.se |

Table 5. SBATCH script lines. The interactive command uses the same options, usually given on a single line at the login node, e.g. interactive --nodes=2 --exclusive -t 00:30:00 -A naiss2024-22-1114

Useful LINUX commands

  • rsync -rvplt ./data username@aurora.lunarc.lu.se:/lunarc/nobackup/users/username/.   copies the data directory to LUNARC
  • scp file.pdb username@aurora.lunarc.lu.se:/lunarc/nobackup/users/username/.   copies a single file.pdb to LUNARC
