
LDAS @ Caltech

Key information
- Home page
- Account sign-up: see Requesting an account on the LDG page
- Support: open a Help Desk ticket
- SSH CA Cert: @cert-authority * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHa03AZF3CvJ1C4Po15swSaMYI4kPszyBH/uOKHQYvu+EpehSfMZMaX5D7pUpc5cAXvMEEFzlZJQH4pOioIlqyE= IGWN_CIT_SSH_CERT_AUTHORITY
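To have OpenSSH trust host keys signed by this certificate authority, the line above can be appended to your ~/.ssh/known_hosts file (standard OpenSSH @cert-authority syntax; verify the key against this page before adding it):

```shell
# Append the IGWN CIT SSH certificate authority to known_hosts so host
# certificates signed by it are accepted (key copied from this page).
mkdir -p ~/.ssh
cat >> ~/.ssh/known_hosts <<'EOF'
@cert-authority * ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHa03AZF3CvJ1C4Po15swSaMYI4kPszyBH/uOKHQYvu+EpehSfMZMaX5D7pUpc5cAXvMEEFzlZJQH4pOioIlqyE= IGWN_CIT_SSH_CERT_AUTHORITY
EOF
```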

This computing system is available to all LIGO-Virgo-KAGRA collaboration members as part of the LIGO Data Grid (LDG) via an account request at https://ldg.ligo.org.

SSH access is via either the local cluster proxy, sshproxy.ligo.caltech.edu, or the general LDG portal, ssh.igwn.org, with multi-factor authentication (MFA): see https://git.ligo.org/computing/iam/mfa.
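A minimal ~/.ssh/config sketch for reaching a head node through the portal (the alias, username, and choice of target host are illustrative; ProxyJump is standard OpenSSH):

```
# Connect to ldas-grid by jumping through the general LDG portal
# (MFA applies at the portal).
Host cit-grid
    HostName ldas-grid.ligo.caltech.edu
    User albert.einstein
    ProxyJump albert.einstein@ssh.igwn.org
```

With this in place, ssh cit-grid connects through the portal to ldas-grid.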

Historical direct ssh access is no longer allowed, with the temporary exception of citlogin0.ligo.caltech.edu to help facilitate the transition to MFA. Access to citlogin0 is limited to connections from an allow list of remote client IP addresses; to request an addition, please open a Help Desk ticket.

Production Login and Condor Job Submission Hosts

| Hostname | Description | Mem | CPU | GPU | VM | SciToken |
| --- | --- | --- | --- | --- | --- | --- |
| citlogin0.ligo.caltech.edu | All users | 16G | 4 | | Y | Local/AP |
| citlogin1.ligo.caltech.edu | home1 users only | 386G | 64 | | N | Local/AP |
| citlogin2.ligo.caltech.edu | home2 users only | 386G | 64 | | N | Local/AP |
| citlogin3.ligo.caltech.edu | home3 users only | 386G | 64 | | N | Local/AP |
| citlogin4.ligo.caltech.edu | home4 users only | 386G | 64 | | N | Local/AP |
| citlogin5.ligo.caltech.edu | home5 users only | 386G | 64 | | N | Local/AP |
| ldas-grid.ligo.caltech.edu | General use | 128G | 24 | | Y | Local/AP |
| ldas-pcdev1.ligo.caltech.edu | Matlab for CIT members | 16G | 8 | | Y | Local/AP |
| ldas-pcdev2.ligo.caltech.edu | GPU mixed | 256G | 32 | 4 | N | Local/AP |
| ldas-pcdev3.ligo.caltech.edu | GPU mixed | 128G | 16 | 3 | N | Local/AP |
| ldas-pcdev5.ligo.caltech.edu | Large memory | 512G | 40 | | N | Local/AP |
| ldas-pcdev6.ligo.caltech.edu | Large memory | 1.5T | 72 | | N | Local/AP |
| ldas-pcdev11.ligo.caltech.edu | GPU mixed | 128G | 20 | 4 | N | Local/AP |
| ldas-pcdev12.ligo.caltech.edu | GPU 4 x A100-SXM4-80GB | 512G | 128 | 4 | N | Local/AP |
| ldas-pcdev13.ligo.caltech.edu | GPU mixed | 128G | 24 | 4 | N | Local/AP |
| ldas-pcdev14.ligo.caltech.edu | General use | 240G | 16 | | N | Local/AP |
| ldas-osg.ligo.caltech.edu | IGWN | 192G | 32 | | Y | Local/AP |
| ldas-osg2.ligo.caltech.edu | IGWN | 96G | 24 | | Y | Dual |
| ldas-osg3.ligo.caltech.edu | IGWN | 96G | 16 | | Y | Local/AP |

Test Login and Condor Job Submission Hosts

| Hostname | Description | Mem | CPU | GPU | VM | SciToken |
| --- | --- | --- | --- | --- | --- | --- |
| condor-f1.ligo.caltech.edu | IGWN 24.x Release | 32G | 32 | | Y | Vault |
| condor-f2.ligo.caltech.edu | IGWN 24.x Beta | 16G | 8 | | Y | Dual |
| condor-f3.ligo.caltech.edu | IGWN 25.x Alpha | 16G | 16 | | Y | Local/AP |
| condor-f4.ligo.caltech.edu | IGWN 24.x Snapshot | 16G | 8 | | Y | Local/AP |
| condor-f5.ligo.caltech.edu | IGWN 24.x Beta | 16G | 8 | | Y | Local/AP |
| ldas-pcdev4.ligo.caltech.edu | IGWN development repo | 32G | 4 | | Y | Local/AP |
| ldas-pcdev8.ligo.caltech.edu | IGWN testing repo | 32G | 4 | | Y | Local/AP |
| ldas-pcdev10.ligo.caltech.edu | EPEL 8 upstream repo | 64G | 4 | | N | Local/AP |
| ldas-pcdev15.ligo.caltech.edu | RL9.6 | 16G | 4 | | Y | Local/AP |
| ldas-pcdev16.ligo.caltech.edu | AlmaLinux 10.1 | 8G | 4 | | Y | Local/AP |

Dedicated Login and Condor Job Submission Hosts

| Hostname | Description | Mem | CPU | GPU | VM | SciToken |
| --- | --- | --- | --- | --- | --- | --- |
| aframe.ldas.cit | Aframe | 512G | 48 | 2 | N | - |
| cbc.ligo.caltech.edu | CBC | 32G | 12 | | Y | Local/AP |
| cwb.ligo.caltech.edu | cWB | 32G | 8 | | Y | Local/AP |
| cwb-mon.ldas.cit | cWB | 96G | 8 | | Y | - |
| detchar.ligo.caltech.edu | DetChar | 256G | 16 | | Y | Local/AP |
| dqr.ligo.caltech.edu | DetChar | 48G | 8 | | Y | Vault |
| emfollow.ligo.caltech.edu | EM Follow-up | 64G | 8 | | Y | Local/AP |
| emfollow-dev.ligo.caltech.edu | EM Follow-up | 64G | 8 | | Y | Local/AP |
| emfollow-meg.ligo.caltech.edu | EM Follow-up | 16G | 2 | | Y | Local/AP |
| emfollow-playground.ligo.caltech.edu | EM Follow-up | 64G | 8 | | Y | Local/AP |
| emfollow-test.ligo.caltech.edu | EM Follow-up | 64G | 8 | | Y | Local/AP |
| gstlal.ligo.caltech.edu | GstLAL | 64G | 8 | | Y | Vault |
| ldvw.ldas.cit | DetChar | 64G | 4 | | Y | Local/AP |
| mly.ldas.cit | MLy | 96G | 10 | 1 | N | Local/AP |
| pycbc-live.ldas.cit | PyCBC | 96G | 16 | | Y | - |
| pycbc-live-ew.ldas.cit | PyCBC | 32G | 24 | | Y | - |
| spiir.ligo.caltech.edu | SPIIR | 32G | 8 | | Y | Vault |
| spiir-mon.ldas.cit | SPIIR | 4G | 4 | | Y | - |

For details on how to connect to these machines, please see Access to the LIGO Data Grid.

Additional services

| Service | URL |
| --- | --- |
| JupyterLab | https://jupyter.ligo.caltech.edu |
| User webspace | https://ldas-jobs.ligo.caltech.edu/~USER/ |

Configuring your user environment on LDAS

This page describes the default user environments on LDAS, and how to customise the availability and versions of the following software distributions: Intel oneAPI, Conda, and MATLAB.

Intel oneAPI

The Intel oneAPI Base Toolkit is available by default on LDAS, with the exception of the intelpython and mpi modules.

Disabling all Intel modules

To disable loading of all Intel oneAPI modules, create an empty file in your home directory called ~/.nointel:

touch ~/.nointel

Customising the included oneAPI modules

To take full control over which modules are included or excluded (including pinning specific versions), create ~/.oneapi_config.txt; this file takes precedence over the default /opt/intel/oneapi/oneapi_config.txt.
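One way to seed your override (a sketch; the default path is the one given above, and the file's contents are site-specific) is to start from the site-wide default and then edit it by hand:

```shell
# Start ~/.oneapi_config.txt from the site default when it exists;
# otherwise create an empty override file to edit.
if [ -f /opt/intel/oneapi/oneapi_config.txt ]; then
    cp /opt/intel/oneapi/oneapi_config.txt ~/.oneapi_config.txt
else
    touch ~/.oneapi_config.txt
fi
```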

Conda environment selection

The igwn conda environment is activated for all users by default when logging into a Rocky Linux 8 headnode. This can be customized by each user in a few different ways:

  • To prevent any conda pre-setup from occurring (this stops you from running conda activate in your shell and prevents any conda environment activation), create an empty ~/.noconda file in your home directory:

    touch ~/.noconda
    
  • To allow the conda pre-setup to occur, but prevent any conda environment from activating on login, you can create a file called ~/.noigwn in your home directory:

    touch ~/.noigwn
    
  • To change the conda environment that gets activated when you login from the default igwn to something else, create a file called ~/.conda_version in your home directory. This file should contain a single line that is the name of the custom environment you want to activate:

    echo "igwn-py39-20220827" > ~/.conda_version
    

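Before pinning a name in ~/.conda_version, it can help to check which environments actually exist. A small sketch using the standard conda env list command (guarded so it also runs where conda is not on the PATH):

```shell
# Show the conda environments visible to this account; the first column
# holds the names that ~/.conda_version expects.
if command -v conda >/dev/null 2>&1; then
    conda env list
else
    echo "conda not on PATH"
fi
```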
Corner cases:

  • If your selected conda environment doesn't exist, then no conda environment will be activated and a message will be printed to the screen. You will still be able to log in, but you will not be in an igwn conda environment. At this point you should remove or rename your ~/.conda_version custom environment selection file.

  • If you have multiple lines in ~/.conda_version, only the first line will be read.

  • If conda is broken or unavailable to the point that it is not allowing you to log in at all, then you can ssh into a cluster headnode using port 2222 to bypass any conda setup, regardless of the presence of a ~/.noigwn or ~/.conda_version file. This has the same effect as creating a ~/.noconda file in your home directory.

    ssh -p 2222 albert.einstein@ldas-pcdev1.ligo.caltech.edu
    

MATLAB

Enabling MATLAB

MATLAB is available on the command path by default, and can be discovered using which:

$ which matlab
/ldcg/matlab_r2015a/bin/matlab

Note

The default matlab version will be updated from time to time, subject to approval from the Software Change Control Board.

Interactive MATLAB sessions that require a license are restricted to LIGO Lab members on ldas-pcdev1.ligo.caltech.edu.

Enabling a specific version of MATLAB

To select a specific version of MATLAB, create a file in your ${HOME} directory named .usematlab_{release}, where {release} is the release number of MATLAB that you want, e.g:

touch ~/.usematlab_r2019a

Listing available MATLAB releases

To list the available MATLAB releases, run:

ls /ldcg/ | grep matlab

Disabling MATLAB

To opt out of all MATLAB releases, create a file in your ${HOME} directory named .nomatlab:

touch ~/.nomatlab

Warning

~/.nomatlab takes precedence over any ~/.usematlab_* files, so if you want to opt back in after previously opting out, make sure to remove the old ~/.nomatlab file.
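For example, to opt back in and pin a release in one step (the release name here is illustrative; pick one from the listing above):

```shell
# Remove the blanket opt-out, then pin a specific MATLAB release.
rm -f ~/.nomatlab
touch ~/.usematlab_r2019a
```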

Restoring accidentally deleted/modified files at CIT

Home directories on the CIT cluster use the ZFS filesystem, which allows for periodic snapshots. This allows you to recover accidentally deleted/modified files as long as the file you want to recover existed when the snapshot was taken.

Here's an example. Let's assume you're a user working in your home directory on the CIT cluster:

$ pwd
/home/albert.einstein/temp
$ ls -l test.file
-rw-------   1 albert.einstein albert.einstein  8416 Feb 18  2020 test.file
$ rm test.file

Oops! You didn't mean to delete test.file.

To find out if you can recover this file, first you need to see what snapshots are available. You can find them by looking at the files in the /snapshot/albert.einstein directory:

$ ls /snapshot/albert.einstein
autosnap_2021-10-01_00:00:01_monthly/  autosnap_2021-12-07_00:42:36_weekly/  autosnap_2021-12-07_15:25:30_hourly/  autosnap_2021-12-08_05:56:38_hourly/
autosnap_2021-11-01_19:27:51_monthly/  autosnap_2021-12-07_02:43:53_hourly/  autosnap_2021-12-07_16:16:33_hourly/  autosnap_2021-12-08_07:03:20_hourly/
autosnap_2021-11-15_23:30:55_weekly/   autosnap_2021-12-07_03:32:00_hourly/  autosnap_2021-12-07_17:35:32_hourly/  autosnap_2021-12-08_08:34:30_hourly/
autosnap_2021-11-16_16:17:41_monthly/  autosnap_2021-12-07_04:18:34_hourly/  autosnap_2021-12-07_18:42:37_hourly/  autosnap_2021-12-08_09:22:19_hourly/
autosnap_2021-11-16_16:17:41_weekly/   autosnap_2021-12-07_05:33:12_hourly/  autosnap_2021-12-07_19:06:09_hourly/  autosnap_2021-12-08_10:32:40_hourly/
autosnap_2021-11-22_23:48:55_weekly/   autosnap_2021-12-07_06:06:00_hourly/  autosnap_2021-12-07_20:38:54_hourly/  autosnap_2021-12-08_11:52:12_hourly/
autosnap_2021-11-29_23:43:30_weekly/   autosnap_2021-12-07_07:42:33_hourly/  autosnap_2021-12-07_21:05:52_hourly/  autosnap_2021-12-08_13:02:57_hourly/
autosnap_2021-12-01_00:24:05_monthly/  autosnap_2021-12-07_08:07:01_hourly/  autosnap_2021-12-07_22:31:05_hourly/  autosnap_2021-12-08_14:48:35_hourly/
autosnap_2021-12-02_00:17:00_daily/    autosnap_2021-12-07_09:45:35_hourly/  autosnap_2021-12-07_23:47:03_hourly/  autosnap_2021-12-08_16:08:23_hourly/
autosnap_2021-12-03_00:11:47_daily/    autosnap_2021-12-07_10:15:41_hourly/  autosnap_2021-12-08_00:21:21_daily/   autosnap_2021-12-08_17:28:41_hourly/
autosnap_2021-12-04_00:04:44_daily/    autosnap_2021-12-07_11:47:58_hourly/  autosnap_2021-12-08_00:21:21_hourly/  autosnap_2021-12-08_18:28:45_hourly/
autosnap_2021-12-05_00:06:23_daily/    autosnap_2021-12-07_12:25:30_hourly/  autosnap_2021-12-08_02:14:08_hourly/  autosnap_2021-12-08_19:39:26_hourly/
autosnap_2021-12-06_00:12:35_daily/    autosnap_2021-12-07_13:11:19_hourly/  autosnap_2021-12-08_03:54:22_hourly/
autosnap_2021-12-07_00:42:36_daily/    autosnap_2021-12-07_14:44:16_hourly/  autosnap_2021-12-08_04:39:22_hourly/

As you can see, the snapshots are labeled with their date and time (in yyyy-mm-dd_hh:mm:ss format) and their frequency (hourly, daily, weekly, or monthly). To see whether a particular file exists in a snapshot, you can simply ls inside that snapshot. Note, however, that /snapshot/albert.einstein/<snapshot> is the root of that snapshot, i.e. a picture of what /home/albert.einstein looked like at the time the snapshot was taken. Therefore, you'll need to look down the path where the file of interest lived (in this example, temp/):

$ ls -l /snapshot/albert.einstein/autosnap_2021-12-08_11:52:12_hourly/temp/test.file
-rw------- 1 albert.einstein albert.einstein 8416 Feb 18  2020 /snapshot/albert.einstein/autosnap_2021-12-08_11:52:12_hourly/temp/test.file

You can view/open any of the files in a snapshot just as you would with the original file, so you can check that the file is the version you want. However, please note that the snapshots are read only, so you cannot modify the file inside the snapshot.

Once you've found the version of the file you want to restore, copy it back to your home directory with cp:

$ cp -ip /snapshot/albert.einstein/autosnap_2021-12-08_11:52:12_hourly/temp/test.file /home/albert.einstein/temp
$ ls -l test.file
-rw------- 1 albert.einstein albert.einstein 8416 Feb 18  2020 test.file

The -p option preserves the ownership and timestamps of the file (if that's what you want), while -i prompts before overwriting an existing file.

If you want to restore an entire directory tree, that is also possible; just use something like

$ cp -ipr /snapshot/albert.einstein/autosnap_2021-12-08_11:52:12_hourly/temp /home/albert.einstein

to restore the entire temp directory, where the -r option performs a recursive copy of the whole tree.