B Admin Reference
B.0.1 Software Installations and Changes
This page is intended to document all of the software installations that have been done so that we can redo them if ever necessary. When something is installed on the cluster, you should document on this page what it is, where it is installed, and how you installed it. Note that small things like Python packages do not need to be documented here.
B.0.2 General Notes
When you install something, please download the compressed file (.tar.gz/.zip/.bz2, etc.) to /local/downloads/. Please leave the uncompressed source code there too. Do not remove either of those after installing the software.
B.0.3 SQLite3
We have (and must have) a slightly non-standard installation of SQLite. This just has to do with where everything is installed on our cluster’s parallel filesystem. Because of this, we need to install sqlite3 from source instead of using yum install sqlite3. To do this, I essentially followed the tutorial here. All of the dependencies needed to build sqlite3 should already be installed. The exact commands used (which can be rerun verbatim if needed) are below. Make sure you install Python after this, because its build uses header files that are only included with this distribution of SQLite3.
cd /local/downloads/
wget https://www.sqlite.org/2018/sqlite-autoconf-3220000.tar.gz
tar xf ./sqlite-autoconf-3220000.tar.gz
cd ./sqlite-autoconf-3220000
./configure --prefix=/local/cluster --disable-static --enable-fts5 --enable-json1 CFLAGS="-g -O2 -DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS4=1 -DSQLITE_ENABLE_RTREE=1"
make
make install
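As a quick sanity check that the build landed under the right prefix, the new binary should report version 3.22.0 (this assumes the default bin/ layout under the configure prefix above):
/local/cluster/bin/sqlite3 --version    # should print 3.22.0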
B.0.4 A Note On Python
Installing, configuring, and managing Python on a cluster like ours is frankly a mess. We have at least 5 different Python interpreters installed, all of which have their own packages and package managers. If you change the configuration of any of them, please be very careful. Make sure you’re using the right pip by either calling the fully-qualified path (e.g. /local/cluster/bin/pip) or by running which pip and ensuring that the selected one is in /local/cluster/bin/. Please do not install packages or make changes to the Python in /usr/bin/python; that Python is important for the yum package manager.
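For example, a quick way to double-check which pip you are about to use (and which interpreter it belongs to) before installing anything:
which pip                          # should print /local/cluster/bin/pip
/local/cluster/bin/pip --version   # reports which Python it belongs to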
B.0.5 Python36
Because of the way they are bundled, Python 3 must be installed after sqlite3 is installed. It is installed from source using the following commands, which were adapted from the same tutorial we used for SQLite3 (available here).
The exact commands I ran are below:
cd /local/downloads/
wget https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tgz
tar xf ./Python-3.6.4.tgz
cd ./Python-3.6.4
LD_RUN_PATH=/local/cluster/lib/ LD_LIBRARY_PATH=/local/cluster/lib/ ./configure --prefix=/local/cluster LDFLAGS="-L/local/cluster/lib" CPPFLAGS="-I /local/cluster/include"
LD_RUN_PATH=/local/cluster/lib/ LD_LIBRARY_PATH=/local/cluster/lib/ make
LD_RUN_PATH=/local/cluster/lib/ LD_LIBRARY_PATH=/local/cluster/lib/ make test
LD_RUN_PATH=/local/cluster/lib/ LD_LIBRARY_PATH=/local/cluster/lib/ make install
Python 2 can be installed essentially identically, except with different version numbers on the things you download.
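To confirm that the new interpreter actually linked against our SQLite build rather than the system one, a quick check like this should print the 3.22.0 installed above (assuming the binary ended up at /local/cluster/bin/python3):
/local/cluster/bin/python3 -c "import sqlite3; print(sqlite3.sqlite_version)"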
B.0.6 JupyterHub Notebook Server
JupyterHub is the program we use to serve the multi-user notebook server located at mayo.blt.lclark.edu:8000. Installation of JupyterHub is easy: you can install it with sudo /local/cluster/bin/pip3 install jupyterhub.
If JupyterHub is down, run this sequence of commands:
- sign in as root
- start a new screen session by running screen -S jupyterhub
- run PATH=/local/cluster/bin/:$PATH; cd /local/cluster/jupyterhub-runtime/jupyterhub_run/ && ./jupyterhub_start.sh
- after it starts running, detach from the screen by typing ctrl-A and then d
Configuration is in the file /local/cluster/jupyterhub-runtime/jupyterhub_config.py
I needed to make an IPTables entry allowing traffic in and out on port 8000. If you need to change the port, remember to make a new iptables entry.
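For reference, the rule was along these lines (a sketch rather than the exact saved rule; adjust the port number if it ever changes):
iptables -I INPUT -p tcp --dport 8000 -j ACCEPT    # allow incoming connections to JupyterHub
service iptables save                              # persist the rule (if the iptables service manages rules here; otherwise use iptables-save)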
Jupyterhub example user password (for use case demonstrations): user:example, pass:processtruster76
B.0.7 Apache Httpd Web Server
We installed apache2 with the standard sudo yum install httpd. Apache2 is currently serving from its default server root at /var/www/html. PHP v5.4.16 was also installed, and has its configuration at /etc/php.ini.
I needed to make an IPTables entry allowing traffic in and out on port 80. If you need to change the port, remember to make a new iptables entry.
B.0.8 OwnCloud
OwnCloud is essentially a Google Drive clone that runs on our cluster. We automatically create an OwnCloud account for each user upon user creation, and we also symlink their file dropbox into their home directory. For help with installation, follow the tutorial here.
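The per-user symlink step looks roughly like the following; note that the OwnCloud data path and link name below are hypothetical placeholders, so check them against the actual account-creation script before relying on this:
user=<USERNAME>
ln -s /var/www/html/owncloud/data/"$user"/files /home/"$user"/owncloud    # hypothetical paths, verify before use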
B.0.9 Dropbox CLI
Follow the page here. Our install location is non-standard: /local/cluster/dropbox_dist.
B.0.10 Shellinabox
Shellinabox is a browser-based way to access the command line on mayo. Once signed in to the VPN, shellinabox is accessible at http://mayo.blt.lclark.edu/shell/. The installation is at /local/cluster/shellinabox. If it goes down, navigate to that directory and run the startup script: ./shellinabox
B.0.11 ProtTest
ProtTest is a bioinformatic tool for the selection of best-fit models of amino acid replacement for the data at hand. ProtTest makes this selection by finding the model in the candidate list with the smallest Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), or Decision Theory Criterion (DT) score. At the same time, ProtTest obtains model-averaged estimates of different parameters (including a model-averaged phylogenetic tree) and calculates their importance (Posada and Buckley 2004). ProtTest differs from its nucleotide analog jModelTest (Posada 2008) in that it does not include likelihood ratio tests, as not all models included in ProtTest are nested. It is written in Java with MPJ, and it has a fairly strange installation. It is installed to /local/cluster/prottest3 and needs to be run from there. People should not put data files in that directory. It can be run something like this:
cd $PROTTEST_HOME
java -jar prottest-3.4.2.jar -i examples/COX2_PF0016/alignment -all-matrices -all-distributions -threads 2
B.0.12 MrBayes
MrBayes is a program used for Bayesian inference of phylogeny. It’s installed normally. Instructions for building from source are here.
B.0.13 PAML
PAML needs to be installed on a per-user basis: it needs to be copied into the user’s home directory, and its internal bin directory needs to be added to the user’s PATH. Its compiled executables are in /local/downloads/paml4.9g.
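Concretely, the per-user setup looks something like this (a sketch; it assumes the copied directory keeps the paml4.9g name and that the user’s shell reads ~/.bashrc):
cp -r /local/downloads/paml4.9g ~/                            # copy PAML into the user's home directory
echo 'export PATH="$HOME/paml4.9g/bin:$PATH"' >> ~/.bashrc    # put its bin directory on the user's PATH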
B.0.14 MODELLER
MODELLER is a program for protein structure modeling. It is a special Python interpreter, installed normally in /local/cluster/bin/.
B.0.15 STAR
STAR (Spliced Transcripts Alignment to a Reference) is a bioinformatics tool to align large ENCODE Transcriptome RNA-seq datasets. Its executables (STAR and STARlong) are in /local/cluster/bin. Background on STAR can be found here, and full docs are here.
B.0.16 RAxML
RAxML is a maximum likelihood phylogenetic bioinformatic tool. It’s installed normally; however, there are two versions installed. There is an MPI version (suitable for jobs of 48 slots or more) as well as a shared-memory (SMP) version (suitable for 48 slots or fewer). If you are using exactly 48 slots, flip a coin. The MPI version can be run with raxmlHPC-MPI-AVX2 and the SMP version can be run with raxmlHPC-PTHREADS-AVX2.
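A typical SMP run looks something like this (a sketch with placeholder file names; -T sets the thread count, -s the alignment, -m the substitution model, -p the random seed, and -n the run name):
raxmlHPC-PTHREADS-AVX2 -T 8 -s alignment.phy -m PROTGAMMAAUTO -p 12345 -n test_run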
B.0.17 Intel OpenCL Runtime
The Intel OpenCL Runtime is required to run OpenCL programs on Intel CPUs. The latest packages are available here under OpenCL Runtime for Intel Core and Intel Xeon Processors. To install, run the install script and select /local/cluster as the target prefix. The files will be installed in /local/cluster/opt/intel. Be sure to symlink the libraries in /local/cluster/opt/intel/opencl-${version_number}/lib64 to /local/cluster/lib:
find "$(realpath /local/cluster/opt/intel/opencl)/lib64/" -type f -name '*.so*' -print0 \
| xargs -0 ln -s -t /local/cluster/lib
We are explicitly linking to the libraries in the version-specific directory, since the generic symlink requires /etc/alternatives/opencl*, which won’t be recreated on worker nodes.
Setting up OpenCL to run on worker nodes will require some additional work. The OpenCL runtime searches for platform-specific ICDs in /etc/OpenCL/vendors by default. While the documentation states that this search path can be modified with an environment variable, OPENCL_VENDOR_PATH, it seems that not all ICD loader implementations support this. As a temporary workaround, we recreate this directory structure in /local/cluster/etc/opencl_icd_fix:
fixdir='/local/cluster/etc/opencl_icd_fix'
for d in "$fixdir" "${fixdir}/vendors"; do
if ! [ -d "$d" ]; then
mkdir "$d"
fi
done
find "$(realpath /local/cluster/opt/intel/opencl)/etc/" -type f -name '*.icd' -print0 \
| xargs -0 ln -s -t "${fixdir}/vendors"
If OPENCL_VENDOR_PATH worked with the implementation shipped with Intel’s runtime, we could just set it to our fix directory. Unfortunately, it does not, so we need to create the symlink on each worker node:
ln -s /local/cluster/etc/opencl_icd_fix /etc/OpenCL
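Since this has to be repeated on every compute node, a loop along these lines saves some typing (a sketch; it assumes root SSH access to the nodes and that /etc/OpenCL does not already exist on them):
for node in bacon lettuce tomato sprouts; do
    ssh "$node" 'ln -s /local/cluster/etc/opencl_icd_fix /etc/OpenCL'
done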
B.0.18 Hashcat
Hashcat is a suite of password-cracking tools. It requires OpenCL-supported devices along with the appropriate OpenCL runtimes and drivers. The latest release is available here. To install hashcat, first download the source tarball and untar it. Since hashcat lacks a configure script, we’ll need to edit the Makefile manually: find the definition of PREFIX in the Makefile and define it as /local/cluster. Then run make and make install.
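Put together, the steps look roughly like this (a sketch; the version number is a placeholder and the sed edit assumes the Makefile’s usual PREFIX ?= line, so double-check it against the actual Makefile):
cd /local/downloads/
tar xf ./hashcat-<VERSION>.tar.gz
cd ./hashcat-<VERSION>
sed -i 's|^PREFIX ?= .*|PREFIX ?= /local/cluster|' Makefile    # point the install prefix at /local/cluster
make
make install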
B.0.19 Set up procedure for classes/projects
Classes and projects often require shared materials (files/folders/etc.). We are able to do this with UNIX groups. To make a UNIX group for a class, first ensure that all students who need to be in the group have user accounts on the system.
Then, create the group:
groupadd <GROUP NAME>
Create a shared folder in /home/data, and set its owner and permissions accordingly:
mkdir /home/data/<GROUP NAME>
chown <OWNER>:<GROUP NAME> /home/data/<GROUP NAME>
chmod 755 /home/data/<GROUP NAME>
where <OWNER> is the username of the group owner (usually the professor).
Now, add all the students to the group (you can do this individually as follows, or automate it as shown after that):
usermod -a -G <GROUP NAME> <STUDENT USERNAME>
To do this automatically, do the following:
First, create a file which contains a student’s username on each line, like so:
jimmy
timmy
tommy
terry
jerry
Save this file as users.txt
.
Then, run the following script, changing the group name as necessary:
gn=<GROUP NAME>
while read p; do
usermod -a -G $gn $p
done <users.txt
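To confirm the memberships took effect, list the group afterwards (students will only see the new group in fresh login sessions):
getent group <GROUP NAME>    # prints the group entry and its member list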
B.0.20 Troubleshooting
Upon running qstat -f, if you see a node with status “au” (alarm, unreachable), there are a couple of things to check:
* do the nodes need remounting?
* is SGE running on the nodes?
We encountered this in July 2019, and here was the fix, after SSH-ing into Mayo:
sudo su root
ssh tomato
cd /local
ls -la
If /local is empty, then run:
mount -a
Then, still on the same node:
cd /etc/init.d
./sgeexecd.BLT start
This should start SGE on the node. Repeat this process for each node that is down.
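If several nodes are down at once, the same check-and-restart can be scripted from mayo, roughly like this (a sketch; it assumes root SSH access to the nodes and that the mountpoint utility is available on them):
for node in bacon lettuce tomato sprouts; do
    ssh "$node" 'mountpoint -q /local || mount -a; /etc/init.d/sgeexecd.BLT start'
done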
If you’re unable to SSH to a node (this happened in July 2021), you may need to check the internal networking to make sure the nodes are mapped correctly:
Check /etc/hosts to ensure that there’s an entry for each node. It should look like this:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.1 mayo.blt.lclark.local mayo
192.168.0.101 bacon.blt.lclark.local bacon
192.168.0.102 lettuce.blt.lclark.local lettuce
192.168.0.103 tomato.blt.lclark.local tomato
192.168.0.104 sprouts.blt.lclark.local sprouts
192.168.0.2 bread.blt.lclark.local bread
If an entry for any of the above nodes is missing, check /etc/dhcp/dhcpd.conf to ensure you have the correct IP. While this seems crazy, we lost both sprouts and bread after a reboot in July 2021.
This should allow you to SSH to a given node. Follow the instructions above to make sure it’s mounted correctly (check /local, etc.). For any compute node (bacon, lettuce, tomato, sprouts), you may need to restart Sun Grid Engine. To do this, SSH to the node and navigate to /etc/init.d/. Then run the following:
./sgeexecd.BLT stop
./sgeexecd.BLT start
Go back to mayo and run qstat -f. If all goes well, the affected node should no longer have the ‘au’ flag.