How do I find my MAC address?

A MAC address, or hardware address, is made up of 6 octets (groups of 2 hexadecimal digits), e.g. a1:b2:c3:d4:e5:f6:

Linux:ifconfig - Look for the Ethernet interface, usually eth0 or em1, and the address next to HWaddr.
OS X:System Preferences... > Network > Select interface > Advanced... > Hardware > MAC Address
Windows:Run... > cmd > ipconfig /all
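On newer Linux distributions ifconfig may not be installed by default; the ip command from the iproute2 suite shows the same information:

```shell
# List all network interfaces; the MAC address of each Ethernet
# interface appears after "link/ether".
ip link show
```

You can also restrict the output to a single interface, e.g. ip link show eth0.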

How do I log in to a remote server via SSH?

First you must have an SSH client:


Linux:An SSH client and X server are generally installed by default. All you have to do is open a terminal emulator (e.g. GNOME Terminal) and away you go.


OS X:An SSH client is installed by default, but you might want to install XQuartz (an X server for Mac) so that you can forward X sessions, i.e. be able to view GUI apps. To use the SSH client, start the Terminal app.


Windows:You will need to download and install an SSH client like PuTTY and an X server like Xming:

  • Get the "Windows installer for everything except PuTTYtel" package of PuTTY.
  • Get the Public Domain Release of Xming, as it doesn’t require a donation.

Then to login via SSH, you need the following information:

  1. Username of the account you want to log in to.
  2. Password of that account.
  3. Hostname of the remote server.

In the terminal of a UNIX-based machine (i.e. Linux or OS X), enter the following command:

ssh -X <username>@<hostname>


-X is an option that allows the forwarding of X sessions.

Example:ssh -X ab123@apollo.hpc.sussex.ac.uk

How do I configure SSH for passwordless login?

First you have to be on the machine you want to log in from (the client machine), e.g. your laptop. Then enter the command:

ssh-keygen -t rsa
  1. Press ENTER to select the default file to save the key.
  2. For passwordless login, do not enter a passphrase.
  3. Take note of the file name of the public key.

Then copy the content of the public key file you generated on the client machine and paste it into the ~/.ssh/authorized_keys file on the remote server. You can also do this with one line of commands from the client machine [1]:

cat ~/.ssh/id_rsa.pub | ssh <username>@<hostname> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

Alternatively, if the ssh-copy-id utility is available on the client machine, it does the same job:

ssh-copy-id -i ~/.ssh/id_rsa.pub <hostname>

Now you can SSH into the remote host without having to enter a password!

How do I use screen to keep my terminal session even after I log out?

Screen is a terminal multiplexer. When you start screen, it spawns a virtual terminal running an application, such as a Bash shell.

You can detach screen, sending its process to the background, which allows you to log out of your parent terminal, say, when you leave the office. You can then reattach screen at any time, say, once you get home, and pick up your shell session where you left off! Neat!

To start a screen session do:

screen

It will spawn a new virtual terminal with your default shell in it. Use it like any normal shell instance.

CTRL + A D:Detach screen. Whilst holding down CTRL, hit A and release, followed by D.

You can reattach with:

screen -r


Although you can start multiple screen sessions, this is not recommended, as it becomes difficult to tell which screen to reattach from its process ID alone (screen -ls lists the sessions).

CTRL + A C:Open another window in the same screen session.
CTRL + A ":See a list of screen windows and select which one you want to switch to. Use arrow keys to navigate and ENTER to select.
CTRL + A <#>:Jump straight to the window you want if you know its number.
CTRL + D:Exit shell normally to end screen session.

How do I use Bash startup files?

The Bash shell uses several startup files, whose commands are read and executed in order. There are different startup files for login and non-login shells, and each shell (e.g. Bash, Tcsh) has its own set.

From the Bash man pages, man bash:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the files ~/.bash_logout and /etc/bash.bash_logout, if the files exist.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.

You can customize environment variables or even put Module commands in startup files, for example:

export LD_LIBRARY_PATH="/research/astro/eor/apps/fftw/2.1.5/lib:$LD_LIBRARY_PATH"
export PS1="[\t \u@\h:\w]$"
module load mps/software

How do I set environment variables?

Environment variables control the way processes behave on a computer. You can view all the variables set in your Bash session with the command env. The output uses the following format:

VARIABLE_NAME=value

These variables are used by various processes, for example:

PATH:Paths where the shell checks for application executables.
LD_LIBRARY_PATH:Paths where the dynamic linker checks for shared libraries.
MANPATH:Paths where the shell checks for man pages.
PS1:Information to display at command prompt.

Variables can be set simply with VARIABLE_NAME="value" and made available to child processes with export VARIABLE_NAME. You can also do both in one line with export VARIABLE_NAME="value".
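As a minimal illustration (the variable names here are made up for the example):

```shell
GREETING="hello"            # set a shell variable (local to this shell)
export GREETING             # export it so child processes inherit it
export FAREWELL="goodbye"   # set and export in one line
sh -c 'echo "$GREETING $FAREWELL"'   # a child shell sees both variables
```

The final command prints hello goodbye, confirming that the child shell inherited both exported variables.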

Often a path needs to be prefixed (or suffixed) to an existing PATH variable; this can be done like so:

export PATH="/local/bin:$PATH"


Environment Module is a convenient way to manage environment variables that need to be set for custom applications, see more about Module in How do I create modulefiles?

How do I send the output of a command to a file?

<command> > stdout.log 2> stderr.log

This redirects standard output to stdout.log and standard error to stderr.log. To send both streams to the same file:

<command> > all.log 2>&1


You can view the output in real time by doing tail -f <filename> [1] .
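To see the redirections in action, here is a toy command (the function name emit is made up for this example) that writes to both streams:

```shell
# emit writes one line to stdout and one to stderr
emit() { echo "to stdout"; echo "to stderr" >&2; }

emit > stdout.log 2> stderr.log   # split the streams into two files
emit > all.log 2>&1               # send both streams to one file

cat stderr.log                    # prints "to stderr"
```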

How do permissions work on UNIX/Linux?

The output of ls -l gives something like this:

$ ls -l .bashrc
-rw-r--r--. 1 noob noob 1644 Jul 18 16:24 .bashrc

The first part of the output (-rw-r--r--.) represents the file permissions: the first character is the file type (- for a regular file, d for a directory), followed by three rwx (read, write, execute) triplets for the user (owner), the group, and others.
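A quick demonstration with a scratch file (demo.txt is just an example name):

```shell
touch demo.txt
chmod 644 demo.txt    # octal form: rw- for the owner, r-- for group and others
ls -l demo.txt        # first column shows -rw-r--r--
chmod u+x demo.txt    # symbolic form: add execute permission for the owner
ls -l demo.txt        # first column now shows -rwxr--r--
```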

How do I compile software on UNIX/Linux?

This is obviously a big question, and it’s certainly not possible to cover all the nuances in this FAQ. But at least I hope to give you some tips that can help you on your way.


This guidance is based on the standard followed by most respectable open-source projects. However, source code does not always come in nicely packaged tarballs with a complete configure script, or any configure script at all for that matter!

This is especially true for scientific software, where even up-to-date or complete documentation is often hard to find. Beware that you may find conflicting or obsolete information, and you must use your best intuition and attention to detail to navigate through it. Good luck!

Extract archive

Firstly, I will only refer to source code that has been packaged in the standard convention of open-source projects. These archives usually come in tar gzip format (.tar.gz) and sometimes tar bzip2 format (.tar.bz2), or both. To unpack the archive, do [1]:

tar xzvf <file>.tar.gz


tar xjvf <file>.tar.bz2 -C /path/to/extracted/files

This will unpack the files into your current working directory and, if it’s been packaged properly, the contents are usually in a subdirectory with the same name as the original tar file, obviously without the suffixes.

If for some reason the files were packaged without a subdirectory, you can use the -C option after the filename to specify where you would like the files to be extracted, as in the second example.


Check man tar to see all the available options.

Run configure script

Now you should have the source code in a structure ready for you to begin the configure process.

The very first thing to do is read the INSTALL and README files; these usually contain useful information and even instructions on how to build the software:

less README INSTALL

Once you have read everything, it is also worth searching online for more information that might help you compile the source code successfully. In particular, the project's home page may contain documentation, and a web search may turn up other people's experiences or issues encountered during compilation. Some of the information you find online could prove crucial.

Properly packaged source code usually comes with a configure script. This is used to prepare the code to be compiled on your current system, for example by specifying compiler options and dependent libraries. For most projects the script also accepts the --help option, which displays all the possible options you can pass to it. Execute the script as follows:

./configure --help

With the information you find in the INSTALL text, various online sources and the ./configure --help output, you should be able to construct the options you want to configure your software with. In normal circumstances there is usually just one option you need to worry about:

./configure --prefix=/path/to/your/new/app


After this you will want to run make, then make install. If you are very confident, you can also do it all in one line, like:

./configure --prefix=/path/to/your/new/app && make && make install


When dealing with projects in the early stages of development, where there may not be a configure script, you may be required to edit the Makefile directly before you run make.
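The variables you typically need to adjust sit near the top of the Makefile. A hypothetical fragment (the names differ from project to project, so check the Makefile's own comments):

```make
# Illustrative only -- these variable names vary between projects
CC      = gcc                     # compiler to use
CFLAGS  = -O2 -Wall               # optimisation and warning flags
PREFIX  = /path/to/your/new/app   # install location, akin to --prefix
```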


Check to make sure your compile succeeded without any errors, see How do I send the output of a command to a file?

Set environment variables

Once you are happy, you can configure your environment variables to make your software readily available to use, for example:

export PATH="/path/to/your/new/app/bin:$PATH"
export LD_LIBRARY_PATH="/path/to/your/new/app/lib:$LD_LIBRARY_PATH"
export MANPATH="/path/to/your/new/app/share/man:$MANPATH"

The above could be put in a file which you can source <new_app_env> [1] whenever you want to use the application.


The variables suggested above are not always required:

  • Some packages do not produce an executable, in which case the PATH entry is unnecessary, and similarly for libraries and LD_LIBRARY_PATH.
  • Some paths should not take precedence over system paths so may need to be placed after existing paths as a suffix.
  • There may be other variables required to use the software.

This is why it is very important that you read the documentation that is available or that you can find.


A better way of managing environment variables for custom built applications, as opposed to distro managed packages, is to use Environment Module, see How do I create modulefiles? This is especially true on an HPC cluster.

How do I create modulefiles?

Private Modules

Module is used to configure user shell environments. Specifically, it provides a convenient way to modify environment variables on the fly and works well with the scheduler’s job submission script on the HPC cluster.

Module comes as standard with a modulefile called use.own which can be loaded with the command module load use.own. This appends the path ~/privatemodules to your $MODULEPATH variable, which is used to locate modulefiles. Now you can create and load your own modulefiles by putting them in that path.

Here is an example of a modulefile for an application called CMake:

#%Module -*- tcl -*-
## cmake/<version>
proc ModulesHelp { } {
  puts stderr "\tCMake $version has been added to your environment"
}

module-whatis "Adds CMake to your environment

  CMake, the cross-platform, open-source build system. CMake is a family of
  tools designed to build, test and package software. CMake is used to control
  the software compilation process using simple platform and compiler
  independent configuration files. CMake generates native makefiles and
  workspaces that can be used in the compiler environment of your choice.

  CMake was created by Kitware in response to the need for a powerful,
  cross-platform build environment for open-source projects such as ITK and
  VTK. In addition to leading the development of this popular tool, Kitware
  also offers commercial consulting, support and training to help your
  organization effectively use CMake and the entire Kitware quality software
  process."

set             version         <version>
set             root            /path/to/home/apps/cmake/$version
prepend-path    PATH            $root/bin
prepend-path    MANPATH         $root/man


When an application requires a complex array of environment variables to be configured, i.e. when a script is provided to set up the environment, you can save the current environment to a file with env > env.bef, run the script, then run env > env.aft and compare with diff env.bef env.aft to see what needs to be added to your modulefile.


The convention is to create a subdirectory in your module path named after the application, then use the version number as the filename of the modulefile; deploy your application under a prefix in a similar style:

App prefix:~/apps/cmake/<version>

With this example, when you do module load use.own then module avail, you will see the application appear as cmake/<version>

Shared Modules

In a similar way, you can build and manage a software stack with applications and modulefiles in a collaborative environment. The Research disks are ideal for this, as access to them is controlled by groups. Users can create a modulefile that, like use.own, adds a shared module path, making any modulefile in that path available to use.

Here is an example of such a modulefile [1] :

#%Module -*- tcl -*-
## <group>/software
proc ModulesHelp { } {
  puts stderr "\t<group> software has been added to your Module path"
}

module-whatis "Adds <group> software to your Module path

  The <group> software stack is provided as a separate set of modules that can be
  optionally loaded as a supplement to the primary software stack."

set             root                    /research/<group>/apps
module          use                     $root/etc/modulefiles


The recommended way to make these modules loadable is for users to link to the software stack’s modulefile in their ~/privatemodules directory and load use.own, e.g.

$ ln -sf /path/to/my_module_file ~/privatemodules/my_module_file
$ module load use.own my_module_file


You must ensure that every file has group write access enabled, chmod g+w, or even better chmod ug+rw, so that your group can interact effectively with applications deployed in this environment. We will not set the default file mask for you!

How do I build a custom Python module stack with virtualenv?

Virtualenv itself is a Python module, so it must already be installed in your current Python instance before you can use it. Virtualenv provides a command of the same name; if you enter virtualenv on its own you will see a list of command options and descriptions of what they do.

Due to conflicts that can arise between Python modules, we provide a Python environment Module on the MPS software stack with only the pip and virtualenv modules installed. We will not install other modules, to ensure that the Python instances remain in good working order. Users are encouraged to use virtualenv to build Python module stacks for themselves.

A Python virtualenv can be shared, and there are neat ways to rebuild it if needed, making it portable.

Firstly, load an environment Module for a Python instance that has virtualenv, for example:

module load mps/software python/2.7.8

Then create your virtualenv [1] :

virtualenv <my_virtualenv>

This will create a subdirectory in your current working directory called <my_virtualenv> [1] with an independent instance of Python.


It is possible to use the -p option of virtualenv to specify the path to a Python interpreter/executable to use when creating a new virtualenv. This could be a different version of Python from a separate deployment.

To activate your new virtualenv, do:

source <path_to>/<my_virtualenv>/bin/activate

A prefix should appear on your command prompt, like (my_virtualenv). Now if you do which python it should point to the Python instance in my_virtualenv.


If you want to use virtualenv on the cluster, you have to source the activate script in your Grid Engine submission script.

To leave my_virtualenv, do:

deactivate

Now you have a way to activate and deactivate any virtualenv on the fly.

You can use pip to install/manage modules in my_virtualenv and this won’t affect other Python instances on the system:

pip search matplotlib
pip install matplotlib

To use virtualenv in a collaborative environment, you have to deploy it on a shared filesystem, e.g. /research/<group>/<project>/apps/<my_virtualenv>, ensuring that the permissions allow the intended users to read all the files in the virtualenv:

chgrp -R <group> /research/<group>/<project>/apps/<my_virtualenv>


The filesystems in the /research area have been set up such that each group has a directory, with project directories created under it. The group directory is owned by a corresponding POSIX group and has the inheritance flag (S) set in its group permissions, so any files created under the group directory will also belong to the same POSIX group.

In that case you don't have to run chgrp -R.
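You can see the setgid behaviour on any directory; this sketch uses your own primary group in place of a research group (shared_dir is just an example name):

```shell
mkdir shared_dir                 # a scratch directory for the demonstration
chgrp "$(id -gn)" shared_dir     # give it a group you belong to
chmod g+s shared_dir             # set the setgid flag on the directory
touch shared_dir/newfile         # newfile inherits the directory's group
ls -ld shared_dir                # the group triplet shows "s", e.g. drwxr-sr-x
```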

To save a list of currently installed modules and restore them later, do:

pip freeze > requirements.txt
pip install -r requirements.txt


[1] Replace the text within and including angle brackets, like <this>, with a relevant entry.