Using GPU server (Spring 2016)

Log on to

ssh [user]
([user] is your SFSU login, with the extension .csc656; hence, if your SFSU login is jrotten, your tiger login is jrotten.csc656)

Run the provided bash script to set up your paths:

source ~whsu/lees.bash_profile

Navigate to the directory where your code and data live. Suppose your CUDA source file is sr.cu. Compile it ([other flags] are additional flags that you may need):

nvcc -o sr sr.cu [other flags]
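The sum-rows code used in class is not reproduced here; the following is only a rough sketch of what a sum-rows program taking a matrix size and a threads-per-block argument (as in the ./sr 4096 4 example below) might look like. All names and details are assumptions, not the actual class code.

```cuda
#include <cstdio>
#include <cstdlib>

// Hypothetical sum-rows kernel: each thread sums one row of an
// n x n matrix stored in row-major order.
__global__ void sum_rows(const float *a, float *sums, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        float s = 0.0f;
        for (int j = 0; j < n; j++)
            s += a[row * n + j];
        sums[row] = s;
    }
}

int main(int argc, char **argv) {
    int n = (argc > 1) ? atoi(argv[1]) : 1024;     // matrix dimension
    int threads = (argc > 2) ? atoi(argv[2]) : 4;  // threads per block
    size_t bytes = (size_t)n * n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_s = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n * n; i++) h_a[i] = 1.0f; // every row sums to n

    float *d_a, *d_s;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_s, n * sizeof(float));
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);

    int blocks = (n + threads - 1) / threads;      // enough blocks to cover all rows
    sum_rows<<<blocks, threads>>>(d_a, d_s, n);
    cudaMemcpy(h_s, d_s, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Anything printed here ends up in the qsub output file.
    printf("row 0 sum = %f\n", h_s[0]);

    cudaFree(d_a); cudaFree(d_s); free(h_a); free(h_s);
    return 0;
}
```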

You should be able to submit a job to one of the GPUs. Look in my home directory for ~whsu/bin/c2050.qsub (for the Tesla C2050) and ~whsu/bin/titan.qsub (for the Titan). After making copies in your home directory, you have to edit one line in the *.qsub file; replace the executable named on the EXE= line with the executable that you'd like to run. If there are additional command-line arguments, enclose the whole command in double quotes.
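The provided *.qsub scripts are not reproduced here; a minimal PBS script in the same spirit might look roughly like the sketch below. The directive values (job name, queue name) are assumptions for illustration only; use the copies from ~whsu/bin and edit just the EXE= line.

```shell
#!/bin/bash
#PBS -N titanJob    # job name shown by qstat (assumed)
#PBS -q titan       # queue for the Titan GPU (assumed)

# Edit this line: the executable, plus any command-line arguments,
# enclosed in double quotes.
EXE="./sr 4096 4"

cd $PBS_O_WORKDIR   # run from the directory where qsub was invoked
$EXE
```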

Suppose I'd like to run the sum-rows kernel sr on titan, with a 4096 x 4096 matrix and 4 threads per block. I make a copy of titan.qsub and edit the line specifying the executable:

EXE="./sr 4096 4"

Then in the directory where sr lives, I type

qsub titan.qsub

A string is printed, telling me my job number. For example:

136.tiger
To check the status of jobs in the queue, type:

qstat
You'll get a few lines of information, like this (for job ID 136):

Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
136.tiger titanJob whsu 0 R titan

The "R" means your job is running. When it's done, the R will turn into a "C" (for complete).

When your job is complete, an output file will appear in your directory; by PBS convention it is named after the job and its numeric ID (for job 136 above, titanJob.o136).

This text file contains a dump of stdout from the executable you just ran.

If your job hangs, you may have to delete it with the qdel command, giving the job number (e.g., qdel 136).

More information is available in the man pages for qsub, qstat, and qdel.

Good luck!