
MATLAB

Overview

Oakland University has obtained a campus-wide license (CWL) for MATLAB. This permits any user to run MATLAB on Matilda without special permission or license files. This page describes some of the ways in which MATLAB can be run on the cluster.

Initial Cluster Configuration

In order to use parallel worker and batch modes, you must import the cluster configuration settings into your MATLAB profile the first time you run MATLAB.

Please use the following:

module load MATLAB
matlab -nodisplay -nosplash
>>configCluster

Note that running the "configCluster" MATLAB command more than once will erase and regenerate your cluster profile information, so it is only necessary to run it once.
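To confirm that the profile was imported, you can list the available cluster profiles from the MATLAB prompt (a quick check; the exact profile name, e.g. "Matilda R2022a", depends on the MATLAB version installed):

>>parallel.clusterProfiles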

Setting Default Walltime

IMPORTANT: Since the SLURM upgrade on August 24-25, 2022, users must specify a walltime or their jobs will die in 1:00 (the default). If you are spawning MATLAB parallel worker jobs (see the Multi-Processor Multi-Node example below), you will need to set a default walltime for worker jobs. To accomplish this, perform the following steps:

1. Establish an X-Windows session with the cluster:

  • ssh -X [email protected]

2. Load the MATLAB modulefile:

  • module load MATLAB/R2022a

3. Launch MATLAB in GUI mode:

  • matlab

4. When the GUI window opens, select the dropdown for "Parallel" under "Environment" and choose "Parallel Preferences"

5. You should see "Parallel Computing Toolbox" selected in the left pane. On the right pane, click the link for "Cluster Profile Manager"

6. For "Cluster Profile" (left pane), select the "Matilda R2022a" profile (or most recent)

7. In the right pane, scroll down to the "Scheduler Plugin" section and click the "Edit" button (bottom right)

8. Under "Scheduler Plugin", scroll through the window that looks like a spreadsheet until you see an entry for "WallTime"

9. In the "Value" section next to "WallTime", enter a default value (e.g. 7-00:00:00)

10. Click the "Done" button (lower right)

11. Exit MATLAB

The above steps update the settings in your ~/.matlab directory. A default walltime will now be applied to any parallel worker jobs spawned using parcluster or parfor.

Alternate Method

Alternatively, you can "shape" the inner job parameters sent to SLURM by placing something like the following at the beginning of your MATLAB script:

c=parcluster;
c.AdditionalProperties.WallTime='7-00:00:00';
p=c.parpool(50)

In the example above, we obtain a handle "c" to the cluster (parcluster) and set the desired walltime on it. We then spawn the desired pool of workers with "c.parpool(50)", keeping the handle "p" for later reference.
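As a quick illustration, a parfor loop placed after the pool is created will distribute its iterations across the 50 workers (a minimal sketch; the loop body is arbitrary):

parfor i = 1:500
    a(i) = max(abs(eig(rand(200))));   % iterations run concurrently on the pool workers
end
p.delete                               % release the workers when finished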

Interactively Scheduled

A scheduled interactive job can be run on one of the cluster nodes using something like the following:

srun -N 1 -c 1 -t 30:00 --pty /bin/bash --login

Once the session has started, simply load MATLAB and launch the application:

module load MATLAB
matlab -nodisplay -nosplash

Please see our documentation for more information on scheduling interactive jobs.

Interactive Parallel

You can start an interactive MATLAB job and specify the total number of parallel workers to use during your run. This can be accomplished using something like the following:

module load MATLAB
matlab -nodisplay
>>c=parcluster;
>>p=c.parpool(40)

The command sequence above provides a handle to the cluster resource (parcluster), and then allocates 40 workers (parpool). These workers can run on a single node or across multiple nodes depending on the request and cluster scheduling. From here, you can execute MATLAB commands or scripts using a default set of cluster parameters.
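With the pool open, you can also submit individual function calls to the workers with parfeval (a minimal sketch; "magic" is just an arbitrary built-in function):

>>f=parfeval(@magic,1,500);   % run magic(500) on a worker, requesting 1 output
>>M=fetchOutputs(f);          % block until the result is available and collect it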

To see the default cluster parameters use the following command:

>>c.AdditionalProperties

Various other parameters can be added or modified such as walltime, memory usage, GPUs, etc. For example:

>>c.AdditionalProperties.WallTime = '5:00:00';
>>c.AdditionalProperties.MemUsage = '4000';
>>c.AdditionalProperties.GpusPerNode = 1;

When finished, you can terminate this job using:

>>p.delete

Asynchronous Interactive Batch Jobs

It is also possible to submit asynchronous batch jobs to the cluster through the MATLAB interactive interface. This can be a handy way to submit a series of jobs to Matilda without having to create independent job scripts for each analysis task. MATLAB can be run from the login node or a scheduled interactive session and the jobs submitted as shown below.

The "batch" command will return a job object which is used to access the output of the submitted job. For example:

module load MATLAB
matlab -nodisplay
>>c=parcluster;
>>j=c.batch(@pwd, 1, {})

In the example above we obtain a handle to the cluster (parcluster) as we did in the previous example. The batch command launches the "@pwd" command (present working directory) with one output argument ("1") and no input arguments ("{}").
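Once the job finishes, its result (here, the working directory on the cluster) can be retrieved from the job object:

>>j.wait             % block until the job completes
>>j.fetchOutputs{:}  % display the output argument(s)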

Another example involves running a MATLAB function, which might look something like the following:

j=c.batch('sphere',1, {})

The command above submits a MATLAB job for the function "sphere" to the Matilda cluster, with 1 output argument and no input arguments.

In this example, we run a MATLAB script as a batch job:

j=c.batch('myscript')
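Because "myscript" is a script rather than a function, it has no output arguments; instead, once the job completes, the variables from the script's workspace can be loaded back into your session:

j.wait   % block until the job completes
j.load   % import the script's workspace variables into the current session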

In the example below, we create a multi-processor, multi-node job using a MATLAB script named "for_loop" which is submitted to the cluster:

>>c=parcluster;
>>j=c.batch('for_loop', 'Pool', 40)

The example above specifies 40 CPU workers for the task 'for_loop'. For reference, "for_loop" contains the following commands:

tic                                  % start the timer
n = 30000;
A = 500;
a = zeros(n);                        % preallocate the result array
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));   % each iteration is independent
end
toc                                  % report the elapsed time

In the script "for_loop" we utilize the "parfor" loop directive to divide our function tasks among different workers. Please note, your MATLAB code must be designed to parallelize processing of your data in order for the specified workers to have an actual performance benefit.

To query the state of any job:

>>j.State

If the "state" is "finished" you can fetch the results:

>>j.fetchOutputs{:}

After you're finished, make sure to delete the job results:

>>j.delete

To retrieve a list of currently running or completed jobs, call "parcluster" to retrieve the cluster object. This object stores an array of jobs that were run, are running, or are queued to run:

>>c=parcluster;
>>jobs=c.Jobs
>>j2=c.Jobs(2);

The example above retrieves the list of jobs and assigns the second job to "j2" (the trailing semicolon suppresses its display).

We can view our job results as well:

>>j2.fetchOutputs

Please note that ending a command with the semicolon (;) will suppress output.

SLURM Job Script

Overview

MATLAB runs can also be accomplished using conventional job scripts. MATLAB commands should be constructed to suppress GUI display components. MATLAB scripts should be coded to display or print results to output.

Single Processor

Below is an example job script where we run the MATLAB script "mysphere" and view the slurm*.out file for the results:

{{{#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -singleCompThread -nodisplay -nosplash -r mysphere
}}}

For reference, the MATLAB script "mysphere.m" contains the following:

[x,y,z] = sphere;     % coordinates of a unit sphere
r = 2;
surf(x*r,y*r,z*r)     % plot a sphere of radius r
axis equal
A = 4*pi*r^2;         % surface area
V = (4/3)*pi*r^3;     % volume
disp(A)
disp(V)

Multi-Processor Single Node

We can also run multi-worker parallel jobs using SLURM job scripts. For example, assume we have the following script "for_loop.m":

c=parcluster('local')
poolobj = c.parpool(4);
fprintf('Number of workers: %g\n', poolobj.NumWorkers);

tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end
toc

In the above script, we specify that we want 4 workers. We also make use of the "parcluster('local')" instruction, so resources will be limited to the local node assigned to the job. Our job script would look something like the following:

{{{#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -nodisplay -nosplash -r for_loop
}}}

Note that in the job script example above, we specified 5 CPUs instead of 4. This is because one (1) CPU is needed to manage the 4 workers (N + 1), for a total of 5.

Please also note that if a value is not specified for "parpool", MATLAB will default to 12 workers.

Multi-Processor Multi-Node

We can assign a number of workers to our MATLAB job and allow MATLAB and the job scheduler to assign those workers to a series of nodes with distributed processing. One change we must make to our MATLAB code is the specification of "parcluster". Presented below is some sample code that calculates the value of PI using multiple workers assigned to multiple nodes:

function calc_pi_multi_node

c = parcluster;
c.AdditionalProperties.MemUsage = '4gb';

if isempty(gcp('nocreate')), c.parpool(40); end

spmd
    a = (labindex - 1)/numlabs;
    b = labindex/numlabs;
    fprintf('Subinterval: [%-4g, %-4g]\n', a, b)

    myIntegral = integral(@quadpi, a, b);
    fprintf('Subinterval: [%-4g, %-4g]   Integral: %4g\n', a, b, myIntegral)

    piApprox = gplus(myIntegral);
end

approx1 = piApprox{1};  % 1st element holds value on worker 1
fprintf('pi           : %.18f\n', pi)
fprintf('Approximation: %.18f\n', approx1)
fprintf('Error        : %g\n',    abs(pi - approx1))


function y = quadpi(x)
%QUADPI Return data to approximate pi.

% Derivative of 4*atan(x)
y = 4./(1 + x.^2);

In the example above we omit 'local' from "parcluster" and pass the value "40" to "parpool", which will allocate 40 workers to our task. Note also that we make use of "c.AdditionalProperties" to prescribe a "MemUsage" of 4gb (you can add any number of job parameters in this way). The job script for this function will look something like the following:

#!/bin/bash
# Sample job script
#SBATCH -n 1                            # 1 instance of MATLAB
#SBATCH --cpus-per-task=1               # 1 core per instance
#SBATCH --mem-per-cpu=4gb               # 4 GB RAM per core
#SBATCH --time=00:20:00                 # 20 minutes

# Load MATLAB module
module load MATLAB/R2021a

# Run code
matlab -batch calc_pi_multi_node

Note that we assign only 1 CPU per task and 1 node to this job, since it is essentially the "wrapper" for our function. Once launched, the workers will be spawned as a separate job across one or more nodes, using a total of 40 workers (cores). The walltime on this wrapper job must exceed the time needed to complete the "calc_pi_multi_node" function, otherwise the entire job will fail. Because MATLAB Parallel Server is integrated with the SLURM scheduler, the necessary worker job is spawned automatically in this example.

Submit Jobs Off-Cluster

Overview

It is possible to submit jobs to the Matilda cluster from your workstation copy of MATLAB. These instructions assume you have installed MATLAB on your workstation, including the Parallel Computing Toolbox.

At this time, MATLAB off-cluster job submission does NOT integrate properly with DUO, so it is necessary to connect using the OU VPN (thereby bypassing the DUO push) in order to submit off-cluster jobs.

Initial Configuration

Before your first off-cluster job submission from your personal workstation, it is necessary to download and install one of the tarball/zip integration scripts below to your machine:

  • [[attachment:uts/HPCMatlab/OU.nonshared.scripts.tar.gz|OU.nonshared.scripts.tar.gz||&do=get]]

  • [[attachment:uts/HPCMatlab/OU.nonshared.scripts.zip|OU.nonshared.scripts.zip||&do=get]]

Open MATLAB on your local machine and run the following command to determine your "user path":

>> userpath

Unzip the integration script files into this directory, then configure your local MATLAB to use the Matilda cluster:

>> configCluster

Upon entering the command above, you should be prompted for your Matilda username. Enter it, and configuration for the cluster should be complete.

At this point you should be ready to submit jobs to Matilda from your local machine.
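A quick way to verify the setup is to submit a trivial batch job from your desktop session and fetch its output (a minimal sketch, following the same pattern used on-cluster):

>> c = parcluster;
>> j = c.batch(@pwd, 1, {}, 'AutoAddClientPath', false);   % runs pwd on Matilda
>> j.wait
>> j.fetchOutputs{:}   % displays your working directory on the cluster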

Submitting Jobs

Off-cluster job submission works much the same way as issuing batch jobs on-cluster. In this example, we will work with the function presented previously that calculates the value of pi, with a couple of modifications:

function calc_pi_multi_node

spmd
    a = (labindex - 1)/numlabs;
    b = labindex/numlabs;
    fprintf('Subinterval: [%-4g, %-4g]\n', a, b)

    myIntegral = integral(@quadpi, a, b);
    fprintf('Subinterval: [%-4g, %-4g]   Integral: %4g\n', a, b, myIntegral)

    piApprox = gplus(myIntegral);
end

approx1 = piApprox{1};  % 1st element holds value on worker 1
fprintf('pi           : %.18f\n', pi)
fprintf('Approximation: %.18f\n', approx1)
fprintf('Error        : %g\n',    abs(pi - approx1))


function y = quadpi(x)
%QUADPI Return data to approximate pi.

% Derivative of 4*atan(x)
y = 4./(1 + x.^2);

Notice that in the revised code we've removed the references to "parcluster" and "parpool". We will instead set these in our desktop MATLAB session:

>> c = parcluster;
>> c.AdditionalProperties.MemUsage='4gb';
>> j=c.batch('calc_pi_multi_node', 'Pool', 40,'AutoAddClientPath',false);

In the example above we create a handle to the cluster, specify additional job parameters (in this case, memory), and then submit "calc_pi_multi_node" as a batch job with a pool of 40 workers. The argument "'AutoAddClientPath',false" tells MATLAB not to add the current local MATLAB path to the cluster job, since the local and cluster filesystems are not shared.

Once the job is submitted as shown, we will be prompted to enter our password for the first run:

  • authenticate.png

After a few moments our job details can be displayed by entering:

>> j
  • jobdetails.png

We can check our job state using the "j.State" command:

  • Matlab Job State

We can obtain the output of the function call using the command "diary(j)":

>> diary(j)
--- Start Diary ---
Lab  1:
  Subinterval: [0   , 0.025]
Lab  2:
  Subinterval: [0.025, 0.05]
Lab  3:
  Subinterval: [0.05, 0.075]
Lab  4:
  Subinterval: [0.075, 0.1 ]
.....
Lab 40:
  Subinterval: [0.975, 1   ]
Lab  1:
  Subinterval: [0   , 0.025]   Integral: 0.0999792
Lab  2:
  Subinterval: [0.025, 0.05]   Integral: 0.0998544
Lab  3:
  Subinterval: [0.05, 0.075]   Integral: 0.0996058
Lab  4:
  Subinterval: [0.075, 0.1 ]   Integral: 0.0992352
.....
Lab 40:
  Subinterval: [0.975, 1   ]   Integral: 0.0506302
pi           : 3.141592653589793116
Approximation: 3.141592653589793116
Error        : 0

--- End Diary ---

Multiple MATLAB Versions

There are multiple versions of MATLAB available on the Matilda cluster. To see what's available, simply use:

module av MATLAB

For on-cluster work, simply load the desired version and proceed as described herein.

For off-cluster jobs, you will need to download and install on your workstation the version of MATLAB corresponding to the version on Matilda that you wish to use, installing it as described previously.

IMPORTANT: To utilize a newer version of MATLAB you will need to download the latest version of the off-cluster Integration Scripts and use those to configure MATLAB as described above. Once this has been completed, selecting "Parallel->Create and Manage Clusters" will bring up a window showing the cluster/version configurations that are available, as shown below:

ClusterConfig.png

The cluster configuration for the newest version of MATLAB should be selected by default; if not, please select the correct version before commencing off-cluster work. You may have more than one version of MATLAB installed on your workstation, or you can simply replace the older version with the newer one. If you use multiple versions, run the same version on your workstation that you intend to use on the cluster, and ensure the proper cluster configuration is selected.

A Brief Intro to GPU Jobs

In the following example we demonstrate a couple of ways to execute the following code (gpuTest.m) using MATLAB and a GPU:

X=[-15:15 0 -15:15 0 -15:15];
gpuX = gpuArray(X);
whos gpuX
gpuE = expm(diag(gpuX,-1)) * expm(diag(gpuX,1));
gpuM = mod(round(abs(gpuE)),2);
gpuF = gpuM+fliplr(gpuM);
imagesc(gpuF);
imwrite(gpuF,'myimage.png')
colormap(flip(gray));
result = gather(gpuF);
whos result

Which produces the image file "myimage.png" shown below:

  • myimage.png

This job can be run using a job script which might look something like:

#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabGPU
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

cd ~
module load MATLAB
matlab -nodisplay -nosplash -r gpuTest

Alternately, we could also run this from the desktop (off-cluster), by making one small change to the code:

X=[-15:15 0 -15:15 0 -15:15];
gpuX = gpuArray(X);
whos gpuX
gpuE = expm(diag(gpuX,-1)) * expm(diag(gpuX,1));
gpuM = mod(round(abs(gpuE)),2);
gpuF = gpuM+fliplr(gpuM);
imagesc(gpuF);
imwrite(gpuF,'/scratch/users/someuser/myimage.png')
colormap(flip(gray));
result = gather(gpuF);
whos result

Note we've added a full path on the cluster for "myimage.png" (to our scratch space). Then in our desktop MATLAB window:

>> c = parcluster;
>> c.AdditionalProperties.GpusPerNode=1;
>> j = c.batch('gpuTest','AutoAddClientPath',false);
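As with the earlier off-cluster example, the job's printed output can be reviewed once it completes:

>> j.wait
>> diary(j)    % display the output produced by the GPU job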

Interactive GPU Jobs

It is also possible to run the MATLAB GUI from an interactive scheduled job session on one of Matilda's GPU nodes.

In order to launch the MATLAB GUI it is necessary to establish an X-windows session connection to Matilda:

ssh -X [email protected]

To begin, we must request that the cluster allocate the job resources:

salloc -n 1 -c 1 -t 30:00 --gres=gpu:1

Note that by specifying "--gres=gpu:1" we are informing SLURM that we need a node with at least 1 GPU.

After issuing the "salloc" command an allocation message will be displayed once the job is scheduled (followed by the message-of-the-day).

salloc: Granted job allocation 29807

Since "srun" on Matilda does not have X-windows session forwarding capability, it is necessary to manually login to the allocated node using SSH. First, to determine which node has been assigned for our job:

squeue -u <username>

We might see something like the following:

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             29814      defq     bash someuser  R       0:11      1 hpc-gpu-p02

In this example, we've been assigned to "hpc-gpu-p02". We can now login using an X-session:

ssh -X hpc-gpu-p02

Once logged in, load the MATLAB module and start matlab:

module load MATLAB
matlab

IMPORTANT: When you are done using MATLAB in interactive mode, close the MATLAB GUI, and type "exit" to leave the assigned node, and then type "exit" again (from the login node) to release the resources allocated by "salloc".

It is also possible to run a non-GUI interactive MATLAB job. This can be done in the same way as any other interactive, command-line based job. No special X-session or X-forwarding is required. Please see the section on running interactive jobs on Matilda for more information.

More Information

For more information on parallel and cluster computing with MATLAB, refer to the MathWorks documentation for the Parallel Computing Toolbox and MATLAB Parallel Server.


CategoryHPC