
MATLAB

Overview

Oakland University has obtained a campus-wide license (CWL) for MATLAB. This permits any user to run MATLAB on Matilda without special permission or license files. This page describes some of the ways in which MATLAB can be run on the cluster.

Initial Cluster Configuration

In order to use parallel worker and batch modes, it is necessary to import the cluster configuration settings into your MATLAB profile the first time you run it.

Please use the following:

module load MATLAB
matlab -nodisplay -nosplash
>>configCluster

Note that if you run the "configCluster" MATLAB command more than once, it will erase and regenerate your cluster profile information, so it is only necessary to do this once.

Interactively Scheduled

A scheduled interactive job can be run on one of the cluster nodes using something like the following:

srun -N 1 -c 1 -t 30:00 --pty /bin/bash --login

Once the session has started, simply load MATLAB and launch the application:

module load MATLAB
matlab -nodisplay -nosplash

Please see our documentation for more information on scheduling interactive jobs.

Interactive Parallel

You can start an interactive MATLAB job and specify the total number of parallel workers to use during your run. This can be accomplished using something like the following:

module load MATLAB
matlab -nodisplay
>>c=parcluster;
>>p=c.parpool(40)

The command sequence above provides a handle to the cluster resource (parcluster), and then allocates 40 workers (parpool). These workers can run on a single node or across multiple nodes depending on the request and cluster scheduling. From here, you can execute MATLAB commands or scripts using a default set of cluster parameters.
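
For example, once the pool is open, a "parfor" loop will automatically distribute its iterations across the 40 workers. This is a minimal sketch; the loop body is purely illustrative:

>>parfor i = 1:400, a(i) = max(abs(eig(rand(200)))); end
>>max(a)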

To see the default cluster parameters use the following command:

>>c.AdditionalProperties

Various other parameters can be added or modified such as walltime, memory usage, GPUs, etc. For example:

>>c.AdditionalProperties.Walltime = '5:00:00';
>>c.AdditionalProperties.MemUsage = '4000';
>>c.AdditionalProperties.GpusPerNode = 1;

When finished, you can terminate this job using:

>>p.delete

Asynchronous Interactive Batch Jobs

It is also possible to submit asynchronous batch jobs to the cluster through the MATLAB interactive interface. This can be a handy way to submit a series of jobs to Matilda without having to create independent job scripts for each analysis task. MATLAB can be run from the login node or a scheduled interactive session and the jobs submitted as shown below.

The "batch" command will return a job object which is used to access the output of the submitted job. For example:

module load MATLAB
matlab -nodisplay
>>c=parcluster;
>>j=c.batch(@pwd, 1, {})

In the example above we obtain a handle to the cluster (parcluster) as we did in the previous example. The batch command then runs the function handle "@pwd" (present working directory) with one output argument ("1") and no input arguments ("{}").

Another example involves running a built-in MATLAB function, which looks something like the following:

j=c.batch('sphere',1, {})

The command above submits a MATLAB job for the function "sphere" to the Matilda cluster, with 1 output argument and no input arguments.

In this example, we run a MATLAB script as a batch job:

j=c.batch('myscript')
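
If you want to block until the script finishes and then pull its workspace variables back into your session, the job object's "wait" and "load" methods can be used. A minimal sketch, assuming the job object "j" from above:

j.wait     % block until the batch job finishes
j.load     % load the script's workspace variables into the current session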

In the example below, we create a multi-processor, multi-node job using a MATLAB script named "for_loop" which is submitted to the cluster:

>>c=parcluster;
>>j=c.batch('for_loop', 'Pool', 40)

The example above specifies 40 CPU workers for the task 'for_loop'. For reference, "for_loop" contains the following commands:

tic
n = 30000;
A = 500;
a = zeros(n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end
toc

In the script "for_loop" we utilize the "parfor" loop directive to divide our function tasks among different workers. Please note, your MATLAB code must be designed to parallelize processing of your data in order for the specified workers to have an actual performance benefit.

To query the state of any job:

>>j.State

If the "state" is "finished" you can fetch the results:

>>j.fetchOutputs{:}

After you're finished, make sure to delete the job results:

>>j.delete

To retrieve a list of currently running or completed jobs, call "parcluster" to retrieve the cluster object. This object stores an array of jobs that were run, are running, or are queued to run:

>>c=parcluster;
>>jobs=c.Jobs
>>j2=c.Jobs(2);

The example above will fetch the second job and display it.

We can view our job results as well:

>>j2.fetchOutputs

Please note that ending a command with the semicolon (;) will suppress output.
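
For example:

>>x = 1 + 1      % displays "x = 2"
>>x = 1 + 1;     % same assignment, but the output is suppressed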

SLURM Job Script

Overview

MATLAB runs can also be accomplished using conventional SLURM job scripts. MATLAB should be invoked with options that suppress the GUI components (e.g. -nodisplay -nosplash), and MATLAB scripts should be written to display or print their results so they appear in the job output file.

Single Processor

Below is an example job script where we run the MATLAB script "mysphere" and view the slurm*.out file for the results:

#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -singleCompThread -nodisplay -nosplash -r mysphere

For reference, the MATLAB script "mysphere.m" contains the following:

[x,y,z] = sphere;
r = 2;
surf(x*r,y*r,z*r)
axis equal
A = 4*pi*r^2;
V = (4/3)*pi*r^3;
disp(A)
disp(V)
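
For reference, with r = 2 the surface area 4*pi*r^2 is approximately 50.2655 and the volume (4/3)*pi*r^3 is approximately 33.5103, so those are the two values that should appear in the slurm*.out file.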

Multi-Processor Single Node

We can also run multi-worker parallel jobs using SLURM job scripts. For example, assume we have the following script "for_loop.m":

c=parcluster('local')
poolobj = c.parpool(4);
fprintf('Number of workers: %g\n', poolobj.NumWorkers);

tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end
toc

In the above script, we specify that we want 4 workers. We also make use of the "parcluster('local')" instruction, so resources will be limited to the local node assigned to the job. Our job script would look something like the following:

#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -nodisplay -nosplash -r for_loop

Note that in the job script example above, we specified 5 CPUs instead of 4. This is because one (1) CPU is needed to manage the 4 workers (N + 1) for a total of 5.

Please also note that if a value is not specified for "parpool", MATLAB will default to 12 workers.
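
To avoid hard-coding the worker count, the pool can instead be sized from the SLURM allocation. This is a sketch only, assuming the job was submitted with --cpus-per-task so that the SLURM_CPUS_PER_TASK environment variable is set:

c = parcluster('local');
nCpus = str2double(getenv('SLURM_CPUS_PER_TASK'));  % e.g. 5 with the job script above
poolobj = c.parpool(nCpus - 1);                     % leave one CPU for the client process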

Multi-Processor Multi-Node

We can assign a number of workers to our MATLAB job and allow MATLAB and the job scheduler to assign those workers to a series of nodes with distributed processing. One change we must make to our MATLAB code is the specification of "parcluster". Presented below is some sample code that calculates the value of PI using multiple workers assigned to multiple nodes:

function calc_pi_multi_node

c = parcluster;
c.AdditionalProperties.MemUsage = '4gb';



if isempty(gcp('nocreate')), c.parpool(40); end

spmd
    a = (labindex - 1)/numlabs;
    b = labindex/numlabs;
    fprintf('Subinterval: [%-4g, %-4g]\n', a, b)

    myIntegral = integral(@quadpi, a, b);
    fprintf('Subinterval: [%-4g, %-4g]   Integral: %4g\n', a, b, myIntegral)

    piApprox = gplus(myIntegral);
end

approx1 = piApprox{1};  % 1st element holds value on worker 1
fprintf('pi           : %.18f\n', pi)
fprintf('Approximation: %.18f\n', approx1)
fprintf('Error        : %g\n',    abs(pi - approx1))


function y = quadpi(x)
%QUADPI Return data to approximate pi.

% Derivative of 4*atan(x)
y = 4./(1 + x.^2);

In the example above we omit the 'local' from "parcluster" and assign the value "40" to "parpool". Note also we make use of "c.AdditionalProperties" to prescribe "MemUsage" of 4gb (you can add any number of job parameters in this way). This will allocate 40 workers to our task. The job script for this function will look something like the following:

#!/bin/sh
# Sample job script
#SBATCH -n 1                            # 1 instance of MATLAB
#SBATCH --cpus-per-task=1               # 1 core per instance
#SBATCH --mem-per-cpu=4gb               # 4 GB RAM per core
#SBATCH --time=00:20:00                 # 20 minutes

# Load MATLAB module
module load MATLAB/R2021a

# Run code
matlab -batch calc_pi_multi_node

Note that we assign only 1 CPU per task and 1 node to this job; it is essentially the "wrapper" for our function. Once launched, the workers are spawned as a separate job across one or more nodes, using a total of 40 workers (cores). The walltime on this wrapper job must exceed the time needed to complete the "calc_pi_multi_node" function, otherwise the entire job will fail. Because MATLAB Parallel Server is integrated with the SLURM scheduler, the necessary worker job is spawned automatically in this example.
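
If the spawned worker job needs its own limits adjusted, the same "AdditionalProperties" interface shown earlier can be set inside the function before the pool is opened. A sketch only, using the property names from the examples above:

c = parcluster;
c.AdditionalProperties.Walltime = '1:00:00';   % walltime for the spawned worker job
c.AdditionalProperties.MemUsage = '4gb';       % memory request, as in calc_pi_multi_node
if isempty(gcp('nocreate')), c.parpool(40); end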

Submit Jobs Off-Cluster

Overview

It is possible to submit jobs to the Matilda cluster from your workstation copy of MATLAB. These instructions assume you have installed MATLAB on your workstation, including the Parallel Computing Toolbox.

At this time, MATLAB off-cluster job submission does NOT integrate properly with DUO. Therefore it is necessary to connect using the OU VPN in order to submit off-cluster jobs (thereby bypassing DUO push).

Initial Configuration

Before your first off-cluster job submission from your personal workstation, it is necessary to download and install one of the tarball/zip integration scripts below to your machine:

Open MATLAB on your local machine and run the following command to determine your "user path":

>> userpath

Once this has been determined, unzip the integration script files into that directory. Then configure your local MATLAB to use the Matilda cluster:

>> configCluster

Upon entering the command above, you should be prompted for your Matilda username. Enter it, and configuration for the cluster should be complete.

At this point you should be ready to submit jobs to Matilda from your local machine.

Submitting Jobs

Off-cluster jobs are submitted as batch jobs in much the same way as they are on-cluster. In this example, we will work with the function presented previously that calculates the value of pi:

>> c = parcluster;
>> j = c.batch('calc_pi_multi_node',1,{},'AutoAddClientPath',false);

In the example above we create a handle to the cluster, then submit "calc_pi_multi_node" as a batch job with 1 output value and no input values. The argument pair "'AutoAddClientPath', false" instructs MATLAB not to add the local client path to the cluster job, since the workstation and cluster do not share a filesystem.

Once the job is submitted as shown, we will be prompted to enter our password for the first run:

authenticate.png

After a few seconds our job details will be displayed:

jobdetails.png


We can check our job state using the "j.State" command:

jobstate.png
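
As with on-cluster batch jobs, once the state reports "finished" the results can be fetched from your workstation and the job data deleted when no longer needed:

>> j.fetchOutputs{:}
>> j.delete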

More Information

For more information on parallel and cluster computing with MATLAB, refer to the MathWorks documentation for the Parallel Computing Toolbox and MATLAB Parallel Server.


CategoryHPC