MATLAB
Contents
Overview
Oakland University has obtained a campus wide license (CWL) for MATLAB. This permits any user to run MATLAB on Matilda without special permission or license files. This page describes some of the ways in which MATLAB can be run on the cluster.
Initial Cluster Configuration
In order to use the parallel and batch modes, it is necessary to import cluster configuration settings into your MATLAB profile the first time you run it.
Please use the following:
module load MATLAB
matlab -nodisplay -nosplash
>> configCluster
Note that if you run the "configCluster" MATLAB command more than once, it will erase and regenerate your cluster profile information, so it is only necessary to do this once.
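If you are unsure whether the import has already been done, you can list the cluster profiles saved in your MATLAB preferences before deciding to rerun "configCluster". A quick check (assuming a reasonably recent MATLAB release):

```matlab
% List the cluster profiles known to this MATLAB installation; if the
% Matilda profile already appears here, configCluster has been run before
>> parallel.clusterProfiles
```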
Interactively Scheduled
A scheduled interactive job can be run on one of the cluster nodes using something like the following:
srun -N 1 -c 1 -t 30:00 --pty /bin/bash --login
Once the session has started, simply load MATLAB and launch the application:
module load MATLAB
matlab -nodisplay -nosplash
Please see our documentation for more information on scheduling interactive jobs.
Interactive Parallel
You can start an interactive MATLAB job and specify the total number of parallel workers to use during your run. This can be accomplished using something like the following:
module load MATLAB
matlab -nodisplay
>> c = parcluster;
>> p = c.parpool(40)
The command sequence above provides a handle to the cluster resource (parcluster), and then allocates 40 workers (parpool). These workers can run on a single node or across multiple nodes depending on the request and cluster scheduling. From here, you can execute MATLAB commands or scripts using a default set of cluster parameters.
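Once the pool is open, work can be dispatched to it directly from the command line. As a minimal sketch (the function and matrix size here are arbitrary illustrations), "parfeval" submits a function to the pool asynchronously:

```matlab
% Submit @rand to the open pool p: request 1 output, pass argument 3,
% producing a 3x3 random matrix on a worker
f = parfeval(p, @rand, 1, 3);
% fetchOutputs blocks until the future f completes, then returns the result
result = fetchOutputs(f);
```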
To see the default cluster parameters use the following command:
>>c.AdditionalProperties
Various other parameters can be added or modified such as walltime, memory usage, GPUs, etc. For example:
>> c.AdditionalProperties.Walltime = '5:00:00';
>> c.AdditionalProperties.MemUsage = '4000';
>> c.AdditionalProperties.GpusPerNode = 1;
When finished, you can terminate this job using:
>>p.delete
Asynchronous Interactive Batch Jobs
It is also possible to submit asynchronous batch jobs to the cluster through the MATLAB interactive interface. This can be a handy way to submit a series of jobs to Matilda without having to create independent job scripts for each analysis task. MATLAB can be run from the login node or a scheduled interactive session and the jobs submitted as shown below.
The "batch" command will return a job object which is used to access the output of the submitted job. For example:
module load MATLAB
matlab -nodisplay
>> c = parcluster;
>> j = c.batch(@pwd, 1, {})
In the example above we obtain a handle to the cluster (parcluster) as we did in the previous example. The batch command launches the "@pwd" command (present working directory) with "1" output argument and no input parameters "{}".
Another example involves running a built-in MATLAB function, which might look something like the following:
j=c.batch('sphere',1, {})
The command above submits a MATLAB job for the function "sphere" to the Matilda cluster, with 1 output argument and no input arguments.
In this example, we run a MATLAB script as a batch job:
j=c.batch('myscript')
In the example below, we create a multi-processor, multi-node job using a MATLAB script named "for_loop" which is submitted to the cluster:
>> c = parcluster;
>> j = c.batch('for_loop', 'Pool', 40)
The example above specifies 40 CPU workers for the task 'for_loop'. For reference, "for_loop" contains the following commands:
tic
n = 30000;
A = 500;
a = zeros(n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end
toc
In the script "for_loop" we utilize the "parfor" loop directive to divide our function tasks among different workers. Please note, your MATLAB code must be designed to parallelize processing of your data in order for the specified workers to have an actual performance benefit.
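To illustrate that requirement, "parfor" only accepts loops whose iterations are independent of one another, or that combine results through a reduction it recognizes. A small sketch:

```matlab
% Independent iterations: each a(i) depends only on i, so the iterations
% can be distributed across workers in any order
a = zeros(1, 100);
parfor i = 1:100
    a(i) = i^2;
end

% Reduction variable: s accumulates across iterations in a form parfor
% recognizes and combines safely; an arbitrary cross-iteration dependency
% such as a(i) = a(i-1) + 1 would be rejected by parfor
s = 0;
parfor i = 1:100
    s = s + i;
end
```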
To query the state of any job:
>>j.State
If the "state" is "finished" you can fetch the results:
>>j.fetchOutputs{:}
After you're finished, make sure to delete the job results:
>>j.delete
To retrieve a list of currently running or completed jobs, call "parcluster" to retrieve the cluster object. This object stores an array of jobs that were run, are running, or are queued to run:
>> c = parcluster;
>> jobs = c.Jobs
>> j2 = c.Jobs(2)
The last command above fetches the second job and displays it.
We can view our job results as well:
>>j2.fetchOutputs
Please note that ending a command with the semicolon (;) will suppress output.
SLURM Job Script
Overview
MATLAB runs can also be accomplished using conventional job scripts. MATLAB commands should be constructed to suppress GUI display components. MATLAB scripts should be coded to display or print results to output.
Single Processor
Below is an example job script where we run the MATLAB script "mysphere" and view the slurm-<jobid>.out file for the results:
#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -singleCompThread -nodisplay -nosplash -r mysphere
For reference, the MATLAB script "mysphere.m" contains the following:
[x,y,z] = sphere;
r = 2;
surf(x*r, y*r, z*r)
axis equal
A = 4*pi*r^2;
V = (4/3)*pi*r^3;
disp(A)
disp(V)
Multi-Processor Single Node
We can also run multi-worker parallel jobs using SLURM job scripts. For example, assume we have the following script "for_loop.m":
c = parcluster('local')
poolobj = c.parpool(4);
fprintf('Number of workers: %g\n', poolobj.NumWorkers);
tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));
end
toc
In the above script, we specify that we want 4 workers. We also make use of the "parcluster('local')" instruction, so resources will be limited to the local node assigned to the job. Our job script would look something like the following:
#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabTest
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=00:05:00

cd ~
module load MATLAB
matlab -nodisplay -nosplash -r for_loop
Note that in the job script example above, we specified 5 CPUs instead of 4. This is because one (1) CPU is needed to manage the 4 workers (N + 1) for a total of 5.
Please also note that if a value is not specified for "parpool" MATLAB will default to 12 workers.
Multi-Processor Multi-Node
We can assign a number of workers to our MATLAB job and allow MATLAB and the job scheduler to assign those workers to a series of nodes with distributed processing. One change we must make to our MATLAB code is the specification of "parcluster". Presented below is some sample code that calculates the value of PI using multiple workers assigned to multiple nodes:
function calc_pi_multi_node
c = parcluster;
c.AdditionalProperties.MemUsage = '4gb';
if isempty(gcp('nocreate')), c.parpool(40); end
spmd
    a = (labindex - 1)/numlabs;
    b = labindex/numlabs;
    fprintf('Subinterval: [%-4g, %-4g]\n', a, b)
    myIntegral = integral(@quadpi, a, b);
    fprintf('Subinterval: [%-4g, %-4g]  Integral: %4g\n', a, b, myIntegral)
    piApprox = gplus(myIntegral);
end
approx1 = piApprox{1};  % 1st element holds value on worker 1
fprintf('pi           : %.18f\n', pi)
fprintf('Approximation: %.18f\n', approx1)
fprintf('Error        : %g\n', abs(pi - approx1))

function y = quadpi(x)
%QUADPI Return data to approximate pi.
% Derivative of 4*atan(x)
y = 4./(1 + x.^2);
In the example above we omit the 'local' from "parcluster" and assign the value "40" to "parpool". Note also we make use of "c.AdditionalProperties" to prescribe "MemUsage" of 4gb (you can add any number of job parameters in this way). This will allocate 40 workers to our task. The job script for this function will look something like the following:
#!/bin/sh
# Sample job script
#SBATCH -n 1                # 1 instance of MATLAB
#SBATCH --cpus-per-task=1   # 1 core per instance
#SBATCH --mem-per-cpu=4gb   # 4 GB RAM per core
#SBATCH --time=00:20:00     # 20 minutes

# Load MATLAB module
module load MATLAB/R2021a

# Run code
matlab -batch calc_pi_multi_node
Note that we only assign 1 CPU per task and 1 node to this job, which is essentially the "wrapper" for our function. Once launched, the workers will be spawned as a separate job across one or more nodes, using a total of 40 workers (cores). The walltime on this wrapper job must exceed the walltime needed to complete the "calc_pi_multi_node" function, otherwise the entire job will fail. Because of MATLAB Parallel Server's integration with the SLURM scheduler, the necessary worker job is spawned automatically in this example.
Submit Jobs Off-Cluster
Overview
It is possible to submit jobs to the Matilda cluster from your workstation copy of MATLAB. These instructions assume you have installed MATLAB on your workstation, including the Parallel Computing Toolbox.
At this time, MATLAB off-cluster job submission does NOT integrate properly with DUO. Therefore, it is necessary to connect using the OU VPN in order to submit off-cluster jobs (thereby bypassing the DUO push).
Initial Configuration
Before your first off-cluster job submission from your personal workstation, it is necessary to download and install one of the tarball/zip integration scripts below on your machine:
Open MATLAB on your local machine and run the following command to determine your "user path":
>> userpath
Once this has been determined, unzip the integration script files into that directory, then configure your local MATLAB to use the Matilda cluster:
>> configCluster
Upon entering the command above, you should be prompted for your Matilda username. Enter it, and configuration for the cluster should be complete.
At this point you should be ready to submit jobs to Matilda from your local machine.
Submitting Jobs
Off-cluster job submission involves issuing batch jobs in much the same way as we would on-cluster. In this example, we will work with the function presented previously that calculates the value of pi, with a couple of modifications:
function calc_pi_multi_node
spmd
    a = (labindex - 1)/numlabs;
    b = labindex/numlabs;
    fprintf('Subinterval: [%-4g, %-4g]\n', a, b)
    myIntegral = integral(@quadpi, a, b);
    fprintf('Subinterval: [%-4g, %-4g]  Integral: %4g\n', a, b, myIntegral)
    piApprox = gplus(myIntegral);
end
approx1 = piApprox{1};  % 1st element holds value on worker 1
fprintf('pi           : %.18f\n', pi)
fprintf('Approximation: %.18f\n', approx1)
fprintf('Error        : %g\n', abs(pi - approx1))

function y = quadpi(x)
%QUADPI Return data to approximate pi.
% Derivative of 4*atan(x)
y = 4./(1 + x.^2);
Notice that in the revised code we have removed the references to "parcluster" and "parpool". We will instead set these in our desktop MATLAB session:
>> c = parcluster;
>> c.AdditionalProperties.MemUsage = '4gb';
>> j = c.batch('calc_pi_multi_node', 'Pool', 40, 'AutoAddClientPath', false);
In the example above we create a handle to the cluster, specify additional job parameters (in this case memory), and then submit "calc_pi_multi_node" as a batch job, specifying a pool of 40 workers. The argument "'AutoAddClientPath', false" instructs that we should not try to add the current local MATLAB path to the cluster job, since these are not shared filesystems.
Once the job is submitted as shown, we will be prompted to enter our password on the first run.
After a few moments our job details can be displayed by entering:
>> j
We can check our job state using the "j.State" command, as before.
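If you would rather block your desktop session until the job completes instead of checking "j.State" repeatedly, the job object also supports "wait"; a simple sketch:

```matlab
% Block the local MATLAB session until the cluster job finishes
wait(j);

% Or poll the state manually at an interval of your choosing
while ~strcmp(j.State, 'finished')
    pause(30);   % check every 30 seconds
end
```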
We can obtain the output of the function call using the command "diary(j)":
>> diary(j)
--- Start Diary ---
Lab  1: Subinterval: [0    , 0.025]
Lab  2: Subinterval: [0.025, 0.05 ]
Lab  3: Subinterval: [0.05 , 0.075]
    [subintervals for Labs 4-39 omitted]
Lab 40: Subinterval: [0.975, 1    ]
Lab  1: Subinterval: [0    , 0.025]  Integral: 0.0999792
Lab  2: Subinterval: [0.025, 0.05 ]  Integral: 0.0998544
Lab  3: Subinterval: [0.05 , 0.075]  Integral: 0.0996058
    [integrals for Labs 4-39 omitted]
Lab 40: Subinterval: [0.975, 1    ]  Integral: 0.0506302
pi           : 3.141592653589793116
Approximation: 3.141592653589793116
Error        : 0
--- End Diary ---
A Brief Intro to GPU Jobs
In the following example we demonstrate a couple of ways to execute the following code (gpuTest.m) using MATLAB and a GPU:
X = [-15:15 0 15:-15 0 -15:15];
gpuX = gpuArray(X);
whos gpuX
gpuE = expm(diag(gpuX,-1)) * expm(diag(gpuX,1));
gpuM = mod(round(abs(gpuE)), 2);
gpuF = gpuM + fliplr(gpuM);
imagesc(gpuF);
imwrite(gpuF, 'myimage.png')
colormap(flip(gray));
result = gather(gpuF);
whos result
This produces the image file "myimage.png".
This job can be run using a job script which might look something like:
#!/bin/bash
# Sample job script
#SBATCH --job-name=matlabGPU
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

cd ~
module load MATLAB
matlab -nodisplay -nosplash -r gpuTest
Alternatively, we could also run this from the desktop (off-cluster) by making one small change to the code:
X = [-15:15 0 15:-15 0 -15:15];
gpuX = gpuArray(X);
whos gpuX
gpuE = expm(diag(gpuX,-1)) * expm(diag(gpuX,1));
gpuM = mod(round(abs(gpuE)), 2);
gpuF = gpuM + fliplr(gpuM);
imagesc(gpuF);
imwrite(gpuF, '/scratch/users/someuser/myimage.png')
colormap(flip(gray));
result = gather(gpuF);
whos result
Note we've added a full path on the cluster for "myimage.png" (to our scratch space). Then in our desktop MATLAB window:
>> c = parcluster;
>> c.AdditionalProperties.GpusPerNode = 1;
>> j = c.batch('gpuTest', 'AutoAddClientPath', false);
More Information
For more information on parallel and cluster computing with MATLAB, refer to the MathWorks documentation for the Parallel Computing Toolbox and MATLAB Parallel Server.
CategoryHPC