MATLAB Distributed Computing Server System Administrator's Guide
R2017b
Contents

1 Introduction
    MATLAB Distributed Computing Server Product Description
    Key Features

2 Network Administration
    Prepare for Parallel Computing
        Plan Your Network Layout
        Network Requirements
        Fully Qualified Domain Names
        Security Considerations
    Shut Down a Job Manager Cluster
        Linux and Macintosh Operating Systems
        Microsoft Windows Operating Systems

3 Product Installation
    Install Products and Choose Cluster Configuration
        Cluster Description
        Install Products
        Configure Your Cluster
    Configure for HPC Pack
        Configure Cluster for Microsoft HPC Pack
        Configure Client Computer for HPC Pack
        Validate Installation Using Microsoft HPC Pack

4 Admin Center
    Start Admin Center
    Move a Worker
    Update the Display

Glossary
1 Introduction

MATLAB Distributed Computing Server Product Description
MATLAB Distributed Computing Server enables you to run your programs and models
on a cluster without having to acquire additional MathWorks® product licenses for each
computer in the cluster.
Key Features
• Access to all eligible licensed toolboxes or blocksets with a single server license on the
distributed computing resource
• Execution of GPU-enabled functions on distributed computing resources
• Execution of parallel computations from applications and software components
generated using MATLAB Compiler™ on distributed computing resources
• Support for all hardware platforms and operating systems supported by MATLAB
and Simulink
• Application scheduling using a built-in job scheduler or third-party schedulers such as
Platform LSF®, Microsoft® Windows® HPC Server 2008, Altair PBS Pro®, and
TORQUE
Product Overview
In this section...
“Parallel Computing Concepts” on page 1-3
“Determining Product Installation and Versions” on page 1-4
Parallel Computing Concepts

The MATLAB session in which the job and its tasks are defined is called the client
session. Often, this is on the machine where you program MATLAB. The client uses
Parallel Computing Toolbox software to define jobs and tasks. The MATLAB
Distributed Computing Server product executes your job by evaluating each of its
tasks and returning the results to your client session.
Parallel Computing Toolbox software allows you to run a cluster of MATLAB workers on
your local machine in addition to your MATLAB client session. MATLAB Distributed
Computing Server software allows you to run as many MATLAB workers on a remote
cluster of computers as your licensing allows.
The MATLAB job scheduler (MJS) is the part of the server software that coordinates the
execution of jobs and the evaluation of their tasks. The MJS distributes the tasks for
evaluation to the server's individual MATLAB sessions called workers. Use of the MJS is
optional; the distribution of tasks to workers can also be performed by a third-party
scheduler, such as Windows HPC Server (including CCS), a Platform LSF scheduler, or a
PBS Pro scheduler.
See the Glossary for definitions of the parallel computing terms used in this manual.
[Figure: Each MATLAB worker on the cluster runs MATLAB Distributed Computing Server.]

Determining Product Installation and Versions
To determine which products are installed, enter the ver command in a MATLAB
session. MATLAB displays information about the version of MATLAB you are running,
including a list of all toolboxes installed on your system and their version numbers.
You can run the ver command as part of a task in a distributed or parallel application to
determine what version of MATLAB Distributed Computing Server software is installed
on a worker machine. Note that the toolbox and server software must be the same
version.
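For example, a minimal sketch of such a version check (the cluster profile name MyMJSProfile1 is a hypothetical placeholder):

% Display the Parallel Computing Toolbox version on the client
ver distcomp

% Run ver on a worker as a task to check the server installation
c = parcluster('MyMJSProfile1');   % hypothetical cluster profile name
j = createJob(c);
createTask(j, @ver, 1, {});        % ver returns a struct array of products
submit(j);
wait(j);
workerProducts = fetchOutputs(j)   % inspect the versions reported by the worker
delete(j);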
Toolbox and Server Components
Each worker receives a task of the running job from the MJS, executes the task, returns
the result to the MJS, and then receives another task. When all tasks for a running job
have been assigned to workers, the MJS starts running the next job with the next
available worker.
Note For testing your application locally or other purposes, you can configure a single
computer to host the client, workers, and MJS. You can also have more than one worker
session or more than one MJS session on a machine.
[Figure: Interactions of parallel computing sessions. The client submits a job and its tasks to the scheduler (MJS), which distributes the tasks to workers; each worker returns its task results to the scheduler, and the client retrieves all results.]
A large network might include several MJS sessions as well as several client sessions.
Any client session can create, run, and access jobs on any MJS, but a worker session is
registered with and dedicated to only one MJS at a time. The following figure shows a
configuration with multiple MJS processes.
[Figure: A cluster configuration with multiple MJS processes. Several clients and several workers share the network, and each worker is dedicated to one MJS.]
Third-Party Schedulers
As an alternative to using the MJS, you can use a third-party scheduler. This could be a
Microsoft Windows HPC Server (including CCS), Platform LSF scheduler, PBS Pro
scheduler, TORQUE scheduler, or a generic scheduler.
Consider the following questions when deciding whether to use a third-party scheduler
or the MJS for distributing your tasks:
The MJS can handle all file and data sharing necessary for your parallel computing
applications. This might be helpful in configurations where shared access is limited.
• Are you interested in batch or interactive processing?
When you use an MJS, worker processes usually remain running at all times,
dedicated to their MJS. With a third-party scheduler, workers are run as applications
that are started for the evaluation of tasks, and stopped when their tasks are
complete. If tasks are small or take little time, starting a worker for each one might
involve too much overhead time.
• Are there security concerns?
• How many nodes are on your cluster?

If you have a large cluster, you probably already have a scheduler. Consult your
MathWorks representative if you have questions about cluster size and the MJS.
• Who administers your cluster?
The person administering your cluster might have a preference for how jobs are
scheduled.
For a complete listing of all network requirements, including those for heterogeneous
environments, see the System Requirements page for MATLAB Distributed Computing
Server software at
http://www.mathworks.com/products/distriben/requirements.html
mdce Service
If you are using the MJS, every machine that hosts a worker or MJS session must also
run the mdce service.
The mdce service recovers worker and MJS sessions after their host machines crash.
When mdce starts up again (it is usually configured to start at machine boot time), it
automatically restarts the MJS and worker sessions so that they resume from their
state before the system crash.
Using Parallel Computing Toolbox Software
1 Find a cluster — Your network may have one or more MJS available (but usually
only one scheduler). The function you use to find an MJS or scheduler creates an
object in your current MATLAB session to represent the MJS or scheduler that will
run your job.
2 Create a Job — You create a job to hold a collection of tasks. The job exists on the
MJS (or scheduler's data location), but a job object in the local MATLAB session
represents that job.
3 Create Tasks — You create tasks to add to the job. Each task of a job can be
represented by a task object in your local MATLAB session.
4 Submit a Job to the Job Queue for Execution — When your job has all its tasks
defined, you submit it to the queue in the MJS or scheduler. The MJS or scheduler
distributes your job's tasks to the worker sessions for evaluation. When the workers
have completed all of the job's tasks, the job moves to the finished state.
5 Retrieve the Job's Results — The resulting data from the evaluation of the job is
available as a property value of each task object.
6 Destroy the Job — When the job is complete and all its results are gathered, you can
destroy the job to free memory resources.
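A minimal sketch of this workflow in MATLAB, assuming a cluster profile named MyMJSProfile1 (a hypothetical name):

c = parcluster('MyMJSProfile1');      % 1: find a cluster
job = createJob(c);                   % 2: create a job
createTask(job, @sum, 1, {[1 2 3]});  % 3: create a task
submit(job);                          % 4: submit the job to the queue
wait(job);                            % wait for the finished state
results = fetchOutputs(job)           % 5: retrieve the job's results
delete(job);                          % 6: destroy (delete) the job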
2 Network Administration

Prepare for Parallel Computing
In this section...
“Plan Your Network Layout” on page 2-2
“Network Requirements” on page 2-3
“Fully Qualified Domain Names” on page 2-3
“Security Considerations” on page 2-3
This section discusses the requirements and configurations for your network to support
parallel computing.
Plan Your Network Layout

The job manager process should run on a stable machine, with adequate resources to
manage the number of tasks and amount of data expected in your parallel computing
applications.
The following table shows what products and processes are needed for each of these roles
in the parallel computing configuration.

Session       Product                               Processes
Client        Parallel Computing Toolbox            MATLAB with toolbox
Worker        MATLAB Distributed Computing Server   worker; mdce service (if you are using a job manager)
Job manager   MATLAB Distributed Computing Server   mdce service; job manager
The server software includes the mdce service or daemon. The mdce service is separate
from the worker and job manager processes, and it must be running on all machines that
run job manager sessions or workers that are registered with a job manager. (The mdce
service is not used with third-party schedulers.)
You can install both toolbox and server software on the same machine, so that one
machine can run both client and server sessions.
Network Requirements
To view the network requirements for MATLAB Distributed Computing Server software,
visit the product requirements page on the MathWorks Web site at
http://www.mathworks.com/products/distriben/requirements.html
Security Considerations
The parallel computing products do not provide any security measures. Therefore, be
aware of the following security considerations:
• MATLAB workers run as whatever user the administrator starts the node’s mdce
service under. By default, the mdce service starts as root on UNIX operating
systems, and as LocalSystem on Microsoft Windows operating systems. Because
MATLAB provides system calls, users can submit jobs that execute shell commands.
• The mdce service does not enforce any access control or authentication. Anyone with
local or remote access to the mdce services can start and stop their workers and job
managers, and query for their status.
• The job manager does not restrict access to the cluster, nor to job and task data.
Using a third-party scheduler instead of the MathWorks job manager could allow you
to take advantage of the security measures it provides.
• The parallel computing processes must all be on the same side of a firewall, or you
must take measures to enable them to communicate with each other through the
firewall. Workers running tasks of the same communicating job cannot be firewalled
off from each other, because their MPI-based communication will not work.
• If certain ports are restricted, you can specify the ports used for parallel computing.
See “Define Script Defaults” on page 2-11.
• If your organization is a member of the Internet Multicast Backbone (MBone), make
sure that your parallel computing cluster is isolated from MBone access if you are
using multicast for parallel computing. Isolation is generally the default condition. If
you have any questions about MBone membership, contact your network
administrator.
Use Different MPI Builds on UNIX Systems
In this section...
“Build MPI” on page 2-5
“Use Your MPI Build” on page 2-5
Build MPI
On Linux and Macintosh operating systems, you can use an MPI build that differs from
the one provided with Parallel Computing Toolbox. This topic outlines the steps for
creating an MPI build for use with the generic scheduler interface. If you already have an
alternative MPI build, proceed to “Use Your MPI Build” on page 2-5.
1 Unpack the MPI sources into the target file system on your machine. For example,
suppose you have downloaded mpich2-distro.tgz and want to unpack it
into /opt for building:
# cd /opt
# mkdir mpich2 && cd mpich2
# tar zxvf path/to/mpich2-distro.tgz
# cd mpich2-1.4.1p1
2 Build your MPI using the enable-shared option (this is vital, as you must build a
shared library MPI, binary compatible with MPICH2-1.4.1p1 for R2013b and later).
For example, the following commands build an MPI with the nemesis channel
device and the gforker launcher.
# ./configure --prefix=/opt/mpich2/mpich2-1.4.1p1 \
--enable-shared --with-device=ch3:nemesis \
--with-pm=gforker 2>&1 | tee log
# make 2>&1 | tee -a log
# make install 2>&1 | tee -a log
Use Your MPI Build
1 Test your build by running the mpiexec executable. The build should be ready to
test if its bin/mpiexec and lib/libmpich.so are available in the MPI
installation location.
$ /opt/mpich2/mpich2-1.4.1p1/bin/mpiexec -n 4 hostname
2 Create an mpiLibConf function to direct Parallel Computing Toolbox to use your
new MPI. Write your mpiLibConf.m to return the appropriate information for your
build. For example:
function [primary, extras] = mpiLibConf
primary = '/opt/mpich2/mpich2-1.4.1p1/lib/libmpich.so';
extras = {};
The primary path must be valid on the cluster; and your mpiLibConf.m file must
be higher on the cluster workers’ path than matlabroot/toolbox/distcomp/mpi.
(Sending mpiLibConf.m as an attached file for this purpose does not work. You can
get the mpiLibConf.m function on the worker path by either moving the file into a
folder on the path, or by having the scheduler use cd in its command so that it starts
the MATLAB worker from within the folder that contains the function.)
3 Determine necessary daemons and command-line options.
• Determine all necessary daemons (often something like mpdboot or smpd). The
gforker build example in this section uses an MPI that needs no services or
daemons running on the cluster, but it can use only the local machine.
• Determine the correct command-line options to pass to mpiexec.
4 To set up your cluster to use your new MPI build, modify your communicating job
wrapper script to pick up the correct mpiexec. Additionally, there might be a stage
in the wrapper script where the MPI process manager daemons are launched.
• Stop the daemon processes. For example, for the MPD process manager this
means calling "mpdallexit".
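A hypothetical excerpt of such a wrapper script, assuming the gforker build from this section (the path and the NUMBER_OF_TASKS, MATLAB_EXECUTABLE, and MATLAB_ARGS variables are placeholders, not part of the product):

#!/bin/sh
# Use the custom MPI build instead of the one shipped with the toolbox.
MPIEXEC=/opt/mpich2/mpich2-1.4.1p1/bin/mpiexec
# Start MPI process manager daemons here if your MPI needs them (gforker does not).
"$MPIEXEC" -n "$NUMBER_OF_TASKS" "$MATLAB_EXECUTABLE" $MATLAB_ARGS
# Stop any daemons that were started (for example, mpdallexit for MPD).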
Shut Down a Job Manager Cluster
If you are done using the job manager and its workers, you might want to shut down the
server software processes so that they are not consuming network resources. You do not
need to be at the computer running the processes that you are shutting down. You can
run these commands from any machine with network access to the processes. The
following sections explain shutting down the processes for different platforms.
Linux and Macintosh Operating Systems

To shut down a job manager, run the following from matlabroot/toolbox/distcomp/bin:

./stopjobmanager -remotehost <job manager hostname> -name <job manager name> -v

If you have more than one job manager running, stop each of them individually by
host and name. For each MATLAB worker, run:

./stopworker -remotehost <worker hostname> -name <worker name> -v

If you have more than one worker session running, you can stop each of them
individually by host and name.
Normally, you configure the mdce daemon to start at system boot time and continue
running until the machine shuts down. However, if you plan to uninstall the MATLAB
Distributed Computing Server product from a machine, you might want to uninstall the
mdce daemon also, because you no longer need it.
Note You must have root privileges to stop or uninstall the mdce daemon.
If you used the alternative manual startup of the mdce daemon, use the following
commands to stop it manually:
cd matlabroot/toolbox/distcomp/bin
mdce stop
Microsoft Windows Operating Systems

Enter the commands of this section at the prompt in a DOS command window.
To shut down a job manager, run:

stopjobmanager -remotehost <job manager hostname> -name <job manager name>

If you have more than one job manager running, stop each of them individually by
host and name.

To shut down workers, run stopworker for each one. If you have more than one worker
session running, stop each of them individually by host and name:

stopworker -remotehost <worker hostname> -name <worker1 name>
stopworker -remotehost <worker hostname> -name <worker2 name>
Normally, you configure the mdce service to start at system boot time and continue
running until the machine shuts down. If you need to stop the mdce service while leaving
the machine on, enter the following commands at a DOS command prompt:
cd matlabroot\toolbox\distcomp\bin
mdce stop
If you plan to uninstall the MATLAB Distributed Computing Server product from a
machine, you might want to uninstall the mdce service also, because you no longer need
it.
To uninstall the mdce service, enter the following commands at a DOS command prompt:
cd matlabroot\toolbox\distcomp\bin
mdce uninstall
Customize Startup Parameters
The MATLAB Distributed Computing Server scripts run using several default
parameters. You can customize the scripts, as described in this section.
Note The startup script flags take precedence over the settings in the mdce_def file.
The default parameters used by the server service scripts are defined in the file:

matlabroot/toolbox/distcomp/bin/mdce_def.sh (on UNIX)
matlabroot\toolbox\distcomp\bin\mdce_def.bat (on Windows)

To set the default parameters, edit this file before installing or starting the mdce service.
The mdce_def file is self-documented, and includes explanations of all its parameters.
Note If you want to run more than one job manager on the same machine, they must all
have unique names. Specify the names using flags with the startup commands.
By default, the job manager and worker services run as the user who starts them. You
can run the services as a different user with the following settings in the mdce_def file.
Parameter: MDCEUSER
Description: Set this parameter to run the mdce services as a user different from the
user who starts the service. On a UNIX operating system, set the value before starting
the service; on a Windows operating system, set it before installing the service.

Parameter: MDCEPASS
Description: On a Windows operating system, set this parameter to specify the password
for the user identified in the MDCEUSER parameter; otherwise, the system prompts you
for the password when the service is installed.
On UNIX operating systems, MDCEUSER requires that the current machine has the sudo
utility installed, and that the current user be allowed to use sudo to execute commands
as the user identified by MDCEUSER. For further information, refer to your system
documentation on the sudo and sudoers utilities (for example, man sudo and man
sudoers).
To view or modify the policy settings on a Windows operating system:

1 Select the Windows menu Start > Settings > Control Panel.
2 Double-click Administrative Tools, then Local Security Policy.
3 In the tree, select Local Policies, then in the right pane, double-click User
Rights Assignment.
Double-click any of the policies affected by MDCEUSER in the Local Security Settings
GUI to alter its setting or remove a user from that policy.
The default parameters used by the mdce service, job managers, and workers are defined
in the file:

matlabroot/toolbox/distcomp/bin/mdce_def.sh (on UNIX)
matlabroot\toolbox\distcomp\bin\mdce_def.bat (on Windows)
Before installing and starting the mdce service, you can edit this file to set the default
parameters with values you require.
Alternatively, you can make a copy of this file, modify the copy, and specify that this copy
be used for the default parameters.
If you specify a new mdce_def file instead of the default file for the service on one
computer, the new file is not automatically used by the mdce service on other computers.
If you want to use the same alternative file for all your mdce services, you must specify it
for each mdce service you install or start.
Note The startup script flags take precedence over the settings in the mdce_def file.
When a job manager or worker starts up, it normally resumes its session from the past.
This way, a job queue is not destroyed or lost if the job manager machine crashes or if the
job manager is inadvertently shut down. To start up a job manager or worker from a
clean state, with all history deleted, use the -clean flag on the start command:

startjobmanager -name <MyMJS> -remotehost <MJS host name> -clean
startworker -jobmanagerhost <MJS host name> -jobmanager <MyMJS> -remotehost <worker host name> -clean
Access Service Record Files
The MATLAB Distributed Computing Server services generate various record files in the
normal course of their operations. The mdce service, job manager, and worker sessions
all generate such files. This section describes the types of information stored by the
services.
A primary feature offered by the checkpoint folders is in crash recovery. This allows
server services to automatically resume their sessions after a system goes down and
comes back up, minimizing the loss of data. However, if a MATLAB worker goes down
during the evaluation of a task, that task is neither reevaluated nor reassigned to
another worker. In this case, a finished job may not have a complete set of output data,
because data from any unfinished tasks might be missing.
Note If a job manager crashes and restarts, its workers can take up to 2 minutes to
reregister with it.
Set MJS Cluster Security
In this section...
“Set the Security Level” on page 2-17
“Local, MJS, and Network Passwords” on page 2-19
“Set Secure Communication” on page 2-20
The following table describes the available security levels for accessing an MJS and its
jobs.

Security Level 0: No security.
• Any user can access any job.
• Tasks run as the user who started the mdce process on the worker machines
(typically root or Local System).
• This is the default, and is the behavior in all releases prior to R2010b.
User requirements: Jobs are associated with the default user name of the programmer,
but no protection is provided.

Security Level 1: Jobs are identified with the submitting user.
• Any user can access any job; a dialog warns if the accessed job belongs to another user.
• Tasks run as the user who started the mdce process on the worker machines
(typically root or Local System).
User requirements:
• A dialog requires you to establish a user name when you first access the job manager.
• Your job manager (MJS) user name does not have to match your system/network
user name.
• No passwords are used.
Security Level 2: Jobs and tasks are identified with the submitting user, and are
password protected; other users cannot access your jobs.
• Tasks run as the user who started the mdce process on the worker machines
(typically root or Local System).
User requirements:
• A dialog box requires you to establish a user name and password when you first
access the job manager (MJS) from the MATLAB client.
• Your job manager (MJS) user name and password do not have to match your
system/network user name and password.
Security Level 3: Jobs and tasks are identified with the submitting user, and are
password protected; other users cannot access your jobs.
• Tasks run as the user who submitted the job.
User requirements:
• The job manager (MJS) must use secure communication with the workers (set in
the mdce_def file).
• When you start the job manager (MJS), it prompts you to provide a new password
for that job manager's admin account, which can be used for accessing all users'
jobs and tasks.
• A dialog box requires you to establish a user name and password when you first
access the job manager (MJS) from the MATLAB client.
• Your job manager (MJS) user name and password must be the same as your
system/network user name and password, because the worker must log you in to
run the task as you.
• All users that tasks run as require read and write permissions to the
CHECKPOINTBASE folder and all its subfolders.
The job manager and the workers should run at the same security level. A worker
running at too low a security level will fail to register with the job manager, because the
job manager does not trust it.
Local, MJS, and Network Passwords
For any security level, the job manager (MJS) identifies every job with the user who
submits the job. Therefore, whenever you access the MJS or a job, the MJS must be
aware of who you are.
At security level 0, the MJS and job objects’ UserName property is set to the login name
of the person who creates the job; this setting can be changed at any time. For all higher
security levels, the first access to the MJS causes a dialog box to open which asks for
your username; if the security level is 2 or 3, you must also provide a password. The
username and password you provide for the MJS needs to match your network username
and password only if you are using security level 3; otherwise, you can create a new
username and password unique for the MJS. For your convenience, you can choose how
long to save your username and password on the local computer, so that you do not need
to enter them every time you access your job.
For information about changing a password and logging out of an MJS, see
changePassword and logout.
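For example, a brief sketch (the cluster profile name is a hypothetical placeholder):

c = parcluster('MyMJSProfile1');   % MJS cluster profile
changePassword(c);                 % prompts for your current and new password
logout(c);                         % log out of the MJS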
Set Secure Communication

To establish secure encrypted communication between the job manager (MJS) and
workers, set the following parameters in the mdce_def file:

• USE_SECURE_COMMUNICATION = true
• ALL_SERVER_SOCKETS_IN_CLUSTER = true (default)
You must also provide a value for the SHARED_SECRET_FILE parameter in the
mdce_def file, identifying where the file can be found from the job manager (MJS)
perspective. To create this file, run either script:
• matlabroot/toolbox/distcomp/bin/createSharedSecret (UNIX)
• matlabroot\toolbox\distcomp\bin\createSharedSecret.bat (Windows)
The secret file establishes trust between the processes on different machines.
• In a shared file system, all the nodes can point to the same secret file, and they can
even all share the same mdce_def file.
• In a nonshared file system, create a secret file with the provided script, then copy the
file to each node and make sure each node’s mdce_def file indicates where its
particular secret file is located.
Note Secure communication is required when using job manager (MJS) security level 3.
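A hypothetical mdce_def.sh excerpt with these settings (the secret-file path is an example only):

USE_SECURE_COMMUNICATION=true
ALL_SERVER_SOCKETS_IN_CLUSTER=true
# Example path to the file created with the createSharedSecret script:
SHARED_SECRET_FILE=/opt/mjs/keys/secret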
Troubleshoot Common Problems
This section offers advice on solving problems you might encounter with MATLAB
Distributed Computing Server software.
License Errors
When starting a MATLAB worker, a licensing problem might result in the message
License checkout failed. No such FEATURE exists.
License Manager Error -5
There are many reasons why you might receive this error:
• This message usually indicates that you are trying to use a product for which you are
not licensed. Look at your license.dat file located within your MATLAB
installation to see if you are licensed to use this product.
• If you are licensed for this product, this error may be the result of having extra
carriage returns or tabs in your license file. To avoid this, ensure that each line begins
with either #, SERVER, DAEMON, or INCREMENT.
After fixing your license.dat file, restart your license manager and MATLAB
should work properly.
• This error may also be the result of an incorrect system date. If your system date is
before the date that your license was made, you will get this error.
• If you receive this error when starting a worker with MATLAB Distributed
Computing Server software:
• You may be calling the startworker command from an installation that does not
have access to a worker license. For example, starting a worker from a client
installation of the Parallel Computing Toolbox product causes the following error:
The mdce service on the host hostname
returned the following error:
==============================================================
Most likely, the MATLAB worker failed to start due to a
licensing problem, or MATLAB crashed during startup. Check
the worker log file
/tmp/mdce_user/node_node_worker_05-11-01_16-52-03_953.log
for more detailed information. The mdce log file
/tmp/mdce_user/mdce-service.log
may also contain some additional information.
===============================================================
Diagnostic Information:
Feature: MATLAB_Distrib_Comp_Engine
License path: /apps/matlab/etc/license.dat
FLEXnet Licensing error: -15,570. System Error: 115
• If you installed only the Parallel Computing Toolbox product, and you are
attempting to run a worker on the same machine, you will receive this error
because the MATLAB Distributed Computing Server product is not installed, and
therefore the worker cannot obtain a license.
Required Ports

With Job Manager

BASE_PORT: The mdce_def file specifies and describes the ports required by the job
manager and all workers. See the following file in the MATLAB installation used for
each cluster process:

matlabroot/toolbox/distcomp/bin/mdce_def.sh (on UNIX)
matlabroot\toolbox\distcomp\bin\mdce_def.bat (on Windows)
Communicating Jobs
On worker machines running a UNIX operating system, the number of ports required by
MPICH for the running of communicating jobs ranges from BASE_PORT + 1000 to
BASE_PORT + 2000.
Before the worker processes start, you can control the range of ports used by the workers
for communicating jobs by defining the environment variable MPICH_PORT_RANGE with
the value minport:maxport.
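For example (the port range shown is illustrative only):

export MPICH_PORT_RANGE=28000:29000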
Client Ports
With the pctconfig function, you specify the ports used by the client. If the default
ports cannot be used, this function allows you to configure ports separately for
communication with the job scheduler and communication with pmode or a parallel pool.
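For example, a minimal sketch (the port values are illustrative):

pctconfig('portrange', [27500 27600]);   % ports the client may use
cfg = pctconfig()                        % view the resulting settings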
On the job manager machine, you can increase the number of available ephemeral ports
through the Windows registry:

1 Start the registry editor by running regedit.
2 Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
3 On the Registry Editor window, select Edit > New > DWORD Value.
4 In the list of entries on the right, change the new value name to MaxUserPort and
press Enter.
5 Right-click on the MaxUserPort entry name and select Modify.
6 In the Edit DWORD Value dialog, enter 65534 in the Value data field. Select
Decimal for the Base value. Click OK.
This parameter controls the maximum port number that is used when a program
requests any available user port from the system. Typically, ephemeral (short-lived)
ports are allocated between the values of 1024 and 5000 inclusive. This action allows
allocation for port numbers up to 65534.
7 Quit the Registry Editor.
8 Reboot your machine.
First, be sure that the machines in question agree on their IP resolutions. The IP address
for a particular host should be the same for itself as it is from the perspective of another
host. For example, if a process on hostB cannot connect to one on hostA, find out the
hostA IP address for itself, then see what the IP address for hostA is from hostB. They
should be the same.
If the machines can identify each other, the nodestatus command can be useful for
diagnosing problems between their processes. Use the function to determine what
MATLAB Distributed Computing Server processes are running on the local host, and
which are accessible from remote hosts. If a worker on hostA cannot register with its job
manager on hostB, run nodestatus on both hosts to see what each can see on hostB.
On hostB, execute:

nodestatus

Then on hostA, execute:

nodestatus -remotehost hostB
The results should be the same, showing the same listing of job managers and workers.
If the output indicates problems, run the command again with a higher information level
to receive more detailed information:
nodestatus -remotehost hostB -infolevel 3
If you cannot successfully add hosts to the listing by specifying host name, you can use
their IP addresses instead (see “Add Hosts” on page 4-3). If you suspect any
communications problems, in the Admin Center GUI click Test Connectivity (see “Test
Connectivity” on page 4-10). This testing verifies that the nodes can identify each other
and allow their processes to communicate with each other.
Using DNS for cluster discovery requires that you have a DNS SRV record of the
following general form:
_mdcs._tcp.domainname.com SSSS IN SRV PPPP WWWW MJS_PORT MJS_FQDN_HOSTNAME
• _mdcs._tcp. The record must start with this text, followed by your domain name
(like company.com or university.edu) that the client machine searches.
• SSSS indicates how long (in seconds) the DNS record can be cached; 3600 is
recommended.
• IN SRV is required as shown, indicating that this is a service record.
• PPPP and WWWW indicate priority and wait values. These are not used, so 0 is
recommended for each.
• MJS_PORT is the port on which you connect to the MJS server. The default is 27350,
but if you change it for the server you must change it here accordingly.
• MJS_FQDN_HOSTNAME is the fully qualified domain name for the host serving the
MJS. For example, mjs-1.company.com.
A valid DNS SRV record for the company.com network running an MJS on machine
mjs-1 might look like this:

_mdcs._tcp.company.com. 3600 IN SRV 0 0 27350 mjs-1.company.com.
For your network, create the appropriate DNS SRV record using the standard procedure
for your DNS system. Then you can verify that your network is configured with the
necessary DNS SRV records by using standard utilities, such as the nslookup command.
For example, this system command indicates the existence of the applicable DNS SRV
records:
Multicast

To use multicast, it must be supported and enabled both on the head node running the
MATLAB job scheduler (MJS) and on the client system.
The Java class com.mathworks.toolbox.distcomp.test.MulticastTester has a number of
simple methods to attempt to join a specified multicast group. Once the class has
successfully joined the group, it has methods to send messages to the group, listen for
messages from the group, and display what it receives. You can use this class both from
a command-line call to Java software and inside MATLAB.
The following example shows how to use the Java class inside MATLAB.
Start MATLAB on two machines (e.g., host1name and host2name) for which you want
to test multicast. In each MATLAB session, enter the following commands:
m = com.mathworks.toolbox.distcomp.test.MulticastTester('239.1.1.1', 9999);
m.startSendingThread;
m.startListeningThread;
These instructions cause each MATLAB session to issue a stream of multicast test
packets, and to listen for test packets. If multicast is working between the machines, you
see a stream of lines like the following:
0 : host1name : 0
1 : host2name : 0
2 : host2name : 1
3 : host2name : 2
The number on the left in each character vector is the line number for the received
packet. The text in the center is the host from which the packet is received. The number
on the right is the packet number sent by the sending host. It is normal for a host to
report a test packet from itself.
If either machine does not receive a stream of test packets, or if the remote host is not
included in either stream, then multicast communication is not operating properly.
To terminate the test stream, execute the following in both MATLAB sessions:
m.stopSendingThread;
m.stopListeningThread;
3 Product Installation

Install Products and Choose Cluster Configuration
Cluster Description
To set up a cluster, you first install MATLAB Distributed Computing Server on a node
called the head node. You can also install the license manager on the head node. After
performing this installation, you can then optionally install MATLAB Distributed
Computing Server on the individual cluster nodes, called worker nodes. You do not need
to install the license manager on worker nodes.
This figure shows the installations that you perform on your cluster nodes. This is only
one possible configuration. (You can install the cluster license manager and MATLAB
Distributed Computing Server on separate nodes, but this document does not cover this
type of installation.)
You install Parallel Computing Toolbox software on the computer that you use to write
MATLAB applications. This is called the client node.
This figure shows the installations that you must perform on client nodes.
Your installation on the client node might include other MathWorks products.
Install Products
On the Cluster Nodes
Install the MathWorks products on your cluster as a network installation. You can
install in a central location, or individually on each cluster node.
If you need help with this step, you can find instructions for this release at “Installation,
Licensing, and Activation” in the Documentation Center. These instructions include
steps for installing, licensing, and activating your installation.
Note MathWorks highly recommends installing all MathWorks products on the cluster.
MATLAB Distributed Computing Server cannot run jobs whose code requires products
that are not installed.
Backwards Compatibility Note You can upgrade your MATLAB Job Scheduler clusters and
continue to use the R2016a release of Parallel Computing Toolbox on your MATLAB
Desktop client to connect to it. This only applies to the R2016a release onward. You must
install the same release of MATLAB Distributed Computing Server for each release of
MATLAB you want to support. You can configure MATLAB Job Scheduler with the
location of these installations in the mdce_def file.
• Install MATLAB Distributed Computing Server for each release that the cluster
supports. For example, to use R2016a and R2016b with your cluster, install both the
R2016a and R2016b releases of MATLAB Distributed Computing Server.
• Specify the R2016a installation of MATLAB Distributed Computing Server in the
MDCS_ADDITIONAL_MATLABROOTS variable in the mdce_def file. This file is provided
in matlabroot/toolbox/distcomp/bin for Linux (mdce_def.sh) and Windows
(mdce_def.bat). For more information, see mdce.
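For example, a hypothetical mdce_def.sh entry (the installation path is an example only):

MDCS_ADDITIONAL_MATLABROOTS=/usr/local/MATLAB/R2016a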
On the Client Nodes

On the client computers from which you will write applications to submit jobs to the
cluster, install the MathWorks products for which you are licensed, including Parallel
Computing Toolbox.
If you need help with this step, you can find instructions for the current release at
“Installation, Licensing, and Activation” in the Documentation Center. These
instructions include steps for installing, licensing, and activating your installation.
Configure Your Cluster

Note You must use the generic scheduler interface for any of the following:
• Schedulers not supported by the direct integration (PBS Pro, Torque, LSF, HPC
Server)
• A nonshared file system when you want to use ssh as a submission tool through a
submission host
Configure for an MJS
The following figure shows the processes that run on your cluster nodes.
Note The MATLAB job scheduler (MJS) was formerly known as the MathWorks job
manager. The process is the same, is started in the same way, and performs the same
functions.
Step 1: Configure Windows Firewalls on Cluster Nodes

Note If you do not have a Windows cluster, or if you have already installed a previous
version of MATLAB Distributed Computing Server on your Windows cluster, you can
skip this step and proceed to Step 2.
This command adds MATLAB as an allowed program. If you are using other
firewalls, you must configure them for similar accommodation.
The user that mdce runs as requires access to the cluster MATLAB installation location.
By default, mdce runs as the user LocalSystem. If your network allows LocalSystem
to access the install location, you can proceed to the next step. (If you are not sure of your
network configuration and the access provided for LocalSystem, contact the MathWorks
install support team.)
Note If LocalSystem cannot access the install location, you must run mdce as a
different user.
1 With any standard text editor (such as WordPad) open the mdce_def file found at:
matlabroot\toolbox\distcomp\bin\mdce_def.bat
2 Find the line for setting the MDCEUSER parameter, and provide a value in the form
domain\username:
set MDCEUSER=mydomain\myusername
3 Provide the user password by setting the MDCEPASS parameter:
set MDCEPASS=password
Step 2: Stop mdce Services of an Old Installation

If you have an older version of MATLAB Distributed Computing Server running on your
cluster nodes, you should stop the mdce services before starting the services of the new
installation.
If this is your first installation of the parallel computing products, proceed to “Step 3:
Start the mdce Service, MJS, and Workers” on page 3-8.

Stop mdce on Windows

1 Open a DOS command window:
a If you are using Windows 7 or Windows Vista™, you must run the command
window with administrator privileges. Click the Windows menu Start > (All)
Programs > Accessories; then right-click Command Window, and select
Run as Administrator. This option is available only if you are running User
Account Control (UAC).
b If you are using Windows XP, open a DOS command window by selecting the
Windows menu Start > Run, then in the Open field, type
cmd
2 In the command window, navigate to the folder of the old installation that contains
the control scripts.
cd oldmatlabroot\toolbox\distcomp\bin
3 Stop and uninstall the old mdce service and remove its associated files by typing the
command:
mdce uninstall -clean
Note Using the -clean flag permanently removes all existing job data. Be sure this
data is no longer needed before removing it.
4 Repeat the instructions of this step on all worker nodes.
Stop mdce on UNIX
1 Log in as root. (If you cannot log in as root, you must alter the following parameters
in the matlabroot/toolbox/distcomp/bin/mdce_def.sh file to point to a folder
for which you have write privileges: CHECKPOINTBASE, LOGBASE, PIDBASE, and
LOCKBASE if applicable.)
2 On each cluster node, stop the mdce service and remove its associated files by
typing the commands:
cd oldmatlabroot/toolbox/distcomp/bin
./mdce stop -clean
Note Using the -clean flag permanently removes all existing job data. Be sure this
data is no longer needed before removing it.
Step 3: Start the mdce Service, MJS, and Workers

You can start the MJS (job manager) by using a GUI or the command line. Choose one:
Using Admin Center GUI
Note To use Admin Center, you must run it on a computer that has direct network
connectivity to all the nodes of your cluster. If you cannot run Admin Center on such a
computer, follow the instructions in “Using the Command-Line Interface (Windows)” on
page 3-15 or “Using the Command-Line Interface (UNIX)” on page 3-18.
1 Start Admin Center by running the admincenter script found in:

matlabroot\toolbox\distcomp\bin (on Windows)
matlabroot/toolbox/distcomp/bin (on UNIX)
Note Starting the mdce service on remote machines from Admin Center requires
that you run Admin Center as a user who has administrator privileges on all the
machines.
If there are no past sessions of Admin Center saved for you, the GUI opens with
a blank listing, superimposed by a welcome dialog box, which provides
information on how to get started.
It might take a moment for Admin Center to communicate with all the nodes,
start the services, and acquire the status of all of them. When Admin Center
completes the update, the listing should look something like the following figure.
e At this point, test the connectivity between the nodes. This ensures that your
cluster can perform the necessary communications for running other MATLAB
Distributed Computing Server processes.
If any of the connectivity tests fail, double-click the icon that indicates a failure
to get information about that specific test; or use the Log tab to get all test
results. With this information, you can refer to “Troubleshoot Common
Problems” on page 2-22. If you need further help, contact the MathWorks install
support team.
g If your tests pass, click Close to return to the Admin Center GUI.
2 Start the MJS
a To start an MJS (job manager), click Start in the MJS module. (This is one of
several ways to open the New MJS dialog box.) In the New MJS dialog box,
specify a name and host for your MJS. This example shows an MJS called MyMJS
to run on host node1.
b Click OK to start the MJS and return to the Admin Center GUI.
3 Start the Workers
a To start workers, click Start in the Workers module. (This is one of several ways
to open the Start Workers dialog box.)
b In the Start Workers dialog box, specify the number of workers to start on each
host. The number is up to you, but you cannot exceed the total number of
licenses you have. A good starting value might be to start one worker per
computational core on your hosts.
c Select the hosts to start the workers on. Click Select All if you want to start
workers on all listed hosts.
d Select the MJS for your workers. If you have only one MJS running in this
Admin Center session, that is the default.
The following example shows a setup for starting eight workers on four hosts
(two workers each). Your names and numbers will vary.
e Click OK to start the workers and return to the Admin Center dialog box. It
might take a moment for Admin Center to initialize all the workers and acquire
their status.
When all the workers are started, Admin Center looks something like the following
figure. If your workers are all idle and connected, your cluster is ready for use.
If you encounter any problems or failures, contact the MathWorks install support team.
For more information about Admin Center functionality, such as stopping processes or
saving sessions, see “Cluster Processes and Profiles”.
Using the Command-Line Interface (Windows)
1 Install and start the mdce service. You must install the mdce service on all nodes
(head node and worker nodes). Begin on the head node.

a Open a DOS command window:
i If you are using Windows 7 or Windows Vista, you must run the command
window with administrator privileges. Click the Windows menu Start >
(All) Programs > Accessories; then right-click Command Window, and
select Run as Administrator. This option is available only if you are
running User Account Control (UAC).
ii If you are using Windows XP, open a DOS command window by selecting
the Windows menu Start > Run, then in the Open field, type:
cmd
b In the DOS command window, navigate to the folder with the control scripts:
cd matlabroot\toolbox\distcomp\bin
c Install the mdce service by typing the command:
mdce install
d Start the mdce service by typing the command:
mdce start
e Repeat the instructions of this step on all worker nodes.
As an alternative to items c–e, you can install and start the mdce service on several
nodes remotely from one machine by typing:
cd matlabroot\toolbox\distcomp\bin
remotemdce install -remotehost hostA,hostB,hostC . . .
remotemdce start -remotehost hostA,hostB,hostC . . .
where hostA,hostB,hostC refers to a list of your host names. Note that there are
no spaces between host names, only a comma. If you need to indicate protocol,
platform (such as in a mixed environment), or other information, see the help for
remotemdce by typing:
remotemdce -help
Once installed, the mdce service starts running each time the machine reboots. The
mdce service continues to run until explicitly stopped or uninstalled, regardless of
whether an MJS or worker session is running.
2 Start the MJS
To start the MATLAB job scheduler (MJS), enter the following commands in a DOS
command window. You do not have to be at the machine on which the MJS runs, as
long as you have access to the MATLAB Distributed Computing Server installation.
a In your DOS command window, navigate to the folder with the startup scripts:
cd matlabroot\toolbox\distcomp\bin
b Start the MJS, using any unique text you want for the name <MyMJS>:

startjobmanager -name <MyMJS> -remotehost <MJS host name> [-v]

Note If you are executing startjobmanager on the host where the MJS runs,
you do not need to specify the -remotehost flag.

If you have more than one MJS on your cluster, each must have a unique name.
3 Start the Workers
Note Before you can start a worker on a machine, the mdce service must already be
running on that machine, and the license manager for MATLAB Distributed
Computing Server must be running on the network.
For each node used as a worker, enter the following commands in a DOS command
window. You do not have to be at the machines where the MATLAB workers will
run, as long as you have access to the MATLAB Distributed Computing Server
installation.
a In your DOS command window, navigate to the folder with the startup scripts:

cd matlabroot\toolbox\distcomp\bin
b Start the workers on each node, using the text for <MyMJS> that identifies the
name of the MJS you want this worker registered with. Enter this text on a
single line:

startworker -jobmanagerhost <MJS host name> -jobmanager <MyMJS> -remotehost <worker host name> [-v]

To run more than one worker session on the same node, give each worker a
unique name by including the -name option on the startworker command, and
run it for each worker on that node:

startworker ... -name <worker1 name>
startworker ... -name <worker2 name>
For more information about mdce, MJS, and worker processes, such as how to shut
them down or customize them, see “MJS Cluster Customization”.
Using the Command-Line Interface (UNIX)

1 Start the mdce service. On each cluster node, start the mdce service by typing the
commands:
cd matlabroot/toolbox/distcomp/bin
./mdce start
Alternatively (on Linux, but not Macintosh), you can start the mdce service on
several nodes remotely from one machine by typing
cd matlabroot/toolbox/distcomp/bin
./remotemdce start -remotehost hostA,hostB,hostC . . .
where hostA,hostB,hostC refers to a list of your host names. Note that there are
no spaces between host names, only a comma. If you need to indicate protocol,
platform (such as in a mixed environment), or other information, see the help for
remotemdce by typing
./remotemdce -help
2 Start the MJS
To start the MATLAB job scheduler (MJS), enter the following commands. You do
not have to be at the machine on which the MJS runs, as long as you have access to
the MATLAB Distributed Computing Server installation.
a Navigate to the folder with the startup scripts:

cd matlabroot/toolbox/distcomp/bin
b Start the MJS, using any unique text you want for the name <MyMJS>. Enter
this text on a single line:

./startjobmanager -name <MyMJS> -remotehost <MJS host name> [-v]
Note If you have more than one MJS on your cluster, each must have a unique
name.
3 Start the Workers
Note Before you can start a worker on a machine, the mdce service must already be
running on that machine, and the license manager for MATLAB Distributed
Computing Server must be running on the network.
For each computer hosting a MATLAB worker, enter the following commands. You
do not have to be at the machines where the MATLAB workers run, as long as you
have access to the MATLAB Distributed Computing Server installation.
a Navigate to the folder with the startup scripts:

cd matlabroot/toolbox/distcomp/bin
b Start the workers on each node, using the text for <MyMJS> that identifies the
name of the MJS you want this worker registered with. Enter this text on a
single line:

./startworker -jobmanagerhost <MJS host name> -jobmanager <MyMJS> -remotehost <worker host name> [-v]

To run more than one worker session on the same machine, give each worker a
unique name with the -name option:

./startworker ... -name <worker1 name>
./startworker ... -name <worker2 name>
For more information about mdce, MJS, and worker processes, such as how to shut
them down or customize them, see “MJS Cluster Customization”.
Step 4: Install the mdce Service to Start Automatically at Boot Time (UNIX)
Although this step is not required, it is helpful in case of a system crash. Once configured
for this, the mdce service starts running each time the machine reboots. The mdce
service continues to run until explicitly stopped, regardless of whether an MJS or worker
session is running.
• “Debian, Fedora, SUSE, and Red Hat (non-Fedora) Platforms” on page 3-20
• “Macintosh Platform” on page 3-21
On each cluster node, register the mdce service as a known service and configure it to
start automatically at system boot time by following these steps:
• SUSE platform:
cd /etc/init.d/rc5.d;
ln -s ../mdce S99MDCE
• Red Hat platform (non-Fedora):
cd /etc/rc.d/rc5.d;
ln -s ../../init.d/mdce S99MDCE
Macintosh Platform
On each cluster node, register the mdce service as a known service with launchd, and
configure it to start automatically at system boot time by following these steps:
1 Navigate to the toolbox folder and stop the running mdce service:
cd matlabroot/toolbox/distcomp/bin
sudo ./mdce stop
2 Create the following link if it does not already exist:
sudo ln -s matlabroot/toolbox/distcomp/bin/mdce /usr/sbin/mdce
3 Copy the launchd .plist file for mdce to /Library/LaunchDaemons:
sudo cp ./util/com.mathworks.mdce.plist /Library/LaunchDaemons
4 Start mdce and observe that it starts inside launchd:
sudo ./mdce start
If you use a machine that runs a total of nJ job managers and nW workers, the mdce
service reserves a total of 6+nJ+3*nW consecutive ports for its own use. All job managers
and workers, even those on different hosts, that are going to work together must use the
same base port. Otherwise the job managers and workers will not be able to contact each
other. In addition, MPI communication occurs on ports starting at BASE_PORT+1000 and
uses nW consecutive ports.
For example, if you use a machine with 1 job manager and 16 workers, then you need the
following ranges of ports to be open:

• BASE_PORT to BASE_PORT+54 for the mdce service, job manager, and workers
(6 + 1 + 3*16 = 55 consecutive ports)
• BASE_PORT+1000 to BASE_PORT+1031 for MPI communications (2*16 = 32 ports,
allowing for ports held in the TIME_WAIT state)
Some operating systems are reluctant to immediately free TCP ports from the
TIME_WAIT state for use by the same or other processes. Therefore you must allow
unfirewalled communication on 2*nW ports for MPI communications.
To connect from MATLAB to a cluster with a non-default BASE_PORT, you must append
the value of BASE_PORT to the 'Host' property in the MJS cluster profile. You must do
this in the form Hostname:BASE_PORT, for example myMJSHost:44001.
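For example, in MATLAB (the host name and port are illustrative, and this programmatic form is an alternative to editing the profile):

c = parallel.cluster.MJS('Host', 'myMJSHost:44001');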
To verify the network connection from the client computer to the MJS computer, follow
these instructions.
Note In these instructions, matlabroot refers to the folder where MATLAB is installed
on the client computer. Do not confuse this with the MATLAB Distributed Computing
Server cluster computers.
1 On the client computer where Parallel Computing Toolbox is installed, open a DOS
command window (for Windows software) or a shell (for UNIX software) and go to
the control script folder.
cd matlabroot\toolbox\distcomp\bin (for Windows)
cd matlabroot/toolbox/distcomp/bin (for UNIX)
2 Run nodestatus to verify your cluster communications. Substitute <MJS Host>
with the host name of your MJS computer.
nodestatus -remotehost <MJS Host>
If successful, you should see the status of your MJS (job manager) and its workers.
Otherwise, refer to “Troubleshoot Common Problems” on page 2-22.
1 Start the Cluster Profile Manager from the MATLAB desktop by selecting on the
Home tab in the Environment area Parallel > Manage Cluster Profiles.
2 Create a new profile in the Cluster Profile Manager by selecting New > MATLAB
Job Scheduler (MJS).
3 With the new profile selected in the list, click Rename and edit the profile name to
be MJStest. Press Enter.
4 In the Properties tab, provide settings for the following fields:
[Figure: Cluster Profile Manager showing the new MJStest profile properties.]
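If you prefer to script this step, a rough programmatic equivalent (the MJS name and host are the examples from this section):

c = parallel.cluster.MJS('Name', 'MyMJS', 'Host', 'node1');
saveAsProfile(c, 'MJStest');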
In this step you validate your cluster profile, and thereby your installation. You can
specify the number of workers to use when validating your profile. If you do not specify
the number of workers in the Validation tab, then the validation will attempt to use as
many workers as the value specified by the NumWorkers property on the Properties
tab. In the case of MJS, you cannot specify the value of NumWorkers on the Properties
tab and the whole cluster would be used. You can specify a smaller number of workers to
validate your configuration without occupying the whole cluster.
1 If it is not already open, start the Cluster Profile Manager from the MATLAB
desktop by selecting on the Home tab in the Environment area Parallel >
Manage Cluster Profiles.
2 Select your cluster profile in the listing.
3 Click the Validation tab.
4 Use the checkboxes to choose all tests, or a subset of the validation stages, and
specify the number of workers to use when validating your profile.
5 Click Validate.
The Validation Results tab shows the output. The following figure shows the results of a
profile that passed all validation tests.
Note If your validation does not pass, contact the MathWorks install support team.
If your validation passed, you now have a valid profile that you can use in other parallel
applications. You can make any modifications to your profile appropriate for your
applications, such as NumWorkersRange, AttachedFiles, AdditionalPaths, etc.
To save your profile for other users, select the profile and click Export, then save your
profile to a file in a convenient location. Later, when running the Cluster Profile
Manager, other users can import your profile by clicking Import.
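Programmatically, you can export and import profiles with parallel.exportProfile and parallel.importProfile (the file name is an example):

parallel.exportProfile('MJStest', 'MJStest');   % creates MJStest.settings
parallel.importProfile('MJStest.settings');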
Configure for HPC Pack

Supported versions: Windows Compute Cluster Server 2003, Windows HPC Server 2008,
Windows HPC Server 2008 R2, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2,
and Microsoft HPC Pack 2016.
Configure Cluster for Microsoft HPC Pack

Note If you are using an HPC Pack in a network share installation, the network share
location must be in the “Intranet” zone. You might need to adjust the Internet Options
for your cluster nodes and add the network share location to the list of Intranet sites.
1 Open a command window with administrator privileges and run the following
command:

matlabroot\toolbox\distcomp\bin\MicrosoftHPCServerSetup.bat -cluster

This command performs some of the setup required for all machines in the cluster.
The location of the MATLAB installation must be the same on every cluster node.
Note If you need to override the script default values, modify the values defined in
MicrosoftHPCServerSetup.xml before running
MicrosoftHPCServerSetup.bat. Use the -def_file argument to the script
when using a MicrosoftHPCServerSetup.xml file in a custom location. For
example:
MicrosoftHPCServerSetup.bat -cluster -def_file <filename>
You modify the file only on the node where you actually run the script.
An example of one of the values you might set is for CLUSTER_NAME. If you provide a
friendly name for the cluster in this parameter, it is recognized by MATLAB’s
discover clusters feature and displayed in the resulting cluster list.
Note If you are using HPC Pack in a network share installation, the network share
location must be in the “Intranet” zone. You might need to adjust the Internet Options
for your cluster nodes and add the network share location to the list of Intranet sites.
1 Open a command window with administrator privileges and run the following file:
matlabroot\toolbox\distcomp\bin\MicrosoftHPCServerSetup.bat -client
This command performs some of the setup required for a client machine.
Note If you need to override the script default values, modify the values defined
in MicrosoftHPCServerSetup.xml before running
MicrosoftHPCServerSetup.bat. Use the -def_file argument to the script
when using a MicrosoftHPCServerSetup.xml file in a custom location. For
example:
MicrosoftHPCServerSetup.bat -client -def_file <filename>
2 To submit jobs or discover the cluster from MATLAB, the Microsoft HPC Pack client
utilities must be installed on your MATLAB client machine. If they are not already
installed and up to date, ask your system administrator for the correct client utilities
to install. The utilities are available from http://www.microsoft.com/hpc/en/us/
default.aspx.
If you have installed multiple versions of the Microsoft HPC Pack client utilities,
MATLAB uses the most recent install. To configure MATLAB to use a specific
install, set the environment variable 'MATLAB_HPC_SERVER_HOME' to the install
location of the client utilities you want to use.
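For example, on the Windows client you could persist this setting from a DOS command
window; the installation path below is only an assumed example, so substitute the
actual location of your HPC Pack client utilities:
setx MATLAB_HPC_SERVER_HOME "C:\Program Files\Microsoft HPC Pack 2012"
Restart MATLAB afterward so that it picks up the new environment variable.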
1 Start the Cluster Profile Manager from the MATLAB desktop by selecting Parallel > Manage Cluster Profiles on the Home tab in the Environment area.
2 Create a new profile in the Cluster Profile Manager by selecting New > HPC
Server.
3 With the new profile selected in the list, click Rename and edit the profile name to
be HPCtest. Press Enter.
4 In the Properties tab, provide text for the following fields:
So far, the dialog box should look like the following figure:
In this step you validate your cluster profile, and thereby your installation. You can
specify the number of workers to use when validating your profile. If you do not specify
the number of workers in the Validation tab, then the validation will attempt to use as
many workers as the value specified by the NumWorkers property on the Properties
tab. You can specify a smaller number of workers to validate your configuration without
occupying the whole cluster.
1 If it is not already open, start the Cluster Profile Manager from the MATLAB
desktop by selecting Parallel > Manage Cluster Profiles on the Home tab in the
Environment area.
2 Select your cluster profile in the listing.
3 Click the Validation tab.
4 Use the checkboxes to choose all tests, or a subset of the validation stages, and
specify the number of workers to use when validating your profile.
5 Click Validate.
The Validation Results tab shows the output. The following figure shows the results of a
profile that passed all validation tests.
Note If your validation does not pass, contact the MathWorks install support team.
If your validation passed, you now have a valid profile that you can use in other parallel
applications. You can make any modifications to your profile appropriate for your
applications, such as NumWorkersRange, AttachedFiles, AdditionalPaths, etc.
To save your profile for other users, select the profile and click Export, then save your
profile to a file in a convenient location. Later, when running the Cluster Profile
Manager, other users can import your profile by clicking Import.
Configure for PBS Pro, Platform LSF, TORQUE
In this section...
“Configure Platform LSF Scheduler on Windows Cluster” on page 3-32
“Configure Windows Firewalls on Client” on page 3-34
“Validate Installation Using an LSF, PBS Pro, or TORQUE Scheduler” on page 3-35
Note You must use the generic scheduler interface for any of the following:
• Any third-party scheduler not listed above (e.g., Sun Grid Engine, GridMP, etc.)
• PBS other than PBS Pro
• A nonshared file system when the client cannot directly submit to the scheduler (e.g.,
TORQUE on Windows)
For further information about mpiexec and smpd, see the MPICH2 home page at http://
www.mcs.anl.gov/research/projects/mpich2/. For user’s guides and installation
instructions on that page, select Documentation > User Docs.
To use mpiexec to distribute a job, the smpd service must be running on all nodes that
will be used for running MATLAB workers.
Note The smpd executable does not support running from a mapped drive. Use either a
local installation, or the full UNC pathname to the executable. Microsoft Windows Vista
does not support the smpd executable on network share installations, so with Vista the
installation must be local.
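As a quick sketch of how you might confirm the service on a node, assuming your smpd
build provides the MPICH2 -status option:
matlabroot\bin\win64\smpd -status
If smpd is not running, install and start it as described in the following steps.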
Without Delegation
1 Log in as a user with administrator privileges.
2 Start smpd by typing the following in a DOS command window:
matlabroot\bin\win64\smpd -install
This command installs the service and starts it. As long as the service remains
installed, it will start each time the node boots.
3 If this is a worker machine and you did not run the installer on it to install MATLAB
Distributed Computing Server software (for example, if you are running MATLAB
Distributed Computing Server software from a shared installation), execute the
following command in a DOS command window.
matlabroot\bin\matlab.bat -install_vcrt
This command installs the Microsoft run-time libraries needed for running jobs with
your scheduler.
4 If you are using Windows firewalls on your cluster nodes, execute the following in a
DOS command window.
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
This command adds MATLAB as an allowed program. If you are using other
firewalls, you must configure them to make similar accommodation.
5 Log in as the user who will be submitting jobs for execution on this node.
6 Register this user to use mpiexec by typing:
matlabroot\bin\win64\mpiexec -register
7 Repeat steps 5–6 for all users who will run jobs on this machine.
8 Repeat all these steps on all Windows nodes in your cluster.
With Delegation
1 Log in as a user with administrator privileges.
2 Register the Service Principal Name by typing the following in a DOS command window:
matlabroot\bin\win64\smpd -register_spn
This command installs the service and starts it. As long as the service remains
installed, it will start each time the node boots.
3 If this is a worker machine and you did not run the installer on it to install MATLAB
Distributed Computing Server software (for example, if you are running MATLAB
Distributed Computing Server software from a shared installation), execute the
following command in a DOS command window.
matlabroot\bin\matlab.bat -install_vcrt
This command installs the Microsoft run-time libraries needed for running jobs with
your scheduler.
4 If you are using Windows firewalls on your cluster nodes, execute the following in a
DOS command window.
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
This command adds MATLAB as an allowed program. If you are using other
firewalls, you must configure them for similar accommodation.
5 Repeat these steps on all Windows nodes in your cluster.
Configure Windows Firewalls on Client
If you are using Windows firewalls on your MATLAB client node, execute the following
in a DOS command window:
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
This command adds MATLAB as an allowed program. If you are using other
firewalls, you must configure them for similar accommodation.
1 Start the Cluster Profile Manager from the MATLAB desktop by selecting Parallel > Manage Cluster Profiles on the Home tab in the Environment area.
2 Create a new profile in the Cluster Profile Manager by selecting New > LSF (or
PBS Pro or Torque, as appropriate).
3 With the new profile selected in the list, click Rename and edit the profile name to
be InstallTest. Press Enter.
4 In the Properties tab, provide settings for the following fields:
The dialog box should look something like this, or slightly different for PBS Pro
or TORQUE schedulers.
In this step you validate your cluster profile, and thereby your installation. You can specify
the number of workers to use when validating your profile. If you do not specify the
number of workers in the Validation tab, then the validation will attempt to use as
many workers as the value specified by the NumWorkers property on the Properties
tab. You can specify a smaller number of workers to validate your configuration without
occupying the whole cluster.
1 If it is not already open, start the Cluster Profile Manager from the MATLAB
desktop by selecting Parallel > Manage Cluster Profiles on the Home tab in the
Environment area.
2 Select your cluster profile in the listing.
3 Click the Validation tab.
4 Use the checkboxes to choose all tests, or a subset of the validation stages, and
specify the number of workers to use when validating your profile.
5 Click Validate.
The Validation Results tab shows the output. The following figure shows the results of a
profile that passed all validation tests.
Note If your validation does not pass, contact the MathWorks install support team.
If your validation passed, you now have a valid profile that you can use in other parallel
applications. You can make any modifications to your profile appropriate for your
applications, such as NumWorkersRange, AttachedFiles, AdditionalPaths, etc.
To save your profile for other users, select the profile and click Export, then save your
profile to a file in a convenient location. Later, when running the Cluster Profile
Manager, other users can import your profile by clicking Import.
Configure for a Generic Scheduler
Note You must use the generic scheduler interface for any of the following:
• Any third-party scheduler not listed above (e.g., Sun Grid Engine, GridMP, etc.)
• PBS other than PBS Pro
• A nonshared file system when the client cannot directly submit to the scheduler (e.g., TORQUE on Windows)
From the following sections, you can select the ones that apply to your configuration.
To support usage of the generic scheduler interface, templates and scripts can be
installed from the following locations:
Each installer provides templates and scripts for the supported submission modes for
shared file system, nonshared file system, or remote submission. Each submission mode
has its own subfolder within the installation folder, which contains a file named README
that provides specific instructions on how to use the scripts.
Submission Mode
• Shared — When the client machine is able to submit directly to the cluster and there
is a shared file system present between the client and the cluster machines.
• Remote Submission — When there is a shared file system present between the client
and the cluster machines, but the client machine is not able to submit directly to the
cluster (for example, if the scheduler’s client utilities are not installed).
• Nonshared — When there is not a shared file system between client and cluster
machines.
Before using the support scripts, decide which submission mode describes your particular
network setup.
You can use an MPI build that differs from the one provided with Parallel Computing
Toolbox. For more information about using this option with the generic scheduler
interface, see “Use Different MPI Builds on UNIX Systems” on page 2-5.
For further information about mpiexec and smpd, see the MPICH2 home page at http://
www.mcs.anl.gov/research/projects/mpich2/. For user’s guides and installation
instructions on that page, select Documentation > User Docs.
To use mpiexec to distribute a job, the smpd service must be running on all nodes that
will be used for running MATLAB workers.
Note The smpd executable does not support running from a mapped drive. Use either a
local installation, or the full UNC pathname to the executable. Microsoft Windows Vista
does not support the smpd executable on network share installations, so with Vista the
installation must be local.
Without Delegation
1 Log in as a user with administrator privileges.
2 Start smpd by typing the following in a DOS command window:
matlabroot\bin\win64\smpd -install
This command installs the service and starts it. As long as the service remains
installed, it will start each time the node boots.
3 If this is a worker machine and you did not run the installer on it to install MATLAB
Distributed Computing Server software (for example, if you are running MATLAB
Distributed Computing Server software from a shared installation), execute the
following command in a DOS command window.
matlabroot\bin\matlab.bat -install_vcrt
This command installs the Microsoft run-time libraries needed for running jobs with
your scheduler.
4 Add MATLAB as an allowed program to your firewall. If you are using Windows
firewalls on your cluster nodes, you can do this by executing the following script in a
DOS command window:
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
If you are using other firewalls, you must configure these separately to add
MATLAB as an allowed program.
5 Log in as the user who will be submitting jobs for execution on this node.
6 Register this user to use mpiexec by typing:
matlabroot\bin\win64\mpiexec -register
7 Repeat steps 5–6 for all users who will run jobs on this machine.
8 Repeat all these steps on all Windows nodes in your cluster.
With Delegation
1 Log in as a user with administrator privileges.
2 Register the Service Principal Name by typing the following in a DOS command window:
matlabroot\bin\win64\smpd -register_spn
This command installs the service and starts it. As long as the service remains
installed, it will start each time the node boots.
3 If this is a worker machine and you did not run the installer on it to install MATLAB
Distributed Computing Server software (for example, if you are running MATLAB
Distributed Computing Server software from a shared installation), execute the
following command in a DOS command window.
matlabroot\bin\matlab.bat -install_vcrt
This command installs the Microsoft run-time libraries needed for running jobs with
your scheduler.
4 Add MATLAB as an allowed program to your firewall. If you are using Windows
firewalls on your cluster nodes, you can do this by executing the following script in a
DOS command window:
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
If you are using other firewalls, you must configure these separately to add
MATLAB as an allowed program.
5 Repeat these steps on all Windows nodes in your cluster.
The following steps create the parallel environment (PE), and then make the parallel
environment runnable on a particular queue. You should perform these steps on the head
node of your cluster.
Each submission mode has its own subfolder within the installation folder, which
contains a file named README that provides specific instructions on how to use the
scripts.
1 Navigate to the subfolder of the installation folder for your submission mode, which
contains the files matlabpe.template, startmatlabpe.sh, and stopmatlabpe.sh.
2 Modify the contents of matlabpe.template to use the desired number of slots and
the correct location of the startmatlabpe.sh and stopmatlabpe.sh files. (These
files can exist in a shared location accessible by all hosts, or they can be copied to the
same local folder on each host.) You can also change other values or add additional
values to matlabpe.template to suit your cluster. For more information, refer to the
sge_pe documentation provided with your scheduler.
3 Add the “matlab” parallel environment, using a shell command like:
qconf -Ap matlabpe.template
4 Make the “matlab” parallel environment runnable on all queues:
qconf -mq all.q
This will bring up a text editor for you to make changes: search for the line pe_list,
and add matlab.
5 Ensure you can submit a trivial job to the PE:
$ echo "hostname" | qsub -pe matlab 1
6 Use qstat to check that the job runs correctly, and check that the output file
contains the name of the host that ran the job. The default filename for the output
file is ~/STDIN.o###, where ### is the Grid Engine job number.
Note The example submit functions for the Grid Engine family rely on the presence of
the “matlab” parallel environment. If you change the name of the parallel
environment to something other than “matlab”, you must ensure that you also
change the submit functions.
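To double-check the configuration, you can use standard Grid Engine commands, for
example:
qconf -spl
qconf -sp matlab
The first command lists all parallel environments, and the second displays the settings
of the “matlab” parallel environment, including the slot count and the paths to the
startmatlabpe.sh and stopmatlabpe.sh files you configured.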
Configure Windows Firewalls on Client
If you are using Windows firewalls on your MATLAB client node, you can add MATLAB
as an allowed program by executing the following script in a DOS command window:
matlabroot\toolbox\distcomp\bin\addMatlabToWindowsFirewall.bat
If you are using other firewalls, you must configure these separately to add MATLAB as
an allowed program.
Note The remainder of this chapter illustrates only the case of using LSF in a nonshared
file system. For other schedulers or a shared file system, look for the appropriate scripts
and modify them as necessary, using the following instructions as a guide. If you have
any questions, contact the MathWorks install support team.
This section provides guidelines for setting up your cluster profile to use the generic
scheduler interface with an LSF scheduler in a network without a shared file system
between the client and the cluster machines. You can install templates and scripts for
LSF as described in “Support Scripts” on page 3-38.
The scripts necessary to set up your test can be found in the nonshared subfolder within
the installation folder. These scripts are written for an LSF scheduler, but might require
modification to work in your network. The following diagram illustrates the cluster
setup:
[Diagram: the MATLAB client's local drive holds the local job data location (e.g.,
C:\Temp\joblocation); job data is copied over sFTP to the cluster job data location on a
shared drive (e.g., /network/share/joblocation), which the MATLAB workers read and
write.]
In this type of configuration, job data is copied from the client host running a Windows
operating system to a host on the cluster (cluster login node) running a UNIX operating
system. From the cluster login node, the LSF bsub command submits the job to the
scheduler. When the job finishes, its output is copied back to the client host.
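Conceptually, the integration scripts automate a copy-then-submit sequence like the
following sketch, in which user, loginnode, and the Job1 folder names are placeholders
rather than values from this guide:
sftp -b - user@loginnode <<EOF
put -r C:/Temp/joblocation/Job1 /network/share/joblocation/
EOF
ssh user@loginnode "bsub < /network/share/joblocation/Job1/submitScript"
The support scripts issue the equivalent operations for you; the sketch only illustrates
the data flow.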
Requirements
• The client node and cluster login node must support ssh and sFTP.
• The cluster login node must be able to call the bsub command to submit a job to an
LSF scheduler. You can find more about this in the README file in the nonshared
subfolder within the installation folder.
If these requirements are met, use the following steps to implement the solution:
• In the AdditionalProperties table, select Add and specify a new property with
name RemoteJobStorageLocation, value /network/share/joblocation,
and type String.
6 Click Done to save your cluster profile changes.
In this step you validate your cluster profile, and thereby your installation. You can
specify the number of workers to use when validating your profile. If you do not specify
the number of workers in the Validation tab, then the validation will attempt to use as
many workers as the value specified by the NumWorkers property on the Properties
tab. You can specify a smaller number of workers to validate your configuration without
occupying the whole cluster.
1 If it is not already open, start the Cluster Profile Manager from the MATLAB
desktop by selecting Parallel > Manage Cluster Profiles on the Home tab in the
Environment area.
2 Select your cluster profile in the listing.
3 Click the Validation tab.
4 Use the checkboxes to choose all tests, or a subset of the validation stages, and
specify the number of workers to use when validating your profile.
5 Click Validate.
The Validation Results tab shows the output. The following figure shows the results of a
profile that passed all validation tests.
Note If your validation fails any stage, contact the MathWorks install support team.
If your validation passed, you now have a valid profile that you can use in other parallel
applications. You can make any modifications to your profile appropriate for your
applications, such as NumWorkersRange, AttachedFiles, AdditionalPaths, etc.
To save your profile for other users, select the profile and click Export, then save your
profile to a file in a convenient location. Later, when running the Cluster Profile
Manager, other users can import your profile by clicking Import. See “Configure a
Hadoop Cluster” on page 3-53.
To learn how to distribute a Generic cluster profile and integration scripts for others to
use, see “Distribute a Generic Cluster Profile and Integration Scripts” on page 3-50.
Distribute a Generic Cluster Profile and Integration Scripts
1 Install the appropriate support package for your third-party scheduler (see “Support
Scripts” on page 3-38).
2 Use the Generic Profile Wizard to create a Generic cluster profile with the default
MATLAB integration scripts.
• If you prefer to put the integration scripts in a read-only shared location, follow the
steps in “Shared IntegrationScriptsLocation Folder” on page 3-50. This option
simplifies subsequent steps and allows any changes you make to the integration
scripts to take effect immediately for all users.
• If you prefer to give other users their own copy of your integration scripts, follow the
steps in “Distribute Copies of the IntegrationScriptsLocation Folder” on page 3-51.
1 Open MATLAB and navigate to Home > Parallel > Manage Cluster Profiles
to open the Cluster Profile Manager.
2 Select your profile in the list and click Export.
3 Choose a name for the .settings file, which contains your profile, and click
Save.
4 Send a copy of the .settings file to other users.
• To import your profile, other users must open the Cluster Profile Manager, click Import, and select the .settings file you sent them.
Note If you make changes to your integration scripts, you will have to distribute copies
of your updated integration scripts for the changes to take effect for other users.
1 Open MATLAB and navigate to Home > Parallel > Manage Cluster Profiles
to open the Cluster Profile Manager.
2 Select your profile in the list and click Export.
3 Choose a name for the .settings file, which contains the exported profile, and
click Save.
4 Send other users a copy of the .settings file and a copy of your integration scripts folder.
See Also
“Submission Mode” on page 3-38 | “Support Scripts” on page 3-38
Configure a Hadoop Cluster
The software needs to append a value to this property so that task processes are able
to correctly run MATLAB. This property is passed as part of the job metadata given
to Hadoop during job submission.
7 For Hortonworks, add the following to the beginning of the static class path of MATLAB
and MATLAB Distributed Computing Server:
$HADOOP_PREFIX/lib/commons-codec-1.9.jar
For more information, see the documentation for “Static Path” (MATLAB).
8 For Cloudera, add the following to the beginning of the static class path of MATLAB
and MATLAB Distributed Computing Server:
$HADOOP_PREFIX/jars/commons-codec-1.9.jar
For more information, see the documentation for “Static Path” (MATLAB).
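The static class path is defined in the javaclasspath.txt file; as a minimal sketch, the
relevant lines for a Cloudera installation rooted at /opt/cloudera (a placeholder path,
because environment variables such as $HADOOP_PREFIX are not expanded inside
javaclasspath.txt) would be:
<before>
/opt/cloudera/jars/commons-codec-1.9.jar
The <before> token places the entries that follow it at the beginning of the static class
path.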
See Also
parallel.cluster.Hadoop
Related Examples
• “Install Products and Choose Cluster Configuration” on page 3-2
• “Use Tall Arrays on a Spark Enabled Hadoop Cluster” (Parallel Computing Toolbox)
• “Run mapreduce on a Hadoop Cluster” (Parallel Computing Toolbox)
• “Read and Analyze Hadoop Sequence File” (MATLAB)
4
Admin Center
You start Admin Center outside a MATLAB session by executing the following:
matlabroot/toolbox/distcomp/bin/admincenter (on UNIX or Macintosh operating systems)
matlabroot\toolbox\distcomp\bin\admincenter.bat (on Microsoft Windows operating systems)
The first time you start Admin Center, you see the following welcome dialog box.
A new session of Admin Center has no cluster hosts listed, so the usual first step is to
identify the hosts you want to include in your listing. To do this, click Add or Find.
Further information continues in the next section, “Set Up Resources” on page 4-3.
If you start Admin Center again on the same host, your previous session for that
machine is loaded; and unless the update rate is set to never, Admin Center performs an
update immediately for the listed hosts and processes. To clear this information and start
a new session, select the pull-down File > New Session.
Set Up Resources
In this section...
“Add Hosts” on page 4-3
“Start mdce Service” on page 4-4
“Start an MJS” on page 4-5
“Start Workers” on page 4-7
“Stop, Destroy, Resume, Restart Processes” on page 4-8
“Move a Worker” on page 4-8
“Update the Display” on page 4-9
Add Hosts
To specify the hosts you want listed in Admin Center, click Add or Find in the Welcome
dialog box, or if this is not a new session, click Add or Find in the Hosts module.
In the Add or Find Hosts dialog box, identify the hosts you want to add to the listing by
one of the following methods:
• Select Enter Hostnames and provide short host names, fully qualified domain
names, or individual IP addresses for the hosts.
• Select Enter IP Range and provide the range of IP addresses for your hosts.
If one of the hosts you have specified is running a MATLAB job scheduler (MJS), Admin
Center automatically finds and lists all the hosts running workers registered with that
MJS. Similarly, if you specify a host that is running a worker, Admin Center finds and
lists the host running that worker’s MJS, and then also all hosts running other workers
under that MJS.
Start mdce Service
If you want to add hosts to or remove hosts from your cluster, Admin Center allows you
to start and stop the mdce service on those hosts. To start the mdce service on a group of
hosts with the same platform, select all those hosts in the Hosts module, and click Start
mdce Service in the left column of the panel.
Alternative methods for starting mdce include selecting the pull-down Hosts > Start
mdce Service, or right-clicking a listed host and selecting Start mdce Service.
A dialog box leads you through the procedure of starting the mdce service on the selected
hosts. There are five steps to the procedure in which you provide or confirm information
for the service:
1 Specify remote platform — Windows or UNIX. You can start mdce on multiple hosts
at the same time, but they all must be the same platform. If you have a mixed
platform cluster, run the mdce startup separately for each type of platform.
2 Specify remote communication — Choose the protocol for communication with the
hosts.
3 Specify locations — Specify the location of the MATLAB installation and the
mdce_def file for the hosts.
4 Confirm before starting — Review information before proceeding.
5 Summary — Status about the startup attempt.
The dialog box looks like this for the first step:
At each step, you can click Help to read detailed information about that step.
Start an MJS
To start an MJS, click Start in the MJS module.
In the New MATLAB Job Scheduler dialog box, provide a name for the MJS, and select a
host to run it on.
Alternative methods for starting an MJS include selecting the pull-down MJS > Start,
or right-clicking a listed host and selecting Start MJS.
With an MJS running on your cluster, Admin Center might look like the following figure,
with the MJS listed in the MJS module, as well as being listed by name in the Hosts
module in the line for the host on which it is running.
Start Workers
To start MATLAB workers, click Start in the Workers module.
In the Start Workers dialog box, specify the number of workers to start on each host,
and select the hosts to run them. From the list, select the MJS for these workers. Click
OK to start the workers. Admin Center automatically provides names for the workers,
based on the hosts running them.
Alternative methods for starting workers include selecting the pull-down Workers >
Start, or right-clicking a listed host or MJS and selecting Start Workers.
With workers running on your cluster, Admin Center might look like the following figure,
which shows the workers listed in the Workers module. Also, the number of workers
running under the MJS is listed in the MJS module, and the number of workers for each
MJS is listed in the Hosts module.
To get more information on any host, MJS, or worker listed in Admin Center, right-click
its name in the display and select Properties. Alternatively, you can find the
Properties option under the Hosts, MJS, and Workers drop-down menus.
Move a Worker
To move a worker from one host to another, you must completely shut it down, then
start a new worker on the desired host.
Use a similar process to move an MJS from one host to another. Note, however, that all
workers registered with the MJS must be destroyed and started again, registering them
with the new instance of the MJS.
Test Connectivity
Admin Center lets you test communications between your MJS node, worker nodes, and
the node where Admin Center is running.
• Client — Verifies that the node running Admin Center is properly configured so that
further cluster testing can proceed.
• Client to Nodes — Verifies that the node running Admin Center can identify and
communicate with the other nodes in the cluster.
• Nodes to Nodes — Verifies that the other nodes in the cluster can identify each
other, and that each node allows its mdce service to communicate with the mdce
service on the other cluster nodes.
• Nodes to Client — Verifies that other cluster nodes can identify and communicate
with the node running Admin Center.
First click Test Connectivity to open the Connectivity Testing dialog box. By default,
the dialog box displays the results of the last test. To run new tests and update the
display, click Run.
During test execution, Admin Center displays this progress dialog box.
When the tests are complete, the Running Tests dialog box automatically closes, and
Admin Center displays the test results in the Connectivity Testing dialog box.
The possible test results are:
• Test passed.
• Test failed.
• Test was skipped, possibly because prerequisite tests did not pass.
Each result is indicated by its own symbol in the display. Tests that include failures or
other results might look like the following figure.
Double-click any of the symbols in the test results to drill down for more detail. Use the
Log tab to see the raw data from the tests.
The results of the tests that run on only the client are displayed in the lower-left corner
of the dialog box. To drill into client-only test results, click More Info.
Export and Import Sessions
To save your current session data, select the pull-down File > Export Session and
specify a location for the file. You can import that saved session data into a subsequent
session of Admin Center by selecting the pull-down File > Import Session. The
imported data includes cluster definition and test results.
Note When importing a session file, Admin Center automatically sets its update rate to
never (i.e., disabled), so that you can statically examine a cluster setup from the time the
session was saved for evaluation or diagnostic purposes.
5
Control Scripts — Alphabetical List
admincenter
Start Admin Center GUI
Syntax
admincenter
Description
admincenter opens the MATLAB Distributed Computing Server Admin Center. When
setting up or using a MATLAB job scheduler (MJS) cluster, Admin Center allows you to
establish and verify your cluster, and to diagnose possible problems.
See Also
mdce | nodestatus | remotemdce
createSharedSecret
Create shared secret for secure communication
Syntax
createSharedSecret
createSharedSecret -file <filename>
Description
createSharedSecret creates a shared secret file used for secure communication
between job managers and workers. The file is named secret in the current folder.
Before passing sensitive data from one service to another (e.g., between job manager and
workers), these services need to establish a trust relationship using a shared secret. This
script creates a file that serves as a shared secret between the services. Each service that
has access to the secret file is trusted.
Create the secret file only once per cluster on one machine, then copy it into the location
specified by SHARED_SECRET_FILE in the mdce_def file on each machine before starting
any job managers or workers. In a shared file system, all nodes can point to the same file.
Shared secrets can be reused in subsequent sessions.
Examples
Create a shared secret file in a central location for all the nodes of the cluster:
cd matlabInstallDir/toolbox/distcomp/bin
createSharedSecret -file /share/secret
Then make sure that the nodes' shared or copied mdce_def files set the parameter
SHARED_SECRET_FILE to /share/secret before starting the mdce service on each.
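If the nodes do not share a file system, you might push the secret to each node with the
remotecopy utility described later in this chapter; the destination path and host names
here are placeholders:
remotecopy -local /share/secret -to -remote /usr/local/mdce/secret -remotehost hostA,hostB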
See Also
mdce
mdce
Install, start, stop, or uninstall mdce service
Syntax
mdce install
mdce uninstall
mdce start
mdce stop
mdce console
mdce restart
mdce ... -mdcedef <mdce_defaults_file>
mdce ... -clean
mdce status
mdce -version
mdce -usemhlm
Description
The mdce service ensures that all other processes are running and that it is possible to
communicate with them. Once the mdce service is running, you can use the nodestatus
command to obtain information about the mdce service and all the processes it
maintains.
mdce install installs the mdce service in the Microsoft Windows Service Control
Manager. This causes the service to automatically start when the Windows operating
system boots up. The service must be installed before it is started.
mdce uninstall uninstalls the mdce service from the Windows Service Control
Manager. Note that if you wish to install the mdce service as a different user, you must
first uninstall the service and then reinstall as the new user.
mdce start starts the mdce service. This creates the required logging and
checkpointing directories, and then starts the service as specified in the mdce defaults
file.
mdce stop stops running the mdce service. This automatically stops all job managers
and workers on the computer, but leaves their checkpoint information intact so that they
will start again when the mdce service is started again.
mdce console starts the mdce service as a process in the current terminal or command
window rather than as a service running in the background.
mdce restart performs the equivalent of mdce stop followed by mdce start. This
command is available only on UNIX and Macintosh operating systems.
mdce ... -mdcedef <mdce_defaults_file> uses the specified alternative mdce
defaults file instead of the one in matlabroot/toolbox/distcomp/bin.
mdce ... -clean performs a complete cleanup of all service checkpoint and log files
before installing or starting the service, or after stopping or uninstalling it. This deletes
all information about any job managers or workers this service has ever maintained.
mdce status reports the status of the mdce service, indicating whether it is running
and with what PID. Use nodestatus to obtain more detailed information about the
mdce service. The mdce status command is available only on UNIX and Macintosh
operating systems.
mdce -version prints version information of the mdce process to standard output, then
exits.
mdce -usemhlm ensures that workers use MathWorks Hosted License Manager. Unless
you specify -usemhlm, mdce uses FlexLM-based licensing.
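As a sketch of a typical service lifecycle on a Windows node (the defaults file path is a
placeholder, and -mdcedef is needed only if your mdce_def file is in a custom location):
mdce install -mdcedef C:\config\mdce_def.bat
mdce start
mdce stop
mdce uninstall -clean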
See Also
nodestatus | startjobmanager | startworker | stopjobmanager | stopworker
nodestatus
Status of mdce processes running on node
Syntax
nodestatus
nodestatus -flags
Description
nodestatus displays the status of the mdce service and the processes which it
maintains. The mdce service must already be running on the specified computer.
nodestatus -flags accepts the following input flags. Multiple flags can be used
together on the same command.
-remotehost <hostname>
    Displays the status of the mdce service and the processes it maintains on the specified host. The default value is the local host.
-infolevel <level>
    Specifies how much status information to report, using a level of 1-3. 1 means only the basic information, 3 means all information available. The default value is 1.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-v
    Verbose mode displays the progress of the command execution.
-json
    View the output in JavaScript Object Notation (JSON) format. Output in JSON format is easy to parse.
Examples
Display basic information about the mdce processes on the local host.
nodestatus
Display detailed information about the status of the mdce processes on host node27.
nodestatus -remotehost node27 -infolevel 2
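To get the most detailed report in machine-readable form, you can combine flags, for
example:
nodestatus -remotehost node27 -infolevel 3 -json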
See Also
mdce | startjobmanager | startworker | stopjobmanager | stopworker
remotecopy
Copy file or folder to or from one or more remote hosts using transport protocol
Syntax
remotecopy <flags> <protocol options>
Description
remotecopy <flags> <protocol options> copies a file or folder to or from one or
more remote hosts by using a transport protocol (such as rsh or ssh). Copying from
multiple hosts creates a separate file per host, appending the hostname to the specified
filename.
The following flags and options are supported. You can combine multiple flags in the
same command, preceding each flag by a dash (-).
-local <file-or-foldername>
    Specify the name of the file or folder on the local host.
-remote <file-or-foldername>
    Specify the name of the file or folder on the remote host.
-from
    Specify to copy from the remote hosts to the local host. You must use either the -from flag, or the -to flag.
-to
    Specify to copy to the remote hosts from the local host. You must use either the -from flag, or the -to flag.
-remotehost host1[,host2[,...]
    Specify the names of the hosts where you want to copy to or from. Separate the host names by commas without any white spaces. This is a mandatory argument.
-protocol <type>
    Force the use of a particular protocol type. To get help on the options for a particular protocol type, use it together with -help. For example:
    remotecopy -protocol sftp -help
<protocol options>
    Specify particular options for the protocol type being used.
Note The file permissions on the copy might not be the same as the permissions on the
original file.
Examples
Copy the local file mdce_def.sh to two other machines. (Enter this command on a single
line.)
remotecopy -local /usr/local/matlab/toolbox/distcomp/bin/mdce_def.sh -to
-remote /usr/local/matlab/toolbox/distcomp/bin/mdce_def.sh -remotehost unixHost1,unixHost2
Retrieve folders of the same name from two hosts to the local machine. (Enter command
on a single line.)
remotecopy -local C:\temp\log -from -remote C:\temp\mdce\log
-remotehost winHost1,winHost2
See Also
remotemdce
remotemdce
Execute mdce command on one or more remote hosts by transport protocol
Syntax
remotemdce <mdce options> <flags> <protocol options>
Description
remotemdce <mdce options> <flags> <protocol options> allows you to execute
the mdce command on one or more remote hosts.
For a description of the mdce service, see the mdce reference page.
The following flags and options are supported. You can combine multiple flags in the
same command, preceding each flag by a dash (-).
<mdce options>
    Options and arguments of the mdce command, such as start, stop, etc. See the mdce reference page for a full list.
-matlabroot <installfoldername>
    The MATLAB installation folder on the remote hosts, required only if the remote installation folder differs from the one on the local machine.
-remotehost host1[,host2[,...]
    The names of the hosts where you want to run the mdce command. Separate the host names by commas without any white spaces. This is a mandatory argument.
-remoteplatform { unix | windows }
    The platform of the remote hosts. This option is required only if different from the local platform.
-quiet
    Prevent mdce from prompting the user for missing information. The command fails if all required information is not specified.
-protocol <type>
    Force the use of a particular protocol type. To get help on the options for a particular protocol type, use it together with -help. For example:
    remotemdce -protocol winsc -help
Note If you are using OpenSSHd on a Microsoft Windows operating system, you can
encounter a problem when using backslashes in path names for your command options.
In most cases, you can work around this problem by using forward slashes instead. For
example, to specify the file C:\temp\mdce_def.bat, you should identify it as C:/temp/
mdce_def.bat.
Examples
Start mdce on three remote machines of the same platform as the client:
remotemdce start -remotehost hostA,hostB,hostC
Start mdce in a clean state on two UNIX operating system machines from a Windows
operating system machine, using the ssh protocol. Enter the following command on a
single line:
remotemdce start -clean -matlabroot /usr/local/matlab
-remotehost unixHost1,unixHost2 -remoteplatform UNIX
-protocol ssh
See Also
mdce | remotecopy
pausejobmanager
Pause job manager process
Syntax
pausejobmanager
pausejobmanager -flags
Description
pausejobmanager pauses a job manager that is running under the mdce service.
pausejobmanager -flags accepts the following input flags. Multiple flags can be used
together on the same command.
-name <job_manager_name>
    Specifies the name of the job manager to pause. The default is the value of the DEFAULT_JOB_MANAGER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the host where you want to pause the job manager. The default value is the local host.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-v
    Verbose mode displays the progress of the command execution.
Examples
Pause the job manager MyJobManager on the local host.
pausejobmanager -name MyJobManager
See Also
mdce | nodestatus | resumejobmanager | startjobmanager | stopjobmanager
resumejobmanager
Resume job manager process
Syntax
resumejobmanager
resumejobmanager -flags
Description
resumejobmanager resumes a job manager that is running under the mdce service.
resumejobmanager -flags accepts the following input flags. Multiple flags can be
used together on the same command.
-name <job_manager_name>
    Specifies the name of the job manager to resume. The default is the value of the DEFAULT_JOB_MANAGER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the host where you want to resume the job manager. The default value is the local host.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-v
    Verbose mode displays the progress of the command execution.
Examples
Resume the job manager MyJobManager on the local host.
resumejobmanager -name MyJobManager
See Also
mdce | nodestatus | pausejobmanager | startjobmanager | stopjobmanager
startjobmanager
Start job manager process
Syntax
startjobmanager
startjobmanager -flags
Description
startjobmanager starts a job manager process and the associated job manager lookup
process under the mdce service, which maintains them after that. The job manager
handles the storage of jobs and the distribution of tasks contained in jobs to MATLAB
workers that are registered with it. The mdce service must already be running on the
specified computer.
startjobmanager -flags accepts the following input flags. Multiple flags can be used
together on the same command.
-name <job_manager_name>
    Specifies the name of the job manager. This identifies the job manager to MATLAB worker sessions and MATLAB clients. The default is the value of the DEFAULT_JOB_MANAGER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the host where you want to start the job manager and the job manager lookup process. If omitted, they start on the local host.
-clean
    Deletes all checkpoint information stored on disk from previous instances of this job manager before starting. This cleans the job manager so that it initializes with no existing jobs or tasks.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-useMSMPI
    Use Microsoft MPI (MS-MPI) for clusters on Windows platforms.
-v
    Verbose mode displays the progress of the command execution.
Examples
Start the job manager MyJobManager on the local host.
startjobmanager -name MyJobManager
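You can also combine flags; for example, to start the job manager on another host and
discard checkpoint information from previous instances (jmHost is a placeholder host
name):
startjobmanager -name MyJobManager -remotehost jmHost -clean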
See Also
mdce | nodestatus | pausejobmanager | resumejobmanager | startworker |
stopjobmanager | stopworker
startworker
Start MATLAB worker session
Syntax
startworker
startworker -flags
Description
startworker starts a MATLAB worker process under the mdce service, which
maintains it after that. The worker registers with the specified job manager, from which
it will get tasks for evaluation. The mdce service must already be running on the
specified computer.
startworker -flags accepts the following input flags. Multiple flags can be used
together on the same command, except where noted.
-name <worker_name>
    Specifies the name of the MATLAB worker. The default is the value of the DEFAULT_WORKER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the computer where you want to start the MATLAB worker. If omitted, the worker is started on the local computer.
-jobmanager <job_manager_name>
    Specifies the name of the job manager this MATLAB worker will receive tasks from. The default is the value of the DEFAULT_JOB_MANAGER_NAME parameter in the mdce_def file.
-jobmanagerhost <job_manager_hostname>
    Specifies the host on which the job manager is running. The worker contacts the job manager lookup process on that host to register with the job manager.
Examples
Start a worker on the local host, using the default worker name, registering with the job
manager MyJobManager on the host JMHost.
startworker -jobmanager MyJobManager -jobmanagerhost JMHost
Start a worker on the host WorkerHost, using the default worker name, and registering
with the job manager MyJobManager on the host JMHost. (The following command
should be entered on a single line.)
startworker -remotehost WorkerHost -jobmanager MyJobManager
-jobmanagerhost JMHost
Start two workers, named worker1 and worker2, on the host WorkerHost, registering
with the job manager MyJobManager that is running on the host JMHost. Note that to
start two workers on the same computer, you must give them different names. (Each of
the two commands below should be entered on a single line.)
startworker -name worker1 -remotehost WorkerHost
-jobmanager MyJobManager -jobmanagerhost JMHost
startworker -name worker2 -remotehost WorkerHost
-jobmanager MyJobManager -jobmanagerhost JMHost
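Because workers on the same computer need unique names, you could script the startup
of several workers from a DOS command window; this sketch starts four workers (use
%%i instead of %i inside a batch file):
for /L %i in (1,1,4) do startworker -name worker%i -remotehost WorkerHost -jobmanager MyJobManager -jobmanagerhost JMHost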
See Also
mdce | nodestatus | startjobmanager | stopjobmanager | stopworker
stopjobmanager
Stop job manager process
Syntax
stopjobmanager
stopjobmanager -flags
Description
stopjobmanager stops a job manager that is running under the mdce service.
stopjobmanager -flags accepts the following input flags. Multiple flags can be used
together on the same command.
-name <job_manager_name>
    Specifies the name of the job manager to stop. The default is the value of the DEFAULT_JOB_MANAGER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the host where you want to stop the job manager and the associated job manager lookup process. The default value is the local host.
-clean
    Deletes all checkpoint information stored on disk for the current instance of this job manager after stopping it. This cleans the job manager of all its job and task data.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-v
    Verbose mode displays the progress of the command execution.
Examples
Stop the job manager MyJobManager on the local host.
stopjobmanager -name MyJobManager
See Also
mdce | nodestatus | startjobmanager | startworker | stopworker
stopworker
Stop MATLAB worker session
Syntax
stopworker
stopworker -flags
Description
stopworker stops a MATLAB worker process that is running under the mdce service.
stopworker -flags accepts the following input flags. Multiple flags can be used
together on the same command.
-name <worker_name>
    Specifies the name of the MATLAB worker to stop. The default is the value of the DEFAULT_WORKER_NAME parameter in the mdce_def file.
-remotehost <hostname>
    Specifies the name of the host where you want to stop the MATLAB worker. The default value is the local host.
-clean
    Deletes all checkpoint information associated with this worker name after stopping it.
-baseport <port_number>
    Specifies the base port that the mdce service on the remote host is using. You need to specify this only if the value of BASE_PORT in the local mdce_def file does not match the base port being used by the mdce service on the remote host.
-v
    Verbose mode displays the progress of the command execution.
Examples
Stop the worker with the default name on the local host.
stopworker
Stop the worker with the default name, running on the computer WorkerHost.
stopworker -remotehost WorkerHost
Stop the workers named worker1 and worker2, running on the computer WorkerHost.
stopworker -name worker1 -remotehost WorkerHost
stopworker -name worker2 -remotehost WorkerHost
See Also
mdce | nodestatus | startjobmanager | startworker | stopjobmanager
Glossary
client
    The MATLAB session that defines and submits the job. This is the MATLAB session in which the programmer usually develops and prototypes applications. Also known as the MATLAB client.
client computer
    The computer running the MATLAB client; often your desktop.
communicating job
    Job composed of tasks that communicate with each other during evaluation. All tasks must run simultaneously. A special case of communicating job is a parallel pool, used for executing parfor-loops and spmd blocks.
head node
    Usually, the node of the cluster designated for running the job scheduler and license manager. It is often useful to run all the nonworker related processes on a single machine.
job scheduler checkpoint information
    Snapshot of information necessary for the MATLAB job scheduler to recover from a system crash or reboot.
job scheduler database
    The database that the MATLAB job scheduler uses to store the information about its jobs and tasks.
MATLAB job scheduler (MJS)
    The MathWorks process that queues jobs and assigns tasks to workers. Formerly known as a job manager.
mdce
    The service that has to run on all machines before they can run a MATLAB job scheduler or worker. This is the engine foundation process, making sure that the job scheduler and worker processes that it controls are always running.
mdce_def file
    The file that defines all the defaults for the mdce processes by allowing you to set preferences or definitions in the form of parameter values.
parallel pool
    A collection of workers that are reserved by the client and running a special communicating job for execution of parfor-loops, spmd statements, and distributed arrays.
random port
    A random unprivileged TCP port, i.e., a random TCP port above 1024.
register a worker
    The action that happens when both worker and MATLAB job scheduler are started and the worker contacts the job scheduler.