User Manual: MATLAB Parallel Computing Toolbox 4

[. . . ] Parallel Computing Toolbox™ 4 User's Guide

How to Contact The MathWorks

    Web:                  www.mathworks.com
    Technical Support:    www.mathworks.com/contact_TS.html
    Newsgroup:            comp.soft-sys.matlab
    suggest@mathworks.com     Product enhancement suggestions
    bugs@mathworks.com        Bug reports
    doc@mathworks.com         Documentation error reports
    service@mathworks.com     Order status, license renewals, passcodes
    info@mathworks.com        Sales, pricing, and general information
    508-647-7000 (Phone)
    508-647-7001 (Fax)
    The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.

Parallel Computing Toolbox™ User's Guide © COPYRIGHT 2004-2010 by The MathWorks, Inc. The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement.

[. . . ] This interface lets you execute jobs on your cluster with any scheduler you might have. The principles of using the generic scheduler interface for parallel jobs are the same as those for distributed jobs. The concepts and the details of submit and decode functions for distributed jobs are discussed fully in "Using the Generic Scheduler Interface" on page 8-33 in the chapter on Programming Distributed Jobs.

Coding in the Client

Configuring the Scheduler Object

Coding a parallel job for a generic scheduler involves the same procedure as coding a distributed job.

1. Create an object representing your scheduler with findResource.

2. Set the appropriate properties on the scheduler object if they are not defined in the configuration. Because the scheduler itself is often common to many users and applications, it is usually best to use a configuration for programming these properties. Among the properties required for a parallel job is ParallelSubmitFcn.
The toolbox comes with several submit functions for various schedulers and platforms; see the following section, "Supplied Submit and Decode Functions" on page 9-9.

3. Use createParallelJob to create a parallel job object for your scheduler.

4. Create a task, run the job, and retrieve the results as usual.

Supplied Submit and Decode Functions

There are several submit and decode functions provided with the toolbox for your use with the generic scheduler interface. These files are in the directory

    matlabroot/toolbox/distcomp/examples/integration

In this directory are subdirectories for each of several types of scheduler, containing wrappers, submit functions, and decode functions for distributed and parallel jobs. For example, the directory

    matlabroot/toolbox/distcomp/examples/integration/pbs

contains the following files for use with a PBS scheduler:

    Filename                  Description
    pbsSubmitFcn.m            Submit function for a distributed job
    pbsDecodeFunc.m           Decode function for a distributed job
    pbsParallelSubmitFcn.m    Submit function for a parallel job
    pbsParallelDecode.m       Decode function for a parallel job
    pbsWrapper.sh             Script that is submitted to PBS to start workers
                              that evaluate the tasks of a distributed job
    pbsParallelWrapper.sh     Script that is submitted to PBS to start labs
                              that evaluate the tasks of a parallel job

Depending on your network and cluster configuration, you might need to modify these files before they will work in your situation. At the time of publication, there are directories for PBS schedulers (pbs), Platform LSF schedulers (lsf), generic UNIX-based scripts (ssh), Sun Grid Engine (sge), and mpiexec on Microsoft Windows operating systems (winmpiexec). In addition, the pbs and lsf directories have subdirectories called nonshared, which contain scripts for use when there is a nonshared file system between the client and cluster computers.
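The client-side steps above can be sketched end to end as follows. This is a sketch, not a definitive recipe: the cluster paths and worker counts are illustrative assumptions, and in practice these properties would usually come from a saved configuration rather than being set by hand. The pbsParallelSubmitFcn shown is the PBS example function supplied with the toolbox.

```matlab
% Sketch: coding a parallel job in the client for a generic scheduler.
% Paths and worker counts below are placeholder assumptions.
sched = findResource('scheduler', 'type', 'generic');       % step 1
set(sched, 'ClusterMatlabRoot', '/usr/local/matlab');       % assumed path
set(sched, 'DataLocation', '/shared/jobdata');              % assumed path
set(sched, 'HasSharedFilesystem', true);
set(sched, 'ParallelSubmitFcn', @pbsParallelSubmitFcn);     % step 2

pjob = createParallelJob(sched);                            % step 3
set(pjob, 'MinimumNumberOfWorkers', 4, ...
          'MaximumNumberOfWorkers', 4);
createTask(pjob, @labindex, 1, {});    % one task; it runs on every lab

submit(pjob);                                               % step 4
waitForState(pjob, 'finished');
results = getAllOutputArguments(pjob);
```

Because the single task is copied to every lab, results here would contain one cell per lab, ordered by labindex as described in "Number of Tasks in a Parallel Job."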
Each of these subdirectories contains a file called README, which provides instructions on how to use its scripts. As more files or solutions might become available at any time, visit the Support page for this product on the MathWorks Web site at http://www.mathworks.com/support/product/product.html?product=DM. This page also provides contact information in case you have any questions.

Further Notes on Parallel Jobs

In this section:
"Number of Tasks in a Parallel Job" on page 9-11
"Avoiding Deadlock and Other Dependency Errors" on page 9-11

Number of Tasks in a Parallel Job

Although you create only one task for a parallel job, the system copies this task for each worker that runs the job. For example, if a parallel job runs on four workers (labs), the Tasks property of the job contains four task objects. The first task in the job's Tasks property corresponds to the task run by the lab whose labindex is 1, and so on, so that the ID property of each task object and the labindex of the lab that ran it have the same value. Therefore, the sequence of results returned by the getAllOutputArguments function corresponds to the value of labindex and to the order of tasks in the job's Tasks property.

Avoiding Deadlock and Other Dependency Errors

Because code running in one lab of a parallel job can block execution until some corresponding code executes on another lab, the potential for deadlock exists in parallel jobs. This is most likely to occur when transferring data between labs or when making code dependent on labindex in an if statement. Suppose you have a codistributed array D, and you want to use the gather function to assemble the entire array in the workspace of a single lab:

    if labindex == 1
        assembled = gather(D);
    end

This fails because the gather function requires communication between all the labs across which the array is distributed.
When the if statement limits execution to a single lab, the other labs required for execution of the function are not executing the statement. As an alternative, you can use gather itself to collect the data into the workspace of a single lab:

    assembled = gather(D, 1);

In another example, suppose you want to transfer data from every lab to the next lab on the right (defined as the lab with the next higher labindex). [. . . ]

The general term "scheduler" can also refer to a job manager.

job manager checkpoint information
Snapshot of information necessary for the job manager to recover from a system crash or reboot.

job manager database
The database that the job manager uses to store the information about its jobs and tasks.

job manager lookup process
The process that allows clients, workers, and job managers to find each other. [. . . ]
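Returning to the neighbor-to-neighbor transfer mentioned under "Avoiding Deadlock and Other Dependency Errors": one deadlock-free way to pass data from every lab to the lab on its right is the labSendReceive function, which combines the send and the matching receive into a single operation so no lab blocks waiting for a partner that never sends. The sketch below is an assumption about how such a task function might look; the payload is a placeholder.

```matlab
% Sketch: inside the task function of a parallel job, send mydata to the
% right neighbor (next higher labindex, wrapping around) while receiving
% from the left neighbor, in one combined operation.
labTo   = mod(labindex, numlabs) + 1;      % right neighbor
labFrom = mod(labindex - 2, numlabs) + 1;  % left neighbor
mydata  = labindex;                        % placeholder payload
received = labSendReceive(labTo, labFrom, mydata);
```

Writing the exchange this way avoids the pattern where every lab calls labSend first and all of them block before any lab reaches its labReceive.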
