SCE toolboxes for the development of high-level parallel applications
Pages 518-525
Abstract
Users of Scientific Computing Environments (SCEs) benefit from faster high-level software development at the cost of longer run times due to the interpreted environment. For time-consuming SCE applications, dividing the workload among several computers can be a cost-effective acceleration technique. With our PVM and MPI toolboxes, Matlab® and Octave users in a computer cluster can parallelize their interpreted applications using the native cluster programming paradigm: message-passing. Our toolboxes are complete interfaces to the corresponding libraries, support all compatible datatypes in the base SCE, and have been designed with performance and maintainability in mind. Although this paper focuses on our new toolbox, MPITB for Octave, we describe the general design of these toolboxes and of the development aids offered to end users, survey related work, summarize speedup results obtained by some of our users, and present speedup results for the NPB-EP benchmark under MPITB in both SCEs.
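Since the toolboxes are described as complete interfaces to the underlying message-passing libraries, a parallel Octave script would closely mirror the familiar C bindings. The following is a hypothetical sketch (the Octave-side signatures, in-place receive buffer, and tag values are assumptions for illustration, not taken from the paper) of a message-passing partial-sum computation in the style of such a toolbox:

```octave
% Hypothetical MPITB-style sketch: each rank sums a strided slice of
% 1..N and rank 0 collects the partial results. Function signatures
% mirror the C MPI API and are assumed, not quoted from the paper.
MPI_Init();
[info, rank]  = MPI_Comm_rank(MPI_COMM_WORLD);
[info, nproc] = MPI_Comm_size(MPI_COMM_WORLD);

N = 1e6;
partial = sum(rank+1 : nproc : N);   % this rank's strided share

if rank > 0
  MPI_Send(partial, 0, 7, MPI_COMM_WORLD);        % tag 7, to rank 0
else
  total = partial;
  for src = 1:nproc-1
    part = 0;                                      % receive buffer
    [info, stat] = MPI_Recv(part, src, 7, MPI_COMM_WORLD);
    total = total + part;
  end
  printf("sum = %g\n", total);
end
MPI_Finalize();
```

Such a script would be launched across the cluster by the MPI runtime (e.g. LAM/MPI's `mpirun`, one Octave process per node), which is the "native cluster programming paradigm" the abstract refers to.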
References
[1]
Eaton, J. W.: GNU Octave Manual. Network Theory Ltd. (2002) ISBN: 0-9541617-2-6.
[2]
Moler, C. B.: Numerical Computing with MATLAB. SIAM (2004) ISBN: 0-89871-560-1.
[3]
Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R., Sunderam, V.: PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. The MIT Press (1994). ISBN: 0-262-57108-0.
[4]
MPI Forum: MPI: A Message-Passing Interface Standard. Int. J. Supercomput. Appl. High Perform. Comput. Vol.8, no.3/4 (1994) 159-416. See also the MPI Forum Documents: MPI 2.0 standard (2003), University of Tennessee, Knoxville. Web: http://www.mpi-forum.org/.
[5]
Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable Parallel Programming with the Message Passing Interface. 2nd Edition. The MIT Press (1999) ISBN: 0-262-57132-3.
[6]
Burns, G., Daoud, R., Vaigl, J.: LAM: An Open Cluster Environment for MPI. Proceedings of the Supercomputing Symposium (1994) 379-386.
[7]
Squyres, J., Lumsdaine, A.: A Component Architecture for LAM/MPI. Proceedings of the 10th European PVM/MPI Users' Group Meeting, Lecture Notes in Computer Science, Vol.2840 (2003) 379-387.
[8]
Fernández, J., Cañas, A., Díaz, A.F., González, J., Ortega, J., Prieto, A.: Performance of Message-Passing MATLAB Toolboxes. Proceedings of VECPAR 2002, Lecture Notes in Computer Science, Vol.2565 (2003) 228-241. Toolboxes URL http://atc.ugr.es/~javier.
[9]
Bailey, D. et al: The NAS Parallel Benchmarks. RNR Technical Report RNR-94-007 (1994).
[10]
Bailey, D. et al: The NAS Parallel Benchmarks 2.0. Report NAS-95-020 (1995). Reports and software available from http://www.nas.nasa.gov/Software/NPB/.
[11]
Buss, B.J.: Comparison of serial and parallel implementations of Benchmark codes in MATLAB, Octave and FORTRAN. M.Sc. Thesis, Ohio State University (2005). Thesis and software available from http://www.ece.osu.edu/~bussb/research/.
[12]
Dormido C., S., de Madrid, A.P., Dormido B., S.: Parallel Dynamic Programming on Clusters of Workstations. IEEE Transactions on Parallel and Distributed Systems, vol.16, no.9 (2005) 785-798.
[13]
Goasguen, S., Venugopal, R., Lundstrom, M.S.: Modeling Transport in Nanoscale Silicon and Molecular Devices on Parallel Machines. Proceedings of the 3rd IEEE Conference on Nanotechnology (2003), Vol.1, 398-401. DOI 10.1109/NANO.2003.1231802.
[14]
Goasguen, S., Butt, A.R., Colby, K.D., Lundstrom, M.S.: Parallelization of the nano-scale device simulator nanoMOS-2.0 using a 100 nodes linux cluster. Proceedings of the 2nd IEEE Conference on Nanotechnology (2002) 409-412. DOI 10.1109/NANO.2002.1032277.
[15]
Zhao, M., Chadha, V., Figueiredo, R.J.: Supporting Application-Tailored Grid File System Sessions with WSRF-Based Services. Proceedings of the 14th IEEE Int. Symp. on High Performance Distributed Computing, HPDC-14 (2005) 24-33. DOI 10.1109/HPDC.2005.1520930.
[16]
Creel, M.: User-Friendly Parallel Computations with Econometric Examples. Computational Economics, Springer (2005) 26 (2): 107-128. DOI 10.1007/s10614-005-6868-2.
[17]
Creel, M.: Parallel-Knoppix Linux, http://pareto.uab.es/mcreel/ParallelKnoppix/.
[18]
Creel, M.: Econometrics Octave package at OctaveForge, http://octave.sf.net/. See package index at http://octave.sourceforge.net/index/extra.html#Econometrics.
[19]
Law, M.: MATLAB Laboratory for MPITB (2003), coursework resource for MATH-2160, HKBU, available from http://www.math.hkbu.edu.hk/math2160/materials/MPITB.pdf. See also Guest Lecture on Cluster Computing (2005), coursework resource for COMP-3320, available from http://www.comp.hkbu.edu.hk/~jng/comp3320/ClusterLectures-2005.ppt.
[20]
Wang, C.L.: Grid Computing research in Hong Kong. 1st Workshop on Grid Tech. & Apps. (2004). http://www.cs.hku.hk/~clwang/talk/WoGTA04-Taiwan-CLWang-FNL.pdf.
Published In
May 2006, 1104 pages. ISBN: 3540343814.
Sponsors: Intel Corporation, Springer, SGI, Microsoft Research, IBM.
Publisher: Springer-Verlag, Berlin, Heidelberg.
Published: 28 May 2006.