Although there are many smart devices and networked embedded object applications using World Wide Web technologies, it is still a big step towards a true Web of Things. It is, for example, difficult to build ubiquitous WoT applications that work in and across multiple environments. Approaches that aggregate WoT resources by centralizing all the resource information have problems: total dependency on external infrastructure, lack of private WoT management, inflexible communication patterns, and limited dynamic resource discovery and mapping. To solve these problems, we propose uBox, a local WoT platform that can run as a stand-alone server for your own WoT environment, with interfaces to connect to other local WoT platforms. Through this approach, which we call uBoXing, we can create a World Wide WoT platform with a distributed architecture. This paper describes the concept of a distributed resource management architecture and how we implement the concept in software. We also discuss the platform using an example application in the SmartTecO environment.
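The distributed architecture the abstract describes can be pictured as local registries that answer discovery queries themselves and forward unmatched queries to federated peers. A minimal sketch under stated assumptions (all class and method names here are hypothetical illustrations, not the uBox API):

```python
class LocalWoTPlatform:
    """Hypothetical sketch of a local WoT resource registry.

    Each platform manages its own resources privately and holds links to
    peer platforms, so discovery works without a central server.
    """

    def __init__(self, name):
        self.name = name
        self.resources = {}   # resource id -> metadata
        self.peers = []       # federated LocalWoTPlatform instances

    def register(self, rid, metadata):
        # Local, private resource management: no external infrastructure.
        self.resources[rid] = metadata

    def connect(self, peer):
        self.peers.append(peer)

    def discover(self, predicate, _visited=None):
        # Answer locally first, then forward the query to peers
        # (cycle-safe via the _visited set of platform names).
        _visited = _visited if _visited is not None else set()
        if self.name in _visited:
            return []
        _visited.add(self.name)
        hits = [(rid, m) for rid, m in self.resources.items() if predicate(m)]
        for peer in self.peers:
            hits.extend(peer.discover(predicate, _visited))
        return hits


home = LocalWoTPlatform("home")
office = LocalWoTPlatform("office")
home.connect(office)
office.connect(home)
home.register("lamp-1", {"type": "light", "room": "kitchen"})
office.register("sensor-7", {"type": "temperature"})

# A query started at one platform reaches resources on federated peers.
found = home.discover(lambda m: m.get("type") == "temperature")
```

The point of the sketch is the control flow: no platform holds global state, so losing an external peer degrades discovery rather than disabling the local environment.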
International Series in Operations Research & Management Science, 2004
Data Grids seek to harness geographically distributed resources for large-scale data-intensive problems such as those encountered in high-energy physics, bioinformatics, and other disciplines. These problems typically involve numerous, loosely coupled jobs that both access and generate large data sets. Effective scheduling in such environments is challenging because of the need to address a variety of metrics and constraints (e.g., resource utilization, response time, global and local allocation policies) while dealing with multiple, ...
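The tension between those metrics can be illustrated with a toy data-aware scheduler that assigns each job to the site minimizing a weighted cost of queue length (a response-time proxy) and remote data transfer. The site and job structures and the weights below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of data-aware scheduling in a Data Grid: jobs prefer
# sites that already replicate their input data sets, balanced against load.

def schedule(jobs, sites, w_queue=1.0, w_transfer=2.0):
    assignment = {}
    queue = {s["name"]: 0 for s in sites}   # jobs assigned so far per site
    for job in jobs:
        def cost(site):
            # Data sets already replicated at the site cost nothing to move.
            missing = set(job["inputs"]) - set(site["replicas"])
            transfer = sum(job["sizes"][d] for d in missing)
            return w_queue * queue[site["name"]] + w_transfer * transfer
        best = min(sites, key=cost)
        assignment[job["id"]] = best["name"]
        queue[best["name"]] += 1
    return assignment


sites = [
    {"name": "cern", "replicas": {"dsA"}},
    {"name": "fnal", "replicas": {"dsB"}},
]
jobs = [
    {"id": "j1", "inputs": ["dsA"], "sizes": {"dsA": 10}},
    {"id": "j2", "inputs": ["dsB"], "sizes": {"dsB": 10}},
]
print(schedule(jobs, sites))
```

Each job lands where its data already lives; raising `w_queue` relative to `w_transfer` instead spreads load at the price of extra data movement, which is exactly the policy trade-off the abstract points at.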
Several parallel algorithms for Fock matrix construction are described. The algorithms calculate only the unique integrals, distribute the Fock and density matrices over the processors of a massively parallel computer, use blocking techniques to construct the distributed data structures, and use clustering techniques on each processor to maximize data reuse. Algorithms based on both square and row-blocked distributions of the Fock and density matrices are described and evaluated. Variants of the algorithms are discussed ...
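One idea from this abstract, computing only unique quantities and exploiting symmetry, can be sketched in isolation: each index pair with i <= j is evaluated once but fills both symmetric entries. The integral function below is a stand-in, not real quantum chemistry:

```python
# Hypothetical sketch: build a symmetric matrix from unique (i, j) pairs
# only, so an n x n matrix needs n*(n+1)/2 evaluations instead of n*n.

def build_symmetric_fock(n, integral):
    F = [[0.0] * n for _ in range(n)]
    evaluations = 0
    for i in range(n):
        for j in range(i, n):      # unique pairs only
            v = integral(i, j)
            evaluations += 1
            F[i][j] = v
            F[j][i] = v            # symmetry gives the mirror entry free
    return F, evaluations


F, ev = build_symmetric_fock(4, lambda i, j: float(i + j))
# a 4x4 symmetric matrix built from 10 unique evaluations rather than 16
```

The parallel algorithms in the paper layer distribution and blocking on top of this symmetry saving; the sketch shows only the serial core of the counting argument.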
The coordinated use of geographically distributed computers, or metacomputing, can in principle provide more accessible and cost-effective supercomputing than conventional high-performance systems. However, we lack evidence that metacomputing systems can be made easily usable, or that there exist large numbers of applications able to exploit metacomputing resources. In this paper, we present work that addresses both of these concerns. ...
Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, we propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface for sequential languages can be ambiguous when the interface is invoked from a parallel language; we show how these ambiguities can be resolved by describing one possible HPF binding for MPI. We then present the design of a...
Many computations can be structured as sets of communicating data-parallel tasks. Individual tasks may be coded in HPF, pC++, etc.; periodically, tasks exchange distributed arrays via channel operations, virtual file operations, message passing, etc. The implementation of these operations is complicated by the fact that the processes engaging in the communication may execute on different numbers of processors and may have different distributions for communicated data structures. In addition, they may be connected by different sorts of networks. In this paper, we describe a communicating data-parallel tasks (CDT) library that we are developing for constructing applications of this sort. We outline the techniques used to implement this library, and we describe a range of data transfer strategies and several algorithms based on these strategies. We also present performance results for several algorithms. The CDT library is being used as a compiler target for an HPF compiler augmented w...
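The core difficulty this abstract names, moving a distributed array between task groups with different process counts and distributions, reduces to computing which index ranges overlap between sender and receiver blocks; each overlap becomes one point-to-point message. A minimal sketch of that schedule computation, assuming simple contiguous block distributions (not the actual CDT algorithms):

```python
# Hypothetical sketch of the transfer-schedule problem a CDT-style
# library must solve when a block-distributed array moves between two
# task groups of different sizes.

def block_ranges(n, p):
    # Contiguous block distribution of n elements over p processes.
    base, rem = divmod(n, p)
    ranges, start = [], 0
    for i in range(p):
        size = base + (1 if i < rem else 0)
        ranges.append((start, start + size))
        start += size
    return ranges


def transfer_schedule(n, p_send, p_recv):
    sends = block_ranges(n, p_send)
    recvs = block_ranges(n, p_recv)
    schedule = []
    for s, (s0, s1) in enumerate(sends):
        for r, (r0, r1) in enumerate(recvs):
            lo, hi = max(s0, r0), min(s1, r1)
            if lo < hi:  # non-empty overlap -> one point-to-point message
                schedule.append((s, r, lo, hi))
    return schedule


# An array of 8 elements moving from 2 producer to 3 consumer processes.
print(transfer_schedule(8, 2, 3))
```

Every element appears in exactly one message, so the schedule can be executed by independent sends and receives; the strategies the paper evaluates concern how such messages are batched and routed over different networks.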