- Sponsor: SIGHPC
This year is special for the WORKS series, as it marks the tenth edition of this scientific event dedicated to scientific workflows. The call for papers attracted thirteen submissions from Europe, the USA, India, and Mexico. After peer review by the program committee, nine papers were accepted, covering a variety of topics: workflow task scheduling, resource allocation for efficient execution, data flow management, languages, workflow adaptation, virtualization, and provenance.
Looking back over the past years, the problems of workflow scheduling and resource allocation in distributed infrastructures have always been well represented in the WORKS workshop series, with an increasing interest in cloud resources lately. Together with scalability issues, they have constituted a fundamental research axis over a long period of time. Accompanying infrastructure virtualization, effort has also recently been invested in abstract workflow representations and dynamic execution processes that allow reconfiguration depending on evolving execution conditions or intermediate results. Finally, this year's program shows that data flow management is a topic of renewed interest in the broader context of scientific Big Data analysis.
Proceeding Downloads
A workflow runtime environment for manycore parallel architectures
We introduce a new Manycore Workflow Runtime Environment (MWRE) to efficiently enact traditional scientific workflows on modern manycore computing architectures. In contrast to existing engines that enact workflows acting as external services, MWRE is ...
Orchestrating workflows over heterogeneous networking infrastructures: NEWT: a network edge workflow tool
This paper investigates the use of the workflow methodology for group communication applications in highly distributed and dynamic wireless networks. The need for distributed ad hoc wireless communication applications is commonplace in defense and ...
Towards efficient scheduling of data intensive high energy physics workflows
- Mahantesh Halappanavar,
- Malachi Schram,
- Luis de la Torre,
- Kevin Barker,
- Nathan R. Tallent,
- Darren J. Kerbyson
Data intensive high energy physics workflows executed on geographically distributed resources pose a tremendous challenge for efficient use of computing resources. In this early work paper, we present a hierarchical framework for efficient allocation of ...
Contemporary challenges for data-intensive scientific workflow management systems
Data-intensive sciences now represent the forefront of current scientific computing. To handle this 'Big Data' focus, scientists demand enabling technologies that can adapt to the increasingly distributed, collaborative, and exploratory scientific ...
Co-sites: the autonomous distributed dataflows in collaborative scientific discovery
Online "big data" processing applications have seen increasing importance in the high performance computing domain, including online analytics of large volumes of data output by various scientific applications. This work contributes to answering the ...
Interlanguage parallel scripting for distributed-memory scientific computing
Scripting languages such as Python and R have been widely adopted as tools for the development of scientific software because of the expressiveness of the languages and their available libraries. However, deploying scripted applications on large-scale ...
Dynamically reconfigurable workflows for time-critical applications
- Kieran Evans,
- Andrew Jones,
- Alun Preece,
- Francisco Quevedo,
- David Rogers,
- Irena Spasić,
- Ian Taylor,
- Vlado Stankovski,
- Salman Taherizadeh,
- Jernej Trnkoczy,
- George Suciu,
- Victor Suciu,
- Paul Martin,
- Junchao Wang,
- Zhiming Zhao
Cloud-based applications that depend on time-critical data processing or network throughput require the capability of reconfiguring their infrastructure on demand as and when conditions change. Although the ability to apply quality of service ...
Enabling workflow repeatability with virtualization support
The value of workflows to the scientific community spans over time and space. Not only results but also performance and resource consumption of a workflow need to be replayed over time and in varying environments. Achieving such repeatability in ...
Workflow provenance: an analysis of long term storage costs
The storage and retrieval of provenance is a critical piece of functionality for many data processing systems. There are numerous cases where, in order to satisfy regulatory requirements (such as drug development and medical data processing), accurately ...
Index Terms
- Proceedings of the 10th Workshop on Workflows in Support of Large-Scale Science