Biophysical muscle models, often called Huxley-type models, are based on the underlying physiology of muscles, making them suitable for modeling non-uniform and unsteady contractions. Models of this kind can be computationally intensive, which makes large-scale simulations difficult. To enable more efficient use of the Huxley muscle model, we created a data-driven surrogate model that behaves similarly to the original Huxley muscle model but requires significantly less computational power. From several numerical simulations we acquired a large amount of data and trained deep neural networks so that the behavior of the neural network resembles the behavior of the Huxley model. Since muscle models are history-dependent, we used time series as input and trained a recurrent neural network to produce stress and instantaneous stiffness. The real challenge was to make the neural network predict these values precisely enough for the numerical simulation to work properly and produce accurate results. In our work, we show results obtained with the original Huxley model and the surrogate Huxley model for several muscle twitch contractions. Based on the similarities between the surrogate model and the original model, we conclude that the surrogate has the potential to replace the original model within numerical simulations.
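The recurrent surrogate described above can be sketched minimally in pure Python: an Elman-style cell whose hidden state carries the contraction history, mapping an (activation, stretch) time series to a (stress, stiffness) pair. The layer sizes, input/output choices, and random untrained weights here are illustrative assumptions, not the authors' actual network.

```python
import math
import random

random.seed(0)

def init(rows, cols, scale=0.1):
    """Small random weight matrix (untrained, for illustration only)."""
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

class TinyRNN:
    """Elman-style recurrent cell: the hidden state carries contraction history."""
    def __init__(self, n_in=2, n_hidden=8, n_out=2):
        self.Wx = init(n_hidden, n_in)      # input -> hidden
        self.Wh = init(n_hidden, n_hidden)  # hidden -> hidden (recurrence)
        self.Wo = init(n_out, n_hidden)     # hidden -> (stress, stiffness)

    def run(self, series):
        h = [0.0] * len(self.Wh)
        for x in series:  # x = (activation, stretch) at one time step
            h = [math.tanh(sum(wx * xi for wx, xi in zip(rx, x)) +
                           sum(wh * hj for wh, hj in zip(rh, h)))
                 for rx, rh in zip(self.Wx, self.Wh)]
        # linear readout at the final step: (stress, instantaneous stiffness)
        return [sum(w * hj for w, hj in zip(row, h)) for row in self.Wo]

rnn = TinyRNN()
series = [(min(1.0, t / 10.0), 1.0) for t in range(20)]  # ramped activation, fixed stretch
stress, stiffness = rnn.run(series)
```

In the actual surrogate, the weights would be trained on data from Huxley-model simulations; this sketch only shows the input/output shape of such a network.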
Since multi-scale models of muscles rely on the integration of physical and biochemical properties across multiple length and time scales, these models are highly CPU- and memory-intensive. Therefore, their practical implementation and usage in real-world applications is limited by their high requirements for computational power. Various solutions have been reported for distributed computation of complex systems that could also be applied to multi-scale muscle simulations. In this paper, we present a novel load balancing method for parallel multi-scale muscle simulations on distributed computing resources. The method uses data obtained from the simple Hill phenomenological model to predict the computational weights of the integration points within the multi-scale model. Using the obtained weights, it is possible to improve the domain decomposition prior to the multi-scale simulation run and consequently reduce computational time significantly. The method is applied to a two-scale muscle model in which a finite element (FE) macro model is coupled with Huxley's model of cross-bridge kinetics at the microscopic level. The massively parallel solution is based on decomposition of the micro model domain and a static scheduling policy. It was verified on a real-world example, showing high utilization of all involved CPUs and ensuring high scalability thanks to the novel scheduling approach. Performance analysis clearly showed that including the complexity prediction reduces the execution time of a parallel run by about 40% compared to the same model with a scheduler that assumes equal complexity of all micro models.
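A common way to turn per-point computational weights into a static schedule is longest-processing-time greedy partitioning: assign each integration point, heaviest first, to the currently lightest partition. The sketch below uses hypothetical weights and is a standard heuristic, not necessarily the paper's exact decomposition algorithm.

```python
import heapq

def decompose(weights, n_parts):
    """LPT greedy partitioning: heaviest point goes to the lightest partition."""
    heap = [(0.0, p, []) for p in range(n_parts)]  # (load, partition id, members)
    heapq.heapify(heap)
    for idx, w in sorted(enumerate(weights), key=lambda iw: -iw[1]):
        load, p, members = heapq.heappop(heap)
        members.append(idx)
        heapq.heappush(heap, (load + w, p, members))
    return sorted(heap)

# hypothetical per-point weights, e.g. predicted from a cheap Hill-model pre-run
weights = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.5, 6.0]
parts = decompose(weights, 2)
loads = [load for load, _, _ in parts]
```

With equal-complexity scheduling the same points might split 4-and-4 regardless of cost; the weighted split keeps the per-partition loads close, which is what shortens the synchronization-bound parallel run.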
2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE), 2021
The paper describes experiences from building and cloudification of the in-silico research platform SilicoFCM, an innovative in-silico clinical trial solution for the design and functional optimization of whole-heart performance and for monitoring the effectiveness of pharmacological treatment, with the aim of reducing animal studies and human clinical trials. The primary aims of cloudification were to prove portability, improve scalability, and reduce long-term infrastructure costs. The most computationally expensive part of the platform, the scientific workflow manager, was successfully ported to Amazon Web Services. We benchmarked the performance on three distinct research workflows, each with different resource requirements and execution times. The first benchmark measured the pure performance of running workflows sequentially. The aim of the second test was to stress-test the underlying infrastructure by submitting multiple workflows simultaneously. The benchmark results are promising, showing that the infrastructure launch overhead is almost negligible in this kind of computationally heavy use case.
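The two benchmark modes, sequential runs versus simultaneous submission, can be sketched as below. The sleep-based `run_workflow` stand-in and the worker count are assumptions for illustration only, not the SilicoFCM workflow manager's API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_workflow(wf_id, duration=0.05):
    """Stand-in for one workflow run; sleeping simulates its execution time."""
    time.sleep(duration)
    return wf_id

# benchmark 1: sequential execution, one workflow after another
t0 = time.perf_counter()
seq = [run_workflow(i) for i in range(4)]
t_seq = time.perf_counter() - t0

# benchmark 2: stress test by submitting all workflows simultaneously
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    par = list(pool.map(run_workflow, range(4)))
t_par = time.perf_counter() - t0
```

If launch overhead were large, the simultaneous run would lose much of its advantage over the sequential baseline; a near-4x speedup here corresponds to the negligible overhead reported above.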
2015 IEEE 15th International Conference on Bioinformatics and Bioengineering (BIBE), 2015
In this paper we present a novel approach to multi-scale muscle modeling based on the finite element method and the Huxley cross-bridge kinetics model. In order to determine the mechanical response of a muscle, we implement the basic mechanical principles of motion of deformable bodies using the finite element method. The constitutive properties of muscle are defined by the number of molecular interconnections between the myosin and actin filaments. To account for these effects, we used Huxley's micro model, based on the sliding filament theory, to calculate muscle active forces and instantaneous stiffnesses at FE integration points. In order to run these computationally expensive simulations, we also developed a special parallelization strategy that gives a speedup of two orders of magnitude. Results obtained using the presented multi-scale model are compared to those obtained with Hill's phenomenological model.
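The Huxley micro model referenced above is, in its classic 1957 form, a two-state cross-bridge model with attachment rate f(x) and detachment rate g(x) over the bond offset x; a minimal sketch computing the normalized isometric active force is given below. The rate constants are the commonly quoted 1957 fits, not necessarily the values used in this paper.

```python
# Huxley (1957) two-state rate functions over bond offset x
H = 1.0                         # maximum attachment reach (normalized)
F1, G1, G2 = 43.3, 10.0, 209.0  # rate constants (1/s), classic published fits

def f(x):
    """Attachment rate: linear in x inside the reach, zero elsewhere."""
    return F1 * x / H if 0.0 < x <= H else 0.0

def g(x):
    """Detachment rate: large constant for x <= 0, linear for x > 0."""
    return G2 if x <= 0.0 else G1 * x / H

def isometric_n(x):
    """Steady-state attached fraction at offset x (set dn/dt = 0)."""
    return f(x) / (f(x) + g(x)) if f(x) > 0.0 else 0.0

# normalized active force: integral of x * n(x) over the reach (trapezoid rule)
xs = [i * H / 1000 for i in range(1001)]
force = sum((xs[i + 1] - xs[i]) *
            (xs[i] * isometric_n(xs[i]) + xs[i + 1] * isometric_n(xs[i + 1])) / 2
            for i in range(1000))
```

In the multi-scale setting, a solver of this kind (time-dependent, with filament sliding) would run at every FE integration point per time step, which is exactly why the parallelization strategy above is needed.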
The International Journal of High Performance Computing Applications, 2019
Since multi-scale models of muscles rely on the integration of physical and biochemical properties across multiple length and time scales, they are highly processor- and memory-intensive. Consequently, their practical implementation and usage in real-world applications is limited by high computational requirements. There are various reported solutions to the problem of parallel computation of multi-scale models, but due to their inherent complexity, load balancing remains a challenging task. In this article, we present a novel load balancing method for multi-scale simulations based on the finite element (FE) method. The method employs a computationally simple single-scale model and machine learning to predict the computational weights of the integration points within a complex multi-scale model. Employing the obtained weights, it is possible to improve the domain decomposition prior to the complex multi-scale simulation run and consequently reduce computation time. The metho...
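In its simplest form, predicting an integration point's computational weight from a cheap single-scale quantity is a one-dimensional least-squares fit. The feature and cost values below are made up for the sketch, and the paper's actual machine learning model may be considerably more elaborate.

```python
def fit_linear(xs, ys):
    """Least-squares fit cost = a * feature + b (1-D normal equations)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# hypothetical training pairs: (cheap single-scale activity, measured micro-model cost)
activity = [0.1, 0.3, 0.5, 0.7, 0.9]
cost     = [1.2, 1.6, 2.0, 2.4, 2.8]  # seconds per time step, invented for the sketch
a, b = fit_linear(activity, cost)
predicted = [a * x + b for x in [0.2, 0.8]]  # weights for unseen points
```

The fitted weights would then feed a weighted domain decomposition before the expensive multi-scale run, replacing the equal-weight assumption.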
Genetic algorithms are powerful techniques for optimization of complex systems. These methods require a large number of evaluations of candidate solutions, which take huge CPU time. This paper introduces two web-service-based frameworks for parallel evaluation of the population in a genetic algorithm using the master-slave model. The developed frameworks can be easily incorporated into any genetic algorithm, giving a universal mechanism for distribution of individuals and collection of the evaluation results. This concept provides parallelization of genetic algorithms on various distributed architectures, including multiprocessors and computing clusters. Performed tests have shown that the proposed frameworks achieve significant speedup, especially when evaluating large-scale problems. In addition, a case study from the field of hydrology is presented.
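The master-slave evaluation scheme can be sketched in a few lines: the master distributes individuals to workers and collects fitness values. Here threads stand in for the remote web-service workers, and the sphere function is a hypothetical objective, not the hydrology case study.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    """Hypothetical objective: sphere function (sum of squares, minimized)."""
    return sum(x * x for x in individual)

def evaluate_population(population, n_workers=4):
    """Master-slave evaluation: distribute individuals, collect results in order.
    Threads stand in for remote web-service workers in this sketch."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fitness, population))

population = [[1.0, 2.0], [0.0, 0.0], [3.0, -1.0]]
scores = evaluate_population(population)
```

Because only the evaluation step is parallelized, this drops into any generational GA loop unchanged, which is the universality the abstract describes.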
The International Journal of High Performance Computing Applications, 2020
In this paper, we present a generic, scalable, and adaptive load balancing approach to parallel Lagrangian particle tracking in Wiener-type processes such as Brownian motion. The approach is particularly suitable for problems involving particles with highly variable computation time, such as deposition on boundaries that may include decay, when the particle lifetime obeys an exponential distribution. At first glance, Lagrangian tracking is highly suitable for a distributed programming model due to the independence of motion of separate particles. However, the commonly employed Decomposition Per Particle (DPP) method, where each process is in charge of a certain number of particles, actually displays poor parallel efficiency due to the high particle lifetime variability when dealing with a wide set of deposition problems that optionally include decay. The proposed method removes the defects of DPP and brings a novel approach to discrete particle tracking. The algorithm introduces a master/slave model dubbe...
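The source of the load imbalance, highly variable per-particle tracking cost when lifetimes are exponential, can be illustrated with a minimal random-walk simulation. All parameter values below are assumptions for the sketch.

```python
import math
import random

random.seed(1)

def track_particle(dt=0.01, diff=1.0, decay_rate=0.5, boundary=1.0):
    """Random walk (Wiener process) until deposition at |x| >= boundary or decay.
    Returns the number of steps, a proxy for per-particle computation time."""
    sigma = math.sqrt(2.0 * diff * dt)       # step size from the diffusion coefficient
    lifetime = random.expovariate(decay_rate)  # exponentially distributed decay time
    x, t, steps = 0.0, 0.0, 0
    while abs(x) < boundary and t < lifetime:
        x += random.gauss(0.0, sigma)
        t += dt
        steps += 1
    return steps

steps = [track_particle() for _ in range(200)]
imbalance = max(steps) / (sum(steps) / len(steps))  # costliest particle vs. the mean
```

Under DPP, a process stuck with a few long-lived particles finishes far after the rest; the large max-to-mean ratio here is exactly what an adaptive master/slave scheduler has to smooth out.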
Papers by Milos Ivanovic