9th International Conference "Distributed Computing and Grid Technologies in Science and Education", 2021
There is no single diagnostic marker for neurodegenerative diseases. The biomedical data obtained during these studies are heterogeneous in nature, which greatly complicates their collection, storage and comprehensive analysis. Because of this specificity of the data, special methods of statistical analysis must be applied. The results obtained indicate that a correct diagnosis requires a comprehensive assessment of all tests.
Computational Science and Its Applications – ICCSA 2020, 2020
Diffraction and radiation forces result from the interaction between the ship hull and the moving fluid. These forces are typically simulated using added masses, a method that uses mass to compensate for not computing these forces directly. In this paper we propose a simple mathematical model to compute the diffraction force. The model is based on a Lagrangian description of the flow and uses the law of reflection to include the diffraction term in the solution. The solution satisfies the continuity equation and the equation of motion, but is restricted to the boundary of the ship hull. The solution was implemented in the velocity potential solver of Virtual testbed, a programme for workstations that simulates ship motions in extreme conditions. Performance benchmarks of the solver showed that it is particularly efficient on graphical accelerators.
Computational Science and Its Applications – ICCSA 2019, 2019
Virtual testbed is a computer programme that simulates ocean waves, ship motions and compartment flooding. One feature of this programme is that it visualises physical phenomena frame by frame as the simulation progresses. The aim of the studies reported here was to assess how much performance can be gained by using graphical accelerators instead of ordinary processors when the same computations are repeated in a loop. We rewrote the programme's hot spots in OpenCL to be able to execute them on a graphical accelerator and benchmarked their performance with a number of real-world ship models. The analysis of the results showed that copying data in and out of the accelerator's main memory has a major impact on performance when done inside the loop, and the best performance is achieved when copying in and out is done outside the loop (so that data movement inside the loop involves the accelerator's main memory only). This result is in line with how distributed computations are performed on a set of cluster nodes, and suggests using similar approaches for a single heterogeneous node with a graphical accelerator.
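The effect of hoisting host-device transfers out of the loop can be illustrated with a small OpenCL host-code sketch. This is only a schematic fragment, not the Virtual testbed code: the kernel, buffer layout and function names are assumptions made for illustration.

```cpp
// A minimal sketch: the device buffer is created and filled once, before the
// simulation loop, and only kernel launches touch it inside the loop.
#include <CL/cl.h>
#include <vector>

void run_simulation(cl_context ctx, cl_command_queue queue, cl_kernel step_kernel,
                    std::vector<float>& host_data, int num_steps) {
    cl_int err = CL_SUCCESS;
    size_t nbytes = host_data.size() * sizeof(float);
    // Copy the input to the accelerator once, outside the loop.
    cl_mem dev_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    nbytes, host_data.data(), &err);
    clSetKernelArg(step_kernel, 0, sizeof(cl_mem), &dev_buf);
    size_t global_size = host_data.size();
    for (int step = 0; step < num_steps; ++step) {
        // Inside the loop only device memory is accessed: no host<->device copies.
        clEnqueueNDRangeKernel(queue, step_kernel, 1, nullptr, &global_size,
                               nullptr, 0, nullptr, nullptr);
    }
    // Copy the result back once, after the loop.
    clEnqueueReadBuffer(queue, dev_buf, CL_TRUE, 0, nbytes, host_data.data(),
                        0, nullptr, nullptr);
    clReleaseMemObject(dev_buf);
}
```

Moving the two transfers inside the loop would add a round trip over the PCIe bus per iteration, which is the overhead the benchmarks in the paper attribute the performance difference to.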
Computational Science and Its Applications – ICCSA 2017, 2017
In this article, we propose an approach that accelerates the implementation of the Time-of-Flight (ToF) event reconstruction algorithm, which is part of the Multi Purpose Detector (MPD) Root application.
Virtualized computing infrastructures are often used to create clusters of resources tailored to particular tasks, taking the specific requirements of these tasks into account. An important objective is to evaluate such requirements and request an optimal amount of resources, which becomes challenging for parallel tasks with intercommunication. In previous works we investigated how light-weight container-based virtualization can be used for creating virtual clusters running MPI applications. Such a cluster is configured according to the requirements of a particular application and allocates only the necessary amount of resources from the physical infrastructure, leaving space for co-allocated clusters to run without conflicts or resource races. In this paper we investigate similar concepts for MapReduce applications based on the Hadoop framework, using the Cloudply virtualization tool to create and manage light-weight virtual Hadoop clusters on Amazon cloud resources. We investigate the performance of several Ha...
In particle accelerator physics the problem is that we cannot see what is going on inside the working machine. There are a lot of packages for modelling the behaviour of the particles in a numerical or analytical way, but for most physicists it is better to see the picture in motion in order to say exactly what is happening and how to influence it. The goal of this work is to provide scientists with a problem-solving environment which can not only do numerical calculations, but also show the dynamics of changes as a 3D motion picture. To do this we use the power of graphical processors from both sides: for general-purpose calculations and for their direct purpose, rendering 3D motion. Besides, this environment should analyse the behaviour of the system to provide the user with all the necessary information about the problem and how to deal with it.
To represent the space charge forces of a beam, software based on analytical models of space charge distributions was developed. A special predictor-corrector algorithm for the beam map evaluation scheme, including the space charge forces, was used. This method allows us to evaluate the map along the reference trajectory and to analyze beam envelope dynamics. In three-dimensional models the amount of computing resources required is significant; for this reason graphical processors are used. This software is a part of the Virtual Accelerator concept, which is considered as a set of services and tools for modeling beam dynamics in accelerators on distributed computing resources.
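For illustration only, a standard one-step predictor-corrector scheme for an equation $y' = f(t, y)$ with step $h$ is shown below; the paper's actual map evaluation scheme with space charge terms is not specified in the abstract and may differ:

\[
\begin{aligned}
\tilde{y}_{n+1} &= y_n + h\,f(t_n, y_n), &&\text{(predictor: explicit Euler step)}\\
y_{n+1} &= y_n + \tfrac{h}{2}\bigl[f(t_n, y_n) + f(t_{n+1}, \tilde{y}_{n+1})\bigr], &&\text{(corrector: trapezoidal rule).}
\end{aligned}
\]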
The architecture of a digital computing system determines the technical foundation of a unified mathematical language for the exact arithmetic-logical description of phenomena and laws of continuum mechanics for applications in fluid mechanics and theoretical physics. Deep parallelization of the computing processes contributes to the revival of functional programming at a new technological level. The efficiency of computations is ensured by faithful reproduction of the fundamental laws of physics and continuum mechanics. Tensor formalization of numerical objects and computing operations serves the spatial interpolation of rheological state parameters and the laws of fluid mechanics as mathematical models in the local coordinates of the elementary numerical cells, the large liquid particles. The proposed approach allows the use of an explicit numerical scheme, which is an important condition for increasing the efficiency of the algorithms developed by numerical procedures with natural parallel...
Computational Science and Its Applications – ICCSA 2020, 2020
Strong wind causes a heavy load on a ship in a seaway, bending and pushing it in the direction of the wind. In this paper we investigate how wind can be simulated in the framework of Virtual testbed, a near real-time ship motion simulator. We propose a simple model, based on the law of reflection, that describes air flow around the ship hull with constant initial speed and direction. On the boundary the model reduces to the known model of potential flow around a cylinder; near the boundary they are not equivalent, but close enough to visualise the effect of the hull on the flow. We then apply this model to simulate air flow around a real-world ship hull and conclude that for any real-world situation the ship roll angle and ship speed caused by the wind are too small to cause capsizing, but large enough to be considered in onboard intelligent systems that determine real roll, pitch and yaw angles during ship operation and in similar applications.
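For reference, the classical velocity potential for a uniform flow of speed $U_\infty$ past a circular cylinder of radius $R$, to which the proposed model is said to reduce on the boundary, is (in polar coordinates):

\[
\varphi(r,\theta) = U_\infty\left(r + \frac{R^2}{r}\right)\cos\theta,
\qquad
v_r = U_\infty\left(1 - \frac{R^2}{r^2}\right)\cos\theta,
\qquad
v_\theta = -U_\infty\left(1 + \frac{R^2}{r^2}\right)\sin\theta,
\]

so that the normal velocity $v_r$ vanishes on the cylinder surface $r = R$. The paper's own formulation for an arbitrary hull is not given in the abstract and may differ away from the boundary.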
Processing of large amounts of data often consists of several steps, e.g. pre- and post-processing stages, which are executed sequentially with data written to disk after each step. However, when the pre-processing stage is different for each task, a more efficient way of processing data is to construct a pipeline which streams data from one stage to another. In a more general case some processing stages can be factored into several parallel subordinate stages, thus forming a distributed pipeline where each stage can have multiple inputs and multiple outputs. Such a processing pattern emerges in the problem of classification of wave energy spectra based on analytic approximations, which can extract different wave systems and their parameters (e.g. wave system type, mean wave direction) from a spectrum. The distributed pipeline approach achieves good performance compared to conventional "sequential-stage" processing.
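The streaming idea can be sketched as two stages connected by an in-memory queue instead of intermediate files. This is a deliberately simplified illustration, not the paper's implementation; the types and stage names are assumptions:

```cpp
// A toy two-stage pipeline: stage one produces records, stage two consumes them
// directly from a queue, so nothing is written to disk between stages.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

struct Record { int id; double value; };

class Channel {
    std::queue<Record> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(Record r) { { std::lock_guard<std::mutex> l(m_); q_.push(r); } cv_.notify_one(); }
    void close() { { std::lock_guard<std::mutex> l(m_); closed_ = true; } cv_.notify_all(); }
    std::optional<Record> pop() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [&]{ return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;   // channel closed and drained
        Record r = q_.front(); q_.pop(); return r;
    }
};

int main() {
    Channel ch;
    std::thread producer([&]{              // "pre-processing" stage
        for (int i = 0; i < 5; ++i) ch.push({i, i * 0.5});
        ch.close();
    });
    std::thread consumer([&]{              // "classification" stage
        while (auto r = ch.pop())
            std::cout << "record " << r->id << " -> class " << (r->value > 1.0) << "\n";
    });
    producer.join();
    consumer.join();
}
```

A distributed version of this pattern replaces the in-process queue with network channels and allows each stage to fan out to several parallel subordinate stages.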
Getting a qualified medical examination can be difficult for people in remote areas because the available medical staff can be inaccessible or may lack expert knowledge at the proper level. Telemedicine technologies can help in such situations. On one hand, such technologies allow highly qualified doctors to consult remotely, thereby increasing the quality of diagnosis and treatment planning. On the other hand, computer-aided analysis of research results, anamnesis and information on similar cases assists medical staff in their routine activities and decision-making. Creating a telemedicine system for a particular domain is a laborious process. It is not sufficient to pick proper medical experts and to fill the knowledge base of the analytical module. It is also necessary to organize the entire infrastructure of the system to meet the requirements in terms of reliability, fault tolerance, protection of personal data and so on. Tools with reusable infrastructure elements, which are common to such systems, can decrease the amount of work needed to develop telemedicine systems. An interactive tool for creating distributed telemedicine systems is described in the article. A list of requirements for such systems is presented, and structural solutions for meeting the requirements are suggested. A composition of such elements applicable to distributed systems is described in the article. A cardiac telemedicine system is described as a foundation of the tool.
International Journal of Business Intelligence and Data Mining, 2017
Distributed computing clusters are often built with commodity hardware, which leads to periodic failures of processing nodes due to the relatively low reliability of such hardware. While worker node fault tolerance is straightforward, fault tolerance of the master node poses a bigger challenge. In this paper master node failure handling is based on the concept of master and worker roles that can be dynamically re-assigned to cluster nodes, along with maintaining a backup of the master node state on one of the worker nodes. In this case no special component is needed to monitor the health of the cluster, and master node failures can be resolved except for the case of simultaneous failure of the master and the backup. We present an experimental evaluation of the technique's implementation and show benchmarks demonstrating that a failure of the master does not affect a running job, and a failure of the backup results in re-computation of only the last job step.
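The role model can be sketched schematically as follows. This is a simplified illustration under assumed names, not the paper's actual protocol: failure detection, state transfer and consensus are omitted.

```cpp
// Toy illustration of dynamic master/backup/worker roles. On master failure the
// backup promotes itself and a new backup is chosen from the surviving workers
// (in the described technique the master state is already kept on the backup).
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

enum class Role { Master, Backup, Worker };

struct Node { int id; Role role; bool alive = true; };

static std::string name(Role r) {
    switch (r) {
        case Role::Master: return "master";
        case Role::Backup: return "backup";
        default:           return "worker";
    }
}

void handle_master_failure(std::vector<Node>& nodes) {
    for (auto& n : nodes)
        if (n.role == Role::Backup && n.alive) { n.role = Role::Master; break; }
    auto it = std::find_if(nodes.begin(), nodes.end(),
        [](const Node& n) { return n.role == Role::Worker && n.alive; });
    if (it != nodes.end()) it->role = Role::Backup;   // state would be replicated here
}

int main() {
    std::vector<Node> nodes{{0, Role::Master}, {1, Role::Backup},
                            {2, Role::Worker}, {3, Role::Worker}};
    nodes[0].alive = false;                // the master goes down
    handle_master_failure(nodes);
    for (const auto& n : nodes)
        std::cout << "node " << n.id << ": " << name(n.role)
                  << (n.alive ? "" : " (down)") << "\n";
}
```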
In the problem of simulation of marine object behaviour in a seaway, determination of the pressures exerted on the object is often done on the assumption that ocean wave amplitudes are small compared to the wave length; however, this is not the best approach for real ocean waves. This was done because the underlying wind wave models (such as the Longuet-Higgins model) lack the ability to produce large-amplitude waves. The other option is to use an alternative autoregressive model which is capable of producing realistic ocean waves, but in this approach the pressure calculation scheme should be extended to cover the large-amplitude case. It is possible to obtain analytical solutions for both the two- and three-dimensional problems, and it was found that the corresponding numerical algorithms are simple and have efficient implementations compared to the small-amplitude case, where the calculation is done by transforming partial differential equations into numerical schemes. In the numerical experiment it was proved that obtained for...
Determining the impact of external excitations on a dynamic marine object such as a ship hull in a seaway is the main goal of simulations. At present such simulations are most often based on approximate mathematical models that use results of the theory of small-amplitude waves. The most sophisticated software for simulation of marine object behaviour, LAMP IV (Large Amplitude Motion Program), uses a numerical solution of the traditional hydrodynamic problem without the often-used approximations, but still on the basis of the theory of small-amplitude waves. For efficiency reasons these simulations can be based on an autoregressive model to generate a realistic wave surface. Such a surface possesses all the hydrodynamic characteristics of sea waves, preserves the dispersion relation and also shows superior performance compared to other wind wave models. Naturally, the known surface can be used to compute the velocity field and in turn to determine pressures at any point under the sea surface. The resulting computational algorithm can be used to determine pressures without the use of the theory of small-amplitude waves.
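The autoregressive model mentioned here can be written in its general discrete form as follows (the particular order and coefficients used in the paper are not given in the abstract):

\[
\zeta_t = \sum_{k=1}^{p} \Phi_k\, \zeta_{t-k} + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}\bigl(0, \sigma_\varepsilon^2\bigr),
\]

where $\zeta_t$ is the elevation of the wavy surface at discrete time $t$, $\Phi_k$ are autoregressive coefficients derived from the autocovariance function of the surface, and $\varepsilon_t$ is a white-noise innovation.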
Efficient management of a distributed system is a common problem for university and commercial computer centres, and handling node failures is a major aspect of it. Failures which are rare in a small commodity cluster become common at large scale, and there should be a way to overcome them without restarting all parallel processes of an application. The efficiency of existing methods can be improved by forming a hierarchy of distributed processes. That way only the lower levels of the hierarchy need to be restarted in case of a leaf node failure, and only the root node needs special treatment. The process hierarchy changes in real time and the workload is dynamically rebalanced across online nodes. This approach makes it possible to implement efficient partial restart of a parallel application, and transactional behaviour for computer centre service tasks.
Cloud computing is a model of provisioning configurable computing resources, IT infrastructures and applications which can be easily allocated and deallocated by the consumer without provider interaction. It can be hard to evaluate the performance of a newly developed cloud application or infrastructure. Using testbeds for this limits experiments to the scale of the testbed, and achieving reproducible results can be hard or impossible in that case. It is preferable to use simulation tools. Several cloud modelling and simulation frameworks have been developed; CloudSim is one of the most powerful. Data centers, physical servers, virtual machines and applications can be modelled with CloudSim. Application running costs, SLA violations and power usage can be evaluated based on the simulated models. In this paper we demonstrate the feasibility of modelling a real infrastructure with the CloudSim framework.
Computational Science and Its Applications – ICCSA 2014, 2014
One of the efficient ways to conduct experiments on HPC platforms is to create custom virtual computing environments tailored to the requirements of users and their applications. In this paper we investigate the virtual private supercomputer, an approach based on virtualization, data consolidation and cloud technologies. Virtualization is used to abstract applications from the underlying hardware and operating system, while data consolidation is applied to store data in a distributed storage system. Both the virtualization and data consolidation layers offer APIs for distributed computations and data processing. Combined, these APIs shift the focus from supercomputing technologies to the problems being solved. Based on these concepts, we propose an approach to construct virtual clusters with the help of cloud computing technologies to be used as on-demand private supercomputers, and we evaluate the performance of this solution.
The problem of synthesis of onboard integrated intellectual complexes (IC) for decision-making support in a fuzzy environment is discussed. An approach that allows formalizing a complex knowledge system within the framework of a fuzzy logic basis is formulated. Interpretation of fuzzy models is carried out with the use of high-performance computing when processing information streams in problems of analysis and forecasting of worst-case situations (disputable and extreme).
Master node fault tolerance is a topic that is often glossed over in discussions of big data processing technologies. Although a failure of the master node can take down the whole data processing pipeline, this is considered either improbable or too difficult to handle. The aim of the studies reported here is to propose a rather simple technique to deal with master node failures. The technique is based on temporary delegation of the master role to one of the slave nodes and transferring the updated state back to the master when one step of the computation is complete. That way the state is duplicated, and the computation can proceed to the next step regardless of a failure of the delegate or the master (but not both). We ran benchmarks to show that a failure of the master is almost "invisible" to other nodes, and a failure of the delegate results in recomputation of only one step of the data processing pipeline. We believe that the technique can be used not only in big data processing but also in other types of applications.
Nowadays supercomputer centers strive to provide their computational resources as services; however, the present infrastructure is not particularly suited for such use. First of all, there are standard application programming interfaces to launch computational jobs via the command line or a web service, which work well for a program but turn out to be too complex for scientists: they want applications to be delivered to them from a remote server and prefer to interact with them via a graphical interface. Second, there are certain applications which depend on older versions of operating systems and libraries, and it is either impractical to install those old systems on a cluster or there is some conflict between these dependencies. Virtualization technologies can solve this problem, but they are not very popular in scientific computing due to the overheads they introduce. Finally, it is difficult to automatically estimate the optimal resource pool size for a particular task, thus it o...