Smart automation is becoming increasingly important in current distribution systems. The high number of buses, the radial topology, and the small number of sensors and automated devices require new approaches to managing fault conditions. These approaches must be able to deal with a high level of uncertainty in the system state and the measurement data. In this paper a novel method for fault location and isolation is proposed, based on the principle of entropy minimization. The algorithm builds a switch operation strategy that locates the fault in a minimum number of manoeuvres, and therefore reduces the impact of blackouts in terms of power unavailability. The application of the method to different distribution network topologies, with different levels of automation in terms of fault indicators and remotely controlled switches, demonstrates its potential for distribution system analysis and for supporting system automation planning.
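The abstract gives no implementation details, but the entropy-minimization principle it describes can be sketched as a greedy information-gain selection. In the toy Python sketch below, the candidate fault sections, their prior probabilities, and the way each switch manoeuvre partitions them are hypothetical placeholders, not values from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy of a fault-location probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_entropy(prior, partition):
    """Expected posterior entropy if a switch manoeuvre splits the candidate
    sections into the groups listed in `partition` (lists of section indices)."""
    total = 0.0
    for group in partition:
        p_group = sum(prior[i] for i in group)
        if p_group > 0:
            total += p_group * entropy([prior[i] / p_group for i in group])
    return total

def best_switch(prior, switches):
    """Pick the switch whose operation minimizes the expected entropy."""
    return min(switches, key=lambda s: expected_entropy(prior, switches[s]))

# Hypothetical example: four candidate faulted sections, two available switches.
prior = [0.4, 0.3, 0.2, 0.1]
switches = {"S1": [[0, 1], [2, 3]], "S2": [[0], [1, 2, 3]]}
print(best_switch(prior, switches))  # the manoeuvre with the most informative outcome
```

Repeating this selection on the posterior after each observed outcome yields a sequential strategy, which is the flavour of algorithm the abstract suggests.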
The design of cloud computing technologies needs to guarantee high levels of availability, and for this reason there is large interest in new fault-tolerant techniques that are able to keep the resilience of systems at the desired level. The modeling of these techniques requires input information about the operational state of the systems, which has a stochastic nature. The aim of this paper is to provide insights into the stochastic behavior of cloud services. By exploiting the willingness of service providers to publicly expose failure incident information on the web, we collected and analyzed dependability features of a large set of incident reports comprising more than 10,600 incidents related to 106 services. Through the analysis of this failure data we provide useful insights into the Poisson nature of cloud services' failure processes by fitting well-known models and assessing their suitability.
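A minimal sketch of the kind of model fitting the abstract mentions: under a homogeneous Poisson failure process, inter-failure times are exponentially distributed, which can be checked with a Kolmogorov-Smirnov test. The data values below are invented placeholders; the paper's actual models and datasets may differ:

```python
import numpy as np
from scipy import stats

# Hypothetical inter-failure times (hours) extracted from incident reports.
inter_failure = np.array([12.0, 48.5, 3.2, 27.1, 95.0, 7.8, 33.4, 60.2])

# Under a homogeneous Poisson failure process, inter-failure times are
# exponential; fit the rate by maximum likelihood and test the fit.
rate = 1.0 / inter_failure.mean()
d_stat, p_value = stats.kstest(inter_failure, "expon", args=(0, 1.0 / rate))
print(f"MLE failure rate: {rate:.4f}/h, KS stat: {d_stat:.3f}, p-value: {p_value:.3f}")
```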
With Network Function Virtualization (NFV), the management and orchestration of network services require a new set of functionalities to be added on top of legacy models of operation. Due to the introduction of the virtualization layer and the decoupling of the network functions from their running infrastructure, the operation models need to include new elements such as virtual network functions (VNFs) and a new set of relationships between them and the NFV Infrastructure (NFVI). The NFV Management and Orchestration (MANO) framework plays the key role in managing and orchestrating the NFV infrastructure, network services, and the associated VNFs. Failures of the MANO hinder the network's ability to react to new service requests or to events related to the normal lifecycle operation of network services. Thus, it becomes extremely important to ensure a high level of availability for the MANO architecture. The goal of this work is to model, analyze, and evaluate the impact that different failure modes have on MANO availability. A model based on Stochastic Activity Networks (SANs), derived from current standard-compliant microservice-based implementations, is proposed as a case study. The case study is used to quantitatively evaluate the steady-state availability and identify the most important parameters influencing system availability for different deployment configurations.
In a computational grid, the task at any time t is to allocate user-defined jobs efficiently, meeting their deadlines while making use of all the available resources. In the past, the objectives were combined, and the problem was very often simplified to a single-objective problem. In this paper, we formulate a novel Evolutionary Multi-Objective (EMO) approach using Pareto dominance, in which the objectives are formulated independently. We report some preliminary experiments in which the performance of the EMO approach is compared with simulated annealing and particle swarm optimization techniques. Empirical results indicate that the proposed EMO approach is very efficient.
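As an illustration of the Pareto dominance concept underlying the EMO formulation, the sketch below checks non-dominance between candidate schedules. The objective pairs (makespan, flowtime) are hypothetical; the paper's actual objectives and evolutionary operators are not given in the abstract:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (makespan, flowtime) pairs for candidate grid schedules.
candidates = [(10.0, 90.0), (12.0, 70.0), (11.0, 95.0), (15.0, 60.0)]
print(pareto_front(candidates))  # (11.0, 95.0) is dominated and dropped
```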
In the Asynchronous Transfer Mode (ATM) based Broadband Integrated Services Digital Network (B-ISDN) it is necessary to have a good insight into how the cell streams from various types of sources interact and modify each other, and thereby introduce a cross influence on network performance (NP) and quality of service (QoS). Insight into this interaction is necessary for a proper dimensioning of networks and network elements. Since the cell streams have very complex stochastic properties, e.g. a large and irregular autocorrelation, an exact analysis is in the general case out of the question. The report presents an approximation where the number of cells in a buffer is expressed as a linear combination of the numbers of arrivals to the buffer up to that slot. Various approximating models of this type are discussed, and models are presented using the 'shift operator'. The linear expressions are fitted so that certain relations of the exact model are satisfied. On this basis, an approach for deriving the correlation between the waiting times in two subsequent buffers is also outlined.
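A rough numerical sketch of the modelling idea, assuming one cell served per slot: the exact buffer occupancy follows a Lindley-type recursion, while the approximation expresses it as a linear combination of past arrivals. The geometrically decaying coefficients below are purely illustrative; the report fits its coefficients so that relations of the exact model are satisfied, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
arrivals = rng.poisson(0.8, size=T)  # hypothetical cell arrivals per slot

# Exact recursion: one cell served per slot, queue cannot go negative.
q_exact = np.zeros(T)
for t in range(1, T):
    q_exact[t] = max(q_exact[t - 1] + arrivals[t] - 1, 0)

# Linear approximation: buffer content as a weighted sum of recent arrivals.
K, beta = 50, 0.9                      # illustrative truncation and decay
coeffs = beta ** np.arange(K)
q_lin = np.convolve(arrivals, coeffs)[:T]

print(np.corrcoef(q_exact[K:], q_lin[K:])[0, 1])  # how well the two track each other
```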
Wireless Communications and Mobile Computing, Aug 13, 2018
The fifth generation (5G) of cellular networks promises to be a major step in the evolution of wireless technology. 5G is planned to be used in a very broad set of application scenarios. These scenarios have strict, heterogeneous requirements that will be met by enhancements to the radio access network and a collection of innovative wireless technologies. Softwarization technologies, such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV), will play a key role in integrating these different technologies. Network slicing emerges as a cost-efficient solution for implementing the diverse 5G requirements and verticals. The 5G radio access and core networks will be based on an SDN/NFV infrastructure, which will be able to orchestrate resources and control the network in order to provide network services efficiently, flexibly, and scalably. In this paper, we present the up-to-date status of the software-defined 5G radio access and core networks and a broad range of future research challenges on the orchestration and control aspects.
Bitcoin has become the leading cryptocurrency system, but the limit on its transaction processing capacity has resulted in increased transaction fees and delayed transaction confirmations. As such, it is pertinent to understand, and possibly predict, how transactions are handled by Bitcoin, so that a user may adapt their transaction requests and a miner may adjust the block generation strategy and/or the mining pool to join. To this aim, the present paper introduces results from an analysis of transaction handling in Bitcoin. Specifically, the analysis consists of two parts. The first part is an exploratory data analysis revealing key characteristics of Bitcoin transaction handling. The second part is a predictability analysis intended to provide insights into transaction handling, namely (i) transaction confirmation time, (ii) block attributes, and (iii) who has created the block. The results show that some models do reasonably well for (ii), but surprisingly not for (i) or (iii).
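The abstract does not name the prediction models used; the sketch below only illustrates how such a predictability analysis can be set up, using a generic regressor on synthetic stand-in features (fee per byte, transaction size, mempool depth) and synthetic confirmation delays:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Synthetic stand-in features: fee per byte, transaction size, mempool depth.
X = rng.random((1000, 3))
# Synthetic confirmation delays (in blocks), loosely driven by fee per byte.
y = 1 + 5 * (1 - X[:, 0]) + rng.normal(0, 1, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print(f"Held-out R^2: {r2_score(y_te, model.predict(X_te)):.2f}")
```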
Due to the large-scale introduction of Distributed Energy Resources (DERs) in the next generation distribution grid, real-time monitoring and control is increasingly needed to maintain stable operation. In such a scenario, monitoring systems and state estimation are key tools for obtaining reliable and accurate knowledge of the grid. These real-time applications have strong requirements on communication latency, reliability, and security. This paper presents a method, based on Stochastic Activity Network modeling, for analyzing the performance and dependability of advanced communication technologies, such as LTE and 5G, in supporting monitoring system applications. A novel software tool, based on Möbius linked to external libraries, is developed and employed to analyze the impact of communication failures on the state estimation of a distribution grid. The application of the tool and its capabilities are demonstrated through a case study. The approach is promising both with respect to its strength as a modelling tool and the kind of results obtained.
Communications in Computer and Information Science, 2012
Live P2P streaming has been widely used in recent years because of its scalability and low cost. At the same time, it meets new challenges that centralized systems never faced, for example churn and content pollution. This is the first paper to use stochastic activity networks (SANs) to model and analyze push-based live P2P streaming systems from the users' point of view. The model provides a new line of thinking for modeling push-based live P2P streaming, enabling the effects of churn, pollution, and buffer size to be studied simultaneously. It explains why there is a gap between the theoretical results in previous studies of content pollution in P2P streaming and the observed effect. It also identifies the buffer size threshold beyond which QoS improvement becomes insignificant.
Blockchain has been considered an important technique to enable secure management of virtual network functions and network slices. Understanding such capabilities of a blockchain, e.g. transaction confirmation time, demands a thorough study of the transaction characteristics of the blockchain. This paper presents a comprehensive study of the transaction characteristics of Bitcoin, the first blockchain application, focusing on the underlying fundamental processes. A set of results and findings is obtained, which provides new insight into the transaction and traffic characteristics of Bitcoin. As a highlight, the validity of several hypotheses/assumptions used in the literature is examined with measurements for the first time.
We present the design and implementation of the Jgroup distributed object platform and its replication management framework ARM. Jgroup extends Java RMI through the group communication paradigm and has been designed specifically for application support in partitionable distributed systems. ARM is layered on top of Jgroup and provides extensible replica distribution schemes and application-specific recovery strategies. The combination Jgroup/ARM can significantly reduce the effort necessary for developing, deploying, and managing dependable, partition-aware applications.
Blockchain is a technology that provides a distributed ledger that stores previous records while maintaining consistency and security. Bitcoin is the first and largest decentralized electronic cryptographic system that uses blockchain technology. It faces a challenge in making all nodes synchronize and share the same overall view, at the cost of scalability and performance. In addition, with miners' financial interest playing a significant role in choosing transactions from the backlog, transactions with a small fee or a small fee-per-byte value will exhibit more delay. To study the issues related to the system's performance, we developed an M(t)/M^N/1 model. Arrivals to the backlog follow an inhomogeneous Poisson process, the system has infinite buffer capacity, and the service time is exponentially distributed, removing N transactions at a time. Besides validating the model with measurement data, we have used the model to study the reward distribution when miners adopt transaction selection strategies such as fee-per-byte, fee-based, and FIFO. The analysis shows that smaller-fee transactions exhibit higher waiting times, even when the block size is increased. Moreover, the miner's transaction selection strategy impacts the final gain.
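A toy discrete-event sketch of the batch-service idea behind the M(t)/M^N/1 model, simplified to deterministic block intervals and a homogeneous arrival rate (the paper uses an inhomogeneous Poisson process and exponential service). All rates, fees, and sizes below are hypothetical:

```python
import random

def simulate(strategy, block_interval=600, block_size=2000,
             horizon=100_000, rate=3.5):
    """Toy batch-service queue: transactions arrive as a Poisson process and,
    every block_interval seconds, up to block_size of them leave the backlog
    according to the miner's selection strategy. All parameters are hypothetical."""
    random.seed(0)  # same arrival stream for every strategy, for a fair comparison
    arrivals, t = [], 0.0
    while t < horizon:
        t += random.expovariate(rate)
        arrivals.append((t, random.expovariate(1 / 50.0),   # fee
                         random.uniform(200, 1000)))        # size in bytes
    key = {"fee_per_byte": lambda x: -x[1] / x[2],
           "fee": lambda x: -x[1],
           "fifo": lambda x: x[0]}[strategy]
    backlog, waits, i = [], [], 0
    for block_time in range(block_interval, horizon + 1, block_interval):
        while i < len(arrivals) and arrivals[i][0] <= block_time:
            backlog.append(arrivals[i])
            i += 1
        backlog.sort(key=key)                      # miner's preference order
        served, backlog = backlog[:block_size], backlog[block_size:]
        waits.extend(block_time - a for a, _, _ in served)
    return sum(waits) / len(waits)

print(simulate("fee_per_byte"), simulate("fifo"))  # mean waiting time per strategy
```

With the arrival rate set slightly above the service capacity, the backlog grows and the gap between fee-driven and FIFO selection becomes visible, mirroring the abstract's observation that small-fee transactions wait longer.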
The underlying network infrastructure faces challenges in maintenance, security, performance, and scalability in making the network more reliable and stable. Software-defined networking, blockchain, and network function virtualization have been proposed and realized to address such issues in both academia and industry. This paper analyzes and summarizes works implementing different categories of blockchains as an element or enabler of network functions to resolve these limitations. Blockchain as a network function has been proposed to support the underlying network infrastructure in providing services that have lower latency, are more cost-effective, perform better, guarantee security between participating parties, and protect the privacy of users. This paper provides a review of recent work that makes use of blockchain to address such networking-related challenges, along with the possible setbacks of these proposals.
2019 3rd International Conference on Smart Grid and Smart Cities (ICSGSC), 2019
Future smart distribution grids will be increasingly characterized by greater use of advanced communication technologies, such as function virtualization. The smart grid can benefit from virtualization and emerging standard ICT technologies such as 5G to become more cost effective. However, stringent performance requirements must be met, and higher reliability, robustness, flexibility, and scalability must be provided. This paper discusses the use of virtualization concepts in 5G technology for protection functions/applications in the distribution grid. A 5G-based architecture using an edge computing infrastructure to host the control and protection applications is proposed. A stochastic activity network based model is used to analyze and compare the reliability of the proposed 5G-based architecture with a functionally identical Ethernet-based IEC 61850 architecture. The 5G architecture seems to yield a significant gain in availability, provided that redundancy on the radio links is used to compensate for losses due to random fading processes.
Network communications and the Internet pervade our daily activities so deeply that we strongly depend on the availability and quality of the services they provide. For this reason, natural and technological disasters, by affecting network and service availability, have a potentially huge impact on our daily lives. Ensuring adequate levels of resiliency is hence a key issue that future network paradigms, such as 5G, need to address. This paper provides an overview of the main avenues of research on this topic within the context of the RECODIS COST Action.
The dependability of ICT systems is vital for today's society. However, operational systems are not fault free. Providers and customers have to define clear availability requirements and penalties on the delivered services by using SLAs. Fulfilling the stipulated availability may be expensive, and the lack of mechanisms that allow fine control of the SLA risk may lead to overdimensioning the provided resources. Therefore, a relevant question for ICT service providers is: how can SLA availability be guaranteed in a cost-efficient way? This paper studies how to combine fault-tolerant techniques with different costs and properties in order to economically fulfill a given SLA requirement. GEARSHIFT is a mechanism that enables ICT providers to set the fault tolerance technique (gear ratio) needed, depending on the current service conditions and requirements. We illustrate how to use the proposed model in a backbone network scenario, using measurements from a production national network. Finally, we show that the total cost of delivering an ICT service follows a simple convex function, which allows easy selection of the optimal risk by properly tuning the combination of fault-tolerant techniques.
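To illustrate the final claim, the sketch below minimizes a hypothetical convex total-cost function: the sum of a provisioning cost that grows with the fault-tolerance level and an SLA penalty cost that decays with it. The functional forms and constants are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical cost components as a function of the fault-tolerance level x.
provisioning = lambda x: 2.0 * x                 # grows with redundancy
penalty = lambda x: 8.0 * np.exp(-1.5 * x)       # expected SLA penalty shrinks
total_cost = lambda x: provisioning(x) + penalty(x)

res = minimize_scalar(total_cost, bounds=(0.0, 5.0), method="bounded")
print(f"Optimal level: {res.x:.2f}, minimal total cost: {res.fun:.2f}")
```

Convexity is what makes this one-dimensional search well behaved: any local minimum found by the bounded solver is the global optimum.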
Modern power systems increasingly rely on Information and Communication Technologies (ICT) to support their operation. This digitalization process introduces new complexity, which requires novel methodologies to assess the reliability of power systems. Currently, co-simulation and Discrete Event Simulation (DES) are the most popular approaches for analysing the complexity of power grids seen as cyber-physical systems, and for helping decision makers identify potential sources of failures and implement mitigation actions. This paper compares these two methods. Co-simulation and DES approaches are applied to a power system voltage regulation case study, and the capability of the methods to assess unresolved overvoltages due to simultaneous failures of the power system and the ICT system is comparatively discussed. Simulation time and the assessment of voltage regulation operational costs for both methods are also compared. The paper's main goal is to provide guidance to researchers in evaluating and developing the most suitable simulation approaches for reliability studies in cyber-physical power systems.