Distributed Renewable Energy Sources (DRESs) such as wind and solar are becoming a promising alternative for the energy supply in modern (smart) electricity grids as part of future sustainable smart cities. Successful integration of DRESs requires efficient, resilient, and secure communication in order to satisfy the highly challenging and real-time constraints of smart city applications. Despite the various research solutions proposed in this context within the last decade, the relevant standardization is a non-trivial issue and is still in its infancy. In this position paper, we briefly review the currently employed DRES communications standards and identify the gaps in their present status. Finally, we discuss and suggest potential pathways for further improvement.
The Industrial Internet of Things (IIoT) is an ecosystem that consists of -- among others -- various networked sensors and actuators, mainly delivering advancements related to lowering production costs and providing workflow flexibility. Introducing access control in such environments is considered challenging, mainly due to the variety of technologies and protocols in IIoT devices and networks. Thus, various access control models and mechanisms should be examined, as well as the additional access control requirements posed by these industrial environments. To achieve these aims, we elaborate on existing state-of-the-art access control models and architectures and investigate access control requirements in IIoT, respectively. These steps provide valuable indications on what type of access control model and architecture may be beneficial for application in the IIoT. We describe an access control architecture capable of achieving access control in IIoT using a layered approach and based on existing virtualization concepts (e.g., the cloud). Furthermore, we provide information on the functionality of the individual access control related components, as well as where these should be placed in the overall architecture. Since this research area remains challenging, we finally discuss open issues and anticipate that these directions will provide interesting multi-disciplinary insights for both industry and academia.
Critical infrastructures – such as electricity networks, power stations and Smart Grids – are increasingly monitored and controlled by computing and communication technologies. The need to address security and protection of electricity infrastructures with a high priority has broadly been recognized. This is driven by many factors, including the rapid evolution of threats and consistent technological advancements of malicious actors, as well as the potentially catastrophic consequences of disruptions to such systems. Surveillance and security technologies are traditionally used in these contexts as a protection mechanism that maintains situational awareness and provides appropriate alerts. Surveillance is a cumbersome process because of the need to monitor a diverse set of objects, but it is absolutely essential for promptly detecting the occurrence of adverse events or conditions. The aims of this paper are twofold: First, we describe two surveillance architectures in which different technologies can be used jointly for boosting the safety and security of electricity utilities and other key resources and critical infrastructures. Second, we review the typical surveillance and security technologies and evaluate them in the context of critical infrastructures, which may help in making recommendations and improvements for the future. To accomplish these aims, we extracted and consolidated information from major survey papers. This led to identifying the surveillance and security technologies, their application areas, and the challenges that they face. We also investigate the perceived performance of the identified technologies in critical infrastructures. The latter comes from interviewing experts who operate in critical infrastructures, and thus provides indications for protecting critical infrastructures, not least because of their increasing use of cyber-physical elements.
Network communications and the Internet pervade our daily activities so deeply that we strongly depend on the availability and quality of the services they provide. For this reason, natural and technological disasters, by affecting network and service availability, have a potentially huge impact on our daily lives. Ensuring adequate levels of resiliency is hence a key issue that future network paradigms, such as 5G, need to address. This paper provides an overview of the main avenues of research on this topic within the context of the RECODIS COST Action.
Usage control models provide an integration of access control, digital rights, and trust management. To achieve this integration, usage control models support additional concepts such as attribute mutability and continuity of decision. However, these concepts may introduce an additional level of complexity to the underlying model, rendering its definition a cumbersome and error-prone process. Applying a formal verification technique allows for a rigorous analysis of the interactions amongst the components, and thus for formal guarantees with respect to the correctness of a model. In this paper, we elaborate on a case study, where we express the high-level functional model of the UseCON usage control model in the TLA+ formal specification language, and verify its correctness.
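Since neither UseCON nor its TLA+ specification is reproduced here, the following is a minimal Python sketch (all names hypothetical) of the two concepts the abstract highlights, attribute mutability and continuity of decision: the policy is re-evaluated while a usage is in progress, and access is revoked once a mutated attribute no longer satisfies it.

```python
# Illustrative sketch only (not the UseCON specification): a usage-control
# session that re-evaluates its policy during ongoing usage (continuity of
# decision) and reacts to attribute mutability. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Subject:
    attributes: dict = field(default_factory=dict)  # mutable subject attributes

def policy(subject: Subject) -> bool:
    # Example ongoing condition: the subject must still hold enough credit.
    return subject.attributes.get("credit", 0) > 0

@dataclass
class UsageSession:
    subject: Subject
    active: bool = False

    def try_start(self) -> bool:               # pre-decision
        self.active = policy(self.subject)
        return self.active

    def tick(self) -> None:                    # ongoing decision + attribute update
        if not self.active:
            return
        self.subject.attributes["credit"] -= 1  # attribute mutability
        if not policy(self.subject):            # continuity of decision
            self.active = False                 # revoke access mid-usage

# Usage: access is granted, consumed over time, and revoked once the mutable
# attribute no longer satisfies the policy.
s = Subject(attributes={"credit": 2})
session = UsageSession(s)
assert session.try_start()
session.tick(); session.tick(); session.tick()
assert not session.active
```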
Cyber-physical systems (CPS) are characterised by interactions of physical and computational components. A CPS also interacts with its operational environment, and thus with other entities including humans. Humans are an important aspect of human CPS (HCPS) since they are responsible for using (e.g., administering) these types of system. Such interactions are usually expressed through access control policies, which in many cases (e.g., when performing critical operations) are required to support the property of resilience to cope with challenges to the normal operation of the HCPS. In this paper, we pinpoint the importance of resilience as a property in access control policies and we describe a mechanism to conduct its formal verification. Finally, we identify potential future directions in the verification of access control properties, complementary to resilience.
Mobile Edge Computing (MEC) and Fog are emerging computing models that extend the cloud and its services to the edge of the network. The emergence of both MEC and fog introduces new requirements, which means their supported deployment models must be investigated. In this paper, we point out the influence and strong impact of the extended cloud (i.e., the MEC and fog) on existing communication and networking service models of the cloud. Although the relation between them is fairly evident, there are important properties, notably those of security and resilience, that we study in relation to the newly posed requirements from the MEC and fog. Although security and resilience have already been investigated in the context of the cloud - to a certain extent - existing solutions may not be applicable in the context of the extended cloud. Our approach includes the examination of models and architectures that underpin the extended cloud, and we provide a contemporary discussion on the most evident characteristics associated with them. We examine the technologies that implement these models and architectures, and analyse them with respect to security and resilience requirements. Furthermore, approaches to security and resilience-related mechanisms are examined in the cloud (specifically, anomaly detection and policy-based resilience management), and we argue that these can also be applied in order to improve security and achieve resilience in the extended cloud environment.
There has been a proliferation of industry-focused cyber security qualifications, which use different techniques to assess the competencies of cyber security professionals and certify them to employers. There is, however, a lingering question about these qualifications: do they effectively assess the competencies of cyber security professionals? 74 cyber security qualifications were analysed to determine how competency assessment is performed in practice, and five distinct techniques were identified together with the frequency of their use within qualifications. These techniques formed the basis of a large-scale survey of the perceptions of 153 industry stakeholders on the effectiveness of individual techniques and their cost-effectiveness as combinations. Despite the perceived low effectiveness of Multiple Choice Examinations, industry qualifications were found to rely heavily on this technique, often as the sole one, and few qualifications utilised the cost-effective combinations identified by stakeholders.
Access control offers mechanisms to control and limit the actions or operations that are performed by a user on a set of resources in a system. Many access control models exist that are able to support this basic requirement. One of the properties examined in the context of these models is their ability to successfully restrict access to resources. Nevertheless, considering only restriction of access may not be enough in some environments, such as critical infrastructures. The protection of systems in this type of environment requires a new line of enquiry. It is essential to ensure that appropriate access is always possible, even when users and resources are subjected to challenges of various sorts. Resilience in access control is conceived as the ability of a system not to restrict but rather to ensure access to resources. In order to demonstrate the application of resilience in access control, we formally define an attribute-based access control (ABAC) model based on guidelines provided by the National Institute of Standards and Technology (NIST). We examine how ABAC-based resilience policies can be specified in temporal logic and how these can be formally verified. The verification of resilience is done using an automated model checking technique, which may eventually reduce the overall complexity required for the verification of resilience policies and serve as a valuable tool for administrators.
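The abstract does not give the concrete formulas, so the following is only an illustrative example of the kind of property involved: a resilience requirement phrased in a branching-time temporal logic such as CTL, stating that whenever a challenge occurs, some continuation exists in which access to the critical resource is regained. The atomic propositions are hypothetical.

```latex
% Illustrative sketch only; "challenge" and "access_r" are hypothetical atoms.
\[
  \mathbf{AG}\bigl(\mathit{challenge} \;\rightarrow\; \mathbf{EF}\,\mathit{access\_r}\bigr)
\]
```

A model checker that supports CTL (e.g., NuSMV) can then exhaustively explore the policy's state space and either confirm such a property or return a counterexample trace.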
Many security infrastructures incorporate some sort of surveillance technologies to operate as an early incident warning or even prevention system. The special case of surveillance by cameras and human security staff has a natural reflection in game theory as the well-known “Cops-and-Robbers” game (a.k.a. graph searching). Traditionally, such models assume a deterministic outcome of the gameplay, e.g., the robber is caught when it shares its location with a cop. In real life, however, the detection rate is far from perfect (contrary to what such models assume), and thus the game has to be played with uncertain outcomes. This work applies a simple game-theoretic model for the identification of optimal surveillance tours in light of imperfect detection rates of incidents. This particularly aids standardized risk management processes, where decision-making is based on qualitative assessments (e.g., from “low damage” to “critical danger”) and only nominally quantified likelihoods (e.g., “low”, “medium” and “high”). The unique feature of our approach is threefold: 1) it is conceptually simple and easy to apply (as we play a finite game); 2) it can treat the uncertainty in the outcome as a full-fledged categorical distribution (rather than requiring numerical data to optimize some characteristic measure like averages); and 3) it optimizes the whole distribution of randomly suffered damages, thus avoiding information loss due to data aggregation (which is required in many standard game-theoretic models using numbers for their specification). The result is an optimal layout of surveillance tours, paired with an optimal surveillance schedule to minimize risk.
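The exact game model is not reproduced in the abstract; the sketch below only illustrates, under assumed tour names and probabilities, the core idea of ranking surveillance tours by their whole categorical damage distribution rather than by a single aggregated number.

```python
# Hedged sketch (not the paper's exact game model): rank candidate surveillance
# tours by the full categorical distribution of damage they induce, preferring
# the tour with the least probability mass on the worst outcomes first.
# Tours, categories, and probabilities are hypothetical.

DAMAGE_CATEGORIES = ["low", "medium", "high", "critical"]  # ordered, worst last

# Probability of each damage category per tour, given imperfect detection.
tours = {
    "tour_A": {"low": 0.60, "medium": 0.25, "high": 0.10, "critical": 0.05},
    "tour_B": {"low": 0.70, "medium": 0.10, "high": 0.12, "critical": 0.08},
    "tour_C": {"low": 0.55, "medium": 0.35, "high": 0.08, "critical": 0.02},
}

def tail_key(dist):
    # Compare distributions from the worst category downwards ("tail first"),
    # so rare but severe outcomes are never aggregated away into an average.
    return tuple(dist[c] for c in reversed(DAMAGE_CATEGORIES))

best_tour = min(tours, key=lambda t: tail_key(tours[t]))
print(best_tour)  # tour_C: smallest mass on "critical", then on "high", ...
```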
Resilience against disaster scenarios is essential to network operators, not only because of the potential economic impact of a disaster but also because communication networks form the basis of crisis management. COST RECODIS aims at studying measures, rules, techniques and prediction mechanisms for different disaster scenarios. This paper gives an overview of different solutions in the context of technology-related disasters. After a general overview, the paper focuses on resilient Software Defined Networks.
Attacks on critical infrastructures' Supervisory Control and Data Acquisition (SCADA) systems are beginning to increase. They are often initiated by highly skilled attackers, who are capable of deploying sophisticated attacks to exfiltrate data or even to cause physical damage. In this paper, we rehearse the rationale for protecting against cyber attacks and evaluate a set of Anomaly Detection (AD) techniques in detecting attacks by analysing traffic captured in a SCADA network. For this purpose, we have implemented a tool chain with a reference implementation of various state-of-the-art AD techniques to detect attacks, which manifest themselves as anomalies. Specifically, in order to evaluate the AD techniques, we apply our tool chain on a dataset created from a gas pipeline SCADA system in Mississippi State University's lab, which includes artefacts of both normal operations and cyber attack scenarios. Our evaluation elaborates on several performance metrics of the examined AD techniques, such as precision, recall, accuracy, F-score and G-score. The results indicate that detection rates may change significantly when considering various attack types and different detection modes (i.e., supervised and unsupervised), and also provide indications that there is a need for a robust, and preferably real-time, AD technique to introduce resilience in critical infrastructures.
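For reference, a minimal sketch of how the listed metrics can be computed from a confusion matrix is given below; the G-score is assumed here to be the geometric mean of precision and recall, which may differ from the paper's definition.

```python
# Minimal sketch of the evaluation metrics named above, computed from a
# confusion matrix (TP/FP/TN/FN counts). The G-score variant is an assumption.

from math import sqrt

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    g_score = sqrt(precision * recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "F-score": f_score, "G-score": g_score}

# Example: 90 attacks detected, 10 missed, 15 false alarms, 885 normal records.
print(detection_metrics(tp=90, fp=15, tn=885, fn=10))
```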
Utility networks are part of every nation’s critical infrastructure, and their protection is now seen as a high-priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may pose security risks in utility networks. This was performed on the basis of a viewpoint approach that is capable of identifying technical and non-technical issues (e.g., the behaviour of humans). The result of our analysis indicated that utility networks are strongly affected by technological trends, but that humans comprise an important threat to them. This provided evidence and confirmed that the protection of utility networks is a multi-variable problem, and thus requires the examination of information stemming from various viewpoints of a network. In order to accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we partially demonstrate, via a case study, the application of the proposed threat awareness architecture, where we examine the potential impact of attacks in the context of social engineering across a European utility company.
Next-generation systems, such as the big data cloud, have to cope with several challenges, e.g., moving excessive amounts of data at a dictated speed, and thus require the investigation of concepts additional to security in order to ensure their orderly function. Resilience is such a concept: when it is ensured, systems or networks are able to provide and maintain an acceptable level of service in the face of various faults and challenges. In this paper, we investigate the multi-commodity flows problem, as a task within our $D^2R^2+DR$ resilience strategy, and in the context of big data cloud systems. Specifically, proximal gradient optimization is proposed for determining optimal computation flows, since such algorithms are highly attractive for solving big data problems. Many such problems can be formulated as global consensus optimization problems, and can be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. Numerical evaluation of the proposed model is carried out in the context of specific deployments of a situation-aware information infrastructure.
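The paper's formulation is not reproduced here; the following is a hedged sketch of global consensus ADMM on a toy distributed lasso problem, illustrating the local proximal updates and the shared consensus variable mentioned above. Problem sizes and parameters are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of global consensus ADMM (not the paper's exact model):
# N workers minimise sum_i 0.5*||A_i x - b_i||^2 + lam*||z||_1  s.t.  x_i = z.

import numpy as np

rng = np.random.default_rng(0)
N, m, d = 4, 30, 10                       # workers, rows per worker, dimension
A = [rng.standard_normal((m, d)) for _ in range(N)]
x_true = np.zeros(d); x_true[:3] = [1.0, -2.0, 0.5]
b = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]

rho, lam = 1.0, 0.1
x = [np.zeros(d) for _ in range(N)]       # local primal variables
u = [np.zeros(d) for _ in range(N)]       # scaled dual variables
z = np.zeros(d)                           # global consensus variable

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(100):
    # x-update: each worker solves its local regularised least-squares problem.
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # z-update: average of local estimates, shrunk by the shared l1 penalty.
    z = soft_threshold(np.mean([x[i] + u[i] for i in range(N)], axis=0),
                       lam / (N * rho))
    # dual update: accumulate the consensus violation.
    for i in range(N):
        u[i] += x[i] - z

print(np.round(z, 2))   # should be close to x_true for this toy instance
```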
Cloud computing is now extremely popular because of its use of elastic resources to provide optimized, cost-effective and on-demand services. However, clouds may be subject to challenges arising from cyber attacks, including DoS and malware, as well as from sheer complexity problems that manifest themselves as anomalies. Anomaly detection techniques are used increasingly to improve the resilience of cloud environments and indirectly reduce the cost of recovery from outages. Most anomaly detection techniques are computationally expensive in a cloud context, and often require problem-specific parameters to be predefined in advance, impairing their use in real-time detection. Aiming to overcome these problems, we propose a technique for anomaly detection based on data density. The density is computed recursively, so the technique is memory-less and unsupervised, and therefore suitable for real-time cloud environments. We demonstrate the efficacy of the proposed technique using an emulated dataset from a testbed, under various attack types and intensities, and in the face of VM migration. The obtained results, which include precision, recall, accuracy, F-score and G-score, show that network-level attacks are detectable with high accuracy.
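As a rough illustration of density-based detection in the spirit described above (not the paper's exact algorithm), the sketch below keeps only a running mean and a running mean of squared norms, computes a density for each new sample, and applies an assumed three-sigma style threshold on that density.

```python
# Simplified sketch in the spirit of recursive density estimation: no sample
# history is stored. The 3-sigma style threshold on the density is an
# assumption for illustration, not the paper's exact decision rule.

import numpy as np

class RecursiveDensityDetector:
    def __init__(self):
        self.k = 0
        self.mu = None        # running mean of samples
        self.sq = 0.0         # running mean of squared norms
        self.d_mean = 0.0     # running mean of densities
        self.d_var = 0.0      # running sum of squared density deviations

    def update(self, x) -> bool:
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.mu is None:
            self.mu = np.zeros_like(x)
        self.mu = ((self.k - 1) * self.mu + x) / self.k
        self.sq = ((self.k - 1) * self.sq + float(x @ x)) / self.k
        density = 1.0 / (1.0 + float(np.sum((x - self.mu) ** 2))
                         + self.sq - float(self.mu @ self.mu))
        # Welford-style update of the density statistics.
        delta = density - self.d_mean
        self.d_mean += delta / self.k
        self.d_var += delta * (density - self.d_mean)
        std = np.sqrt(self.d_var / self.k) if self.k > 1 else 0.0
        return self.k > 10 and density < self.d_mean - 3.0 * std  # anomaly?

# Usage: feed per-interval traffic features; True means "looks anomalous".
det = RecursiveDensityDetector()
for features in np.random.default_rng(1).normal(0, 1, size=(200, 4)):
    det.update(features)
print(det.update(np.array([25.0, 25.0, 25.0, 25.0])))  # far-off point -> True
```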
The assurance technique is a fundamental component of the assurance ecosystem; it is the mechanism by which we assess security to derive a measure of assurance. Despite this importance, the characteristics of these assurance techniques have not been comprehensively explored within academic research from the perspective of industry stakeholders. Here, a framework of 20 "assurance techniques" is defined along with their interdependencies. A survey was conducted which received 153 responses from industry stakeholders, in order to determine perceptions of the characteristics of these assurance techniques. These characteristics include the expertise required, the number of people required, the time required for completion, effectiveness and cost. The extent to which perceptions differ between those in practitioner and management roles is considered. The findings were then used to compute a measure of cost-effectiveness for each assurance technique. Survey respondents were also asked about their perceptions of complementary assurance techniques. These findings were used to establish 15 combinations, for which the combined effectiveness and cost-effectiveness were assessed.
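The paper's exact cost-effectiveness measure is not given in the abstract; the snippet below merely illustrates one plausible way of ranking techniques, using the ratio of mean perceived effectiveness to mean perceived cost over hypothetical survey scores.

```python
# Hedged illustration only: ranks assurance techniques by a simple ratio of
# mean perceived effectiveness to mean perceived cost (1-5 survey scales).
# Technique names and scores below are hypothetical.

survey = {  # technique: (mean effectiveness score, mean cost score)
    "penetration_test": (4.2, 3.8),
    "vulnerability_scan": (3.4, 2.1),
    "code_review": (3.9, 3.2),
}

cost_effectiveness = {t: eff / cost for t, (eff, cost) in survey.items()}
for technique, score in sorted(cost_effectiveness.items(),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{technique:20s} {score:.2f}")
```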
Attacks on critical infrastructures are beginning to increase in number and severity. They are often initiated by highly skilled attackers, who are capable of deploying advanced attacks to exfiltrate data or even to cause physical damage. In this paper, we re-visit the rationale for protecting against cyber attacks and propose a framework to monitor, detect and evaluate anomalous behaviour within critical infrastructures. Specifically, we describe a multi-level approach for assuring resilience in critical infrastructures and services, taking into account organisational, technological and individuals' (OTI) viewpoints. The framework supports detection of anomalies by using appropriate techniques at the different levels of infrastructure and service. As a proof of concept, we derive a set of suitable metrics by monitoring a European utility network, then we simulate a detection process and evaluate the results.
Assurance techniques generate evidence that allows us to make claims of assurance about security. For the purpose of certification to an assurance scheme, this evidence enables us to answer the question: are the implemented security controls consistent with the organisational risk posture? This paper uses interviews with security practitioners to assess how ICS security assessments are conducted in practice, before introducing the five "PASIV" principles to ensure the safe use of assurance techniques. PASIV is then applied to three phases of the system development life cycle (development, procurement, operational) to determine when these assurance techniques can and cannot be used to generate evidence. Focusing then on the operational phase, this study assesses how assurance techniques generate evidence for the 35 security control families of ISO/IEC 27001:2013.
This project explored the strengths and weaknesses of commonly used assurance activities and provided an analysis of the relative costs and benefits of each. The analysis examined the activities in the context of both national (UK) and international assurance schemes, and resulted in an understanding of the level of effectiveness and cost-effectiveness provided by the individual assurance activities.
The sovereignty of nations is highly dependent on the continuous and uninterrupted operation of critical infrastructures. Recent security incidents on SCADA networks show that threats in these environments are increasing in sophistication and number. To protect critical infrastructures against cyber attacks and to cope with their complexity, we advocate the application of a resilience strategy. This strategy provides the guidelines and processes to investigate and ensure the resilience of systems. In this abstract, we briefly refer to our definition of resilience, our research work on the verification of resilience policies, and our resilience architecture for protecting SCADA networks against cyber attacks.