I am a PhD student in the Department of Computer Science at the University of Illinois at Urbana-Champaign, where I am part of the Systems Software Research Group and work with Prof. Roy Campbell.
My primary research interests are in computer security and distributed systems. In particular, I am interested in evaluating the security of network configurations and, more generally, in information management in distributed systems. Supervisor: Roy H. Campbell

Papers by Mirko Montanari
Abstract: This paper considers mission assurance for critical cloud applications, a set of applications with growing importance to governments and military organizations. Specifically, we consider applications in which assigned tasks or duties are performed in accordance with an intended purpose or plan in order to accomplish an assured mission.
Abstract: Policies are used extensively in managing the security of large computer infrastructure systems. Many large organizations and several government entities, such as the National Institute of Standards and Technology (NIST) and the North American Electric Reliability Corporation (NERC), define security policies that specify the allowed configurations of the systems under their watch. The goal of such policies is to help reduce the vulnerability of the infrastructure to attacks, misconfiguration, and operator error.
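As a rough illustration of what such a policy check can look like in practice, the Python sketch below validates a device configuration against a small set of declarative rules. The rule names, attributes, and example policy are hypothetical and are not taken from any specific NIST or NERC document.

# Minimal sketch (not the paper's implementation): checking a device
# configuration against a declarative "allowed configuration" policy.
# Rule names, fields, and the example policy are hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str          # identifier of the policy rule
    attribute: str     # configuration attribute to inspect
    allowed: set       # values the policy permits for that attribute

def violations(device_config: dict, policy: list[Rule]) -> list[str]:
    """Return the names of the rules the configuration violates."""
    failed = []
    for rule in policy:
        value = device_config.get(rule.attribute)
        if value not in rule.allowed:
            failed.append(rule.name)
    return failed

# Example: a toy policy loosely inspired by NERC CIP-style requirements.
policy = [
    Rule("telnet-disabled", "telnet", {"disabled"}),
    Rule("strong-auth", "auth_method", {"radius", "tacacs+"}),
]
router = {"telnet": "enabled", "auth_method": "radius"}
print(violations(router, policy))   # ['telnet-disabled']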
Abstract: Public communication during natural and man-made disasters is a key issue that must be addressed to protect lives and property. The choice of the best protective actions to take depends on a global situation awareness that is not available to the general public. Emergency personnel and public authorities have the duty to inform the population before, during, and after catastrophic events to support the disaster response.
Abstract: In this paper, we consider the problem of customized information dissemination over peer-based pub/sub systems. In customized dissemination, users (consumers), in addition to specifying their information needs (subscription queries), also specify the format in which they wish to receive the published content.
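To make the idea of customization concrete, the following toy Python broker lets each consumer register both a subscription predicate (what to receive) and a formatter (how to receive it). The class and parameter names are illustrative, not the paper's API.

# Illustrative sketch only: a toy broker in which consumers register both a
# subscription predicate (what they want) and a formatter (how they want it).

class Broker:
    def __init__(self):
        self.subscribers = []   # list of (predicate, formatter, deliver)

    def subscribe(self, predicate, formatter, deliver):
        self.subscribers.append((predicate, formatter, deliver))

    def publish(self, item: dict):
        for predicate, formatter, deliver in self.subscribers:
            if predicate(item):                 # subscription query
                deliver(formatter(item))        # customized format

broker = Broker()
broker.subscribe(
    predicate=lambda it: it["topic"] == "weather",
    formatter=lambda it: f"{it['city']}: {it['temp_c']} C",   # text summary
    deliver=print,
)
broker.publish({"topic": "weather", "city": "Urbana", "temp_c": 21})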
The monitoring of modern large-scale infrastructure systems often relies on complex event processing (CEP) rules to detect security and performance problems. For example, continuous monitoring of compliance with regulatory requirements such as PCI-DSS and NERC CIP requires analyzing events to identify whether specific conditions on device configurations occur. In multi-organization systems, detecting these problems often requires integrating events generated by different organizations. Because events reveal information about the infrastructure's internal structure, organizations are interested in reducing the amount of information shared with external entities.
This paper analyzes the problem of detecting policy violations in network infrastructure systems managed by two organizations (e.g., a cloud user and a cloud provider). We focus on CEP monitoring systems and introduce two protocols for selecting the events to share between the two organizations so that all possible policy violations can be detected. Our experimental evaluation shows that reciprocal information sharing between the two organizations significantly reduces the amount of information to transfer. In our SNMP monitoring test case, we obtain an 80% reduction in the information shared by any single organization.
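The sketch below conveys the general flavor of this kind of selective sharing, without reproducing the paper's two protocols: each organization forwards only the events whose types appear in a cross-organization correlation rule, and a violation fires when the combined shared streams satisfy the rule. The event types and the rule are invented for illustration.

# Hedged sketch of the general idea (not the paper's protocols): each
# organization forwards only events whose type appears in a cross-organization
# correlation rule, instead of streaming its full event log to the other side.

# A cross-org CEP rule: a violation is flagged when both event types are seen.
RULE_EVENT_TYPES = {"firewall-config-change", "unpatched-host-detected"}

def events_to_share(local_events: list[dict]) -> list[dict]:
    """Select the subset of local events relevant to the shared rule."""
    return [e for e in local_events if e["type"] in RULE_EVENT_TYPES]

def violation_detected(shared_a: list[dict], shared_b: list[dict]) -> bool:
    """Correlate the two shared streams: fire if the rule's events all occur."""
    seen = {e["type"] for e in shared_a + shared_b}
    return RULE_EVENT_TYPES <= seen

org_a = [{"type": "firewall-config-change", "host": "gw1"},
         {"type": "login", "host": "gw1"}]          # 'login' is never shared
org_b = [{"type": "unpatched-host-detected", "host": "vm7"}]

print(violation_detected(events_to_share(org_a), events_to_share(org_b)))  # True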
Monitoring systems observe important information that can be a valuable resource for malicious users: attackers can use knowledge of network topology, application logs, or configuration data to target attacks and make them hard to detect. The increasing need to correlate information across distributed systems, both to better detect potential attacks and to meet regulatory requirements, can exacerbate the problem if monitoring is centralized: a single zero-day vulnerability would permit an attacker to access all the information.
This paper introduces a novel algorithm for performing policy-based security monitoring. We use policies to distribute information across several hosts, so that the compromise of any single host has limited impact on the confidentiality of the data about the overall system. Experiments show that our solution spreads information uniformly across distributed monitoring hosts and forces attackers to perform multiple actions to acquire important data.
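The following simplified Python sketch illustrates the general idea of spreading monitoring data across hosts. It routes each event to one of several monitoring hosts with a plain hash, whereas the paper derives the placement from the security policies being monitored; the host names and event fields are hypothetical.

# Simplified sketch of distributing monitoring data (not the paper's
# algorithm): each event is routed to one of several monitoring hosts, so
# compromising a single host reveals only a slice of the overall system state.
# The routing here is a plain hash; the paper is policy-driven.

import hashlib

MONITORING_HOSTS = ["mon-1", "mon-2", "mon-3"]   # hypothetical host names

def assign_host(event: dict) -> str:
    """Deterministically map an event to one monitoring host."""
    key = f"{event['source']}:{event['type']}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(MONITORING_HOSTS)
    return MONITORING_HOSTS[index]

events = [
    {"source": "router-1", "type": "config-change"},
    {"source": "web-3",    "type": "login-failure"},
    {"source": "db-2",     "type": "patch-level"},
]
for e in events:
    print(assign_host(e), e)   # events end up spread across the three hosts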