Network-based deployments within the Internet of Things increasingly rely on the cloud-controlled federation of individual networks to configure, authorize, and manage devices across network borders. While this approach allows the convenient and reliable interconnection of networks, it raises severe security and safety concerns. These concerns range from a curious cloud provider accessing confidential data to a malicious cloud provider being able to physically control safety-critical devices. To overcome these concerns, we present D-CAM, which enables secure and distributed configuration, authorization, and management across network borders in the cloud-based Internet of Things. With D-CAM, we constrain the cloud to act as highly available and scalable storage for control messages. Consequently, we achieve reliable network control across network borders and strong security guarantees. Our evaluation confirms that D-CAM adds only a modest overhead and can scale to large networks.
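The abstract's core constraint — the cloud acts only as highly available storage for control messages — can be sketched as an authenticated, hash-chained message log. The toy below is our own illustration, not D-CAM's actual protocol: the shared HMAC key, message format, and chaining scheme are all assumptions made purely to show how gateways could detect a cloud that alters, forges, or reorders control messages.

```python
import hashlib
import hmac

# Illustrative assumption: a symmetric key the cloud never sees.
# A real deployment would use per-gateway asymmetric signatures.
GATEWAY_KEY = b"shared-gateway-secret"

def sign_message(payload: bytes, prev_digest: bytes) -> dict:
    """Chain each control message to its predecessor, log-style."""
    mac = hmac.new(GATEWAY_KEY, prev_digest + payload, hashlib.sha256).digest()
    return {"payload": payload, "prev": prev_digest, "mac": mac}

def verify_log(log: list) -> bool:
    """A receiving gateway replays the log and checks every link."""
    prev = b"\x00" * 32
    for entry in log:
        expected = hmac.new(GATEWAY_KEY, prev + entry["payload"],
                            hashlib.sha256).digest()
        if entry["prev"] != prev or not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
prev = b"\x00" * 32
for cmd in [b"authorize:device42", b"config:heating=off"]:
    entry = sign_message(cmd, prev)
    log.append(entry)
    prev = entry["mac"]

print(verify_log(log))             # True: untampered log verifies
log[0]["payload"] = b"authorize:attacker"
print(verify_log(log))             # False: a modifying cloud is detected
```

Because each MAC covers its predecessor, the cloud cannot silently drop or reorder entries either — any such change breaks the chain at verification time.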
Secure Two-Party Computation (STC), despite being a powerful tool for privacy engineers, is rarely used in practice, for two reasons: i) STCs incur significant overheads and ii) developing efficient STCs requires expert knowledge. Recent works propose a variety of frameworks that address these problems. However, the varying assumptions, scenarios, and benchmarks in these works render their results incomparable. It is thus hard, if not impossible, for an inexperienced developer of STCs to choose the best framework for her task. In this paper, we present a thorough quantitative performance analysis of recent STC frameworks. Our results reveal significant performance differences, and we identify potential for optimizations as well as new research directions for STC. Complemented by a qualitative discussion of the frameworks' usability, our results provide privacy engineers with a dependable basis for choosing the STC framework that best fits their application.
Nowadays, an ever-increasing number of service providers take advantage of the cloud computing paradigm to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows back-end functionality such as computing and storage to scale transparently, the implied distributed sharing of resources has severe implications where sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the opaqueness of the back-end providers involved in a cloud-based service and their respective data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud and hence have no means to automatically comply with the applicable privacy regulations and contracts. As the cloud computing paradigm evolves further towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer of opaqueness. In this paper, we discuss initial ideas on how to overcome these existing and emerging data handling opacities and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels to the cloud environment. This signals privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly.
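A minimal sketch of the proposed annotation idea follows. The field names and the policy check are illustrative assumptions, not the paper's actual data format: data is wrapped with sensitivity information before it leaves the owner's control boundary, and a back-end provider inspects the annotations before acting on the data.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedData:
    """Payload plus the privacy properties that travel with it (hypothetical fields)."""
    payload: bytes
    sensitivity: str                      # e.g. "public", "personal", "medical"
    allowed_regions: list = field(default_factory=list)
    max_retention_days: int = 0

def storage_backend_accepts(data: AnnotatedData, region: str) -> bool:
    """A back-end provider checks the annotations before storing the data."""
    return region in data.allowed_regions

record = AnnotatedData(
    payload=b"heart-rate:72",
    sensitivity="medical",
    allowed_regions=["eu-west"],
    max_retention_days=30,
)

print(storage_backend_accepts(record, "eu-west"))   # True
print(storage_backend_accepts(record, "us-east"))   # False
```

The point of the annotation is exactly this shift: the sensitivity decision is made once, at the owner's boundary, and every downstream layer can evaluate it mechanically.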
2014 IEEE Symposium on Computers and Communications (ISCC), 2014
Bit errors regularly occur in wireless communications. While many media streaming codecs in principle provide bit error tolerance and resilience, packet-based communication typically drops packets that are not transmitted perfectly. We present PICCETT, a method to heuristically identify which connections corrupted packets belong to and to assign them to the correct applications instead of dropping them. PICCETT is a receiver-side classifier that requires no support from the sender or network and no information about which communication protocols are in use. We show that PICCETT can assign virtually all packets to the correct connections at bit error rates of up to 7–10% and prevents misassignments even during error bursts. PICCETT's classification algorithm needs no prior offline training and both trains and classifies fast enough to easily keep up with IEEE 802.11 communication speeds.
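The abstract does not detail PICCETT's classifier, so the sketch below is only a hypothetical stand-in for the general idea, not the actual algorithm: compare a corrupted packet's header bits against a template per known connection and assign the packet to the closest match within an error budget.

```python
def hamming(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def classify(packet_header: bytes, connections: dict, max_bit_errors: int = 8):
    """Return the id of the connection whose template header is closest,
    or None if every candidate exceeds the error budget."""
    best_id, best_dist = None, max_bit_errors + 1
    for conn_id, template in connections.items():
        d = hamming(packet_header, template)
        if d < best_dist:
            best_id, best_dist = conn_id, d
    return best_id

# Hypothetical per-connection header templates learned from clean packets:
known = {
    "video-stream": bytes.fromhex("45000400aabbccdd"),
    "voice-call":   bytes.fromhex("450000a011223344"),
}

# A corrupted copy of the video-stream header (2 flipped bits):
corrupted = bytes.fromhex("45000401aabbccdc")
print(classify(corrupted, known))   # video-stream
```

A receiver-side scheme like this needs nothing from the sender or network, which matches the deployment property the abstract emphasizes; PICCETT's real classifier additionally trains online at 802.11 speeds.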
2013 IEEE 5th International Conference on Cloud Computing Technology and Science, 2013
The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns that arise when outsourcing sensitive data to the cloud. One important group of concerns regards the handling of data. On the one hand, users and companies have requirements for how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offerings, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud, so providers in the cloud stack – even if they were willing to meet these requirements – cannot treat the data adequately. In this paper, we identify and discuss the challenges of enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.
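Conceptually, a requirements-aware storage node could work as follows. This is a standalone toy under our own assumptions (capability names, requirement keys, and routing behavior are all invented for illustration; the paper itself extends AppScale and Cassandra): every write carries data handling requirements, and a node refuses writes it cannot honour.

```python
# Hypothetical capabilities a storage node advertises about itself:
NODE_CAPABILITIES = {"location": "DE", "encrypted_at_rest": True}

def node_can_honour(requirements: dict) -> bool:
    """Check each stated requirement against the node's capabilities."""
    loc_ok = requirements.get("location") in (None, NODE_CAPABILITIES["location"])
    enc_ok = (not requirements.get("encrypted_at_rest")
              or NODE_CAPABILITIES["encrypted_at_rest"])
    return loc_ok and enc_ok

def put(store: dict, key: str, value: bytes, requirements: dict) -> bool:
    """Store the value with its requirements, or refuse the write."""
    if not node_can_honour(requirements):
        return False        # a real system would route to a compliant node
    store[key] = (value, requirements)
    return True

db = {}
print(put(db, "invoice-1", b"...", {"location": "DE"}))   # True: stored
print(put(db, "record-2", b"...", {"location": "US"}))    # False: refused
```

Keeping the requirements stored alongside the value means later operations (replication, backup, deletion deadlines) can re-check them, which is the essence of making the storage layer itself requirements-aware.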
Proceedings of the 5th ACM Conference on Data and Application Security and Privacy - CODASPY '15, 2015
Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken, as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or from small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries, and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By applying threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is the first among related approaches to provide plausible deniability.
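The decryption-mixnet half of the design can be illustrated with a toy shuffle. This is greatly simplified: XOR layers stand in for real public-key encryption, and the threshold-signature escrow of the coins themselves is omitted entirely. Each mix peer strips one encryption layer from the batch of fresh output addresses and shuffles it, so no single peer can link inputs to outputs.

```python
import hashlib
import random

def layer(key: bytes, data: bytes) -> bytes:
    """Toy encryption layer: XOR with a key-derived pad.
    Applying the same layer twice removes it (XOR is its own inverse)."""
    pad = hashlib.sha256(key + len(data).to_bytes(4, "big")).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

mix_keys = [b"peer-1", b"peer-2", b"peer-3"]     # one key per mix peer
addresses = [b"addrA", b"addrB", b"addrC"]        # fresh output addresses

# Each sender wraps its output address in one layer per mix peer:
batch = [layer(mix_keys[0], layer(mix_keys[1], layer(mix_keys[2], a)))
         for a in addresses]

# Each peer in turn strips its own layer and shuffles the batch:
for key in mix_keys:
    batch = [layer(key, item) for item in batch]
    random.shuffle(batch)

# The plaintext addresses come out, but in an unlinkable order:
print(sorted(batch) == sorted(addresses))   # True
```

In CoinParty the shuffled addresses then receive the mixed coins via a jointly produced threshold signature, so no single peer can steal the escrowed funds either; that second mechanism is not modeled above.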
The SensorCloud project aims at enabling the use of the elastic, on-demand resources of today's cloud offerings for the storage and processing of sensed information about the physical world. Recent privacy concerns regarding the cloud computing paradigm, however, constitute an adoption barrier that must be overcome to leverage the full potential of the envisioned scenario. To this end, a key goal of the SensorCloud project is to develop a security architecture that offers full access control to the data owner when outsourcing her sensed information to the cloud. The central idea of this security architecture is the introduction of the trust point, a security-enhanced gateway at the border of the information sensing network. Based on a security analysis of the SensorCloud scenario, this chapter presents the design and implementation of the main components of our proposed security architecture. Our evaluation results confirm the feasibility of our proposed architecture with respect to the elastic, on-demand resources of today's commodity cloud offerings.
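The trust point's role — encrypting sensed data at the network border so the cloud only ever stores ciphertext while the owner keeps the key — can be sketched as follows. The XOR keystream "cipher" and all names here are illustrative stand-ins for real cryptography, not SensorCloud's actual implementation.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def trust_point_encrypt(key: bytes, reading: bytes) -> bytes:
    """XOR with the keystream; the same call also decrypts."""
    return bytes(a ^ b for a, b in zip(reading, keystream(key, len(reading))))

owner_key = b"owner-only-key"           # never leaves the owner's premises
reading = b"temp=21.5C"                 # a sensed value crossing the border

ciphertext = trust_point_encrypt(owner_key, reading)    # what the cloud stores
restored = trust_point_encrypt(owner_key, ciphertext)   # owner decrypts locally

print(restored == reading)    # True
print(ciphertext != reading)  # True: the cloud sees only ciphertext
```

Placing this step in a gateway at the sensing network's border is what gives the data owner "full access control": the cloud's elastic storage and processing operate without ever holding the key.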
Papers by Martin Henze