We present the Proof-of-Approval protocol which, like Nakamoto's famous blockchain protocol, achieves consensus in a so-called permissionless setting where anyone can join (or leave) the protocol execution. But unlike protocols that require the consumption of physical resources, this protocol uses the inherent randomness in network communication to arrive at consensus. While all blockchains record transactions, this protocol additionally records approvals from network stakeholders. We show that recorded approvals make preferred-fork selection less ambiguous, prevent long-range attacks and result in near-instant finality. The protocol allows anyone, including parties without stake, to compete in the block creation process and win rewards. We show that this free-for-all approach results in high "liveness" for the blockchain. In addition, we show that the protocol provides design parameters to achieve a desired degree of "fairness." We analyze the protocol in a partially synchronous "bounded-delay" model where messages are guaranteed to arrive within the time bounds of a round. Our model also assumes that the adversarial stake is bounded below quorum and that the total honest stake is large enough to achieve quorum. Finally, we discuss some practical implications of this protocol.
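As an illustration of the idea above, here is a minimal, hypothetical Python sketch of approval-weighted fork choice and quorum finality; the Block structure, the 2/3 quorum fraction, and the selection rule are assumptions for illustration, not the paper's exact construction.

```python
# Hypothetical sketch: fork choice by recorded stakeholder approvals.
from dataclasses import dataclass, field

@dataclass
class Block:
    height: int
    approvals: dict = field(default_factory=dict)   # stakeholder id -> approving stake

def approval_stake(fork):
    """Total stake recorded as approving the blocks of a fork."""
    return sum(sum(b.approvals.values()) for b in fork)

def has_quorum(block, total_stake, quorum_fraction=2 / 3):
    """A block is (near-instantly) final once its recorded approvals reach quorum."""
    return sum(block.approvals.values()) >= quorum_fraction * total_stake

def preferred_fork(forks):
    """Prefer the fork carrying the most recorded approval stake, which makes
    fork selection unambiguous without consuming physical resources."""
    return max(forks, key=approval_stake)

heavy = [Block(1, {"alice": 40.0}), Block(2, {"alice": 40.0, "bob": 30.0})]
light = [Block(1, {"carol": 25.0})]
assert preferred_fork([heavy, light]) is heavy
assert has_quorum(heavy[1], total_stake=100.0)
```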
Grid computing, or the computational grid, remains a vast research field in both academia and industry. A computational grid provides resource sharing through multi-institutional virtual organizations for dynamic problem solving. Heterogeneous resources from different administrative domains are virtually distributed across different networks in computational grids. Thus failures can occur at any point in time, and a job running in a grid environment might fail. Hence fault tolerance is an important and challenging issue in grid computing, as the dependability of individual grid resources cannot be guaranteed. To make computational grids more effective and reliable, a fault tolerant system is necessary. The objective of this paper is to review the existing fault tolerance techniques applicable to grid computing. The paper presents the state of the art of various fault tolerance techniques and a comparative study of the existing algorithms.
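The paper is a survey rather than a single algorithm, but one technique such surveys commonly compare is checkpointing with retry; the sketch below, with hypothetical names, shows the idea of resuming a failed grid job from its last checkpoint instead of from scratch.

```python
# Illustrative checkpoint-and-retry, a classic grid fault tolerance technique.
import os
import pickle

def run_with_checkpoints(steps, ckpt="job.ckpt", max_retries=3):
    """Run a sequence of step functions, persisting progress after each step,
    so a failure on an unreliable grid resource costs only the current step."""
    for _ in range(max_retries + 1):
        start = 0
        if os.path.exists(ckpt):
            with open(ckpt, "rb") as f:
                start = pickle.load(f)           # resume from the last good step
        try:
            for i in range(start, len(steps)):
                steps[i]()
                with open(ckpt, "wb") as f:
                    pickle.dump(i + 1, f)        # record completed progress
            if os.path.exists(ckpt):
                os.remove(ckpt)
            return True
        except Exception:
            continue                             # retry from the checkpoint
    return False
```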
Byzantine Fault Tolerant (BFT) systems are considered to be the state of the art with regard to providing reliability in distributed systems. Despite over a decade of research, however, BFT systems are rarely used in practice. In this paper, we describe our experience, from an application developer's perspective, of trying to leverage the publicly available, highly studied and extended "PBFT" middleware (by Castro and Liskov) to provide provable reliability guarantees for an electronic voting application with high security and robustness needs. We describe several obstacles we encountered and drawbacks we identified in the PBFT approach. These include some that we tackled, such as the lack of support for dynamic client management and the fact that state management is left completely up to the application. Others still remain, including the lack of robust handling of non-determinism, lack of support for web-based applications, lack of support for stronger cryptographic primitives, a...
The high replication cost of Byzantine fault-tolerance (BFT) methods has been a major barrier to their widespread adoption in commercial distributed applications. We present ZZ, a new approach that reduces the replication cost of BFT services from 2f+1 to practically f+1. The key insight in ZZ is to use f+1 execution replicas in the normal case and to activate additional replicas only upon failures. In data centers where multiple applications share a physical server, ZZ reduces the aggregate number of execution replicas running ...
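A hypothetical sketch of the key insight, with replicas modelled as plain callables: run only f+1 execution replicas in the normal case, and because at most f can be faulty, any disagreement among their replies proves a failure and justifies waking a spare.

```python
# Sketch of on-demand replica activation in the spirit of ZZ (names are illustrative).
from collections import Counter

def execute(request, active, spares, f):
    """Return a reply vouched for by f+1 replicas, activating spares only on demand."""
    replies = [replica(request) for replica in active]   # normal case: just f+1 replicas
    while True:
        value, votes = Counter(replies).most_common(1)[0]
        if votes >= f + 1:
            return value                                 # f+1 matching replies are correct
        if not spares:
            raise RuntimeError("more than f faulty replicas")
        replies.append(spares.pop()(request))            # a mismatch proves a fault: wake a spare

# f = 1: the two active replicas disagree, so one spare is activated to break the tie.
print(execute("op", [lambda r: "ok", lambda r: "bad"], [lambda r: "ok"], f=1))
```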
Cloud computing is a concept of providing user- and application-oriented services in a virtual environment. Users can use the various cloud services dynamically, as per their requirements. Different users have different requirements in terms of application reliability, performance and fault tolerance. Static and rigid fault tolerance policies provide a fixed degree of fault tolerance, and a fixed overhead. In this research work we propose a method to implement dynamic fault tolerance that takes customer requirements into account. Cloud users are classified into subclasses according to their fault tolerance requirements, and their jobs are classified into compute-intensive and data-intensive categories. A varying degree of fault tolerance, consisting of replication and an input buffer, is then applied. From simulation-based experiments we found that the proposed dynamic method performs better than the existing methods.
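A minimal sketch of the kind of policy mapping the paper describes; the class names, replica counts, and buffering rule below are illustrative assumptions, not the paper's exact values.

```python
# Hypothetical mapping from user class and job type to a fault tolerance policy.
def ft_policy(user_class, job_type):
    replicas = {"premium": 3, "standard": 2, "basic": 1}[user_class]
    # Data-intensive jobs additionally get an input buffer so a restarted
    # replica can replay its input stream rather than losing it.
    return {"replicas": replicas, "input_buffer": job_type == "data-intensive"}

print(ft_policy("premium", "data-intensive"))   # {'replicas': 3, 'input_buffer': True}
print(ft_policy("basic", "compute-intensive"))  # {'replicas': 1, 'input_buffer': False}
```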
In this paper, we present the mechanisms needed for Byzantine fault tolerant coordination of Web services atomic transactions. The mechanisms have been incorporated into an open-source framework implementing the standard Web services atomic transactions specification. The core services of the framework, namely the activation service, the registration service, the completion service, and the distributed commit service, are replicated and protected with our Byzantine fault tolerance mechanisms. Such a framework can be useful for the many transactional Web services that require a high degree of security and dependability.
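One ingredient of any such design is that a participant must not trust a single coordinator replica; the hedged sketch below shows the standard BFT reply-voting rule, where a transaction outcome is accepted only once f+1 coordinator replicas report the same decision (the function names are illustrative).

```python
# Accepting a transaction outcome only with f+1 matching coordinator replies.
from collections import Counter

def decide(outcomes, f):
    """outcomes: one 'commit'/'abort' message per coordinator replica so far."""
    value, votes = Counter(outcomes).most_common(1)[0]
    if votes >= f + 1:
        return value                 # f+1 matches: at least one honest replica agrees
    raise RuntimeError("no f+1 matching outcomes yet; wait for more replies")

print(decide(["commit", "commit", "abort", "commit"], f=1))   # 'commit'
```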
The popularity of wide-area computer services has generated a compelling need for efficient algorithms that provide high reliability. Byzantine fault-tolerant (BFT) algorithms can be used for this purpose because they allow replicated systems to continue to provide a correct service even when some of their replicas fail arbitrarily, either accidentally or due to malicious faults. Current BFT algorithms perform well on LANs, but when the replicas are distributed geographically their performance is affected by the lower bandwidth and the ...
In the past decades, both the functionality of embedded systems and the time-to-market pressure have been continuously increasing. Simulation of an entire system, including both hardware and software, from early design stages is one of the effective approaches to improving design productivity. A large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems; in order to validate the function of the entire system, they have to be simulated together with the application software and hardware. Indeed, traditional methods of verification have proven to be insufficient for complex digital systems. Register transfer level test-benches have become too complex to manage and too slow to execute. New methods and verification techniques have emerged over the past few years. High-level test-benches, assertion-based verification, formal methods and hardware verification languages are just a few examples of the intense research activities driving the verification domain.
Wireless ad hoc networks, due to their inherent unreliability, pose significant challenges to the task of achieving tight coordination among nodes. This paper presents an asynchronous Byzantine consensus protocol, called Turquois, specifically designed for resource-constrained wireless ad hoc networks. The key to its efficiency is the fact that it tolerates dynamic message omissions, which allows an efficient utilization of the wireless broadcasting medium. The protocol is safe despite the arbitrary failure of f ...
This paper presents a novel consensus algorithm deployed within the Temtum cryptocurrency network. An overview of the proof of work consensus algorithm is presented, and gaps in the research are outlined. The Temtum consensus algorithm's unique components, including the Node Participation Document (NPD) and the use of the NIST randomness beacon, are outlined and explained. Comparisons of the cost to attack the consensus algorithm and of energy consumption between the Temtum consensus algorithm and Bitcoin's proof of work are presented and evaluated. We conclude the paper by summarising the findings of the research and presenting future work to be conducted.
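As a hedged illustration of how a public randomness beacon can drive consensus, the sketch below hashes a beacon pulse to pick the next block producer; the node list, pulse value, and selection rule are assumptions, not Temtum's actual mechanism.

```python
# Deterministic but unpredictable producer selection from a public beacon pulse.
import hashlib

def select_producer(nodes, beacon_value):
    digest = hashlib.sha256(beacon_value.encode()).digest()
    index = int.from_bytes(digest, "big") % len(nodes)   # every node computes the same index
    return nodes[index]

nodes = ["node-a", "node-b", "node-c"]
print(select_producer(nodes, "2021-07-01T12:00:00Z:pulse"))
```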
Open distributed systems are typically composed of an unknown number of processes running on heterogeneous hosts. Their communication often requires tolerance to temporary disconnections and security against malicious actions. Tuple spaces are a well-known coordination model for this sort of system. They can support communication that is decoupled both in time and space. There are currently several implementations of distributed fault-tolerant tuple spaces, but they are not Byzantine-resilient, i.e., they do not ...
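For readers unfamiliar with the model, here is a minimal single-process sketch of the classic Linda-style tuple space interface, with None as a wildcard; real systems such as those the paper discusses replicate this state over a set of (possibly Byzantine) servers.

```python
# Minimal tuple space (Linda-style): out/rd/inp with None as a wildcard.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, t):                 # insert a tuple
        self.tuples.append(t)

    def _match(self, t, pattern):
        return len(t) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, t))

    def rd(self, pattern):            # read a matching tuple without removing it
        return next(t for t in self.tuples if self._match(t, pattern))

    def inp(self, pattern):           # read and remove ('in' is a Python keyword)
        t = self.rd(pattern)
        self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("job", 42, "pending"))
print(ts.inp(("job", None, "pending")))   # ('job', 42, 'pending')
```

The blocking semantics of rd/in (waiting until a match appears) are what decouple communicating processes in time; they are omitted here for brevity.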
This paper presents two Byzantine fault-tolerant state machine replication (BFT) algorithms that are minimal in several senses. First, they require only 2f+1 replicas, instead of the usual 3f+1. Second, the trusted service on which this reduction of replicas is based is arguably minimal: it provides an interface with a single function and is composed only of a counter and a signature generation primitive. Third, in nice executions the two algorithms run in the minimum number of communication steps for non-speculative and speculative ...
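A hypothetical sketch of such a minimal trusted service: a monotonic counter bound to every certified message, so a faulty replica cannot assign the same counter value to two different messages (the equivocation that otherwise forces 3f+1 replicas). HMAC stands in below for the signature generation primitive.

```python
# Trusted monotonic counter with a single-function interface (illustrative).
import hashlib
import hmac

class TrustedCounter:
    """certify(message) -> (counter, tag); the counter never repeats or decreases."""
    def __init__(self, secret_key):
        self._key = secret_key
        self._counter = 0

    def certify(self, message):
        self._counter += 1                       # strictly monotonic, never reused
        tag = hmac.new(self._key,
                       message + self._counter.to_bytes(8, "big"),
                       hashlib.sha256).hexdigest()
        return self._counter, tag

tc = TrustedCounter(b"device-secret")
print(tc.certify(b"PREPARE block 7"))   # (1, '...'); the next call yields counter 2
```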
CMOS technology is the key element in the development of VLSI systems since it consumes less power. Power optimization has become an overriding concern in deep submicron CMOS technologies. With shrinking device sizes, reducing power consumption and managing overall power on the chip are key challenges. For many designs, power optimization is important in order to reduce package cost and to extend battery life. Leakage also plays a very important role in power optimization because it accounts for a significant fraction of the total power dissipation of VLSI circuits. This paper aims to elaborate the developments and advancements in the area of power optimization of CMOS circuits in the deep submicron region. The survey will be useful to designers for selecting a suitable technique depending upon their requirements.
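For context, the standard first-order power model (a textbook relation, not taken from the paper) shows why leakage matters: the dynamic term shrinks with the supply voltage, while lowering threshold voltages to keep speed raises the leakage current.

```latex
P_{\text{total}} =
    \underbrace{\alpha\, C_{L}\, V_{dd}^{2}\, f}_{\text{dynamic switching}}
  + \underbrace{I_{\text{sc}}\, V_{dd}}_{\text{short-circuit}}
  + \underbrace{I_{\text{leak}}\, V_{dd}}_{\text{leakage}}
```

Here \alpha is the switching activity, C_L the load capacitance, V_dd the supply voltage and f the clock frequency; in deep submicron processes the leakage term becomes a significant fraction of the total.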
The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines: each physical host runs two virtual machines, each one acting as a failure detector of the other.
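A minimal sketch of the twin-VM idea, assuming the two VMs can be modelled as callables: each twin vouches for the other's output, and any disagreement is converted into an omission (no reply at all), which the replication layer is designed to tolerate with fewer replicas.

```python
# Twin virtual machines as mutual failure detectors (illustrative sketch).
def twin_execute(request, vm_a, vm_b):
    a, b = vm_a(request), vm_b(request)
    if a == b:
        return a      # the twins agree: the host emits the reply
    return None       # Byzantine disagreement detected: the reply is omitted

print(twin_execute("op", lambda r: "ok", lambda r: "ok"))    # 'ok'
print(twin_execute("op", lambda r: "ok", lambda r: "bad"))   # None (omission)
```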
The First Workshop on Recent Advances on Intrusion-Tolerant Systems aimed to bring together researchers in the related areas of Intrusion Tolerance, Distributed Trust, Survivability, Byzantine Fault Tolerance, and Resilience. The workshop was especially interested in "intrusion-tolerant systems": how to build them? How to evaluate and test their dependability and security? What systems need to be intrusion-tolerant? The proceedings contain 7 papers, and the abstract of the keynote speech by Professor William H. Sanders, ...
Data centers strive to provide reliable access to the data and services that they host. This reliable access requires the data and services hosted by the data center to be both consistent and available. Byzantine fault tolerance (BFT) replication offers the promise of services that are consistent and available despite arbitrary failures by a bounded number of servers and an unbounded number of clients. The thesis of this position paper is simple: BFT is on the verge of becoming a practical reality, but clearing the last hurdles will require rethinking, once again, how BFT systems must be designed and implemented. Three fundamental trends support our thesis that widespread adoption of Byzantine fault tolerance is at hand. First, falling hardware costs and the increased value and importance of services are making significant non-BFT replication a standard commercial practice [5, 12, 13]. Although fault tolerance has long been an afterthought for non-critical applications, it is...