Recently there has been a great deal of interest in the performance evaluation of parallel simulation. Most work is devoted to time complexity and assumes that the amount of memory available for parallel simulation is unlimited. This paper studies the space complexity of parallel simulation. Our goal is to design an efficient memory management protocol which guarantees that the memory consumption of parallel simulation is of the same order as that of sequential simulation. (Such a protocol is referred to as optimal.) First, we derive the relationships among the space complexities of sequential simulation, Chandy-Misra simulation [2], and Time Warp simulation [7]. We show that Chandy-Misra may consume more storage than sequential simulation, or vice versa. We then show that Time Warp never consumes less memory than sequential simulation. Finally, we describe cancelback, an optimal Time Warp memory management protocol proposed by Jefferson. Although cancelback is considered to be complete so...
We present an efficient Hough transform for automatic detection of cylinders in point clouds. As cylinders are one of the most frequently used primitives in industrial design, automatic and robust methods for their detection and fitting are essential for reverse engineering from point clouds. Current methods employ automatic segmentation followed by geometric fitting, which requires a lot of manual interaction during modelling. Although the Hough transform can be used for automatic detection of cylinders, the required 5D Hough space has a prohibitively high time and space complexity for most practical applications. We address this problem in this paper and present a sequential Hough transform for automatic detection of cylinders in point clouds. Our algorithm consists of two sequential steps of low-dimensional Hough transforms. The first step, called Orientation Estimation, uses the Gaussian sphere of the input data and performs a 2D Hough transform for finding strong hypotheses ...
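To make the Orientation Estimation step concrete, the sketch below is a minimal, brute-force Python illustration of Hough-style voting on the Gaussian sphere, not the paper's implementation: cylinder surface normals are perpendicular to the axis, so each estimated point normal votes for every candidate axis direction it is (nearly) orthogonal to. The grid resolution, the orthogonality threshold, and the assumption that per-point normals are already available are ours.

import numpy as np

def axis_hypotheses(normals, n_az=60, n_el=30, eps=0.05, top_k=3):
    # Discretize candidate axis directions over (azimuth, elevation),
    # i.e. a 2D accumulator over the Gaussian sphere.
    az = np.linspace(-np.pi, np.pi, n_az, endpoint=False)
    el = np.linspace(-np.pi / 2, np.pi / 2, n_el)
    A, E = np.meshgrid(az, el, indexing="ij")
    axes = np.stack([np.cos(E) * np.cos(A),
                     np.cos(E) * np.sin(A),
                     np.sin(E)], axis=-1).reshape(-1, 3)
    # Each unit normal votes for all candidate axes it is nearly
    # orthogonal to (|axis . normal| < eps).
    votes = (np.abs(axes @ normals.T) < eps).sum(axis=1)
    best = np.argsort(votes)[::-1][:top_k]
    return axes[best], votes[best]

The strongest bins serve as axis-orientation hypotheses; position and radius can then be recovered by a lower-dimensional search, as in the paper's second step.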
Sorting is a basic operation in most computer science applications; it means arranging data in a particular order inside the computer. In this paper we discuss the performance of different sorting algorithms along with their advantages and disadvantages. The paper also presents the application areas of different sorting algorithms. Its main goal is to compare the performance of different sorting algorithms based on different parameters.
Keywords: Algorithm, Time Complexity, Space Complexity
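As a hypothetical illustration of such a parameter-based comparison (the choice of algorithms and input sizes is ours, not the paper's), the following Python sketch times an O(n^2) sort against an O(n log n) sort on random inputs of increasing size:

import random
import time

def bubble_sort(a):
    a = a[:]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (1000, 2000, 4000):
    data = [random.random() for _ in range(n)]
    for sort in (bubble_sort, merge_sort):
        start = time.perf_counter()
        sort(data)
        print(n, sort.__name__, round(time.perf_counter() - start, 4))

Doubling n roughly quadruples bubble sort's running time while merge sort's grows only slightly faster than linearly, which is the kind of behaviour such comparisons measure.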
SS is an algorithm for efficiently searching for an element in a list. It is based on a modified binary search; the modification makes the search noticeably more efficient than the plain binary search algorithm. Rather than checking only the middle position, SS checks the low and high positions as well. By checking three positions simultaneously in each iteration, the search is faster than with the regular binary search algorithm.
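Reading the description literally, the modification amounts to testing the low and high ends of the current interval in addition to the middle before narrowing it. The Python sketch below is our interpretation of that idea (the function name is ours), not the authors' code:

def ss_search(a, target):
    # Binary search variant that also checks the low and high
    # positions of the current interval on every iteration.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        if a[lo] == target:
            return lo
        if a[hi] == target:
            return hi
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if target < a[mid]:
            lo, hi = lo + 1, mid - 1   # endpoints already ruled out
        else:
            lo, hi = mid + 1, hi - 1
    return -1

print(ss_search([1, 3, 5, 7, 9, 11], 9))  # -> 4

The asymptotic complexity stays O(log n); the gain comes from shrinking the interval by two extra positions per iteration and from finding targets that sit at the interval boundaries immediately.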
The discovery of information encoded in biological sequences is assuming a prominent role in identifying genetic diseases and in deciphering biological mechanisms. This information is usually encoded in patterns frequently occurring in the sequences, also called motifs. In fact, motif discovery has received much attention in the literature, and several algorithms have already been proposed, which are specifically tailored to
For current state-of-the-art DPLL SAT-solvers the two main bottlenecks are the amounts of time and memory used. In proof complexity, these resources correspond to the length and space of resolution proofs. There has been a long line of research investigating these proof complexity measures, but while strong results have been established for length, our understanding of space and how it relates to length has remained quite poor. In particular, the question whether resolution proofs can be optimized for length and space ...
This article presents two new algorithms whose purpose is to maintain the Max-RPC domain filtering consistency during search with a minimal memory footprint and implementation effort. Both are sub-optimal algorithms that make use of support residues, a backtrack-stable and highly efficient data structure which was successfully used to develop the state-of-the-art AC-3rm algorithm. The two proposed algorithms, Max-RPCrm and L-Max-RPCrm, are competitive with the best optimal Max-RPC algorithms, while being considerably simpler to implement. L-Max-RPCrm computes an approximation of the Max-RPC consistency, which is guaranteed to be strictly stronger than AC, with the same space complexity and better worst-case time complexity than Max-RPCrm. In practice, the difference in filtering power between L-Max-RPCrm and standard Max-RPC is nearly indistinguishable on random problems. Max-RPCrm and L-Max-RPCrm are implemented in the Choco Constraint Solver through a strong consistency global c...
Memory is a scarce resource in Java smart cards. Developers and card suppliers alike want to make sure, at compile- or load-time, that a Java Card applet will not overflow memory when performing dynamic class instantiations. Although there are good solutions to the general problem, the challenge remains to produce a static analyser that is certified and could execute on-card. We provide a constraint-based algorithm which determines potential loops and (mutually) recursive methods. The algorithm operates on the bytecode of an applet and is written as a set of rules associating one or more constraints to each bytecode instruction. The rules are designed so that a certified analyser could be extracted from their proof of correctness. By keeping a clear separation between the rules dealing with the inter- and intra-procedural aspects of the analysis, we are able to reduce the space complexity of a previous algorithm.
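As a toy illustration of the two analysis aspects (the graph encodings below are hypothetical and far simpler than the paper's constraint-based treatment of bytecode), potential loops can be flagged as back edges in a method's control-flow graph, and (mutually) recursive methods as cycles in the call graph:

def find_back_edges(cfg, entry):
    # Intra-procedural aspect: DFS back edges in a control-flow
    # graph given as {node: [successor, ...]} indicate potential loops.
    back, state = [], {}        # state: 1 = on DFS stack, 2 = done
    def dfs(u):
        state[u] = 1
        for v in cfg.get(u, []):
            if state.get(v) == 1:
                back.append((u, v))
            elif v not in state:
                dfs(v)
        state[u] = 2
    dfs(entry)
    return back

def recursive_methods(call_graph):
    # Inter-procedural aspect: every method on a call-graph cycle
    # is (mutually) recursive.
    rec, state, stack = set(), {}, []
    def dfs(u):
        state[u] = 1
        stack.append(u)
        for v in call_graph.get(u, []):
            if state.get(v) == 1:
                rec.update(stack[stack.index(v):])
            elif v not in state:
                dfs(v)
        stack.pop()
        state[u] = 2
    for m in call_graph:
        if m not in state:
            dfs(m)
    return rec

print(recursive_methods({"f": ["g"], "g": ["f"], "h": []}))  # -> {'f', 'g'}

Keeping the two traversals separate mirrors the paper's separation of intra- and inter-procedural rules, which is what it credits for the reduced space complexity.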
Clustering is one of the fundamental data mining tasks. Many different clustering paradigms have been developed over the years, including partitional, hierarchical, mixture-model-based, density-based, spectral, subspace, and so on. The focus of this paper is on full-dimensional, arbitrarily shaped clusters. Existing methods for this problem suffer in terms of either memory or time complexity (quadratic or even cubic). This shortcoming has restricted these algorithms to datasets of moderate size. In this paper we propose ...
Consistencies are properties of Constraint Networks (CNs) that can be exploited in order to make inferences. When a significant amount of such inferences can be performed, CNs are much easier to solve. In this paper, we focus on relation filtering consistencies for binary constraints, i.e. consistencies that make it possible to identify inconsistent pairs of values. We propose a new consistency called Dual Consistency (DC) and relate it to Path Consistency (PC). We show that Conservative DC (CDC, i.e. DC with only the relations associated with the constraints of the network considered) is more powerful, in terms of filtering, than Conservative PC (CPC). Following the approach of Mac Gregor, we introduce an algorithm to establish (strong) CDC with a very low worst-case space complexity. Even if the relative efficiency of the algorithm introduced to establish (strong) CDC partly depends on the density of the constraint graph, the experiments we have conducted show that, on many series of CSP instances, CDC is...
Tampering of a database can be detected through the use of cryptographically strong hash functions. Subsequently applied forensic analysis algorithms can help determine when, what, and perhaps ultimately who and why. This paper presents a novel forensic analysis algorithm, the Tiled Bitmap Algorithm, which is more efficient than prior forensic analysis algorithms. It introduces the notion of a candidate set (all possible locations of detected tampering(s)) and provides a complete characterization of the candidate set and its cardinality. An optimal algorithm for computing the candidate set is also presented. Finally, the implementation of the Tiled Bitmap Algorithm is discussed, along with a comparison to other forensic algorithms in terms of space/time complexity and cost. An example of candidate set generation, proofs of the theorems and lemmata, and a proof of algorithm correctness can be found in the appendix, available on the Computer Society Digital Library at
This paper addresses the problem of computing visual hulls from image contours. We propose a new hybrid approach which overcomes the precision-complexity trade-off inherent in voxel-based approaches by taking advantage of surface-based approaches. To this end, we introduce a space discretization which does not rely on a regular grid, where most cells are ineffective, but rather on an irregular grid where sample points lie on the surface of the visual hull. Such a grid is composed of tetrahedral cells obtained by applying a Delaunay triangulation to the sample points. These cells are then carved according to image silhouette information. The proposed approach keeps the robustness of volumetric approaches while drastically improving their precision and reducing their time and space complexities. It thus allows modeling of objects with complex geometry, and it also makes real time feasible for precise models. Preliminary results with synthetic and real data are presented.
We present random sampling algorithms that with probability at least 1 − δ compute a (1 ± ε)-approximation of the clustering coefficient and of the number of bipartite cliques in a graph given as a stream of edges. Our algorithm for estimating the clustering coefficient uses space that is inversely related to the clustering coefficient of the network itself. Our algorithm for computing the number of K3,3 bipartite cliques uses space that is proportional to the ratio between the number of K1,3 and K3,3 subgraphs in the graph. Since the space complexity depends only on the structure of the input graph and not on the number of nodes, our algorithms scale very well with increasing graph size, and so they provide a basic tool to analyze the structure of dense clusters in large graphs. They have many applications to the discovery of Web communities, the analysis of the structure of large social networks, and the discovery of frequent patterns in large graphs. We implemented both algorithms and eva...
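The sampling idea behind the clustering-coefficient estimator can be illustrated offline with wedge sampling: pick random paths of length two and report the fraction that close into triangles. The Python sketch below ignores the streaming setting and the (ε, δ) guarantee of the paper, and the sample size is an arbitrary assumption of ours.

import random

def clustering_coefficient_estimate(adj, samples=100_000):
    # adj: {node: set(neighbours)}. Sample wedge centres with
    # probability proportional to the number of wedges through them,
    # i.e. deg * (deg - 1), then test whether the wedge is closed.
    centres = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) for v in centres]
    closed = 0
    for c in random.choices(centres, weights=weights, k=samples):
        u, w = random.sample(list(adj[c]), 2)
        if w in adj[u]:
            closed += 1
    return closed / samples

By standard concentration bounds the estimate converges to the true fraction of closed wedges (the global clustering coefficient), and the number of samples needed grows as that coefficient shrinks, consistent with the abstract's remark that the space used is inversely related to it.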
Algorithms based on following local gradient information are surprisingly effective for certain classes of constraint satisfaction problems. Unfortunately, previous local search algorithms are notoriously incomplete: they are not guaranteed to find a feasible solution if one exists, and they cannot be used to determine unsatisfiability. We present an algorithmic framework for complete local search and discuss in detail an instantiation for the propositional satisfiability problem (SAT). The fundamental idea is to use constraint learning in combination with a novel objective function that converges during search to a surface without local minima. Although the algorithm has worst-case exponential space complexity, we present empirical results on challenging SAT competition benchmarks that suggest that our implementation can perform as well as state-of-the-art solvers based on more mature techniques. Our framework suggests a range of possible algorithms lying between tree-based search a...
This paper introduces a three-dimensional synchronized alternating Turing machine (3-SATM), and investigates fundamental properties of 3-SATMs whose input tapes are restricted to cubic ones. The main topics of this paper are: (1) a relationship between the accepting powers of 3-SATMs and three-dimensional alternating Turing machines with small space bounds, (2) a relationship between the accepting powers of five-way and six-way 3-SATMs, (3) a relationship between the accepting powers of 3-SATMs and three-dimensional nondeterministic Turing machines.
Online diagnosis methods require large, computationally expensive diagnosis tasks to be decomposed into sets of smaller tasks so that time and space complexity constraints are not violated. This paper defines the distributed diagnosis problem in the Transcend qualitative diagnosis framework, and then develops heuristic algorithms for generating a set of local diagnosers that solve the global diagnosis problem without a coordinator. Two versions of the algorithm are discussed. The time complexity and optimality of these ...
We examine the minimum amount of memory for real-time, as opposed to one-way, computation accepting nonregular languages. We consider deterministic, nondeterministic and alternating machines working within strong, middle and weak space, and processing general or unary inputs. In most cases, we are able to show that the lower bounds for one-way machines remain tight in the real-time case. Memory lower bounds for nonregular acceptance on other devices are also addressed. It is shown that increasing the number of stacks of real-time pushdown automata can result in exponential improvement in the total amount of space usage for nonregular language recognition.
In this paper, we provide a flexible and automatic method to partition the functional space for efficient symbolic simulation. We utilize a 2-tuple list representation as the basis for partitioning the functional space. The partitioning is carried out dynamically during symbolic simulation based on the sizes of the OBDDs. We develop heuristics for choosing the optimal partitioning points; these heuristics aim to balance the trade-off between time and space complexity. We demonstrate the effectiveness of our new symbolic simulation approach through experiments on a floating-point adder and a memory management unit.
Nogood recording is a well known technique for reducing the thrashing encountered by tree search algorithms. One of the most significant disadvantages of nogood recording has been its prohibitive space complexity. In this paper we attempt to mitigate this by using an automaton to compactly represent a set of nogoods. We demonstrate how nogoods can be propagated using