Automata Group Assignment
Algorithm analysis is an important part of the broader field of computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem.
• These resources are usually expressed in terms of execution time (time efficiency, the most common factor) or memory (space efficiency).
• Time Complexity: Determine the approximate number of operations required to solve a problem of size
n.
• Space Complexity: Determine the approximate memory required to solve a problem of size n.
• These estimates provide an insight into reasonable directions of search for efficient algorithms.
There are several major problems in computational complexity theory that remain open and actively
researched. Here are a few examples:
1. P versus NP problem: This is one of the most famous open problems in computer science. It asks whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time.
2. Complexity of specific problems: There are many important computational problems for which the
complexity is not yet well understood. Examples include the traveling salesman problem, graph
isomorphism, and factoring large integers.
3. Circuit lower bounds: Despite many efforts, there is still no known general technique for proving lower
bounds on the size of Boolean circuits that compute specific functions. This is a major open problem in
computational complexity theory.
4. De-randomization: Randomized algorithms are often more efficient than deterministic algorithms, but it
is not known whether every randomized algorithm can be converted into an equally efficient deterministic
algorithm.
5. Quantum complexity theory: Quantum computers have the potential to solve certain problems much
faster than classical computers, but the study of quantum complexity theory is still in its early stages.
Big-O notation
• Big-O notation is the most commonly used notation for specifying asymptotic complexity, that is, for estimating the rate of growth of complexity functions.
• The function f(n) is O(g(n)) if there exist positive numbers c and N such that f(n) ≤ c·g(n) for all n ≥ N.
• Example: f(n) = n² + 5n is O(n²), since n² + 5n ≤ 2n² for all n ≥ 5 (take c = 2 and N = 5).
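As a quick sanity check (an illustrative sketch, not part of the original notes), the constants c = 2 and N = 5 chosen above can be verified numerically in Python:

    # Sketch: check that f(n) = n^2 + 5n <= 2 * n^2 for a range of n >= 5,
    # which witnesses f(n) = O(n^2) with c = 2 and N = 5.
    def f(n):
        return n * n + 5 * n

    def g(n):
        return n * n

    c, N = 2, 5
    assert all(f(n) <= c * g(n) for n in range(N, 10_000))
    print("f(n) <= 2*g(n) holds for every checked n >= 5")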
Big O Notation Classes

Constant Time Complexity O(1):
• The simplest of all complexities, or not complex at all.
• If an operation always completes in the same amount of CPU time regardless of the input size, it is
called a constant time operation.
• If it always uses the same amount of memory regardless of the input size, it is called a constant space
operation.
• Example: if an algorithm increments each element of a list of length n, it runs in O(n) time overall while performing O(1) work per element, as in the sketch below.
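A minimal sketch of that example (hypothetical helper name), doing constant work per element over the whole input:

    # Sketch: increment every element of a list of length n.
    # Each loop iteration is O(1) work, so one full pass is O(n) time
    # and O(1) extra space (the list is updated in place).
    def increment_all(values):
        for i in range(len(values)):
            values[i] += 1
        return values

    print(increment_all([3, 7, 9]))   # [4, 8, 10]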
Logarithmic Time O(log n):
• The time complexity grows only logarithmically in relation to the input size. A classic example of logarithmic effort is binary search for a specific element in a sorted array of size n (see the sketch below).
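A minimal binary-search sketch (assuming the input list is already sorted): each comparison discards half of the remaining range, which is why only about log₂(n) steps are needed.

    # Sketch: binary search in a sorted list; each step halves the search
    # range, so at most about log2(n) comparisons are needed -> O(log n) time.
    def binary_search(sorted_values, target):
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_values[mid] == target:
                return mid            # found: return its index
            elif sorted_values[mid] < target:
                lo = mid + 1          # discard the lower half
            else:
                hi = mid - 1          # discard the upper half
        return -1                     # not present

    print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3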
Linear Time O(n):
• An algorithm is said to take linear time if its running time increases at most linearly with the size of the input.
• More precisely, this means that there is a constant c such that the running time is at most cn for every
input of size n.
• If an algorithm’s time/space usage only grows linearly with the number of elements in the input, then it
has linear time/space complexity.
• Linear time is the best possible time complexity in situations where the algorithm has to sequentially read its entire input.
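A simple sketch of such a situation: finding the largest element of an unsorted list, which cannot be done correctly without examining every element at least once.

    # Sketch: finding the maximum of an unsorted list requires reading all
    # n elements once -> O(n) time, O(1) extra space.
    def find_max(values):
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    print(find_max([4, 11, 2, 9]))   # 11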
Quasilinear Time O(n log n):
• The effort grows slightly faster than linear because the linear component is multiplied by a logarithmic one.
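A classic O(n log n) algorithm is merge sort. The sketch below (not part of the original notes) halves the input about log₂(n) times and does linear merging work at each level of recursion.

    # Sketch: merge sort splits the list log2(n) times and merges in O(n)
    # work per level, giving O(n log n) overall.
    def merge_sort(values):
        if len(values) <= 1:
            return values
        mid = len(values) // 2
        left = merge_sort(values[:mid])
        right = merge_sort(values[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge the sorted halves
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]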
Quadratic Time O(n²):
• The time grows proportionally to the square of the number of input elements: if the number of input elements n doubles, the running time roughly quadruples.
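A typical quadratic pattern is comparing every pair of elements, as in this duplicate-check sketch (illustrative only):

    # Sketch: comparing every pair of elements uses two nested loops,
    # about n*(n-1)/2 comparisons -> O(n^2) time.
    def has_duplicate(values):
        n = len(values)
        for i in range(n):
            for j in range(i + 1, n):
                if values[i] == values[j]:
                    return True
        return False

    print(has_duplicate([3, 1, 4, 1, 5]))   # True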
Exponential Time O(2ⁿ):
• An algorithm is said to run in exponential time if T(n) is upper-bounded by 2^p(n), where p(n) is some polynomial in n.
• More formally, an algorithm runs in exponential time if T(n) is bounded by O(2^(n^k)) for some constant k.
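A brute-force subset-sum search (a sketch, not from the original notes) illustrates exponential growth: a list of n elements has 2ⁿ subsets to examine in the worst case.

    # Sketch: brute-force subset sum examines all 2^n subsets of an
    # n-element list -> O(2^n) time in the worst case.
    from itertools import combinations

    def subset_sum_exists(values, target):
        for size in range(len(values) + 1):
            for subset in combinations(values, size):
                if sum(subset) == target:
                    return True
        return False

    print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)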
Here are, once again, the described complexity classes, sorted in ascending order of complexity (for sufficiently large values of n):
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ)
For the logarithmic class, the runtime does not depend directly on the full input size but on how many times the input can be halved.
Quasilinear time complexity refers to an algorithm whose runtime grows almost linearly with the input
size.
Specifically, if an algorithm’s time complexity is T(n) = O(n log^k n) for some constant k, it falls into the quasilinear category.
Another way to express this is that quasilinear time algorithms are also O(n^(1+ε)) for every ε > 0.
In simpler terms, quasilinear time algorithms grow more slowly than any polynomial n^c with exponent c strictly greater than 1.
P Class Problem:
A P class problem can be solved in "polynomial time," which means that an algorithm exists for its
solution such that the number of steps in the algorithm is bounded by a polynomial function of n, where n
corresponds to the length of the input for the problem. Such problems are considered tractable.
NP Class Problem:
A problem is in NP (nondeterministic polynomial time) if it can be solved in polynomial time by a nondeterministic Turing machine. Solutions to NP problems may be hard to find on a deterministic machine, since the definition relies on a nondeterministic machine that can in effect guess the right choices; a proposed solution, however, can be verified in polynomial time.
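To make the verification idea concrete, here is a sketch (with a hypothetical encoding: a formula as a list of clauses, each clause a list of signed variable indices) of a polynomial-time check of a proposed satisfying assignment:

    # Sketch: polynomial-time verification of a proposed certificate for
    # satisfiability. A formula is a list of clauses; literal k means
    # variable k, and -k means its negation (hypothetical encoding).
    def verify_assignment(clauses, assignment):
        # assignment maps variable index -> True/False
        for clause in clauses:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False          # this clause has no true literal
        return True                   # every clause is satisfied

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    formula = [[1, -2], [2, 3], [-1, -3]]
    certificate = {1: True, 2: True, 3: False}
    print(verify_assignment(formula, certificate))   # True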
Notations:
• 3CNF denotes the set of all conjunctive normal forms whose clauses involve at most 3 Boolean variables.
• 3SAT denotes the set of all satisfiable Boolean formulas in 3CNF (thus, 3SAT is a subset of 3CNF).
• Definition: An n-clique in an undirected graph is a fully connected (that is, all-to-all) n-node subgraph of the graph.
• n-Clique denotes the set of all undirected graphs possessing an n-clique.
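Checking whether a given set of nodes really forms a clique is easy (polynomial in the size of the set), which is what places n-Clique in NP. A minimal sketch with a hypothetical adjacency-set representation of the graph:

    # Sketch: verify that a proposed set of nodes is a clique, i.e., that
    # every pair of distinct nodes in the set is connected. The check is
    # polynomial in the size of the proposed set.
    from itertools import combinations

    def is_clique(adjacency, nodes):
        return all(v in adjacency[u] for u, v in combinations(nodes, 2))

    graph = {                       # a small undirected graph
        'a': {'b', 'c'},
        'b': {'a', 'c'},
        'c': {'a', 'b', 'd'},
        'd': {'c'},
    }
    print(is_clique(graph, ['a', 'b', 'c']))   # True  (a 3-clique)
    print(is_clique(graph, ['a', 'c', 'd']))   # False (a and d not adjacent)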
Observation: A direct argument leads to the conclusion that transforming the nondeterministic TM into a deterministic TM incurs exponential time complexity.
Non-polynomial time covers a larger class of problems, including problems for which no polynomial-time solution is known.
If any NP-complete problem can be solved in polynomial time, then every problem in NP can be
solved in polynomial time. NP-complete problems are the hardest problems in the NP set.
Cook’s Theorem
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states
that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be
reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
Cook’s Theorem proves that satisfiability is NP-complete by reducing all non-deterministic Turing
machines to SAT. Each Turing machine has access to a two-way infinite tape (read/write) and a finite
state control, which serves as the program.
1. Space on the tape for guessing a solution and a certificate to permit verification.
3. A finite set of states Θ for the machine, including the start state q₀ and the final states Z_yes and Z_no.
4. A transition function, which takes the current machine state and the current tape symbol and returns the new state, symbol, and head position.
We know a problem is in NP if we have an NDTM program that solves it in worst-case time p(n), where p is a polynomial and n is the size of the input.