
CS 2333

Introduction

Different topics covered in the course:

  • Complexity Theory

This asks whether a problem is _hard_ or _easy_. Informally, a
problem is called "easy" if it is efficiently solvable (in some
reasonable amount of time).

Examples:
- Sort a list of 1,000,000 numbers. (easy)
- Search a name in a telephone directory. (easy)
- Factoring a 300-digit integer into its prime factors. (hard)
- Computing a layout for chips in VLSI. (hard)
  • Computability Theory

    As discussed in this blog, some problems in mathematics are simply unsolvable, and there may be no way to know whether a given problem is unsolvable. It can be proven that there are statements in math that are true but can never be proven, and that among them are statements for which we may never know whether a proof exists or whether they are true. The "Twin prime conjecture" or "Goldbach's conjecture" could be such statements. "Fermat's last theorem" was in a similar situation until it was finally proven.

  • Automata Theory

    This deals with definitions and properties of different types of "computation models".

    Examples:
    - Finite Automata: used in text processing, compilers, and hardware design.
    - Context-Free Grammars: used to define programming languages and in AI.
    - Turing Machines: simple abstract models of real computers.

Prerequisites

Overall, the information from CS 1303 (notes)

  1. Sets
  2. Subsets
  3. Power set
  4. Union
  5. Intersection
  6. Difference
  7. Cartesian Product
  8. Complement
  9. Binary Relation
  10. Boolean Operations

Functions

Definition of a function

A mathematical binary relation, such that every element of the left set is related to exactly one element of the right set.

f : A → B

For every input element a ∈ A, there is exactly one output element b ∈ B.

f(a) = b
A → Domain
B → Co-domain
Range = { b ∈ B | ∃ a ∈ A, f(a) = b }

Example:

Injective Functions (One-to-one)

f : A → B

is one-to-one or injective if

∀ x, y ∈ A, x ≠ y ⇒ f(x) ≠ f(y)

OR (Contrapositive): ∀ x, y ∈ A, f(x) = f(y) ⇒ x = y

Proofs

  • Not injective: Disprove by counterexample: find two distinct elements of the domain that map to the same element of the codomain.

  • Injective: Let a and b be any two elements of the domain such that f(a) = f(b), and show that a = b.

Example:

Prove that the function g(x) is injective.

g : ℝ → ℝ, g(x) = 3x - 11

Let a, b ∈ ℝ such that g(a) = g(b)
3a - 11 = 3b - 11

3a = 3b

a = b

∴ the function g(x) is injective.

Surjective Functions (onto)

f : A → B

is surjective if

∀ b ∈ B, ∃ a ∈ A | f(a) = b

In other words, the range and codomain of the function are the same.

Proof

  • Not surjective: find an element of the co-domain that is not in the range (a counterexample).

  • Surjective

    Let b ∈ B be arbitrary.

    Find a ∈ A such that f(a) = b.

Example:

Prove f(x) = 5x + 2 (f : ℝ → ℝ) is surjective.

Let b ∈ ℝ.
We can calculate

a = (b - 2) / 5
such that f(a) = b

∴ ∀ b ∈ ℝ, ∃ a ∈ ℝ | f(a) = b

∴ f is onto.

Bijective Functions

Any function that is both one-to-one and onto (injective and surjective) is called a bijection.
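For finite domains and codomains, these definitions can be checked mechanically. A minimal sketch in Python (the helper names are mine, not from the course), representing a function as a dict from domain to codomain:

```python
# Brute-force checks of injectivity/surjectivity for a finite function
# given as a dict. These are illustrative helpers, not course code.

def is_injective(f: dict) -> bool:
    # No two domain elements share an image.
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    # Every codomain element is the image of something.
    return set(f.values()) == codomain

def is_bijective(f: dict, codomain: set) -> bool:
    return is_injective(f) and is_surjective(f, codomain)

# g(x) = 3x - 11, restricted to a finite sample of reals/integers,
# never repeats a value:
g = {x: 3 * x - 11 for x in range(-5, 6)}
print(is_injective(g))  # True
```

This only verifies the property on a finite sample; the proof above is what establishes it for the whole domain.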

Pigeon Hole Principle

If n + 1 or more objects are placed into n boxes, then there is at least one box containing two or more objects. In other words, if A and B are two sets such that |A| > |B|, then there is no one-to-one function from A to B.

Relations

A binary relation R on two sets A and B is a subset of A × B.
Eg: S = Set of all current UNB students.
C = Set of all UNB courses.

Relation HasTaken = { (s, c) | s ∈ S, c ∈ C, s has taken c }

HasTaken ⊆ S × C

Equivalence Relation

R ⊆ A × A, for some set A

R is an equivalence relation if it's reflexive, symmetric, and transitive.

  • Reflexive: ∀ a ∈ A, (a, a) ∈ R

  • Symmetric: ∀ a, b ∈ A, (a, b) ∈ R ⇒ (b, a) ∈ R

  • Transitive: ∀ a, b, c ∈ A, [(a, b) ∈ R ∧ (b, c) ∈ R] ⇒ (a, c) ∈ R

Example: Example of equivalence relation

Any equivalence relation R on a set A induces a partition of A into equivalence classes. Each class contains elements that are related to each other but not related to any elements outside the class.

Equivalence classes example
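For a small finite relation, the three properties and the induced classes can be checked directly. A sketch (helper names are mine), using congruence mod 2 as the example relation:

```python
# Check the three defining properties of an equivalence relation
# R ⊆ A × A, and compute the equivalence classes it induces.

def is_reflexive(A, R):
    return all((a, a) in R for a in A)

def is_symmetric(A, R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(A, R):
    return all((a, d) in R
               for (a, b) in R for (c, d) in R if b == c)

def equivalence_classes(A, R):
    # The class of a is the set of all elements related to a.
    return {frozenset(b for b in A if (a, b) in R) for a in A}

# Congruence mod 2 on {0, 1, 2, 3} splits it into evens and odds:
A = {0, 1, 2, 3}
R = {(a, b) for a in A for b in A if a % 2 == b % 2}
print(is_reflexive(A, R), is_symmetric(A, R), is_transitive(A, R))
print(equivalence_classes(A, R))
```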

String and Languages

Alphabets and Strings

An alphabet is a finite set of symbols.
Eg: ∑1 = {0, 1}
∑2 = {a, b, c, ... x, y, z}

A string over an alphabet ∑ is a finite sequence of symbols from ∑.

eg: 011011 is a string over ∑1. suchith is a string over ∑2.

The length of a string is denoted by |w| (Where w is a string).

The empty string has length zero and is denoted by ϵ.

For an alphabet ∑, ∑* is the set of all strings over ∑. ∑* is always infinite, except when ∑ is empty (in which case ∑* contains exactly one element, ϵ).

Language

A language is a set of strings.

  • L1 = {00, 10, 11}
  • L2 = {cat, dog, mouse}
  • L3 = {w ∈ ∑1* | |w| = 8}
  • L4 = {w ∈ ∑1* | w has odd length}

Repeating a character some number of times is written with an exponent: 1i denotes i copies of 1.
L5 = {ϵ, 1, 11, 111, 1111, ...}
= { 1i | i ∈ ℤ, i ≥ 0 }

L6 = ∅ (empty language)
L7 = { ϵ } (not empty language)

Concatenation

Given 2 strings, x and y, the concatenation of x and y is written as xy and is the string formed by writing the symbols of x followed by the symbols of y.

Example:
x = abcd
y = test
xy = abcdtest
yx = testabcd

The exponent notation from before is a special case of this.
xi = xxxx....x (i times)

L8 = { 0i1j | i, j ∈ ℤnonneg}

That language is formed using concatenation.

L13 = { x ∈ {0, 1}* | n0(x) = 2 } (this language requires the number of zeros in a string to be exactly 2)

nchar(x) -> the number of occurrences of a particular character char in the string x.
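This notation translates directly into code. A small sketch (the function names mirror the notation above but are my own):

```python
# Model a language as a membership predicate; n_char counts occurrences
# of a character, matching the n_0(x) notation in the notes.

def n_char(x: str, c: str) -> int:
    return x.count(c)

# L13 = { x in {0,1}* | n_0(x) = 2 }
def in_L13(x: str) -> bool:
    return set(x) <= {"0", "1"} and n_char(x, "0") == 2

print(in_L13("0110"))  # True: exactly two 0s
print(in_L13("000"))   # False: three 0s
```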

Finite Automata And Regular Languages

Regular Languages

A class of languages that can be processed by simple computers called finite automata.

  • Singular: automaton
  • Plural: automata

Example:

Let's take the example of a hypothetical vending machine that accepts 5¢, 10¢, and 25¢. This can be considered an example of a simple machine, an automaton.

Vending machine automaton

We represent input to the finite automaton as a sequence of symbols, i.e., a string. Examples: "ddn", "nnnnn". A string is accepted by the machine if and only if we end up at the "accept" state after following all the transitions associated with its sequence of symbols.

Examples of finite automata and state diagrams

The finite automata discussed so far are deterministic finite automata. This implies two things: for any given state and character combination, the δ function produces a state that belongs to Q, and δ is a function, i.e., for any given pair there is exactly one output state.

When representing a finite automaton, δ is a function that accepts an ordered pair (state, symbol) and outputs another state. It describes all the transitions that are possible.

For a finite automaton M, we use L(M) to refer to the language accepted by M (The set of strings that M accepts).

A language A is called regular if there exists a finite automaton M such that L(M) = A.

Deterministic FA examples

Example 1:

DFA example 1

Input alphabet = {0, 1}
Accepted Strings = { 0ix0 | x ∈ {1, 0}* i ∈ ℤnonneg}

Example 2:

DFA example 2

language = { w ∈ {0, 1}* | w contains at least two 1s }

can also be written as:
language = { w ∈ {0, 1}* | n1(w) ≥ 2 }

Example 3:

DFA example 3

Example 4:

DFA example 4

Example 5:

DFA example 5

Definition of deterministic finite automata

A deterministic finite automaton is a 5-tuple, M = (Q, ∑, δ, q, F) where:

  • Q is a finite set of states.
  • ∑ is the input alphabet.
  • δ: Q x ∑ -> Q is the transition function.
  • q ∈ Q is the start/initial state.
  • F ⊆ Q is the set of accept states (final states).

The delta function takes as input a pair of a state and a character, and outputs a state.
eg: δ(q1, 1) = q2

Acceptance of string by finite automata

  • Let M = (Q, ∑, δ, q, F) be a finite automaton.
  • Let w = w1w2w3...wn be an input string over ∑.
  • As input string w is processed by M, define a sequence of visited states r0, r1, r2, ..., rn as follows:
  • r0 = q
  • ∀ i = 0, 1, ..., n-1: ri+1 = δ(ri, wi+1)
  • If rn ∈ F, then the string w is accepted.
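The acceptance definition above is a loop over the input string. A minimal sketch in Python (state names and the dict-based δ encoding are my own choices, not from the course):

```python
# Simulate a DFA following the definition: r0 = q, r_{i+1} = delta(r_i, w_{i+1}),
# accept iff r_n is in F. delta is a dict keyed by (state, symbol).

def dfa_accepts(delta, start, accept, w):
    r = start                   # r0 = q
    for symbol in w:            # r_{i+1} = delta(r_i, w_{i+1})
        r = delta[(r, symbol)]
    return r in accept          # accept iff r_n ∈ F

# DFA for "contains at least two 1s" over {0, 1}:
delta = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q2", ("q2", "1"): "q2",
}
print(dfa_accepts(delta, "q0", {"q2"}, "0101"))  # True
print(dfa_accepts(delta, "q0", {"q2"}, "0100"))  # False
```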

Nondeterministic Finite Automata

Deterministic: For every combination of a state and an input symbol, there is exactly one transition defined.

Nondeterministic: If the above rule is not followed, the automaton is nondeterministic. In a nondeterministic finite automaton (NFA), there can be a choice of transitions.

For a state q and an input symbol a, an NFA can have 0, 1, 2, or more transitions defined.

This makes the design process easier.

NFA example

The above is an example of an NFA that accepts only strings ending in 010.

Important: In an NFA, if there is any way to follow a sequence of transitions that leads to an accept state, then the string is accepted. For a given string, if there is no way to accept it, then it's not accepted.

You can create a DFA from an NFA just by simulating all possible ways you can parse a string in an NFA. So basically both NFAs and DFAs are equivalent models. In both cases, L(M) for some NFA or DFA, is going to be a regular language.

You can perform something like backtracking for an NFA since you can just try all possibilities and see if any of them end up at an accept state.

Delta (δ) is a function whose range is a set of states, namely the states you can move to on any given input.
Example:

δ(A, 1) = { A }
δ(A, 0) = { A, B }
δ(B, 0) = ∅

Tracing a String in an NFA

Tracing a string in an NFA
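Tracing can be done by keeping the set of states the NFA could currently be in. A sketch in Python (ϵ transitions omitted for brevity; the state names and the set-valued δ encoding are illustrative):

```python
# Trace a string through an NFA by tracking the SET of reachable states.
# delta maps (state, symbol) to a set of states; missing pairs mean ∅.

def nfa_accepts(delta, start, accept, w):
    current = {start}
    for symbol in w:
        nxt = set()
        for r in current:
            nxt |= delta.get((r, symbol), set())
        current = nxt
    # Accept iff at least one reachable state is an accept state.
    return bool(current & accept)

# NFA accepting strings over {0, 1} that end in 010:
delta = {
    ("A", "0"): {"A", "B"}, ("A", "1"): {"A"},
    ("B", "1"): {"C"},
    ("C", "0"): {"D"},
}
print(nfa_accepts(delta, "A", {"D"}, "110010"))  # True
print(nfa_accepts(delta, "A", {"D"}, "0101"))    # False
```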

ϵ transitions

NFAs can have another kind of transition, called an ϵ transition: the machine can change state without reading any input symbol.

epsilon transition

Examples of NFA

Example 1:

Example 1 of NFA

Example 2:

Example 2 of NFA

Formal definition of Nondeterministic Finite Automata

Everything is the same as the DFA, but the δ function now has a co-domain of P(Q) (the power set of the set of states), since each (state, symbol) pair can have 0 or more states as output.

δ: Q x ( ∑ ∪ { ϵ } ) -> P(Q)

Q -> set of all states.
∑ -> set of input symbols called input alphabet.

Any NFA can be converted into an equivalent DFA. This can be done by using the states of the DFA to keep track of the set of states that the NFA could be in after reading a string of symbols.

NFA to DFA

The accept states in the DFA are states whose labels contain at least one accept state from the NFA.
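The construction above (the subset construction) can be sketched in a few lines of Python. This is my own illustrative encoding, ignoring ϵ transitions: DFA states are frozensets of NFA states, and a DFA state accepts when it contains an NFA accept state.

```python
# Subset construction: build a DFA whose states are sets of NFA states.
from collections import deque

def nfa_to_dfa(delta, start, accept, alphabet):
    start_set = frozenset({start})
    dfa_delta, seen = {}, {start_set}
    queue = deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # Union of all NFA moves from any state in S on symbol a.
            T = frozenset(t for s in S for t in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    # A DFA state accepts if its label contains an NFA accept state.
    dfa_accept = {S for S in seen if S & accept}
    return dfa_delta, start_set, dfa_accept

# The "ends in 010" NFA from before:
delta = {("A", "0"): {"A", "B"}, ("A", "1"): {"A"},
         ("B", "1"): {"C"}, ("C", "0"): {"D"}}
dfa_delta, q0, F = nfa_to_dfa(delta, "A", {"D"}, {"0", "1"})
print(len({S for (S, _) in dfa_delta}))  # 4 reachable DFA states
```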

Regular Operations

Let A and B be languages. We define the regular operations union, concatenation, and star (*) (the star closure of a language).

Note that these are operations on languages.

  • Union: A ∪ B = { w | w ∈ A or w ∈ B }
  • Concatenation: AB = { ww' | w ∈ A and w' ∈ B }
  • Star(*): A* = { u1u2u3...uk | k ≥ 0 and each ui ∈ A }

Example of regular operations

Special cases

AB = ∅ when either A = ∅ or B = ∅.
For any language L, ϵ ∈ L*.
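For small finite languages, these operations are just set manipulations. A sketch (star is truncated to at most k concatenated pieces, since A* is infinite; the helper names are mine):

```python
# Regular operations on finite languages represented as Python sets.

def union(A, B):
    return A | B

def concat(A, B):
    # AB = { ww' | w in A, w' in B }; empty if either language is empty.
    return {w + v for w in A for v in B}

def star(A, k):
    # Approximation of A*: all concatenations of at most k strings of A.
    # k = 0 gives {""}, i.e., epsilon is always in A*.
    result = {""}
    for _ in range(k):
        result |= concat(result, A)
    return result

A, B = {"0"}, {"1", "11"}
print(concat(A, B) == {"01", "011"})  # True
print("" in star(B, 3))               # True: epsilon ∈ L*
print(concat(A, set()) == set())      # True: AB = ∅ when B = ∅
```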

Theorem on operations on regular languages

The class of regular languages is closed under the operations union, concatenation, and star.

For union, proof:

Proof for union

Formal definition of the proof:

Considering two regular languages A and B, let M be an NFA such that L(M) = A ∪ B:

M = (Q, ∑, δ, q, F), where
  • Q = Qa ∪ Qb ∪ {q}
  • ∑ = ∑a ∪ ∑b
  • q is the new start state that is created.
  • F = Fa ∪ Fb
  • δ is defined as follows: for every state r ∈ Q and for every x ∈ ∑ ∪ {ϵ},
    δ(r, x) =
    - δa(r, x) if r ∈ Qa
    - δb(r, x) if r ∈ Qb
    - { qa, qb } if r = q and x = ϵ
    - ∅ if r = q and x ∈ ∑

Regular Expression

Formal definition

Let ∑ be a non-empty alphabet.

  1. ϵ is a regular expression.
  2. ∅ is a regular expression.
  3. For each a ∈ ∑, a is a regular expression.
  4. If R1 and R2 are regular expressions, then R1 ∪ R2 is a regular expression.
  5. If R1 and R2 are regular expressions, then (R1)(R2) is a regular expression.
  6. If R is a regular expression then (R)* is a regular expression.

In many cases, the parentheses in point 5 (concatenation) are not necessary.

Example:
Concatenating (0∪1)* with 111

  • (0∪1)*111 is fine
  • ((0∪1)*)(111) is not necessary.

But sometimes they are necessary, example:
Concatenating (a∪b) with (a∪c)

  • should be (a∪b)(a∪c)
  • If we use a∪ba∪c, that would be interpreted as { a, ba, c}

Order of operations

  1. Star
  2. Concatenation
  3. Union

ca*∪bc* would be interpreted as (c(a*))∪(b(c*))
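The same precedence (star, then concatenation, then union) holds in Python's `re` module, which writes union as `|`. A quick check of the grouping above:

```python
# Python regexes share the precedence rules above: star binds tighter
# than concatenation, which binds tighter than union ("|").
import re

# ca*|bc* groups as (c(a*))|(b(c*)):
pattern = re.compile(r"ca*|bc*")
print(bool(pattern.fullmatch("caaa")))  # True: c followed by a*
print(bool(pattern.fullmatch("b")))     # True: b followed by zero c's
print(bool(pattern.fullmatch("ba")))    # False: the a* applies to a, not ca
```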

Identities

  • (R)(ϵ) = R
  • R ∪ ∅ = R
  • R ∪ R = R

Examples of regular expressions

Here are some examples of regular expressions that may be useful.

Examples of regular expressions

Non-regular Languages

Languages that cannot be recognized by a finite automaton or described by a regular expression are known as non-regular languages.

Every finite language is regular.
Proof: Suppose L is some finite language containing n strings x1, x2, ..., xn, where n ∈ ℤnonneg.

The regular expression x1 ∪ x2 ∪ x3 ∪ ... ∪ xn represents the entire language.

Many infinite languages are regular, but many are not.

Example of a non regular language:

{ 0i1i | i ∈ ℤnonneg }
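Intuitively, recognizing this language requires counting the 0s, and a counter needs unbounded memory, which is exactly what a finite automaton lacks. A sketch of a recognizer (not a finite automaton, since `i` can grow without bound):

```python
# Recognize { 0^i 1^i | i >= 0 } using an unbounded counter.
# No DFA can do this, because the counter value is not bounded.

def in_0n1n(w: str) -> bool:
    i = 0
    while i < len(w) and w[i] == "0":  # count the leading 0s
        i += 1
    return w[i:] == "1" * i            # the rest must be exactly i 1s

print(in_0n1n("000111"))  # True
print(in_0n1n("00111"))   # False
```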

Proving that languages are irregular (Pumping Lemma)

We can prove that a language is not regular by using a result called the pumping lemma.

The idea behind the pumping lemma is as follows:

  • Suppose A is a regular language.
  • Then there is some DFA, M such that L(M) = A.
  • Let p≥ 1 be the number of states in M.
  • Consider any string S ∈ A with |S| ≥ p
  • Since S has at least p symbols, a DFA for A must follow p or more transitions to reach some accept state.
  • ∴ We must repeat a state at some point (following p or more transitions visits p + 1 or more states, but M has only p states).
  • Let r be the name of the first state that gets repeated for S.

pumping lemma idea

  • Let x be the part of S that causes the DFA to reach r.
  • Let y be the part of S that causes the DFA to start from r and end in r (a loop, since r is a repeated state).
  • Let z be the part of S that takes us from r to an accept state.
  • S = xyz, |x|, |z| ≥ 0, |y| ≥ 1
  • We said state r is repeated, so y must have length at least 1.
  • We said r is the first repeated state, so this must happen within the first p characters of the string, since there are only p states. Hence |xy| ≤ p.
  • ∀ i ∈ ℤnonneg, xyiz ∈ A.
  • Every regular language satisfies the properties above (some non-regular languages do too, which is why the lemma can only be used to prove non-regularity).

Summary

If A is a regular language, then there is an integer p ≥ 1 (called the pumping length) such that the following holds:

Every string s ∈ A with |s| ≥ p can be divided as s = xyz such that

  • |y| ≥ 1
  • |xy| ≤ p
  • ∀ i ∈ ℤnonneg, xyiz ∈ A.

If you can show that some language leads to a contradiction of the pumping lemma, you prove that it is not regular.

General Approach to prove a language is irregular

  • Suppose that L is regular
  • Let p ≥1 be the "pumping length" guaranteed by the pumping lemma.
  • We choose a string s that is in L and has length ≥ p, and demonstrate that it cannot be decomposed so as to satisfy all three conditions, which contradicts the pumping lemma.
  • ∴ L is not regular.

Note: Remember, if you are trying to show the contradiction using the last property, you only need to show that xyiz ∉ A for some value of i ∈ ℤnonneg (including i = 0).

Example:

Pumping lemma example 1

This contradicts the pumping lemma.
∴ L1 must not be regular.

Equivalence Strings

Two strings x and y are in the same equivalence class if and only if, for every string z, xz ∈ A ←→ yz ∈ A, where A is some language.

Each state corresponds to the set of strings whose processing leads to that state (each state represents an equivalence class).

Context-Free Grammars

Finite Languages ⊂ Regular Languages ⊂ Context free languages.

Context-free grammars (CFGs) are to context-free languages what regular expressions are to regular languages.

Formal Definition

A CFG is a 4-tuple G such that:

G = (V, ∑, R, S) where:

  • V is a finite set of variables.
  • ∑ is a finite set of terminals; V ∩ ∑ = ∅.
  • S ∈ V is called the start variable.
  • R is a finite set of rules of the form A → w, where A ∈ V and w ∈ (V ∪ ∑)*.

Examples

Here, S, A, B are variables, (S is the start variable), 1, 0, a, b, ϵ are terminals.

Context-free grammar examples

Strategies for creating grammars

Strategy 1 - Union

Create the grammar by breaking the language into separate cases and adding an "or" (alternative) rule for each one, building each case up separately.

union Strategy

Strategy 2 - Concatenation

Create the grammar by breaking the string into separate parts, generating each part separately, and concatenating them at the end.

concat strategy

Strategy 3 - Regular Languages

If the language is a regular language then there is an easy way to convert a finite automaton into a context-free grammar.

  • For each state qi, make a variable Vi.
  • Add rule Vi → aVj if δ(qi, a) = qj
  • Add rule Vi → ϵ if qi is an accepting state.
  • Make V0 the start variable where q0 is the start state.
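The conversion steps above are mechanical enough to sketch in code. This is my own illustrative encoding of the rules, using "epsilon" as a stand-in for ϵ:

```python
# Convert a DFA transition table into right-linear CFG rules:
#   V_i -> a V_j   for each transition delta(q_i, a) = q_j
#   V_i -> epsilon for each accepting state q_i

def dfa_to_cfg(delta, accept):
    rules = [f"V{qi} -> {a} V{qj}" for (qi, a), qj in delta.items()]
    rules += [f"V{qi} -> epsilon" for qi in accept]
    return rules

# Two-state DFA over {0, 1} accepting strings with an odd number of 1s:
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
for rule in dfa_to_cfg(delta, {1}):
    print(rule)
```

V0 (from the start state q0) becomes the start variable of the grammar.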

Example:

Context-free vs Context-sensitive

A context-free grammar can only have a single variable on the left side of a rule, whereas a context-sensitive grammar can have variables surrounded by context on the left side.

eg:

xA → xB
bAc → bac

Chomsky Normal Form

Sometimes we want all rules of a CFG to be in a particular format. This is useful for things like parsing programming languages.

A → BC
A → a
S → ϵ

A CFG is in Chomsky Normal Form if every rule has one of the above forms, where a is a terminal and A, B, C are variables (B and C are NOT start variables). S is the start variable, and the rule S → ϵ is only allowed for the start variable.

Pushdown Automata

The languages you can accept using PDAs are always context-free and any context-free language can be represented using a PDA.

A PDA is basically a DFA but with a stack that can hold data.

Given an input string, start in an initial state q with only a special symbol '$' (it is conventional to use '$') on the stack. Transitions move between states and modify the stack. Any symbols can be pushed onto the stack on each transition, and they don't have to belong to the input alphabet.

Transition function

Given:

  • Current state
  • Current input symbol
  • The symbol at the top of the stack (x), which is popped off.

Output:

  • What state to move to
  • What replaces x on the top of the stack

General format of a transition:

general pda transition

Note here:

  • ⍺ can be multiple symbols (pushed on from right to left, so the leftmost character ends up on top.)
  • Any or all of the 3 variables can be ϵ to indicate the empty string.

Example of PDA:

L1 = { 0n1n | n ≥ 0, n ∈ ℤnonneg }

Answers to above question

Note: If you pop '$' (the bottom-of-stack marker) off the stack, you have no legal transitions left.
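The PDA for L1 can be mimicked with an explicit stack. A sketch (not the formal PDA itself, just its stack discipline: push a 0 for each 0 read, pop for each 1, accept when only '$' remains):

```python
# Stack-based recognizer for { 0^n 1^n | n >= 0 }, mirroring the PDA:
# push "0" for each 0, pop for each 1, accept iff input is consumed
# and only the "$" marker remains on the stack.

def pda_accepts_0n1n(w: str) -> bool:
    stack = ["$"]
    i = 0
    while i < len(w) and w[i] == "0":
        stack.append("0")        # push a 0 for each 0 read
        i += 1
    while i < len(w) and w[i] == "1":
        if stack[-1] != "0":
            return False         # more 1s than 0s: nothing left to pop
        stack.pop()
        i += 1
    # Accept iff all input was consumed and the 0s and 1s balanced out.
    return i == len(w) and stack == ["$"]

print(pda_accepts_0n1n("0011"))  # True
print(pda_accepts_0n1n("001"))   # False
```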

Formal Definition of PDA

A pushdown automaton is a 6-tuple M = (Q, ∑, Γ, δ, q, F), where

  • Q is a finite set of states.
  • ∑ is a finite input alphabet.
  • Γ is a finite stack alphabet, which contains a special symbol $.
  • δ: Q X (∑ ∪ {ϵ}) X (Γ ∪ {ϵ}) → P(Q X Γ*) is the transition function.
  • q ∈ Q is the start state.
  • F ⊆ Q is the set of accept states.

The transition function is the "program" for the PDA and specifies what can be done in one computation step.

  • δ(r, a, X) is the set of moves that can be made when we
    • are in state r,
    • read input symbol a (which can be ϵ), and
    • pop symbol X from the top of the stack (which can also be ϵ, to indicate we pop nothing).
  • Each move in δ specifies:
    • a next state r'
    • a string ⍺ (possibly empty) to be pushed onto the stack (the left end of ⍺ is the top, so pushing 012 means we push 2, then 1, then 0)

Deterministic PDA

A deterministic PDA is one for which there is always exactly one possible move. Our above definition is for a non-deterministic PDA.

A non-deterministic PDA is more "powerful" than a deterministic PDA and they are not equivalent.

L(D-PDA) ⊂ L(N-PDA)

Closure property of context-free languages

Context-free languages are closed under: union, concatenation, and star. However, they are not closed under intersection and complement.

Turing Machines

  • A Turing machine (TM) has k tapes for some k ≥ 1.
  • Each tape is divided into cells. Each cell contains a symbol from the tape alphabet Γ, which also includes □ (represents an empty cell). Each tape is infinitely long on each side.
  • Each tape has a tape head that can read the contents of the current cell and can write a symbol in the current cell. It can move one cell per step either to the left or to the right.
  • At any given time, the Turing machine is in one of a finite number of states Q. Q contains three special states: one start state, one accept state and one reject state.

Turing machine example

Formal Definition

A deterministic Turing machine is a 7-tuple

M = (∑, Γ, Q, δ, q, qa, qr), where

  • ∑ is a finite input alphabet; □ ∉ ∑
  • Γ is a finite tape alphabet; □ ∈ Γ, ∑ ⊆ Γ
  • Q is a finite set of states
  • q ∈ Q is the start state; qa ∈ Q is the accept state; qr ∈ Q is the reject state.
  • δ: Q X Γk → Q X Γk X {L, R, N}k is the transition function.

In one computation step, if

  • The TM is in state r
  • The k tape heads read the symbols a1, a2, a3...ak

then

  • The TM switches to state r'
  • for each i, 1≤i≤k, tape head i replaces the symbol ai with ai'
  • for each i, 1≤i≤k, tape head i moves according to σi

Initially, the input string is on tape 1, with tape head on the very first symbol. All other tapes are empty.

A TM terminates when it enters qa or qr. The TM accepts the input string w if computation on input w terminates in qa. The TM rejects the input string w if computation on input w terminates in qr. For some inputs, TM might not terminate at all.

L(M) = the set of all strings accepted by the TM, M.

EXAMPLE:

L = { w ∈ {a, b}* | w is a palindrome }
  • Input string w is on the tape, with the tape head looking at the first symbol.
  • TM is in the start state q0.

Idea: Read the left most symbol and delete it, but "remember" the symbol using a state. Move to the right most character (if there is one) and check if it matches the left most symbol we remembered. If it doesn't match, reject string. If it does, delete it and move all the way back to the left and repeat the process. If no character is present, move to the accept state.

TM states for example language

This can be done more efficiently with a 2-tape Turing machine, by copying the entire string to another tape and then comparing the two tapes starting from opposite ends.
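The single-tape idea can be mimicked in ordinary code by repeatedly deleting matching symbols at both ends. A sketch (this simulates the procedure's effect, not the TM's transition table):

```python
# Mimic the single-tape TM idea: delete the leftmost symbol, "remember"
# it, compare with (and delete) the rightmost; repeat until 0 or 1
# symbols remain (accept), or a mismatch is found (reject).

def tm_palindrome(w: str) -> bool:
    tape = list(w)
    while len(tape) > 1:
        left = tape.pop(0)   # read and delete leftmost symbol
        right = tape.pop()   # move to rightmost symbol and delete it
        if left != right:
            return False     # mismatch: reject
    return True              # zero or one symbol left: accept

print(tm_palindrome("abba"))  # True
print(tm_palindrome("abab"))  # False
```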

Non-deterministic TMs

For any combination of state and k symbols on the k tapes, allow the TM to have more than one (or zero) possible moves.

The transition function becomes:

δ: Q X Γk → P(Q X Γk X {L, R, N}k)

Non-deterministic example

This basically implies that if you can check whether a solution is right, then you can use an NDTM to generate every possible solution and check each one.

Non-Determinism vs Determinism

Non-deterministic TMs are equivalent in power to deterministic TMs. To convert an NDTM to a DTM, you explore all branches in the following way:

Convert NDTM to DTM

You use an algorithm that does the following (it's going to be a complex algorithm but possible nonetheless)

Let's think of the decision tree that an NDTM would have to parse. At each state there may be multiple things that the TM could do and it would just pick the right one (or more realistically check all of them in parallel ). Here's an example tree:

Decision tree for NDTM

Now, each branch has been labeled by a letter. If we were to follow each branch one by one, we would have the problem of getting stuck on an infinite branch. However, if we advance each branch by one step at a time, we move all the branches forward in parallel. This ensures that if there is a branch reaching an accept state, we reach it; otherwise we either reject the string (if all branches of the NDTM halt) or get stuck in an infinite loop.

So this would do exactly what the NDTM would do, hence simulating it.

Decision Problems

Problems that have a yes or no answer, such as "Given a DFA M and a string w, does M accept w?" We have an algorithm that will always answer this question: simulate M on w. We know it works because it takes a finite number of steps (the length of w), since a DFA can never get stuck in an infinite loop.

String Encoding

Turing machines only take in strings as input. We use string encoding of objects. We use the notation <O> to refer to the encoding of O as a string of zeros and ones.

  • A decidable problem is a problem for which you can design a Turing machine that always gives the right answer.

The language ADFA = { <M, w> | M is a DFA, w is a string and M accepts w} is a decidable language.

There is a TM that will always correctly accept strings that belong to the language and reject strings that don't, never going into an infinite loop.

TM simulating DFA

ATM = { < T, w > | T is a TM, w is a string, T accepts w }

The above language is known to be undecidable.

Proof that ATM is Undecidable

Suppose ATM is decidable. Then there exists some TM H that decides ATM.
∴ On input <T, w>, where T is a TM and w is a string, H accepts <T, w> if T accepts w, and rejects it otherwise.

Now, we can construct a new TM D that works as follows:

  • D takes (the encoding of) any TM S as input.
  • It then calls H with the following input: the TM S and the string <S> (the encoding of S as a string).
  • If H rejects, D accepts; if H accepts, D rejects.

If we give <D> as input to D:
=> D accepts <D> if D rejects <D>
=> D rejects <D> if D accepts <D>
which are both logical contradictions.
=> The assumption that H exists is false.
=> ATM is undecidable.

Atm proof

Halting Problem

Can you have an algorithm that takes a program as input and tells us whether that program terminates or runs forever?

Halting problem proof

We are showing that the halting problem is undecidable by showing that if the halting problem were decidable then we could decide another language that we KNOW to be undecidable which would be a contradiction.

Approach to proving Undecidable

Suppose we want to prove that B is undecidable. One approach is proof by contradiction (as with the halting problem): assume some TM R decides B, then show that if R existed, we could decide some language A that we know is undecidable. This contradiction implies R does not exist, which implies B is undecidable.

Example:

example of undecidable proof

Some known undecidable problems

  • Given a TM M, is L(M) regular?
  • Given a polynomial p, does the equation p=0 have a solution in integers? (Hilbert's 10th problem)
  • Post correspondence problem
  • Conway's game of life
  • Wang tiles

Enumerablility

A language is enumerable if there is a TM that accepts all the strings in the language: the TM might not halt on all inputs, but it accepts every string of the language in a finite number of steps.

ATM is enumerable and Hilbert's 10th problem is also enumerable.

There are languages that are not even enumerable.
EQTM = { <M1, M2> | M1 and M2 are TMs and L(M1) = L(M2) }

Complexity Theory

Time complexity theory: How long an algorithm takes for a particular input.

  • We focus on the number of steps it will take on a particular input.

Worst case running time

The worst case running time of M (a deterministic TM) is a function
f: ℕ → ℕ, where f(n) is the maximum number of steps that M will use on any input of length n.

  • M runs in time f(n).
  • (or) M is an f(n) time Turing machine.

Sometimes we're interested in average-case running time analysis too.

Example:

Consider a TM that will check if a given input string w ∈ {0,1}* is of the form 0m for some integer m ≥ 0

Worst case: check all symbols and verify all of them are 0.
Total number of steps: n + 1 (+1 since we have to check for the end of the string.)

We could have another example of a TM that takes n2 + 20n + 30 steps in its worst case.

Asymptotic Analysis

Commonly called big-O analysis, asymptotic analysis looks at how the function scales, not at the exact number of steps.

We say that g(n) is an asymptotic upper bound for f(n).

nk → polynomial bound
2n → exponential bound (blows up much faster)

tractable: realistically solvable by a computer in a reasonable amount of time.

O(2n) is usually intractable for large inputs.

NOTE: for a nondeterministic TM, the worst case is the number of steps in the longest accepting branch.

The classes P and NP

P is the class of languages that are decidable in polynomial time on a deterministic TM.

Some problems are decidable but are not in P. No polynomial-time deterministic algorithm exists to solve them.

Some problems are decidable, but nobody knows if they are in P or not.

NP is the class of languages that can be decided in polynomial time by a nondeterministic TM.

Example: The Hamiltonian path problem is in NP, but it is not known whether it is in P.

NP-completeness

Every NP-complete problem X has the following properties:

  • X ∈ NP
  • if problem X can be shown to be in P, then it would follow that every problem in NP is in P.

The Hamiltonian path problem is one such problem that is NP-complete.

Whether P = NP is one of the millennium problems. It's one of the most important problems in math.
