

Question

I'm just starting to get into the theory of computation, which studies what can be computed, how quickly, using how much memory and with which computational model.

I have a pretty basic question, but am really hoping some of you guys can help me understand the concept behind it:

Why is everything centered around the notion and definition of LANGUAGES (i.e. regular languages and context-free languages)? And how do these relate to and describe the complexity of an algorithm and the possible computational models for solving it?

I read these somewhat related questions:

What is the enlightenment I'm supposed to attain after studying finite automata?
How practical is Automata Theory?

but still don't have an answer to my doubts, since they provide a practical justification of why they are important (which I do understand) but don't help me understand why complexity theory is based upon them.

Explanation / Answer

It's because languages are the best (only?) way we have of formalizing the concept of a "problem."

An algorithm (Turing machine) has performance, which we express via big-O complexity. A problem (language) belongs to a complexity class. These are usually defined by existence: if there exists a machine accepting a language L that runs within a given resource bound (space or time), then the language belongs to the corresponding complexity class.
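To make the correspondence concrete, here is a minimal sketch (the language PAL and the function names are my own illustration, not from the answer): a language is just a set of strings, a problem is membership in that set, and a complexity class contains the language if some machine decides membership within the resource bound.

```python
# A "language" is a set of strings; the "problem" is deciding membership.
# Hypothetical example: PAL, the language of palindromes over {0, 1}.

def in_PAL(w: str) -> bool:
    """Decider for PAL: accept iff w reads the same forwards and backwards."""
    return w == w[::-1]

# This decider runs in O(n) time on inputs of length n, so PAL belongs to
# the class of linear-time-decidable languages -- membership in the class
# is witnessed by the existence of this machine.
print(in_PAL("10101"))  # True: "10101" is in PAL
print(in_PAL("10"))     # False: "10" is not
```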

There are a few reasons for this. The first is that languages are platform independent. You're not worrying about whether an integer is 32 or 64 bits, or whether floating-point operations run in parallel with other operations. These things give performance speedups at the micro level, but complexity analysis is interested in the macro level. As you scale from 10^0 to 10^6 to 10^9 to 10^12 inputs, how does the algorithm's performance change? Does it go from using 1 million tape cells to 1 billion, or from 1 million to more cells than there are atoms in the universe?

The second is that languages are just a nice abstraction for data. You need something you can do proofs about, something you can model formally. Encoding your input and output as strings means that you're now dealing not with bits in memory, but with mathematical objects with specific properties. You can reason about them and prove theorems about them in a formal and very simple sense.

Complexity theory tends to be focused on decision problems because they end up being difficult enough. Since the decision version of travelling salesman (is there a tour of length at most k?) is NP-complete, finding the shortest tour is clearly at least as hard. There isn't as much focus on function/optimization problems because there are still many open questions and unsolved problems about the simpler decision problems.

I guess here's my challenge to you: find a way to describe problems mathematically that isn't languages. I don't know if languages are special, but I think they're the simplest tool we've got, the easiest one to deal with.
