# Intro TCS recap

This semester I taught another iteration of my “Introduction to Theoretical Computer Science” course, based on my textbook in progress. The book was also used in University of Virginia CS 3102 by David Evans and Nathan Brunelle.

The main changes I made to the text and course since the original version were to make them less “idiosyncratic”: while I still think using programming-language terminology is the conceptually “right” way to teach this material, there is a lot to be said for sticking with well-established models. So, I used Boolean circuits as the standard model for finite-input non-uniform computation, and Turing machines as the standard model for unbounded-input uniform computation. (I do talk about the equivalent programming-language view of both models, which can be a more useful perspective for some results, and is also easier to work with in code.)
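To illustrate that equivalence, here is a minimal Python sketch (my own shorthand for this post, not the book’s notation) of the “straight-line program” view: a Boolean circuit is a list of gate assignments, and evaluating the circuit is just running the program line by line.

```python
def eval_circuit(program, inputs):
    """Evaluate a circuit given as a straight-line program: a list of
    (output_var, op, operand_vars) triples, applied to a dict of input bits.
    The value of the last assigned variable is the circuit's output."""
    vals = dict(inputs)
    for out, op, args in program:
        if op == "AND":
            vals[out] = vals[args[0]] & vals[args[1]]
        elif op == "OR":
            vals[out] = vals[args[0]] | vals[args[1]]
        elif op == "NOT":
            vals[out] = 1 - vals[args[0]]
    return vals[program[-1][0]]

# XOR(x0, x1) computed as (x0 OR x1) AND NOT(x0 AND x1)
xor_circuit = [
    ("u", "OR",  ("x0", "x1")),
    ("v", "AND", ("x0", "x1")),
    ("w", "NOT", ("v",)),
    ("y", "AND", ("u", "w")),
]

for a in (0, 1):
    for b in (0, 1):
        assert eval_circuit(xor_circuit, {"x0": a, "x1": b}) == a ^ b
```

The gate-triple format here is a made-up convention for this example, but it shows why the circuit and programming-language views are interchangeable in code.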

In any course on intro to theoretical CS, there are always beautiful topics that are left on the “cutting room floor”. To partially compensate for that, we had an entirely optional “advanced section” where guest speakers talked about topics such as error-correcting codes, circuit lower bounds, communication complexity, interactive proofs, and more. The TA in charge of this section – an amazing sophomore named Noah Singer – wrote very detailed lecture notes for it.

This semester, students in CS 121 could also do an optional project. Many chose to make a video about a topic related to the course; here are some examples:

There is still much work to do on both the text and the course. Though the text has improved a lot (we do have 267 closed issues after all), some students still justifiably complained about typos, which can throw off people who are just getting introduced to the topic. I also want to add significantly more solved exercises and examples, since students find them extremely useful. I need to significantly beef up the NP-completeness chapter with more examples of reductions, though I do have Python implementations of several reductions and of the Cook-Levin theorem.
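The book’s actual Python reductions aren’t reproduced here, but to give the flavor, here is a hedged sketch of one classic textbook reduction (3SAT to Independent Set) – not necessarily one of the reductions in the book, and the clause encoding is a made-up convention for this example.

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Map a 3CNF formula (a list of clauses, each a tuple of nonzero ints,
    where -i denotes the negation of variable i) to a graph and threshold k
    such that the formula is satisfiable iff the graph has an independent
    set of size k. Vertices are (clause_index, literal) pairs."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    # Edges inside each clause: at most one literal per clause can be chosen.
    for i, clause in enumerate(clauses):
        for lit1, lit2 in combinations(clause, 2):
            edges.add(((i, lit1), (i, lit2)))
    # Edges between contradictory literals: can't choose both x and NOT x.
    for u, v in combinations(vertices, 2):
        if u[1] == -v[1]:
            edges.add((u, v))
    return vertices, edges, len(clauses)

# (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
V, E, k = sat_to_independent_set([(1, 2, 3), (-1, -2, 3)])
```

An independent set of size $k$ must pick exactly one vertex per clause triangle, and the contradiction edges guarantee those picks are a consistent (and satisfying) assignment.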

This type of course is often known as a “great ideas in computer science” course, and so in the book I also added a “Big Idea” environment to highlight those ideas. Of course some of these ideas are bigger than others, but I think the list below reflects the contents of the course well:

• If we can represent objects of type T as strings, then we can represent tuples of objects of type T as strings as well.
• A function is not the same as a program. A program computes a function.
• Two models are equivalent in power if they can be used to compute the same set of functions.
• Every finite function can be computed by a large enough Boolean circuit.
• A program is a piece of text, and so it can be fed as input to other programs.
• Some functions $f:\{0,1\}^n \rightarrow \{0,1\}$ cannot be computed by a Boolean circuit with fewer than an exponential (in $n$) number of gates.
• We can precisely define what it means for a function to be computable by any possible algorithm.
• Using equivalence results such as those between Turing and RAM machines, we can “have our cake and eat it too”: We can use a simpler model such as Turing machines when we want to prove something can’t be done, and use a feature-rich model such as RAM machines when we want to prove something can be done.
• There is a “universal” algorithm that can evaluate arbitrary algorithms on arbitrary inputs.
• There are some functions that cannot be computed by any algorithm.
• If a function $F$ is uncomputable we can show that another function $H$ is uncomputable by giving a way to reduce the task of computing $F$ to computing $H$.
• We can use restricted computational models to bypass limitations such as the uncomputability of the Halting problem and Rice’s Theorem. Such models can compute only a restricted subclass of functions, but allow us to answer at least some semantic questions about programs.
• A proof is just a string of text whose meaning is given by a verification algorithm.
• The running time of an algorithm is not a number, it is a function of the length of the input.
• For a function $F:\{0,1\}^* \rightarrow \{0,1\}$ and $T:\mathbb{N} \rightarrow \mathbb{N}$, we can formally define what it means for $F$ to be computable in time at most $T(n)$ where $n$ is the size of the input.
• All “reasonable” computational models are equivalent if we only care about the distinction between polynomial and exponential. (The book immediately notes quantum computers as a possible exception to this.)
• If we have more time, we can compute more functions.
• By “unrolling the loop” we can transform an algorithm that takes $T(n)$ steps to compute $F$ into a circuit that uses $poly(T(n))$ gates to compute the restriction of $F$ to $\{0,1\}^n$.
• A reduction $F \leq_p G$ shows that $F$ is “no harder than $G$” or equivalently that $G$ is “no easier than $F$”.
• If a single $\mathbf{NP}$-complete problem has a polynomial-time algorithm, then there is such an algorithm for every decision problem that corresponds to the existence of an efficiently-verifiable solution.
• If $\mathbf{P}=\mathbf{NP}$, we can efficiently solve a fantastic number of decision, search, optimization, counting, and sampling problems from all areas of human endeavors.
• A randomized algorithm outputs the correct value with good probability on every possible input.
• We can amplify the success of randomized algorithms to a value that is arbitrarily close to $1$.
• There is no secrecy without randomness.
• Computational hardness is necessary and sufficient for almost all cryptographic applications.
• Just as we did with classical computation, we can define mathematical models for quantum computation, and represent quantum algorithms as binary strings.
• Quantum computers are not a panacea and are unlikely to solve $\mathbf{NP}$-complete problems, but they can provide exponential speedups for certain structured problems.
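Some of these big ideas are one-liners in code. For instance, the amplification idea above can be sketched in Python as follows (the `noisy_parity` algorithm is a hypothetical toy example for illustration, not from the course):

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility of this sketch

def amplify(randomized_alg, x, repetitions=1001):
    """Boost a randomized algorithm that answers correctly with probability
    bounded above 1/2 by running it independently many times and taking the
    majority answer. By the Chernoff bound, the probability that the
    majority is wrong decays exponentially in `repetitions`."""
    answers = Counter(randomized_alg(x) for _ in range(repetitions))
    return answers.most_common(1)[0][0]

# Hypothetical algorithm: decides whether x is even, but errs with
# probability 0.3 on each independent run.
def noisy_parity(x):
    correct = (x % 2 == 0)
    return correct if random.random() > 0.3 else not correct

assert amplify(noisy_parity, 10) is True   # majority vote is almost surely right
```

With 1001 repetitions of an algorithm that errs with probability $0.3$, the Chernoff bound puts the failure probability of the majority vote below $e^{-80}$.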

These are all ideas that I believe are important for Computer Science undergraduates to be exposed to, but covering all of them does make for a very challenging course, which gets truly mixed reviews from the students, with some loving it and some hating it. (I post all reviews on the course home page.) Running a 200-student class is definitely something that I’m still learning how to do.