Doing a 180 and still spinning

I taught my first class last quarter and it was an enjoyable and eye-opening experience on many levels. First, some background. The class was undergraduate algorithms or, as it is popularly known at UCLA, CS180. There were 129 students (kind of like jumping into the deep end to test the waters). Like most other CS curricula, it is a core required course and, as I later heard from the students, the class can have a significant impact on where you intern or even get employed eventually (all software companies want to know how you did in this course).

This post is meant to record some of my observations.

How I felt: The first two weeks felt a bit stressful and burdensome. But once I got used to it, I started enjoying the lectures, and it was indeed quite pleasing to hear (and in some cases see) that a good fraction of the students liked the material, and to see them participating in class.

Hindsight: The most significant point was the level of the assignments. Here I erred mainly due to a mismatch in expectations. On the first assignment the median was 100%, so I increased the difficulty. On the next one the median was 77%, which still felt high and not challenging enough for the students. At this point I consciously had 50% of each assignment be moderately easy problems (directly based on class work) and the remaining 50% range from not-so-easy to problems requiring at least one new idea. While perhaps the concept was right, the proportions were off from what the students expected. An 80-20 or so split would have been much better in hindsight. I got it almost right for the final, with the median being 75%.

There were no real surprises in the syllabus, with most topics being in common with other similar classes (you can compare here: Harvard, MIT 1, MIT 2, MIT 3, CMU 1, CMU 2, Stanford 1, Stanford 2, Coursera-Stanford). However, it did feel a little ambitious in the end, and the content needs some pruning. For instance, I spent one lecture each on three somewhat non-standard topics: analyzing sampling methods, contention resolution, and cuckoo hashing. Next time, covering just one of them, or even none, would perhaps be better.

A few people asked to include a programming component in the course. This makes perfect sense, and I indeed considered it seriously at the beginning, thinking about doing something like what Jelani Nelson used at Harvard. But it was plainly infeasible to have programming components in the assignments with the available resources (Jelani tells me he had 10 TAs for a class of about 180). Perhaps next time around I can suggest problems for students to play with even if they won't be graded.

One other request was for practice midterm/final questions. I am still undecided about this one.

Proofs: I spent a lot of time in class proving that various (in some cases extremely simple) algorithms work. This is not unique to this course, but seems to be true for most similar courses (check the syllabi: Harvard, MIT 1, MIT 2, MIT 3, CMU 1, CMU 2, Stanford 1, Stanford 2, Coursera-Stanford).

So, as a few students asked, why so much emphasis on proofs in an algorithms class? There are two separate issues here. First, perhaps my not-so-clear presentation (this is the first run, after all). Let us separate that from the second, probably more pressing one: if the goal of an algorithms course is to develop algorithmic thinking and/or to prepare students mainly for a career in software engineering, why should we (and by we I mean all algorithms courses across universities) emphasize proofs?

First, which proofs did I spend a lot of time on? Well, there were 1) BFS/DFS, 2) FFT, 3) minimum spanning trees, 4) sampling, 5) Quicksort, and 6) hashing.

BFS/DFS we can explain away: they serve as examples to illustrate induction, invariants, etc. For FFT, the algorithm and the proof are one and the same; you can't quite come up with the algorithm without the proof. But what about the others?
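To make the invariant idea concrete, here is a minimal BFS sketch (my own illustration, not taken from the course material) where the comments state the invariant the correctness proof rests on:

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path distances (in edges) from source in an unweighted graph.

    adj: dict mapping each vertex to a list of its neighbors.
    """
    dist = {source: 0}
    queue = deque([source])
    # Invariant: vertices are dequeued in nondecreasing order of distance,
    # so the first time v is discovered (via some u), dist[u] + 1 is already
    # the shortest distance to v. Induction on distance makes this a proof.
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first time we see v
                dist[v] = dist[u] + 1  # provably optimal by the invariant
                queue.append(v)
    return dist
```

The point of the proof in lecture is exactly the comment in the loop: without the queue-ordering invariant, there is no reason to believe the first discovery gives the shortest distance.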

Take MST, Quicksort, and hashing. With the right questions, you can motivate students to come up with the algorithms themselves, as they are indeed quite natural and simple. But shouldn't that be the end of developing algorithmic thinking? Randomized divide and conquer makes intuitive sense, and so does making random choices when in doubt. Why go deeply into probability and linearity of expectation to analyze these? Here are two worthwhile reasons (among many) I can think of.

First, speed is not everything; we need to be sure that the algorithm works. At the end of the day, even when you just want to build something hands-on, in many cases you need to be absolutely sure that what you have actually works. For example, it is easy to come up with instances where a greedy strategy fails, and in class I did do one such example (knapsack). Looking back, however, I should have emphasized it more and drawn parallels with other examples where greedy fails.
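The knapsack failure can be shown in a few lines. The instance below is a standard (hypothetical) one, not the exact example from lecture: the item with the best value-to-weight ratio blocks the two items that together fill the knapsack exactly, so greedy loses to brute force.

```python
from itertools import combinations

def greedy_knapsack(items, capacity):
    """Greedy 0/1 knapsack by value/weight ratio; items are (value, weight)."""
    total_value, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value

def optimal_knapsack(items, capacity):
    """Brute force over all subsets (fine for tiny instances)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

items = [(60, 10), (100, 20), (120, 30)]    # (value, weight); ratios 6, 5, 4
print(greedy_knapsack(items, 50))   # 160: greedy takes the first two items
print(optimal_knapsack(items, 50))  # 220: optimal takes weights 20 and 30
```

Greedy looks entirely reasonable on this instance, which is exactly why a proof (or a counterexample) is needed before trusting it.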

Second, the goal of the course is not just to help with writing faster code but also to serve as a launching pad for a path into computer science (with emphasis on the 'science'). Even in the former case, thinking about algorithms in a principled way and being able to analyze them will eventually help in designing new algorithms, especially when you have to tweak existing algorithms for new applications. Looking back, including more examples to demonstrate this point would have been quite helpful.
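And the linearity-of-expectation analysis in question is genuinely short once set up. For randomized Quicksort on n distinct keys, writing X_{ij} for the indicator that the i-th and j-th smallest elements are ever compared, a standard sketch goes:

```latex
% Elements i and j (in sorted order) are compared iff one of them is the
% first pivot chosen among \{i, i+1, \dots, j\}, an event of probability
% 2/(j-i+1). By linearity of expectation,
\[
\mathbb{E}\!\left[\sum_{i<j} X_{ij}\right]
  = \sum_{i<j} \Pr[X_{ij} = 1]
  = \sum_{i<j} \frac{2}{j-i+1}
  \le 2n \sum_{k=2}^{n} \frac{1}{k}
  = O(n \log n).
\]
```

Three lines of probability buy a worst-case-in-expectation guarantee that no amount of intuition about "random pivots are probably fine" delivers.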

Future: I look forward to teaching the class again, keeping the above points in mind. It should only get better for me, and hopefully for the students too.

3 thoughts on “Doing a 180 and still spinning”

  1. At the University of Wisconsin-Madison, we sometimes include a programming component in the undergraduate algorithms course by assigning ACM ICPC style programming problems. The main benefits of this approach are that appropriate problems and the software to grade them already exist. We are currently using http://www.spoj.com/ for the second time for this purpose and are happy with it.
