CMSC-641 Algorithms: Review Material (fall 1995)

In fall 1995, I conducted three supplemental review sessions covering fundamental concepts from CMSC-441. This page summarizes the material presented at these sessions. ---Alan T. Sherman

Solving Problems

I recommend the following general-purpose four-step method for solving problems:
  1. Understand the problem.
  2. Devise a strategy.
  3. Apply the strategy.
  4. Check.

Session I: Analyzing Algorithms

We evaluate algorithms by three basic criteria:

  1. Resource usage (e.g. time, space, communication)
  2. Quality of results (e.g. correctness, approximation ratio, stability)
  3. Implementation cost (e.g. man-hours, lines of code, money)

There are three basic types of programs:

  1. straightline programs (no loops or recursive calls)
  2. looping programs
  3. recursive programs

For each type, to analyze the time complexity, we begin by assigning a cost to each line. For straightline programs, we simply add up these costs. For looping programs, we express the running time as a summation and solve the summation. For recursive programs, we express the running time as a recurrence and solve the recurrence.
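
By way of illustration (a Python sketch of my own, not one of the exam problems), here is a doubly nested looping program whose analysis reduces to an arithmetic summation:

  def count_inversions(a):
      # Counts pairs i < j with a[i] > a[j].
      n = len(a)
      count = 0
      for i in range(n):              # outer loop: n iterations
          for j in range(i + 1, n):   # inner loop: n - 1 - i iterations
              if a[i] > a[j]:         # constant cost per iteration
                  count += 1
      return count

The inner loop body executes (n-1) + (n-2) + ... + 1 = n(n-1)/2 times, so the running time is Theta(n^2).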

Sometimes (e.g. when the control structure is complex or data dependent, as often happens in graph algorithms), it is easier to analyze time complexity by charging all of the algorithm's work to the data objects it processes, summed over the entire course of the algorithm. I refer to this method as "count by object" rather than "count by code." Counting by object may be viewed as the aggregate method of amortized analysis. We noted that depth-first search and the two-finger merging algorithm can be analyzed this way.
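
As an illustration of counting by object (again a Python sketch of my own), consider the two-finger merge:

  def merge(b, c):
      # Two-finger merge of sorted lists b and c into one sorted list.
      result = []
      i = j = 0
      while i < len(b) and j < len(c):
          if b[i] <= c[j]:
              result.append(b[i])
              i += 1
          else:
              result.append(c[j])
              j += 1
      result.extend(b[i:])  # append any leftover elements of b
      result.extend(c[j:])  # append any leftover elements of c
      return result

Counting by code, the loop bound is not obvious; counting by object, each element of b and c is appended to result exactly once at constant cost, so the total time is Theta(len(b) + len(c)).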

We practiced analyzing algorithms by solving problems from previous 441 exams. These problems included a doubly-nested looping algorithm and a divide-and-conquer algorithm.


Session II: Mathematical Tools

A variety of mathematical tools are useful in the study of algorithms. These include tools for describing and comparing growth rates, tools for analyzing the resource usage of algorithms, and tools for proving other properties (e.g. correctness) of algorithms.

We reviewed the six basic asymptotic notations and practiced the "ratio test" for comparing growth rates. We also reviewed the logical relationships among the six asymptotic notations.
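
In its simplest form (assuming the limit exists), the ratio test reads:

  \lim_{n \to \infty} \frac{f(n)}{g(n)} =
  \begin{cases}
    0                  & \Rightarrow\ f(n) = o(g(n)) \\
    c,\ 0 < c < \infty & \Rightarrow\ f(n) = \Theta(g(n)) \\
    \infty             & \Rightarrow\ f(n) = \omega(g(n))
  \end{cases}

For example, \lim (n \lg n)/n^2 = \lim (\lg n)/n = 0, so n \lg n = o(n^2).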

We reviewed basic summations, including the constant summation, the arithmetic summation, and the geometric summation. In addition, we practiced the method of splitting and bounding a summation and of bounding a summation by integration.
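
For reference, the standard closed forms (assuming these match the forms reviewed in 441):

  \sum_{k=1}^{n} c = cn, \qquad
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad
  \sum_{k=0}^{n} x^k = \frac{x^{n+1} - 1}{x - 1} \quad (x \neq 1)

and a sample bound by integration, applied to the harmonic series:

  \sum_{k=1}^{n} \frac{1}{k} \;\le\; 1 + \int_{1}^{n} \frac{dx}{x} \;=\; 1 + \ln n.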

We practiced solving some recurrences using techniques drawn from the following list:

  1. guess and check (check by induction)
  2. characteristic equations (see Chapter 2 of Brassard's book)
  3. Master Theorem (see Exercise 4.4-2 for a more general statement of Case 2)
  4. transformation (both of domain and of target)
  5. iteration
  6. bounding the recurrence
  7. generating functions

We discussed some of the finer points of applying these methods, including the "sloppiness theorem" and the often omitted yet important "general and final steps" in using iteration. We also discussed advantages and disadvantages of each technique.
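
For example (a standard application, under the usual CLRS statement of the Master Theorem), the merge-sort recurrence falls under Case 2:

  T(n) = 2\,T(n/2) + \Theta(n): \quad
  a = 2,\ b = 2,\ n^{\log_b a} = n = \Theta(f(n)),
  \text{ so } T(n) = \Theta(n \lg n).

The same bound can be confirmed by guess and check: guess T(n) <= c n lg n and verify the guess by induction.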

We reviewed the following basic proof techniques:

  1. direct proof (e.g. case analysis, applying definitions)
  2. indirect proof (also called proof by contradiction)
  3. proof by counterexample
  4. proof by counting argument
  5. proof by induction (strong and weak forms)
  6. proof by diagonalization
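
As a quick illustration of induction in its weak form (a standard example, not taken from the session), one can prove \sum_{k=1}^{n} k = n(n+1)/2:

  \text{Base } (n = 1): \quad \sum_{k=1}^{1} k = 1 = \frac{1 \cdot 2}{2}.
  \text{Step: assume the claim for } n. \text{ Then }
  \sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}.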

Session III: Designing Algorithms

There is no simple, all-purpose, systematic way to design efficient algorithms. The following framework, however, provides some helpful structure for this challenging and creative task.

There are three basic approaches to designing algorithms:

  1. Apply or adapt an existing algorithm.

  2. Design a new algorithm based on standard algorithm design strategies. These strategies include the following:
    1. generate and test (exhaustive search)
    2. transform an infinite problem into a finite one
    3. scanning
    4. divide-and-conquer
    5. greedy
    6. dynamic programming
    7. branch and bound
    8. iterative improvement
    9. transformation
    10. time-space tradeoff
    11. precomputation
    12. randomization
    13. derandomization
    14. hashing (more generally, apply appropriate data structure)
    15. oblivious
    16. local optimization

  3. Alter the requirements (change the problem, approximate the solution rather than finding an exact solution, or apply a heuristic that is not guaranteed to succeed). For this course, this approach is mostly unacceptable.

As an example, we designed and discussed a variety of solutions to the maximum subvector sum problem: given a vector x of n integers, find the maximum, over all contiguous subvectors of x, of the sum of the subvector's elements.

Since the integers may be positive or negative, this problem is nontrivial. See Bentley's article on algorithm design techniques in his book Programming Pearls.
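
Bentley's article culminates in a linear-time scanning solution; here is a minimal Python sketch of it (the function name and the convention that the empty subvector has sum 0 are my own choices):

  def max_subvector_sum(x):
      # Scanning: keep the best sum seen so far and the best sum of a
      # subvector ending at the current position.
      best = 0
      ending_here = 0
      for value in x:
          ending_here = max(ending_here + value, 0)
          best = max(best, ending_here)
      return best

  # Example: max_subvector_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84])
  # returns 187, the sum of the subvector [59, 26, -53, 58, 97].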