6a. Demonstrate an understanding of the basic components of search algorithms and their various implementations
Sorting and searching illustrate major concepts of the course, including modularization, hierarchy, composition, abstraction, recursion, and the use of divide and conquer to develop algorithms. Searching and sorting are also building blocks for many applications; they are used to demonstrate complexity analysis, and they are good examples of generic programming. How does sorting illustrate composition?
A continuing goal in software is the identification of commonality – common problems and common solutions. The software design and the code for a common solution can be reused n times, where n is the number of occurrences of the common problem. The cost of the reuse is the cost of developing the common software plus the cost of n uses; the development cost can be amortized over the n uses. The cost of using the common software depends on the effort needed to adapt it to the application at hand.
"Memory and Search Methods" explains some components of searching, including data structures and how they are stored in memory (from the beginning to 11:00), whether information is sorted and complexity analysis (11:00-18:00), and how recursion is done (or, how the problem is reduced at each pass of the algorithm) (from 30:00-33:00).
6b. Demonstrate an understanding of the differences between various search algorithms, including list search and tree search
"Memory and Search Methods", from the beginning to 11:00, discusses memory methods for storing a list and the associated number of comparisons needed to find an element. If the list is sorted, the search can be done more efficiently. From 11:00-18:00, the lecture explains how the cost of one sort can be amortized over n searches. "Binary Search" explains binary searches. Note that a linear search reduces the problem by subtracting 1 on each pass; a binary search reduces the problem by dividing it by 2 on each pass.
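The lectures' code is not reproduced here, but the two reductions can be sketched in Python, the course's language (the function names below are illustrative, not the lectures'):

```python
def linear_search(items, target):
    """Reduce the problem by one element per pass: O(n) comparisons."""
    for item in items:
        if item == target:
            return True
    return False

def binary_search(sorted_items, target):
    """Reduce the problem by half per pass: O(log n) comparisons.
    Assumes sorted_items is already sorted (the one-time sort whose
    cost is amortized over repeated searches)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return False
```

For a list of a million elements, the linear search may need a million comparisons, while the binary search needs at most about twenty.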
The above two lectures primarily address searching (and sorting) lists. "Linked List" explores in some detail linked lists and operations that are performed on them. What are some linked list operations? If an element of a linked list can point to more than one list element to create a hierarchy, it is a tree. Why are lists and trees called recursive data structures? Name a third recursive data structure. "Recursive Data Structures" is an overview of lists, trees, and sets. Traversing a tree for searching is covered in "Basic Tree Traversals". Name the basic tree traversals.
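To make the recursive structure of trees concrete, here is a minimal Python sketch of a binary tree and its standard traversals (the class and function names are illustrative, not taken from the lectures):

```python
class Node:
    """A binary tree node; each subtree is itself a tree, which is
    why a tree is called a recursive data structure."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(node):
    """Visit root, then left subtree, then right subtree."""
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):
    """Visit left subtree, then root, then right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):
    """Visit left subtree, then right subtree, then root."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]
```

Note that an inorder traversal of a binary search tree visits the values in sorted order, which connects tree traversal back to sorting.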
Search is actually a problem-solving technique, because some problems can be modeled as a search for a path that corresponds to a solution. "Search Algorithms" shows how some problems can be solved by searching. Having modeled an example as a search problem, how can the search be made optimal? The video presents two strategies: depth-first and breadth-first (from 28:00-31:00). These strategies are implemented with a stack and a queue, respectively (31:00-40:00). Pruning rules can be used to reduce the size of the (possibly very large) search tree. How many pruning rules (40:00-50:00) are identified? Applying each strategy with pruning shows that breadth-first is more effective at finding a specific path, though not necessarily more efficient (50:00-1:00:00). Adding a dynamic programming rule (1:00:00 to the end) makes breadth-first optimal with respect to effectiveness.
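The two strategies can be sketched in Python as path searches over a graph; the only difference between them is whether the frontier of candidate paths is a stack or a queue. This sketch is illustrative (the graph representation and function names are not the video's, and pruning and dynamic programming are omitted):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first: the frontier is a queue (FIFO), so shorter
    paths are explored before longer ones."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

def dfs_path(graph, start, goal):
    """Depth-first: the frontier is a stack (LIFO), so one branch
    is followed as deep as possible before backtracking."""
    frontier = [[start]]          # stack of partial paths
    visited = set()
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            frontier.append(path + [neighbor])
    return None
```

Because breadth-first explores all paths of length k before any of length k+1, the first path it returns is a shortest one; depth-first returns some path, not necessarily the shortest.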
6c. Demonstrate an understanding of the differences between various sorting algorithms
Subunits 6.2.1, 6.2.2, and 6.2.3 describe four common sorting algorithms.
"Memory and Search Methods", from 18:00-26:00, explains Selection Sort using the notion of an invariant and analyzes its complexity. Why is Selection Sort not a very good sorting algorithm? From 33:00 to the end, the video explains Merge Sort, one of the two most popular sorting algorithms. What is the other one? The video also introduces the programming construct of a lambda function and uses it to make the sort generic.
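The video's code is not reproduced here; as a minimal Python sketch, the two algorithms look like this (function names are illustrative):

```python
def selection_sort(items):
    """Invariant: after pass i, the first i elements are the i
    smallest elements, in order. Scanning for each minimum costs
    O(n^2) comparisons overall."""
    items = list(items)  # work on a copy
    for i in range(len(items)):
        # index of the smallest remaining element
        smallest = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items

def merge_sort(items):
    """Divide and conquer: split in half, sort each half recursively,
    then merge the sorted halves. O(n log n) comparisons."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

To make such a sort generic in the way the video describes, the comparison in the merge step can be replaced by a function parameter, often supplied as a lambda (e.g., `lambda a, b: a <= b`).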
6d. Demonstrate an understanding of the basic techniques involved in complexity analysis
Complexity analysis provides information for making design decisions, and is introduced in "Complexity: Log, Linear, Quadratic, Exponential". The complexity of an algorithm is determined by mapping it to one of four well-known classes. Using four different algorithms for calculating exponentiation, the video gives high-level steps for analyzing complexity. The mapping is done by counting the steps of the algorithm. Complexity is the rate of growth; that is, the increase in the number of steps as the problem size grows. 'Big-O' notation, an upper bound on the rate of growth, is used. Four bounds are typically used: log, linear, quadratic, and exponential. The number of steps in a recursive algorithm is counted using recurrence relations. What is the characteristic of each class? The process is summarized nicely at 30:00-33:00. The steps that should be counted are the primitive steps, which are determined by the data structure used and how it is stored in memory. The lecture also uses lists, as described in subunit 6.1.1.
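The video's four algorithms are not reproduced here, but the step-counting idea can be illustrated with two of the standard exponentiation algorithms, sketched in Python (function names are illustrative):

```python
def power_linear(b, n):
    """Linear recursion: b**n = b * b**(n-1).
    Recurrence T(n) = T(n-1) + 1, so O(n) multiplications."""
    if n == 0:
        return 1
    return b * power_linear(b, n - 1)

def power_log(b, n):
    """Divide and conquer: b**n = (b**(n//2))**2, times b if n is odd.
    Recurrence T(n) = T(n/2) + O(1), so O(log n) multiplications."""
    if n == 0:
        return 1
    half = power_log(b, n // 2)
    if n % 2 == 0:
        return half * half
    return b * half * half
```

Both compute the same result; the difference the recurrence relations expose is the growth in the number of primitive steps (here, multiplications) as n grows.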
Unit 6 Vocabulary
This vocabulary list includes terms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.
Try to think of the reason why each term is included.