Topic Name Description
Course Introduction Page Course Syllabus
Page Course Terms of Use
1.1.1: Development of C Page William Stewart's "C Programming Language History"

Read this history of C to see where C fits into the 'larger picture' of computing history. The history of C is part of the lore of programming, Unix, and the Internet, which everyone who studies programming should know.

1.1.2: Branching C to C++ Page Wikipedia: "Compatibility of C and C++"

This article discusses the compatibility of C and C++. Compatibility of two programming languages refers to the extent to which a program written in one of the languages can be used without modification in the other. Compatibility includes both syntax (grammar) and semantics (the execution of grammatical statements). C and C++ have a degree of upward-compatibility, but there are differences since they are distinct languages that have evolved separately.

1.1.3: History of Object-Oriented Programming (OOP) Page Ruby for Beginners: "Object-Oriented Programming"

To understand a language, it helps to know what motivated its development, its principal concepts (called a "programming paradigm"), and how it relates to other languages. This page explains the principal concepts and paradigm of Ruby, an object-oriented ("OO") language developed in the mid-1990s. The concepts explained here also apply to other OO languages.

You can learn OO programming via a course on an OO language, which will emphasize the syntax and features of that language. OO features that are not implemented in that language, or that are implemented differently, may not be covered; you would typically encounter them when you learn a different OO language. In this course, we teach the foundational concepts of the OO paradigm and use various languages, particularly C++ and Java, to demonstrate them.

1.2.1: Influence of Prevalence of C Page Jeremy Hansen's "The Rook's Guide to C++"

Read Chapter 1, which discusses the development relationship of C and C++. If you wish, take a look at the table of contents of the full text, which can be found here. Most of the content comes from C, as much of C is included in C++, though C++ adds features such as classes and objects.

1.2.2: Java Overview Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read Chapter 1, which describes how a 'system' can solve many types of problems. A 'system' consists of a computer (hardware components that carry out machine language instructions), software (programs written in a programming language, in particular Java), a communications interface (that interconnects the computer to a worldwide network of other computers), and an interface (that enables users to access data from and run programs on many of the computers in the network).

While the operation of 'the system' applies to many programming languages, this chapter points out features of Java that improve the operation of the 'system', such as device independence via the Java Virtual Machine, OO, reusable class libraries (for user interfacing, event handling), network support, support for other technologies, and suitability for programming other devices.

1.2.3: C++ Overview URL Massachusetts Institute of Technology: John Guttag's "Introduction to C++"

This article provides an overview of the elements of C++; specifically, the 'C' portion of C++.

Note how section 2.2 describes tokens as the "minimal chunks of a program". The root goal of programming is solving problems using the 'chunks' of a programming language.  Of course, the chunks must be appropriate for the type of problems to be solved. Generally, smaller chunks are applicable to many types of tasks, but involve more effort; larger chunks involve less effort, but are designed for more specific tasks.
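
To make 'tokens' concrete, here is a minimal C++ sketch (our own illustration, not taken from the reading) in which a single statement is broken down into its tokens:

    #include <iostream>

    int main() {
        int sum = 3 + 4;   // tokens: 'int', 'sum', '=', '3', '+', '4', ';'
        std::cout << sum << std::endl;   // larger 'chunks': a library object and its operators
        return 0;
    }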

The following resource addresses 'larger' chunks available in C++.

Page Wikiversity: "C++"

Solving problems with programs is made easier if we can reuse the same or similar solutions that already exist. We do this using the 'chunks' provided by a language, as described in the previous resources. These sections describe the larger 'chunk' features of C++. Larger 'chunks' range from the programming statements used to write a program to complete programs, or portions of programs, that reside in libraries.

The section "Classes and Inheritance" explains and illustrates classes, which enable reuse of large sections of programming code. "Templates" explains and illustrates generic programming using templates. Focus specifically on the Introduction, Function Templates, and Class Templates. It also discusses STL, the standard C++ library. Note that 'list' is a template in C++.

1.3: C++/Java Comparison Page Wikipedia: "Comparison of Java and C++"

When comparing two programming languages, consider their underlying concepts (goals, principles, model, paradigm), their syntax (grammar), their semantics (what tasks the language can instruct a computer to do), and what support (resources, libraries, tools, etc.) they provide. These considerations can be broken down into a list of specific features that are used to evaluate and compare the two languages. The table in this article describes the similarities and differences between Java and C++.

Page Hal Smith's "C++ vs. Java: Code to Executable"

This video illustrates the different operational processes (compiling and linking) used in C++ and Java. Most of the video discusses the C++ process, because it is more complicated than Java's. The Java process was described more thoroughly earlier in the course.

2.1: Programming Paradigms Page Marco Bonzanini's "Functional Programming in Python"

The general computing paradigm has several stages: the task to be performed; a strategy for how the task will be performed; transformation of the strategy into a detailed strategy that corresponds to computations a computer can perform; implementation of the strategy as a set of instructions that can be executed by a computer; and, lastly, validation that the results of the execution perform the task in a satisfactory manner. In software engineering, these stages are called requirements analysis, architecture and design, program code, and validation. This generic model has been refined into programming paradigms to support the transition from design to programming.

This unit introduces several programming paradigms and explains the main concepts of the Object-Oriented programming paradigm. Each paradigm is supported by several, and in some cases many, programming languages. This page illustrates functional programming via Python. Note that it mentions some key features, including state, immutability, first-class data types, recursion, anonymous functions, and lazy evaluation.

Page Allen Yip's "Prolog, Logic Programming and Programming Paradigm"

This page explains the logic, functional, imperative, and object-oriented paradigms. It indicates that a programming language, while supporting a particular paradigm, typically supports other paradigms as well. With respect to the general computational paradigm, programming paradigms correspond to design and programming. This page mentions problem-solving models that support requirements analysis: lambda calculus, first-order logic, and Turing machines. These are mathematical models developed to solve particular types of problems. A programming paradigm corresponds to a particular problem-solving model, supporting either a particular type of task or many types of tasks. Programming paradigms can be thought of as models for transforming problem-solving models into executable programs.

Page Massachusetts Institute of Technology: Dennis Freeman's "Object-Oriented Programming"

This video introduces the OO paradigm. The video mentions 4 modules: software engineering, signals, circuits, and planning. These modules focus on key concepts for building models for analyzing and solving problems that are applicable to many problems.

Our focus is on the software engineering module, which uses Python to illustrate the application of several concepts (modularity, abstraction, composition, and hierarchy) to programming as part of the OO paradigm. The video gives examples of data abstraction (using numbers and operations to form numeric expressions, and strings and string operations to build string structures) and of 'execution' abstraction (procedure names representing sequences of instructions). Abstraction enables the composition of data to form more complex expressions and the composition of procedures to form hierarchies of procedures.

The video then shows how these two, data abstraction and procedure abstraction, are combined to form a composite structure, called a class, that consists of both data and procedures. An object is simply an instance of a class. A class contains the data and procedures common to its instances, i.e. objects. An object consists of the class data, the class procedures (called methods), and the assignment of values to the class (common) data, and it can also include additional data and procedures. Just like data and procedures, classes and objects are given names, i.e. names are bound to them.
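
The video uses Python, but the same idea can be sketched in C++ (an illustration of our own; the class and names are made up): a class bundles data and procedures, and an object is a named instance of it.

    #include <iostream>

    // A class combines data (members) with procedures (methods).
    class Counter {
        int count;                        // data each instance carries
    public:
        Counter() : count(0) {}           // constructor assigns the initial value
        void increment() { ++count; }     // a method: a procedure bound to the class
        int value() const { return count; }
    };

    int main() {
        Counter clicks;                   // 'clicks' is an object, an instance of Counter
        clicks.increment();
        clicks.increment();
        std::cout << clicks.value() << '\n';   // prints 2
        return 0;
    }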

The video concludes with an explanation of how Python uses name bindings (the association of a name with a 'value' -- a data value, an expression, a procedure, an object, or a class) when it executes instructions. Name bindings are stored by Python in a table called an environment. Each procedure, each class, and each object has its own environment table. When a name is used in a procedure, Python looks up the name in the procedure's environment table to find the value associated with it. Since a Python name can be shared, for example, by a class and an object, rather than storing the shared name twice, the name is kept in the class environment and the object's environment points to the class environment. Thus, a hierarchy of environments is used to represent a class-object hierarchy. The key concepts of modularity, abstraction, composition, and hierarchy are VERY important.

2.2: Fundamental Concepts of Object-Oriented Programming Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read Chapter 5: Objects and Classes.

OO concepts of modularity, abstraction, composition, and hierarchy are very powerful and require intense attention to detail. The previous subunit on how Python implements name bindings gave a glimpse of the detail that is involved. This chapter gives a detailed presentation of OO in Java. The terminology in this reading is a little different; name binding is called name scope. The chapter begins with an explanation of the data and procedures of a class (called class variables and class methods). The class data can be shared by all objects, in which case it is called static. Static variables are common to all objects: there is only one copy, and thus only one value, stored in the class.

Recall from the Python OO overview that an environment associates a name with a 'value'. A static variable is in the class environment table, which points to the one copy. In contrast, a class can also have non-static variables, and each object of the class contains its own copy of them. In Java terminology, each object is accessed through a reference, which points to the location of the object's instance variables.
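
The chapter's examples are in Java, but the static vs. non-static distinction can be sketched just as well in C++ (our own hypothetical example): the static member has one shared copy, while each object carries its own copy of the instance variable.

    #include <iostream>

    class Account {
    public:
        static int accountsOpened;   // static: one copy shared by every object
        double balance;              // non-static: each object has its own copy

        Account(double b) : balance(b) { ++accountsOpened; }
    };

    int Account::accountsOpened = 0; // the single shared copy lives with the class

    int main() {
        Account a(100.0), b(250.0);
        std::cout << Account::accountsOpened << '\n';        // 2: one value for the class
        std::cout << a.balance << ' ' << b.balance << '\n';  // 100 250: per-object values
        return 0;
    }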

The next detail to note is how objects are created and initialized (i.e. how values are assigned to their instance variables), either by assignments in the class or by constructors. A consequence of the concepts of modularity and abstraction is reuse: in writing an OO program we can use classes that have been written by others and are part of the language or contained in class libraries associated with the language. Recall that the generic computing paradigm consists of several stages: requirements, design, implementation, and validation. In software engineering, this is referred to as the programming process.

Section 5.3 gives insight into writing programs using classes. What classes, what objects, and how they are related are questions of program design. Section 5.4 illustrates the design and implementation stages of the process. The later sections continue with the VERY important OO details of inheritance and polymorphism. They create a class hierarchy that enables code to be shared among classes and among similar, but different, objects. Whereas Java has single inheritance (a subclass can extend only one superclass), C++ has multiple inheritance (a subclass can extend two or more superclasses). Java does, however, have a restricted kind of multiple inheritance, called interfaces, where a class can implement two or more interfaces. As you finish the reading, you should appreciate how the concepts are connected. If you understand variable names, in particular the object names 'this' and 'super', the class name 'Object', and the dot naming convention (e.g. ClassName.objectName.methodName), you should have a good understanding of the concepts and details presented in this reading.
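
As a hedged C++ counterpart to the chapter's Java discussion (the names are invented for illustration), inheritance and polymorphism look like this: a base class declares a method, subclasses override it, and a call through a base-class pointer dispatches to the object's own class.

    #include <iostream>

    struct Shape {                        // base class (superclass)
        virtual double area() const = 0;  // polymorphic method
        virtual ~Shape() {}
    };

    struct Square : Shape {               // subclass extends the base class
        double side;
        Square(double s) : side(s) {}
        double area() const override { return side * side; }
    };

    struct Circle : Shape {
        double radius;
        Circle(double r) : radius(r) {}
        double area() const override { return 3.14159 * radius * radius; }
    };

    int main() {
        const Shape* shapes[] = { new Square(2.0), new Circle(1.0) };
        for (const Shape* s : shapes) {
            std::cout << s->area() << '\n';   // dispatches to Square::area or Circle::area
            delete s;
        }
        return 0;
    }

Unlike Java, C++ would also let a class list two or more base classes here, which is the multiple inheritance mentioned above.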

3.1: History and Motivation Page Ada Programming: "Generics"

Read this section for an explanation of the history and purpose of generic programming and of the precursor language, Ada. Ada is an object-oriented programming language used predominantly in military applications. It shares many of its features, including its structure and typing, with C++. While both languages were developed around the same time, Ada's rapid adoption by certain circles within the computer science world led computer scientists to modify C++ by adopting some of Ada's features in subsequent C++ releases. These changes are seen as an important evolutionary step in object-oriented programming. These materials cover generic programming, the precursor language Ada, and the ANSI/ISO standards.

Page John DeNero's "Composing Programs"

A software engineering version of the general computational paradigm has four fundamental stages: requirements, design, code, and validation; or, in process terms, defining a problem or task, designing a solution, coding the solution, and testing the solution. Programming paradigms support the transition from designing to coding. To exploit the similarity among problems and tasks, it is beneficial to reuse existing designs and code on new problems and tasks. The principal concepts of modularity, abstraction, and composition support reusability. This article shows how programming languages implement these principles using data, expressions, functions, and control statements.

3.2: Main Design Ideas Page Massachusetts Institute of Technology: Chris Terman's "Basics of Information" and "The Digital Abstraction"

Watch these lectures, which are part of another course. For our course, focus on the hardware hierarchy introduced in the first 20 minutes of lecture 1 and the first 34 minutes of lecture 2. These videos reinforce the concepts of modularization, abstraction, composition, and hierarchy, but they do so by stepping 'aside' to take a look at computer hardware.

3.3: Elements of C++ STL Page Alexander Stepanov's "STL and Its Design Principles"

Watch this video, which discusses the principles of the C++ STL (Standard Template Library) and describes abstraction as it pertains to algorithms and data.

There are different kinds of abstraction. For hardware, we used part-whole abstraction to decompose a computer into its module hierarchy. In this lecture, you encounter part-whole abstraction for data, and another kind, based on generality, for algorithms: when an algorithm is a special case of another algorithm, the latter is more general than the former. The discussion leads to object-oriented and generic programming.

STL is an organized collection of generic algorithms applicable to a general collection of problems and tasks. STL components include containers, iterators, algorithms, and functors. In this long but illuminating video, the lecturer gives his insight into the development of STL, its benefits, and its limitations. Focus on the parts of the lecture that relate to abstraction and composition and to the process of creating programs.
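
As a brief taste of what the lecture is about (our own minimal sketch, not code from the talk), the four kinds of STL components work together like this:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {4, 1, 3, 2};            // container
        std::sort(v.begin(), v.end());                // algorithm, working through iterators
        auto it = std::find(v.begin(), v.end(), 3);   // another algorithm; 'it' is an iterator
        if (it != v.end())
            std::cout << "found 3 at index " << (it - v.begin()) << '\n';

        // A function object (functor) -- here a lambda -- customizes an algorithm.
        int above = std::count_if(v.begin(), v.end(),
                                  [](int x) { return x > 2; });
        std::cout << above << " elements are greater than 2\n";
        return 0;
    }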

3.3.1: Containers and Iterators Page Jeremy Hansen's "The Rook's Guide to C++"

Read chapter 23, which covers containers and iterators.

Page Wikipedia: "Standard Template Library"

Read this article, which further discusses the STL.

3.3.2: Complexity and Cost Page Massachusetts Institute of Technology: Eric Grimson and John Guttag's "Complexity"

In this unit we continue our exploration of abstraction with respect to algorithms. This lecture discusses the classification of algorithms according to how the performance of an algorithm grows relative to the size of the problem or task the algorithm solves or performs. Algorithms are classified by average run-time complexity, defined as the average number of steps the algorithm takes for a problem of size 'n'. Abstraction ignores the implementation of the algorithm and only considers the growth in the number of (primitive) steps an algorithm takes as the size of the problem grows. The video lecture introduces big 'O' notation and gives examples for linear, log, quadratic, and exponential complexity. The lecturer states that exponential complexity should be avoided in general.

3.3.3: Functors URL Wikipedia: "Function Object"

A functor is a name for a function object, which is an object that is called like a function.
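
A minimal C++ example of our own (the class name is invented) shows why such an object can be 'called like a function': it overloads operator().

    #include <iostream>

    // A functor: a class whose objects can be called like functions.
    class Multiplier {
        int factor;
    public:
        explicit Multiplier(int f) : factor(f) {}
        int operator()(int x) const { return x * factor; }   // the call operator
    };

    int main() {
        Multiplier triple(3);              // an object...
        std::cout << triple(7) << '\n';    // ...used like a function; prints 21
        return 0;
    }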

4.1: The Role of Exceptions Page Cave of Programming: "Handling Exceptions"

An exception is a runtime error that interrupts the execution of a program statement. When an exception occurs, the runtime system transfers control either to a runtime routine that handles the exception by printing a message and then terminating the program, or to a programmer-provided routine that handles the exception by executing instructions that take corrective action. This video gives a Java example of an exception handler using the 'throw' statement and 'try'/'catch' blocks.

4.1.1: Traditional Error-Handling Methods Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read section 8.1.

An error can be a syntax error, a runtime error, or a logic error. The language compiler or interpreter can detect syntax errors. A runtime error is an unexpected event, involving incorrect semantics, that interrupts the execution of an instruction. A logic error results in program behavior that does not satisfy the requirements, e.g. an incorrect solution. The reading describes some semantic errors that Java can prevent and some that Java cannot detect. In the latter case, the Java developer should build program features that either avoid or detect those errors.
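
The reading's examples are in Java; the following C++ sketch of our own shows one instance of each category of error:

    #include <iostream>
    #include <vector>

    int main() {
        // Syntax error: caught by the compiler, e.g. a missing semicolon:
        // int x = 5

        // Runtime error: v.at(10) on a 3-element vector would throw
        // std::out_of_range and interrupt execution if not handled.
        std::vector<int> v = {1, 2, 3};
        std::cout << v.at(1) << '\n';      // fine; v.at(10) would fail at run time

        // Logic error: the program runs, but the result is wrong.
        int sum = 0;
        for (int i = 1; i < 3; ++i)        // intended to add 1 through 3, but stops at 2
            sum += i;
        std::cout << "sum = " << sum << " (expected 6)\n";
        return 0;
    }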

4.1.2: Using Exceptions to Write Correct Programs Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read section 8.2 for an explanation of how to write correct programs and how to write program statements that take corrective action when errors occur or prevent them in the first place.

Runtime errors and incorrect results when executing a program are addressed by detecting them and taking corrective action when they occur, or by preventing them from occurring in the first place. Corrective action and prevention depend on an understanding of the programming process and the identification of the root cause of errors. The programming process consists of several stages: specifying the requirements for the program, designing the program, implementing the program using a programming language, testing the program, and deploying the program for operation by users.

Errors should be detected as early as possible in the programming process, and each process stage should have a verification activity: requirements, design, code, and even tests should be verified for correctness. Each stage should also be validated (that is, checked that it satisfies the requirements) to prevent errors.

4.2.1: Exceptions in Java Page W3Resource: "Exceptions in Java"

This article illustrates the flow Java uses to handle an exception. Java uses 'try' and 'catch' blocks for the code that handles the exception, and a 'throw' keyword in the method where the exception occurs. The reading has four sections; each section after the first is accessed by clicking Next. Be sure to read all of them.

Page TheJavaWorld: "Java Error Handling"

Several design decisions are addressed in the design activity of the programming process, including which exceptions to handle, where to place the exception handlers, the correspondence of 'try' blocks to 'catch' blocks, the nesting of exception handlers, and which methods will declare the exceptions they throw.

Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read section 8.3, which is a continuation of the previous sections on correctness and robustness, and elaborates on exceptions in Java.

4.2.2: Exceptions in C++ Page C++ Programming: "Exception Handling"

Read this overview of the role of exceptions generated by the C++ library. This section also covers throw and try/catch concepts.
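
A minimal C++ sketch of our own (using only the standard <stdexcept> classes) shows the throw/try/catch flow the section describes:

    #include <iostream>
    #include <stdexcept>

    double divide(double a, double b) {
        if (b == 0.0)
            throw std::runtime_error("division by zero");   // raise the exception
        return a / b;
    }

    int main() {
        try {                                     // code that may fail goes in the try block
            std::cout << divide(10.0, 0.0) << '\n';
        } catch (const std::runtime_error& e) {   // corrective action goes in the catch block
            std::cout << "handled: " << e.what() << '\n';
        }
        return 0;
    }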

5.1: Definition Page Boundless: "Recursive Definitions"

Read this article about recursion.

5.1.1: Divide and Conquer Page Massachusetts Institute of Technology: John Guttag's "Recursion"

This video discusses recursion as a divide and conquer problem solving technique, a design technique, and a programming technique.

5.1.2: Applying Recursion URL Massachusetts Institute of Technology: John Guttag's "Recursion Lecture Notes"

These notes expand on the topic of recursion as a way to decompose problems into subproblems. They also apply to decomposing data structures into substructures. Identifying the recursive problem or data structure and implementing it in a program require a thorough understanding of the programming process. Some implementation topics are pointed out, including reentrant code, recursion vs. iteration (loops), the relation of recursion to the functional and imperative programming paradigms, and common mistakes in programming recursion.

Page Wikipedia: "Tail-Recursive Functions"

Tail-recursion is a type of recursion where the last statement executed is the recursive call and, therefore, there are no instructions after that call. When the return from the recursive call is made, the calling function terminates, in which case the return address to the calling function is not necessary. Some compilers and some languages do not save the return address back to the recursive calling function and translate tail recursion into machine instructions for a loop, which saves space on the stack and execution time. However, implementation of source tail recursion using machine code for a loop is compiler and/or language dependent.
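
A short C++ comparison of our own makes the difference visible; whether the compiler actually reuses the stack frame for the tail call is, as the article notes, implementation dependent.

    #include <iostream>

    // Non-tail recursion: the multiplication still has to happen after the call returns.
    long long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    // Tail recursion: the recursive call is the last action, so a compiler may
    // reuse the current stack frame, effectively turning the recursion into a loop.
    long long factorial_tail(int n, long long acc = 1) {
        if (n <= 1) return acc;
        return factorial_tail(n - 1, acc * n);
    }

    int main() {
        std::cout << factorial(10) << ' ' << factorial_tail(10) << '\n';   // 3628800 3628800
        return 0;
    }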

5.1.3: Recursive Structures Page Massachusetts Institute of Technology: Eric Grimson and John Guttag's "Divide and Conquer Methods"

This video delves deeper into divide and conquer algorithms. Some of these algorithms are recursive. Recursion is a way of decomposing a problem into smaller or simpler problems of the same type. We have discussed the base subproblem and the inductive subproblem when applying recursion. A third consideration is composition -- namely, how will simpler subproblems be composed into a solution to the larger problem? Sometimes the decomposition is easy but the composition is hard; sometimes the reverse holds.

The lecturer employs complexity to compare algorithms -- how does the number of steps in a solution to a problem grow as n grows, where n is the number of operations or data elements involved in the inductive step? Formally, this complexity measure is expressed using Big 'O' notation. The lecturer presents two examples. The message of the video is that designing and implementing algorithms involves tradeoffs: the decomposition vs. composition tradeoff (merge sort example) and the tradeoff of storage space vs. run-time (hash function example).

Most of the concepts you encounter in programming languages are related to reuse of algorithms, designs, and implementations. These are categorized by complexity (for example, merge sort has n log n complexity), which helps us decide their appropriateness for certain kinds of problems. The lecture closes with enhancements to the examples using exceptions and assertions (pre- and post-conditions), covered in unit 4 of our course. They help us handle the different conditions that may arise when reusing the algorithms.

5.1.4: Recursive Steps Page Programming via Java: "Recursion"

This resource goes into recursion for Java in detail. It goes over several Java examples. It states that the steps of induction can be implemented using either recursion or loops; for a particular program, one implementation may be more effective or efficient than the other. Given a program that implements recursion, stepping through the program statement by statement helps to understand the details of how recursion works at run-time. The run-time system makes use of an internal stack that records the run-time states of a recursive function. Diagrams of the stack are given that show the changes in state of the method during run-time, that is, the pushing of the state when the method is called and the popping of the stack when the method has finished executing. The diagrams help you to understand recursive behavior.

5.1.5: Examples of Recursion Page Khan Academy: "Recursive Factorial Function" and "Fibonacci Numbers"

In these lectures we use Python, which is an easy-to-use procedural and object-oriented language, but our focus will not be on the syntax of the language. Rather, our focus is the concept of recursion, the requirements for the program (i.e., the statement of the problem), the design of the program, the semantics of the program (this is the stage in which we don't worry too much about the syntax of Python - knowing Java and C++ enables you to learn Python easily), and the verification of the program implementation (we do this by stepping through the program and running several test cases).
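
If you prefer to see the same two examples in a language from this course, here is a C++ sketch of our own (the lectures themselves use Python):

    #include <iostream>

    // Recursive factorial: base case n == 0, inductive case n * factorial(n - 1).
    long long factorial(int n) {
        if (n == 0) return 1;
        return n * factorial(n - 1);
    }

    // Recursive Fibonacci: two base cases and two recursive calls.
    long long fibonacci(int n) {
        if (n <= 1) return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    int main() {
        std::cout << factorial(5) << '\n';    // 120
        std::cout << fibonacci(10) << '\n';   // 55
        return 0;
    }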

Note that the term verification refers to correctness: the code running without errors and the code correctly implementing the design. Validation refers to the satisfaction of the requirements for the program: the requirements accurately reflecting the problem or task the program is to perform, the design satisfying the requirements, and the code satisfying the requirements.

6.1.1: List Search Page Massachusetts Institute of Technology: John Guttag's "Memory and Search Methods" and "Binary Search, Bubble, and Selection Sorts"

Searching and sorting are closely related. Watch these lectures on linear and binary search, and note how sorting the data can improve search performance.
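
As a compact reference (our own sketch, not the lecturer's code), the two searches compare as follows: linear search works on any data and takes O(n) steps, while binary search requires sorted data and takes O(log n) steps.

    #include <iostream>
    #include <vector>

    // Linear search: examine elements one by one; no ordering required.
    int linearSearch(const std::vector<int>& v, int key) {
        for (int i = 0; i < (int)v.size(); ++i)
            if (v[i] == key) return i;
        return -1;
    }

    // Binary search: repeatedly halve the range; requires sorted data.
    int binarySearch(const std::vector<int>& v, int key) {
        int lo = 0, hi = (int)v.size() - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (v[mid] == key) return mid;
            if (v[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    int main() {
        std::vector<int> sorted = {2, 5, 8, 12, 16, 23, 38};
        std::cout << linearSearch(sorted, 23) << ' '
                  << binarySearch(sorted, 23) << '\n';   // both print index 5
        return 0;
    }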

Page Chess Programming Wiki: "Linked List"

Assume we have a collection of data objects, such as telephone numbers, and that we need to find a particular phone number in that collection. We will need a data structure for storing the objects. One such data structure is a list. A list is a generic structure and can hold elements of any type, whether a type built into our programming language or a programmer-defined type. For example, we can have a list of integers or a list of telephone numbers.

A list is composed of elements and has functions or methods that apply to it, in particular insert and remove, which add or delete elements of the list, respectively. Some languages may also have a find function; if our language has no such function, we will need to write it. This resource discusses the implementation of a program that searches a list to find a particular element. Please glance at the 'list' of External Links at the bottom of the page; the elements, or nodes, of that list are grouped by language: Python, Java, C++, and so on.
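
A minimal C++ version of such a find function (our own sketch; the node layout and names are illustrative) simply walks the linked nodes until it reaches the key or the end of the list:

    #include <iostream>
    #include <string>

    // A node of a singly linked list of phone numbers.
    struct Node {
        std::string number;
        Node* next;
    };

    // Follow the links until the element is found or the list ends.
    Node* find(Node* head, const std::string& key) {
        for (Node* p = head; p != nullptr; p = p->next)
            if (p->number == key) return p;
        return nullptr;
    }

    int main() {
        Node c{"555-0303", nullptr};
        Node b{"555-0202", &c};
        Node a{"555-0101", &b};      // a -> b -> c
        std::cout << (find(&a, "555-0202") ? "found" : "not found") << '\n';
        return 0;
    }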

6.1.2: Tree Search URL Composing Programs: "Recursive Data Structures"

Read this page. In the previous unit of our course we studied recursive algorithms. Recursion is a concept that also applies to data. Here we look at recursive data structures - lists, trees, and sets. A list is a structure that consists of elements linked together. If an element is linked to more than one element, the structure is a tree.  If each element is linked to two (sub) elements, it is called a binary tree. Trees can be implemented using lists, as shown in the resource for this unit. Several examples of the wide applicability of lists are presented. A link points to all the remaining links, i.e. the rest of the list or the rest of the tree; thus, a link points to a list or to a tree - this is data recursion.

The efficiency of a program includes both its running time and the size of its data. The reading discusses the latter for recursive lists and trees.

Lastly, why read the last section on sets? Sets are another recursive data structure, and the last section, 2.7.6, indicates their connection with trees; namely, a set data type can be implemented in several different ways using a list or a tree data type. Thus, the programming process includes implementation decisions in addition to design or algorithm decisions. Each of these types of decisions is constrained by the features of the programming language used. The decision choices, such as which data structure to use, will affect how efficiently and effectively the program satisfies its requirements.

Page Kamal Rawat's "Basic Tree Traversals"

The use of a tree structure involves traversing, or stepping through, the elements or nodes of the tree. This page shows how to traverse a binary tree, which we can extend to trees having more than two descendants at each node. Many problems can be modeled by a tree. For example, in chess, the first move can be represented by the root or starting node of a tree, and the next move by an opponent by the descendant nodes of the root. This decomposition can continue for many levels. Thus, a level in the tree hierarchy represents the possible moves available to one player, and the next level the possible moves of the opponent; each level represents the choices available to a given player. Traversing the tree works as follows: from a given start node, a player looks ahead at its descendant nodes (the possible moves); from each of these descendant nodes, the player looks ahead at their descendants (the possible responding moves of the opponent); and so on, continuing to look ahead (planning) to cover as many levels as feasible. Based on the look-ahead information (which gets better the further the look-ahead goes), the player chooses a descendant of the given start node.
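
For the binary-tree case, a traversal is naturally recursive. Here is a small in-order traversal in C++ (our own sketch, not taken from the page):

    #include <iostream>

    struct TreeNode {
        int value;
        TreeNode* left;
        TreeNode* right;
    };

    // In-order traversal: left subtree, then the node, then the right subtree.
    void inorder(const TreeNode* node) {
        if (node == nullptr) return;   // base case: empty subtree
        inorder(node->left);
        std::cout << node->value << ' ';
        inorder(node->right);
    }

    int main() {
        TreeNode l{1, nullptr, nullptr}, r{3, nullptr, nullptr};
        TreeNode root{2, &l, &r};
        inorder(&root);                // prints 1 2 3
        std::cout << '\n';
        return 0;
    }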

Page Massachusetts Institute of Technology: Dennis Freeman's "Search Algorithms"

Watch this lecture, which illustrates the role of searching in helping to solve many problems. Searching a tree involves traversing the tree and making a decision at each node as we traverse or step through the tree.  Most problems involve making decisions.  If we can put a value on the outcome of a decision and if we search for a decision that has a 'best' value, then our decision process would be a search process.

The lecture points out two common techniques for traversing all of the elements of a tree: breadth-first and depth-first search. A traversal technique involves deciding which descendant (sub)element to look at next. Note that the original large decision has been decomposed into a series of decisions about which descendant element to look at on the next level down in the tree hierarchy.
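
The two traversal orders differ only in the data structure that remembers the elements still to be visited: a queue gives breadth-first order, a stack gives depth-first order. The C++ sketch below is our own illustration of that idea.

    #include <iostream>
    #include <queue>
    #include <stack>
    #include <vector>

    struct Node {
        int value;
        std::vector<Node*> children;
    };

    // Breadth-first: visit level by level, using a queue.
    void breadthFirst(Node* root) {
        std::queue<Node*> q;
        q.push(root);
        while (!q.empty()) {
            Node* n = q.front(); q.pop();
            std::cout << n->value << ' ';
            for (Node* c : n->children) q.push(c);
        }
    }

    // Depth-first: follow one branch to the bottom before backtracking, using a stack.
    void depthFirst(Node* root) {
        std::stack<Node*> s;
        s.push(root);
        while (!s.empty()) {
            Node* n = s.top(); s.pop();
            std::cout << n->value << ' ';
            for (Node* c : n->children) s.push(c);
        }
    }

    int main() {
        Node d{4, {}}, e{5, {}}, c{3, {}};
        Node b{2, {&d, &e}};
        Node root{1, {&b, &c}};
        breadthFirst(&root); std::cout << '\n';   // 1 2 3 4 5
        depthFirst(&root);   std::cout << '\n';   // 1 3 2 5 4 (one valid depth-first order)
        return 0;
    }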

6.2.1: Merge and Insertion Sort Page Massachusetts Institute of Technology: Eric Grimson and John Guttag's "Divide and Conquer Methods, Merge Sort, and Exceptions"

You watched these videos earlier, but take some time to review the merge sort discussion at the beginning of the first lecture. Again, focus on recognizing abstraction, decomposition, and composition. Rewatch the second lecture, focusing on the bubble and selection sorts.

Page Khan Academy: "Sorting Algorithms"

The following lectures step through the programming life-cycle process for sorting. The programming process comprises requirements, design, implementation of the design in a programming language, and verification/validation. The last lecture adds maintenance, extending the process to a programming life-cycle process. Please realize that the stages of the process overlap and do not have to be performed sequentially.

6.2.2: Quick Sort Page Massachusetts Institute of Technology: Erik Demaine and Charles Leiserson's "Quicksort, Randomized Algorithms"

This lecture explains the details of how quick sort works; on average, it is about three times faster than merge sort. The lecture has three parts: the first third (approximately the first 20 minutes) gives the explanation -- watch that part now. Revisit the second and third parts of the lecture when you reach unit 6.2.5, later in our course.
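
For reference, a compact quick sort in C++ (our own sketch; the lecture's version differs in details such as pivot choice) partitions around a pivot and then recursively sorts the two sides:

    #include <iostream>
    #include <utility>
    #include <vector>

    // Partition around the last element; return the pivot's final position.
    int partition(std::vector<int>& a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; ++j)
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[i], a[hi]);
        return i;
    }

    // Quick sort: partition, then recursively sort the two sides.
    void quicksort(std::vector<int>& a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }

    int main() {
        std::vector<int> v = {5, 2, 9, 1, 7};
        quicksort(v, 0, (int)v.size() - 1);
        for (int x : v) std::cout << x << ' ';   // 1 2 5 7 9
        std::cout << '\n';
        return 0;
    }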

6.2.3: Radix Sort Page Wikipedia: "Radix Sort"

The radix sort does not compare the values of the elements to be sorted; it uses the digits (in some radix, e.g. base 2) of integer keys, sorting first on one digit (either the least significant or the most significant), then on the next digit, and so on, up to the last digit. This decomposes the problem into a series of smaller sorting problems, namely sorting all the values that have the same digit in the same radix position of the key. Read this article on the radix sort. Carefully study the discussion of efficiency and note that the complexity depends on the assumptions made regarding the primitive steps and the data structures used in a program.
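
A least-significant-digit radix sort in base 10 might look like the following C++ sketch (our own illustration; the article's pseudocode and complexity discussion are more general):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // LSD radix sort for non-negative integers, one pass per decimal digit.
    void radixSort(std::vector<int>& a) {
        int maxVal = 0;
        for (int x : a) maxVal = std::max(maxVal, x);

        for (int exp = 1; maxVal / exp > 0; exp *= 10) {
            std::vector<std::vector<int>> buckets(10);
            for (int x : a)
                buckets[(x / exp) % 10].push_back(x);   // group by the current digit
            a.clear();
            for (const auto& b : buckets)               // collect; the grouping is stable
                a.insert(a.end(), b.begin(), b.end());
        }
    }

    int main() {
        std::vector<int> v = {170, 45, 75, 90, 802, 24, 2, 66};
        radixSort(v);
        for (int x : v) std::cout << x << ' ';   // 2 24 45 66 75 90 170 802
        std::cout << '\n';
        return 0;
    }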

6.2.4: Analysis Page Massachusetts Institute of Technology: Eric Grimson and John Guttag's "Complexity; Log, Linear, Quadratic, Exponential Algorithms"

Understanding the complexity of an algorithm helps us decide whether or not we should use it as the design of a program to solve a problem. Complexity is usually measured in terms of the average number of steps in the computation of a program. The steps can be used to estimate an average bound, a lower bound, and an upper bound on the amount of time and the amount of storage space needed for the computation. The lecture explains the Big O concept and notation and, using recurrence relations, develops the Big O value for several types of computations. The steps of interest are the primitive steps of an algorithm and the operations that are intrinsic to the data structure used in the program implementation of the algorithm.

Big O estimates are usually associated with the algorithm -- its primitive operations and data structure. Algorithms are grouped into classes associated with linear, n log n, quadratic, and exponential Big O bounds: O(n), O(n log n), O(n^2), and O(2^n), respectively. For a given value of n, the lecturer gives the values of n, n log n, n^2, and 2^n to show the dramatic growth as n grows. If the problem size grows, an algorithm whose running time grows too fast, e.g. quadratically or exponentially, would not be a good choice for the design of a program to solve it.
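
As a rough worked illustration (our own numbers, rounded, not the lecturer's): for n = 1,000, an O(n) algorithm takes on the order of 1,000 steps, O(n log n) about 10,000, and O(n^2) about 1,000,000, while O(2^n) would take roughly 10^301 steps, far beyond what any computer could ever finish.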

Note that the details of complexity analysis require a strong mathematical background, and you should focus on the main ideas or 'big picture' first before delving into the details.

Page "Big O Notation"

The video gives a concise definition of Big O, which is popular for bounding the running time for search and sort algorithms. Additionally, if you feel comfortable with your math background, you should watch the second and third parts of the video. 

7.1: Generic Programming Folder Massachusetts Institute of Technology: Eunsuk Kang's "Introduction to C Memory Management and C++ Object-Oriented Programming"

Study the slides for lectures 5 and 6 for a good foundation discussion of generic programming concepts.

Polya, a famous mathematician, wrote "How to Solve It", which introduced generic techniques applicable to solving any problem. Since the objective of the programming process is a solution to a problem or task, perhaps the concepts in Polya's book are applicable. The programming process, as we have seen, includes requirements development (understanding the problem), design (formulating a plan or strategy), implementation in a programming language (completing the plan by providing details from previous or known solutions), and verification and validation (checking the solution). Given a specific problem, we can develop a specific solution (a specific program); or we could use abstraction to view the problem generically and develop a generic solution. Using generic functions and generic data structures, a generic design and generic implementation can be developed that will solve not just a specific problem, but a general class of problems. That is the idea of generic programming: reuse, on a large scale, of requirements, design, and implementation.

Templates and template libraries are one approach to generic programming: they provide functions and classes that apply to more than one type of data (polymorphism). For example, a generic search should work for data of any type, as long as the data is comparable.
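
A minimal C++ function template (our own sketch, not taken from the slides) makes the idea concrete: one definition of a search works for any element type that supports comparison with ==.

    #include <iostream>
    #include <string>
    #include <vector>

    // A generic (template) search: works for any type with operator==.
    template <typename T>
    int findIndex(const std::vector<T>& items, const T& key) {
        for (int i = 0; i < (int)items.size(); ++i)
            if (items[i] == key) return i;
        return -1;
    }

    int main() {
        std::vector<int> nums = {4, 8, 15};
        std::vector<std::string> words = {"generic", "reuse"};
        std::cout << findIndex(nums, 15) << '\n';                      // 2
        std::cout << findIndex(words, std::string("reuse")) << '\n';   // 1
        return 0;
    }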

7.2: Templates in C++ Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read this section for an introduction to generic programming in several languages, including Java and C++. It covers the standard template library, containers, algorithms, and templates, and mentions compile-time polymorphism vs. runtime polymorphism.

URL Massachusetts Institute of Technology: Jesse Dunietz, Geza Kovacs, and John Marrero's "Introduction to C++"

Explore the lecture notes and assignments for this short course, which discuss C++ templates, the Standard Template Library (STL), and operator overloading.
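
Tying the two topics together, here is a small class template that also overloads an operator (a sketch of our own, not from the MIT notes); the compiler generates a version of the class for each element type used.

    #include <iostream>

    // A class template: one definition, instantiated for many element types.
    template <typename T>
    class Pair {
        T first, second;
    public:
        Pair(T a, T b) : first(a), second(b) {}
        // Operator overloading: '+' combines two pairs element-wise.
        Pair operator+(const Pair& other) const {
            return Pair(first + other.first, second + other.second);
        }
        void print() const { std::cout << '(' << first << ", " << second << ")\n"; }
    };

    int main() {
        Pair<int> a(1, 2), b(3, 4);
        (a + b).print();                      // (4, 6)
        Pair<double> c(0.5, 1.5), d(2.0, 2.5);
        (c + d).print();                      // (2.5, 4)
        return 0;
    }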

7.3: Commonly-Used Classes Page Hobart and William Smith Colleges: David Eck's "Introduction to Programming Using Java"

Read section 10.2. In a class hierarchy, a base class is more generic than a class that extends or implements it; in particular, the methods of a base class are more generic. While generality is desirable, it may trade off against efficiency for certain problems. This section looks at several classes in the class hierarchy for lists and for sets, and at the additional features implemented in the subclasses that make them more efficient, depending on the problem to be solved.

Study Guides and Review Exercises Page Unit 1 Study Guide and Review: C++ and Java
Page Unit 2 Study Guide and Review: The Building Blocks of Object-Oriented Programming
Page Unit 3 Study Guide and Review: C++ Standard Template Library
Page Unit 4 Study Guide and Review: Exceptions
Page Unit 5 Study Guide and Review: Recursion
Page Unit 6 Study Guide and Review: Searching and Sorting
Page Unit 7 Study Guide and Review: Template Programming
Optional Course Evaluation Survey URL Optional Course Evaluation Survey

Please take a few moments to provide some feedback about this course. Consider completing the survey whether you have completed the course, are nearly at that point, or have come to study only one or a few units.

Your feedback will focus our efforts to continually improve our course design, content, technology, and general ease-of-use. Additionally, your input will be considered alongside our consulting professors' evaluation of the course during its next round of peer review. As always, please report urgent course experience concerns to contact@saylor.org and/or our discussion forums.