Topic | Name | Description |
---|---|---|
1.1: Programming Paradigms | This video addresses the basic nature of computers and speaks to what they are capable of. It emphasizes the fact that computers only do what humans program them to do. The video mentions the language Python, but any language would apply here, including Java and C++. Watch from 6:55-28:40. |
|
Read this introduction to functional programming, through Section 3.3.1. As you will see from the article's index, many languages support functional programming, including (although not mentioned in the article) C/C++. |
||
These slides review the object-oriented approach and relate it to earlier approaches. |
||
1.2: Generic Programming and Late-Definition of Data Types | Read this text, which discusses the basics of generic programming and relates it to different languages. |
|
As with any effort to simplify programming so that systems of greater complexity can be created more easily, there are issues to consider. |
||
1.3: Fundamental Concepts of Object-Oriented Programming | Object-Oriented (OO)
concepts of modularity, abstraction, composition, and hierarchy are very
powerful and require intense attention to detail. This chapter gives a
detailed presentation of OO as implemented by Java. The terminology in
this article is a little different; name-binding is called name-scope.
The chapter begins with an explanation of the data and procedures of a
class (called class variables and class methods). The class data can be
fixed for all objects, in which case it is called static. Static variables are common to all objects: there is only one copy, and thus only one value, stored in the class. A name (variable) is associated with a value. A Java class can have any number of variables, and each object (instance) of the class contains its own copy of those variables. In Java terminology, each object has a name, which is a pointer to the location of that instance of the class and its variables. The next detail to note is how objects are created and initialized (i.e., how values are assigned to their instance variables) via constructors. A consequence of the concepts of modularity and abstraction is reuse. In writing an OO program, we can use classes that have been written by others and are part of the language or contained in class libraries associated with the language. Recall that the generic computing paradigm consists of several stages: requirements, design, implementation, and validation. In software engineering, this is referred to as the programming process. Section 5.3 gives insight into writing programs using classes. What classes, what objects, and how they are related are questions of program design. Section 5.4 illustrates the design and implementation stages of the process. The later sections continue with the very important OO details of inheritance and polymorphism. These create a class hierarchy that enables code to be shared among classes and among similar, but different, objects. Whereas Java has single inheritance (a subclass can extend one superclass), C++ has multiple inheritance (a subclass can extend two or more superclasses). Java does, however, have a restricted kind of multiple inheritance called interfaces, where an interface can be implemented by two or more classes. As you read, think about how the concepts are connected. If you understand variable names, in particular the object names 'this' and 'super', the class name 'Object', and the dot naming convention (like ClassName.objectName.methodname), you should have a good understanding of the concepts and details presented in this article. |
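To make these ideas concrete, here is a minimal, self-contained Java sketch (my own illustration, not code from the article); the class, interface, and method names are invented. It shows a static variable shared by all objects, instance variables initialized by a constructor, single inheritance with 'super', and an interface used as a restricted form of multiple inheritance.

```java
// Hypothetical illustration of the concepts above; all names are invented.
interface Drawable {                 // an interface: a restricted form of multiple inheritance
    void draw();                     // any class may implement this
}

class Shape {
    static int count = 0;            // static: one copy shared by all Shape objects
    private String name;             // instance variable: each object has its own copy

    Shape(String name) {             // constructor initializes the instance variables
        this.name = name;            // 'this' refers to the object being constructed
        count++;                     // update the single, class-level copy
    }

    String getName() { return name; }
}

class Circle extends Shape implements Drawable {   // single inheritance plus an interface
    private double radius;

    Circle(String name, double radius) {
        super(name);                 // 'super' invokes the superclass constructor
        this.radius = radius;
    }

    public void draw() {             // fulfills the Drawable interface
        System.out.println("Circle " + getName() + " with radius " + radius);
    }
}

public class Demo {
    public static void main(String[] args) {
        Drawable d = new Circle("c1", 2.0);                     // polymorphism via the interface type
        d.draw();
        System.out.println("Shapes created: " + Shape.count);  // ClassName.variable for static data
    }
}
```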
|
1.3.1: Practice Using Classes and Objects | Read this text on the implementation of classes, methods, attributes, and driver programs. |
|
1.3.2: Practice Using Inheritance | Read this article, and then take a shot at an implementation using inheritance. The text gives a detailed walkthrough of one approach. |
|
1.4: Insights from Experienced Programmers | Paraphrasing this video's speaker, "Design patterns have been a part of the vocabulary of software design for quite a while. At the same time, the scope of systems design has moved broadly away from straightforward client-server technology. Systems and services are now expected by end users to be accessible from innumerable points including wearable or embedded devices through mobile phones, tablets, augmented or virtual reality systems, voice assistants, conversational chatbots, and others only barely conceived. As designers of the new range of experiences enabled by this technology, we require new ways to describe, to communicate, and to reason about these increasingly complex systems. This talk describes one such approach: an understanding of the craft of system design that takes its inspiration from functional programming, and from the nature of order as an earlier generation did from pattern languages". The speaker guides us in taking a broad high-level view of programming computers so that their operation is not limited to virtual reality but effective in the world in which we live. |
|
The speaker in this video notes that experienced coders can sometimes tell bad code on sight, not from any aspect of its inherent functionality, but from largely aesthetic characteristics. He also recognizes that writing code is often like writing good literature, if we want it to be understandable and useful. This video is a very interesting step from the philosophical to the practical. |
||
2.1: C++ Background | The C programming language is the procedural/modular precursor to C++, which adds object-orientation. Read this history of C to see where C fits into the 'larger picture' of computing history. The history of C is part of the lore of programming, Unix/Linux, and the Internet, which everyone who studies programming should know. |
|
This page discusses the development relationship of C and C++. |
||
This article provides an overview of the elements of C++; specifically, the 'C' portion of C++. Note how section 2.2 describes tokens as the "minimal chunks of a program". The root goal of programming is solving problems using the 'chunks' of a programming language. Of course, the chunks must be appropriate for the type of problems to be solved. Generally, smaller chunks are applicable to many types of tasks, but involve more effort; larger chunks involve less effort, but are designed for more specific tasks. |
||
Solving problems with programs is made easier if we can reuse the same or similar solutions that already exist. We do this using the 'chunks' provided by a language, as described in the previous resources. These sections describe the larger 'chunk' features of C++. Larger 'chunks' consist of programming statements used to write a program and to complete programs or portions of programs that reside in libraries. The section "Classes and Inheritance" explains and illustrates classes, which enable reuse of large sections of programming code. "Templates" explains and illustrates generic programming using templates. Focus specifically on the Introduction, Function Templates, and Class Templates. It also discusses the STL, the C++ Standard Template Library. Note that 'list' is a template in C++. |
||
This article provides an overview of the elements of C++; specifically, the 'C' portion of C++. |
||
C gradually evolved into C++, although C is still often used. It is
possible to mix the two in a single program. This article discusses the
compatibility of C and C++. Compatibility of two programming languages
refers to the extent to which a program written in one of the languages
can be used without modification in the other. Compatibility includes
both syntax (grammar) and semantics (the execution of grammatical
statements). C and C++ have a degree of upward-compatibility, but there
are differences since they are distinct languages that have evolved
separately. |
||
2.2: Java Background | Read this page to get the background of the Java programming language. Its history is most interesting. |
|
This page builds on the previous resource to delve deeper into Java itself, relating its use to its history. |
||
Read Chapter 1, which describes how a 'system' can solve many
types of problems. A 'system' consists of a computer (hardware
components that carry out machine language instructions), software
(programs written in a programming language, in particular Java), a
communications interface (that interconnects the computer to a worldwide
network of other computers), and an interface (that enables users to
access data from and run programs on many of the computers in the
network). While the operation of 'the system' applies to many programming languages, this chapter points out features of Java that improve the operation of the 'system', such as device independence via the Java Virtual Machine, OO, reusable class libraries (for user interfacing, event handling), network support, support for other technologies, and suitability for programming other devices. |
||
Unlike C/C++, Java is purely object-oriented. It is
essential that you grasp the relationship between computer languages and
object-orientation. |
||
In a class hierarchy, a base class is more generic than a class that implements it. In particular, functions or methods of a base class are more generic functions. While generality is desirable, it may be a trade-off with efficiency for certain problems. This section looks at several classes in the class hierarchy for lists and for sets, and the additional features implemented in the subclasses that make them more efficient depending on the problem to be solved. |
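As a small hedged illustration of that tradeoff (mine, not from the reading), the Java sketch below programs against the generic List base type while choosing a concrete implementation whose performance fits the problem: ArrayList for fast indexed access, LinkedList for fast insertion at the ends.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListChoice {
    // Written against the generic base type List, so it works with any implementation.
    static int sum(List<Integer> values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>();   // efficient random access by index
        List<Integer> b = new LinkedList<>();  // efficient insertion/removal at the ends
        for (int i = 0; i < 5; i++) { a.add(i); b.add(0, i); }
        System.out.println(sum(a) + " " + sum(b));   // 10 10
    }
}
```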
||
2.3: Comparing Java and C++ | When comparing two programming languages, consider their underlying concepts (goals, principles, model, paradigm), their syntax (grammar), their semantics (what tasks the language can instruct a computer to do), and what support (resources, libraries, tools, etc.) they provide. These considerations can be broken down into a list of specific features that are used to evaluate and compare the two languages. The table in this article describes the similarities and differences between Java and C++. |
|
This video illustrates the different operational processes (compiling
and linking) used in C++ and Java. Most of the video discusses the
processes for C++, because it is more complicated than that of Java. The
Java processes were described more thoroughly earlier in the course. |
||
3.1: History and Motivation | Read this page to gain insight into the history of the C++ Standard Template Library. You will see that it did not suddenly appear on the scene. Also, its ideas have been incorporated into other languages. In particular, its support of data-generic programming is important to the construction and flexibility of modern systems. |
|
3.2: Main Design Ideas | Paraphrasing the introduction, this video introduces the STL, a formal part of the modern C++ language standard. The speaker demonstrates many STL core features (iterators, algorithms, and data structures) and elaborates on technical details in a very substantive way. The STL is a C++ library of classes, algorithms, and iterators that provides many fundamental algorithms, data structures, and templates. |
|
Watch this video, which discusses the principles of the C++ STL (Standard Template Library) and describes abstraction as it pertains to algorithms and data. There are different kinds of abstraction. For hardware, we used part-whole abstraction to decompose a computer into its module hierarchy. In this lecture, you encounter part-whole abstraction for data, but one based on generality for algorithms – when an algorithm is a special case of another algorithm, the latter is more general than the former. The discourse leads to object-oriented and generic programming. STL is an organized collection of generic algorithms applicable to a more general collection of problems and tasks. STL templates include Containers, Iterators, Algorithms, and Functors. In this long, but illuminating video, the lecturer gives his insight into the development of STL, its benefits and limitations. Focus on the parts of the lecture that relate to abstraction and composition, and the process of creating programs. |
||
3.3: The Elements of C++ STL | Study these slides for a good foundation discussion of generic programming concepts. The examples are in C++ but the principles apply also to Java. The objective of the programming process is a solution to a problem or the completion of a task. The programming process itself includes requirements development (understanding the problem), design (formulating a plan or strategy), implementation in a programming language (completing the plan by providing details from previous or known solutions), and verification and validation (checking the solution). Given a specific problem, we can develop a specific solution (a specific program); or we can use abstraction to view the problem generically and develop a generic solution. Using generic functions and generic data structures, a generic design and generic implementation can be developed that will solve, not just a specific problem, but a general class of problems. That is the idea of generic programming: a form of large-scale reuse of requirements, design, and implementation. Libraries of templates and containers are one approach to generic programming. Another key capability is functions that apply to more than one type of data (polymorphism). For example, a generic search should work for searching data of any type, as long as the data is comparable. |
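As a sketch of that last idea (my own, written in Java rather than the slides' C++ to show that the principle carries over), a single generic search method can serve any element type that is comparable:

```java
public class GenericSearch {
    // Works for any reference type T whose values can be compared for order.
    static <T extends Comparable<T>> int indexOf(T[] items, T target) {
        for (int i = 0; i < items.length; i++) {
            if (items[i].compareTo(target) == 0) return i;   // found at index i
        }
        return -1;                                           // not found
    }

    public static void main(String[] args) {
        Integer[] nums = {4, 8, 15, 16};
        String[] words = {"ant", "bee", "cat"};
        System.out.println(indexOf(nums, 15));      // 2
        System.out.println(indexOf(words, "cat"));  // 2
    }
}
```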
|
Read this chapter, which covers iterators for groups of classes in contiguous memory. |
||
Read this article, which further discusses the STL. |
||
Read Chapter 2 for an introduction to basic STL classes and their application. |
||
Use this reference to get at the nuts and bolts of the C++ Standard Template Library. |
||
A functor is a name for a function object, which is an object that is called like a function. |
||
4.1: Introduction | The Java collections framework is a set of classes and interfaces that implement commonly reusable collection data structures. Although referred to as a framework, it works in the manner of a library. The collections framework provides both interfaces that define various collections and classes that implement them. This brief article is a good summary of this topic. |
|
4.2: Details | These videos introduce the concept of libraries and then discuss the JCL, while incorporating numerous examples. |
|
4.3: Further Examples | The extensiveness of the JCL can make it a challenge to decide which module is the "right" one to use. In fact, more than one module can often serve the same purpose, although one may be easier to apply than another. This video has a good discussion on module choice. |
|
This set of videos offers examples on the use of specific JCL modules. You will find the videos useful as you write your own programs. |
||
5.1 Introduction to Exceptions | Read this article, which introduces exceptions and relates those ideas to programming languages. |
|
This article addresses the systematic incorporation of exception handling into component-based systems. By "component-based", one can infer "object-oriented", since the use of libraries of classes, such as the STL and JCL, can be seen as the use of components, or building blocks, in constructing large-scale software systems. Read this article in its entirety to get a sense of how to put exception handling to good use. |
||
5.2: The Role of Exceptions | An error can be a syntax error, a runtime error, or a logic error. The language compiler or interpreter can detect syntax errors. A runtime error is an unexpected event involving incorrect semantics that interrupts the execution of an instruction. A logic error results in program behavior that does not satisfy the requirements: an incorrect solution. This page describes some semantic errors that various computer languages strive to prevent and some that some languages cannot detect. In the latter case, the software developer should build program features that either avoid or detect those errors. |
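The hypothetical Java fragment below (not taken from the page) contrasts a logic error with a runtime error; a syntax error, the third category, would simply stop the program from compiling.

```java
public class ErrorKinds {
    // Intended to return the average of the array's values.
    static double average(int[] data) {
        int sum = 0;
        for (int i = 0; i < data.length; i++) sum += data[i];
        // Logic error: dividing by a hard-coded 10 instead of data.length.
        // The program compiles and runs, but the answer is wrong.
        return sum / 10.0;
    }

    public static void main(String[] args) {
        int[] data = {2, 4, 6};
        System.out.println(average(data));   // prints 1.2 instead of 4.0 (logic error)
        System.out.println(data[3]);         // runtime error: ArrayIndexOutOfBoundsException
    }
}
```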
|
Read this section for an explanation of how to handle errors, both by writing correct programs and by writing program statements that take corrective action or prevent errors in the first place. Runtime errors and incorrect results when executing a program are addressed by detecting them and taking corrective action when they occur, or by preventing them from occurring in the first place. Corrective action and prevention depend on an understanding of the programming process and the identification of the root cause of errors. The programming process consists of several stages: specifying the requirements for the program, designing the program, implementing the program using a programming language, testing the program, and deploying the program for operation by users. Errors should be detected as early as possible in the programming process, and each process stage should have a verification activity: requirements, design, code, and even tests should be verified for correctness. Each stage should also be validated (that is, checked that it satisfies the requirements) to prevent errors. Although this page uses Java syntax for its examples, the same principles apply to any language, including C++. |
||
5.3.1: Exceptions in Java | This page illustrates the Java flow used to handle an exception. Java uses 'try' and 'catch' blocks for the code that handles the exception, and the 'throw' keyword in the method where the exception occurs. |
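A minimal Java sketch of that flow (an invented example, not the page's own code): a method detects a bad argument and throws an exception, and the caller handles it in a try/catch block instead of letting the program terminate.

```java
public class ExceptionFlow {
    // The exception is thrown where the problem is detected...
    static double divide(int a, int b) {
        if (b == 0) {
            throw new ArithmeticException("division by zero");
        }
        return (double) a / b;
    }

    public static void main(String[] args) {
        try {                                   // ...and handled where the caller chooses
            System.out.println(divide(10, 0));
        } catch (ArithmeticException e) {       // corrective action instead of termination
            System.out.println("Recovered from: " + e.getMessage());
        }
    }
}
```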
|
What exceptions to handle, where to place the exceptions handlers, the correspondence of 'try's to 'catches', the nesting of exception handlers, and which methods will have the throw keywords, are design decisions and are addressed in the design activity of the programming process. |
||
Read this section, which elaborates on exceptions in Java. |
||
This video gives a Java example of an exception handler using 'throw' and 'catch' blocks. It is a brief review of the previous discussion. An exception is a runtime error that interrupts the execution of a program statement. When an exception occurs, the runtime system transfers control either to a default runtime routine that handles the exception by printing a message and then terminating the program, or to a programmer-provided routine that handles the exception by executing instructions that take corrective action. |
||
5.3.2: Exceptions in C++ | Read this overview of the role of exceptions generated by the C++ library. This section also covers throw and try/catch concepts. |
|
This page might seem like it duplicates some of what we have just seen, but it is valuable because it gives a different perspective on the topic. Read chapter 1 on pages 15-60. |
||
6.1: What is Recursion? | Read this article to learn the basics of recursion. |
|
Recursion is a problem-solving technique for decomposition and recomposition. In the typical programming process, recursion applies to both design and coding. Recursive algorithms are used in designing, and recursive implementations are used in coding. Usually, recursion in design comes from recursive data structures because they are generic and apply to many types of problems or tasks. For example, trees are a widely useful data structure that are used in games, decision-making, analysis, and more. Recursive implementations tend to pertain to specific problems or tasks. However, if there are similar problems, a recursive implementation may be applicable to those similar problems. This video introduces recursion as a "divide and conquer" problem-solving technique, a design technique, and a programming technique. |
||
6.2: Recursive Structures | This video delves deeper into divide and conquer algorithms. Some of these algorithms are recursive. Recursion is a way of decomposing a problem into smaller or simpler problems of the same type. We have discussed the base subproblem and the inductive subproblem when applying recursion. A third consideration is composition – namely, how will simpler subproblems be composed into a solution to the larger problem? Sometimes the decomposition is easy but the composition is hard; sometimes the reverse is true. In this video, the lecturer uses complexity to compare algorithms – how does the number of steps in a solution to a problem grow as n grows, where n is the number of operations or data elements involved in the inductive step? Formally, that complexity approach is called Big 'O'. The lecturer presents two examples. The takeaway here is that designing and implementing algorithms involves tradeoffs: decomposition versus composition (as seen in the merge sort example), and tradeoff of storage space versus run-time (as seen in the hash function example). Most of the concepts you encounter in programming languages are related to reusing algorithms or designs and implementations. These are categorized by complexity; for example, the merge sort example has n log n complexity. This helps us decide their appropriateness for certain kinds of problems. The lecture closes by showing how you can enhance the examples using exceptions and assertions (pre- and post-conditions), which were covered in Unit 4 of our course. They help us handle different exceptions that might arise when reusing algorithms. |
|
6.3: Recursive Steps | This resource covers recursion in detail using Java and works through several examples. The steps of induction can be implemented using either recursion or loops; for a particular program, one implementation may be more effective or efficient than the other. Given a program that implements recursion, stepping through the program statement by statement helps to understand the details of how recursion works at run-time. The run-time system makes use of an internal stack that records the run-time states of a recursive function. Diagrams of the stack are given that show the changes in state of the method during run-time; that is, the pushing of the state when the method is called and the popping of the stack when the method has finished executing. Be sure to review the diagrams. |
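To mirror those stack diagrams, here is a small hedged Java example (mine, not the text's) with comments tracing what the run-time stack does for factorial(3).

```java
public class Factorial {
    // factorial(3) pushes frames for 3, 2, 1, 0, then pops them as each call returns:
    //   factorial(3) -> 3 * factorial(2)
    //   factorial(2) -> 2 * factorial(1)
    //   factorial(1) -> 1 * factorial(0)
    //   factorial(0) -> 1            (base case: the stack starts unwinding here)
    static long factorial(int n) {
        if (n == 0) return 1;          // base case
        return n * factorial(n - 1);   // inductive step: a new frame is pushed
    }

    public static void main(String[] args) {
        System.out.println(factorial(3));   // 6
    }
}
```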
|
The principles of recursion are the same, regardless of the language used for implementation. This chapter views the topic through the lens of C++. There are a fair number of examples and visualizations. Read through these at a minimum. You might find the exercises useful since the application of a principle is an aid to your understanding. Read Chapter 5 in this book on C++. |
||
6.4: Applying Recursion | These notes expand on the topic of recursion as a way to decompose problems into subproblems. They also apply to decomposing data structures into substructures. Identifying the recursive problem or data structure, and implementing it in a program, requires a thorough understanding of the programming process. Some implementation topics are pointed out, including reentrant code, recursion vs. iteration (loops), the relation of recursion to the functional and imperative programming paradigms, and common mistakes in programming recursion. |
|
Tail-recursion is a type of recursion where the last statement executed is the recursive call and, therefore, there are no instructions after that call. When the return from the recursive call is made, the calling function terminates, in which case the return address to the calling function is not necessary. Some compilers and languages take advantage of this: they do not save the return address back to the recursive calling function and instead translate tail recursion into machine instructions for a loop, which saves stack space and execution time. However, implementing source-level tail recursion as machine code for a loop is compiler- and/or language-dependent. |
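The hedged sketch below (my own) contrasts a tail-recursive sum with its loop equivalent. Note that the standard Java compiler and JVM do not perform this transformation automatically, so in Java the loop form is what you would write when stack depth matters.

```java
public class TailRecursion {
    // Tail-recursive: the recursive call is the very last action, so no work
    // remains in the caller after the call returns.
    static long sum(int n, long acc) {
        if (n == 0) return acc;
        return sum(n - 1, acc + n);    // tail call
    }

    // The loop a tail-call-optimizing compiler could generate instead.
    // (javac and the JVM do not do this automatically, so very deep recursion
    // can still overflow the stack in Java.)
    static long sumLoop(int n) {
        long acc = 0;
        while (n > 0) { acc += n; n--; }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sum(1000, 0));     // 500500
        System.out.println(sumLoop(1000));    // 500500
    }
}
```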
||
From an academic perspective, recursion seems rather elegant and often requires less code. Plus, many algorithms are easier to express recursively than explicitly (without recursion). There is much to recommend recursion. However, expressing an algorithm recursively is not always the right thing to do. For one thing, it is difficult to predict the amount of memory and time consumed by a recursive implementation. Many applications in industry require tight and known runtimes. Memory availability is limited as well. Every time a recursive call is made, more memory is occupied, which takes additional time. |
||
The author is convinced that, if you ever want to consider yourself an experienced programmer, you should try to understand cases where recursion is not an appropriate design. His article is intended to help you gain that understanding as quickly as possible. His examples are written in JavaScript. You will note the closeness of the syntax to Java. |
||
This video interprets recursion using stack visualization. Notice how memory use expands and contracts as the process evolves. |
||
It is worth examining various numeric sequences and their implementation via recursive and non-recursive approaches. |
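As one common illustration (a sketch of my own, not tied to any specific resource listed here), the Fibonacci sequence can be computed either recursively or with a loop; the naive recursive version recomputes subproblems and grows exponentially in time, while the loop runs in linear time.

```java
public class Fibonacci {
    // Recursive version: directly mirrors the mathematical recurrence,
    // but exponential in time because subproblems are recomputed.
    static long fibRec(int n) {
        if (n < 2) return n;
        return fibRec(n - 1) + fibRec(n - 2);
    }

    // Iterative version: linear time, constant extra space.
    static long fibIter(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fibRec(10) + " " + fibIter(10));   // 55 55
    }
}
```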
||
6.5.1: Examples in C/C++ | This video discusses the conversion of a mathematical theorem into a recursive process. That process is then explained step-by-step for given input values. |
|
There is a complete description of the coding process in this video. There is also a quick view of memory-pointer management within a recursive process. |
||
There is a complete description of the coding process in this video. |
||
6.5.2: Examples in Java | This video offers a discussion of the recursive processing of binary trees. A code walkthrough is included. The material begins at time marker 4:31. |
|
7.1: Search Algorithms | Sorting usually makes use of search. Watch these lectures on linear and binary search, and note how the use of sorting can also improve search performance, in some cases. These lectures make mention of Python code but that part can be ignored since the lectures stand alone. They explain the basics of the algorithm well and in such a way that the brief exposure of code is only ancillary. Another point: A lot of time is spent discussing list representation. Do not ignore that part of the discussion. How a list is represented makes a huge difference in search and sort performance. |
|
Assume we have a collection of data objects, such as telephone numbers, and that we need to find a particular phone number in that collection. We will need a data structure for storing the objects. One such data structure is a list. A list is a generic object and can be used for any type, a type built into our programming language or a programmer defined object. For example, we can have a list of integers or a list of telephone numbers. |
||
Read this page. In the previous unit of our course, we studied recursive algorithms. Recursion is a concept that also applies to data. Here we look at recursive data structures: lists, trees, and sets. A list is a structure that consists of elements linked together. If an element is linked to more than one element, the structure is a tree. If each element is linked to two (sub)elements, it is called a binary tree. Trees can be implemented using lists, as shown in the resource for this unit. Several examples of the wide applicability of lists are presented. A link points to all the remaining links, i.e., the rest of the list or the rest of the tree; thus, a link points to a list or to a tree. This is data recursion. The efficiency of the programming process includes both running time and size of data. This page discusses the latter for recursive lists and trees. Lastly, why read the last section on sets? Sets are another recursive data structure, and the last section, 2.7.6, indicates their connection with trees: a set data type can be implemented in several different ways using a list or a tree data type. Thus, the programming process includes implementation decisions in addition to design or algorithm decisions. Each of these types of decisions is constrained by the features of the programming language used. The decision choices, such as which data structure to use, will affect how efficiently and effectively the program satisfies its requirements. Note: You will notice an unusual use of C++ here; the author is showing how to pass a fixed-value data structure as a calling argument. |
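A minimal Java sketch of a recursive list (a hypothetical illustration of the idea; the page itself uses C++): each node holds one element plus a reference to "the rest of the list", and operations on the whole list recurse on that rest.

```java
public class RecursiveList {
    // A list is either empty (null) or an element followed by the rest of the list.
    static class Node {
        int value;
        Node rest;                       // the recursive part: a smaller list
        Node(int value, Node rest) { this.value = value; this.rest = rest; }
    }

    // Length of a list = 1 + length of the rest (0 for the empty list).
    static int length(Node list) {
        if (list == null) return 0;      // base case: empty list
        return 1 + length(list.rest);    // recurse on the rest of the list
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(length(list));   // 3
    }
}
```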
||
The use of a tree structure involves traversing or stepping through the elements or nodes of the tree. This page shows how to traverse a binary tree, which we can extend to trees having more than two descendants at each node. Many problems can be modeled by a tree. For example, in chess, the first move can be represented by the root or starting node of a tree, and the next move by an opponent player, by the descendant nodes of the root. This decomposition can continue for many levels. Thus, a level in the tree hierarchy represents the possible moves available to one player; and the next level, the possible moves of the opponent player. Each level represents the choices available to a given player. Traversing the tree works as follows: from a given start node, a player looks ahead at its descendant nodes (the possible moves); from each of these descendant nodes, the player looks ahead at their descendants (the possible responding moves of the opponent); and so on, continuing to look ahead (planning) to cover as many levels as feasible. Based on the look-ahead information (which gets better the further the look-ahead goes), the player chooses a descendant from the given start node. |
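Here is a short hedged Java sketch (not the page's code) of an in-order traversal of a binary tree; deeper "look-ahead" traversals such as the game-tree search described above follow the same recursive pattern.

```java
public class TreeTraversal {
    static class Node {
        int value;
        Node left, right;                    // at most two descendants per node
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // In-order traversal: visit the left subtree, then the node, then the right subtree.
    static void inOrder(Node n) {
        if (n == null) return;               // empty subtree: nothing to visit
        inOrder(n.left);
        System.out.print(n.value + " ");
        inOrder(n.right);
    }

    public static void main(String[] args) {
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        inOrder(root);                        // prints: 1 2 3
        System.out.println();
    }
}
```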
||
It is important to understand what search is and when it is appropriate. This page explains sequential and binary search, and their implementation. There is also the matter of hashing as a storage and search technique; in discussing it, the page introduces the unordered map and shows how to implement a map abstract data type using hashing. Read Sections 6.1-6.5. You may also find Sections 6.6-6.11 useful for practice. |
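For comparison with the reading (which uses the book's own C++ code), here is a minimal Java sketch of both sequential and binary search on a sorted array; the names are illustrative.

```java
public class Searching {
    // Sequential search: works on any array, O(n) comparisons.
    static int sequential(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }

    // Binary search: requires a sorted array, O(log n) comparisons.
    static int binary(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;       // unsigned shift avoids overflow for large indices
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 13, 21};
        System.out.println(sequential(sorted, 13) + " " + binary(sorted, 13));   // 3 3
    }
}
```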
||
7.2: Sorting Algorithms | This page explains and implements selection sort, bubble sort, merge sort, quick sort, insertion sort, and shell sort. |
|
This lecture explains the details of the working of quick sort, which is on average 3 times faster than merge sort. The coding syntax is very general, fit for any language. The video has 3 parts: the first 20 minutes approximately, or first third, gives the explanation – watch that part of the lecture. You should watch the rest of the lecture when you study Big-O analysis. |
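A rough Java sketch of the algorithm the lecture explains (my own, not the lecture's code, which is language-agnostic): quick sort partitions the array around a pivot and then recursively sorts the two sides.

```java
import java.util.Arrays;

public class QuickSortSketch {
    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;                 // zero or one element: already sorted
        int pivot = a[hi];                    // simple pivot choice: the last element
        int i = lo;
        for (int j = lo; j < hi; j++) {       // partition: smaller values move to the left
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);                       // place the pivot between the two parts
        quickSort(a, lo, i - 1);              // recursively sort each side
        quickSort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data));   // [1, 2, 5, 7, 9]
    }
}
```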
||
The radix sort does not compare values of the elements to be sorted; it uses the digits (in some radix, e.g., base 2) of integer keys and sorts the keys first on the first digit (either the least significant or most significant), then on the next digit, and so on, up to the last digit. This decomposes the problem into n smaller sorting problems, namely, sorting all the values that have the same digit in the same radix position of the key. Read this article on the radix sort. Carefully study the discussion of efficiency, and note that the complexity depends on the assumptions made regarding the primitive steps and the data structures used in a program. |
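A hedged Java sketch of one common variant, a least-significant-digit radix sort in base 10 for non-negative integers (illustrative only; the article's analysis covers the general case):

```java
import java.util.Arrays;

public class RadixSortSketch {
    // Least-significant-digit radix sort, base 10, non-negative keys only.
    static void radixSort(int[] a) {
        int max = 0;
        for (int v : a) max = Math.max(max, v);
        for (int exp = 1; max / exp > 0; exp *= 10) {          // one stable pass per digit
            int[] count = new int[10];
            int[] out = new int[a.length];
            for (int v : a) count[(v / exp) % 10]++;               // count each digit value
            for (int d = 1; d < 10; d++) count[d] += count[d - 1]; // prefix sums give positions
            for (int i = a.length - 1; i >= 0; i--) {              // stable placement
                int digit = (a[i] / exp) % 10;
                out[--count[digit]] = a[i];
            }
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }

    public static void main(String[] args) {
        int[] data = {170, 45, 75, 90, 802, 24, 2, 66};
        radixSort(data);
        System.out.println(Arrays.toString(data));   // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}
```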
||
7.3: Analyzing Program Efficiency | These videos on algorithm analysis are well thought out and presented. At issue is that the simple code used for illustration is written in Python, a modern industrial language but not a prerequisite for this course. This page gives code snippets that compare Python with C++. C++ syntax is sufficiently similar to Java that you will readily see the relationship. |
|
Understanding the complexity of an algorithm helps us decide whether or not we should use it as the design of a program to solve a problem. Complexity is usually measured in terms of the average number of steps in the computation of a program. The steps can be used to estimate an average bound, lower bound, and upper bound on the amount of time and the amount of storage space needed for the computation. The lecture explains the concept and notation of Big O and, using recurrence relations, develops the Big O value for several types of computations. The steps of interest are the primitive steps of an algorithm and the operations that are intrinsic to the data structure used in the program implementation of the algorithm. |
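As a small worked illustration of the recurrence-relation technique (this particular recurrence is the familiar merge-sort one, chosen by me; the lecture may use different examples):

$$
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c.
$$

Unrolling the recurrence gives about $\log_2 n$ levels, each contributing roughly $cn$ work, so

$$
T(n) \approx cn\log_2 n + cn = O(n \log n).
$$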
||
In this unit we continue our exploration of abstraction with respect to algorithm complexity. This lecture discusses the classification of algorithms according to how the performance of an algorithm grows relative to the size of the problem or task the algorithm solves or performs. Algorithms are classified by average run-time complexity, defined as the average number of steps the algorithm takes for a problem of size 'n'. Abstraction ignores the implementation of the algorithm and only considers the growth in the number of (primitive) steps an algorithm takes as the size of the problem grows. The video lecture introduces big 'O' notation and gives examples for linear, log, quadratic, and exponential complexity. The lecturer states that exponential complexity should be avoided in general. Although the examples are in C++, the same principles apply to algorithms in general, regardless of language. |
||
Course Feedback Survey |