Introduction to Programming Languages

Read this article to see how a program is edited, compiled (or assembled), linked, and executed on a computer.

There are different ways in which we can execute programs. Compilers, interpreters and virtual machines are some of the tools that we can use to accomplish this task. All these tools provide a way to realize the semantics of a program on hardware. Although these different technologies exist with the same core purpose - to execute programs - they do it in very different ways. They all have advantages and disadvantages, and in this chapter we will look more carefully into these trade-offs. Before we continue, one important point must be made: in principle any programming language can be compiled or interpreted. However, some execution strategies are more natural in some languages than in others.


Compiled Programs

Compilers are computer programs that translate a high-level programming language to a low-level programming language. The product of a compiler is an executable file, which is made of instructions encoded in a specific machine code. Hence, an executable program is specific to a type of computer architecture. Compilers designed for distinct programming languages might be quite different; nevertheless, they all tend to have the overall macro-architecture described in the figure below:


A compiler has a front end, which is the module in charge of transforming a program written in a high-level source language into an intermediate representation that the compiler will process in the next phases. It is in the front end that we have the parsing of the input program, as we have seen in the last two chapters. Some compilers, such as gcc, can parse several different input languages. In this case, the compiler has a different front end for each language that it can handle. A compiler also has a back end, which does code generation. If the compiler can target many different computer architectures, then it will have a different back end for each of them. Finally, compilers generally do some code optimization. In other words, they try to improve the program, given a particular criterion of efficiency, such as speed, space or energy consumption. In general, the optimizer is not allowed to change the semantics of the input program.

The main advantage of execution via compilation is speed. Because the source program is translated directly to machine code, this program will most likely be faster than if it were interpreted. Nevertheless, as we will see in the next section, it is still possible, although unlikely, that an interpreted program runs faster than its machine code equivalent. The main disadvantage of execution by compilation is portability. A compiled program targets a specific computer architecture and will not be able to run on different hardware.


The life cycle of a compiled program

A typical C program, compiled by gcc, for instance, will go through many transformations before being executed in hardware. This process is similar to a production line in which the output of one stage becomes the input to the next. In the end, the final product, an executable program, is generated. This long chain is usually invisible to the programmer. Nowadays, integrated development environments (IDEs) combine the several tools that are part of the compilation process into a single execution environment. However, to demonstrate how a compiler works, we will show the phases that a standard C file goes through when compiled with gcc. These phases, their products and some examples of tools are illustrated in the figure below.


The aim of the steps seen above is to translate a source file into code that a computer can run. First of all, the programmer uses a text editor to create a source file, which contains a program written in a high-level programming language. In this example, we are assuming C. Virtually any text editor can be used here. Some of them provide support in the form of syntax highlighting or an integrated debugger, for instance. Let's assume that we have just edited the following file, which we want to compile:

#define CUBE(x) (x)*(x)*(x)
int main() {
 int i = 0;
 int x = 2;
 int sum = 0;
 while (i++ < 100) {
  sum += CUBE(x);
 }
 printf("The sum is %d\n", sum);
}

After editing the C file, a preprocessor is used to expand the macros present in the source code. Macro expansion is a relatively simple task in C, but it can be quite complicated in languages such as Lisp, for instance, which take care to avoid typical problems of macro expansion, such as variable capture. During the expansion phase, the body of the macro replaces every occurrence of its name in the program's source code. We can invoke gcc's preprocessor via a command such as gcc -E f0.c -o f1.c. The result of preprocessing our example program is the code below. Notice that the call CUBE(x) has been replaced by the expression (x)*(x)*(x).

int main() {
 int i = 0;
 int x = 2;
 int sum = 0;
 while (i++ < 100) {
  sum += (x)*(x)*(x);
 }
 printf("The sum is %d\n", sum);
}
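
The parentheses in the definition of CUBE matter, because the expansion is purely textual. The program below is only a small sketch, not part of our compilation example, and BAD_CUBE is a hypothetical unparenthesized variant: it shows the kind of precedence surprise that textual expansion can cause, which is the C cousin of the hazards, such as variable capture, that Lisp-like macro systems take care to avoid.

#define CUBE(x) (x)*(x)*(x)  /* parenthesized, as in our example     */
#define BAD_CUBE(x) x*x*x    /* hypothetical unparenthesized variant */

int main() {
 int a = CUBE(1 + 1);     /* expands to (1+1)*(1+1)*(1+1), which is 8 */
 int b = BAD_CUBE(1 + 1); /* expands to 1+1*1+1*1+1, which is 4       */
 return a - b;            /* returns 4: the two macros disagree       */
}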

In the next phase we convert the source program into assembly code. This phase is what we normally call compilation: a text written in the C grammar will be converted into a program written in the x86 assembly grammar. It is during this step that we perform the parsing of the C program. In Linux we can translate the source file, e.g., f1.c, to assembly via the command cc1 f1.c -o f2.s, assuming that cc1 is the system's compiler. This command is equivalent to the call gcc -S f1.c -o f2.s. The assembly program can be seen on the left side of the figure below. This program is written in the assembly language used by the x86 architecture. There are many different computer architectures, such as ARM, PowerPC and Alpha. The assembly language produced for any of them would be rather different from the program below. For comparison purposes, we have printed the ARM version of the same program on the right side of the figure. These two assembly languages follow very different design philosophies: x86 uses a CISC instruction set, while ARM follows more closely the RISC approach. Nevertheless, both files, the x86's and the ARM's, have a similar syntactic skeleton. The assembly language has a linear structure: a program is a list-like sequence of instructions. On the other hand, the C language has a syntactic structure that looks more like a tree. Because of this syntactic gap, this phase contains the most complex translation step that the program will experience during its life cycle.

# Assembly of x86

 .cstring
LC0:
 .ascii "The sum is %d\12\0"
 .text
.globl _main
_main:
 pushl %ebp
 movl %esp, %ebp
 subl $40, %esp
 movl $0, -20(%ebp)
 movl $2, -16(%ebp)
 movl $0, -12(%ebp)
 jmp L2
L3:
 movl -16(%ebp), %eax
 imull -16(%ebp), %eax
 imull -16(%ebp), %eax
 addl %eax, -12(%ebp)
L2:
 cmpl $99, -20(%ebp)
 setle %al
 addl $1, -20(%ebp)
 testb %al, %al
 jne L3
 movl -12(%ebp), %eax
 movl %eax, 4(%esp)
 movl $LC0, (%esp)
 call _printf
 leave
 ret

# Assembly of ARM

_main:
@ BB#0:
 push {r7, lr}
 mov r7, sp
 sub sp, sp, #16
 mov r1, #2
 mov r0, #0
 str r0, [r7, #-4]
 str r0, [sp, #8]
 stm sp, {r0, r1}
 b LBB0_2
LBB0_1:
 ldr r0, [sp, #4]
 ldr r3, [sp]
 mul r1, r0, r0
 mla r2, r1, r0, r3
 str r2, [sp]
LBB0_2:
 ldr r0, [sp, #8]
 add r1, r0, #1
 cmp r0, #99
 str r1, [sp, #8]
 ble LBB0_1
@ BB#3:
 ldr r0, LCPI0_0
 ldr r1, [sp]
LPC0_0:
 add r0, pc, r0
 bl _printf
 ldr r0, [r7, #-4]
 mov sp, r7
 pop {r7, lr}
 mov pc, lr

It is during the translation from the high-level language to the assembly language that the compiler might apply code optimizations. These optimizations must obey the semantics of the source program: an optimized program should do the same thing as its original version. Nowadays, compilers are very good at changing the program in such a way that it becomes more efficient. For instance, a combination of two well-known optimizations, loop unwinding and constant propagation, can optimize our example program to the point that the loop is completely removed. As an example, we can run the optimizer using the following command, assuming again that cc1 is the default compiler that gcc uses: cc1 -O1 f1.c -o f2.opt.s. The final program that we produce this time, f2.opt.s, is surprisingly concise:

 .cstring
LC0:
 .ascii "The sum is %d\12\0"
 .text
.globl _main
_main:
 pushl %ebp
 movl %esp, %ebp
 subl $24, %esp
 movl $800, 4(%esp)
 movl $LC0, (%esp)
 call _printf
 leave
 ret
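
The constant $800 that is passed to printf is no accident: the loop adds 2*2*2 = 8 to sum exactly one hundred times, so sum is always 800, and the optimizer computed that value at compile time. The optimized assembly therefore behaves like the hand-simplified C program below, which is only a sketch of the equivalent source; gcc does not actually emit C as output.

#include <stdio.h>

int main() {
 /* constant propagation and loop unwinding reduce the whole
    loop to the compile-time constant 100 * (2*2*2) = 800 */
 printf("The sum is %d\n", 800);
}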

The next step in the compilation chain consists in the translation of the assembly language to binary code. The assembly program is still readable by people. The binary program, also called an object file, can, of course, be read by human beings, but there are not many human beings who are up to this task these days. Translating from assembly to binary code is a rather simple task, because both these languages have the same syntactic structure; only their lexical structure differs. Whereas the assembly file is written with ASCII mnemonics, the binary file contains sequences of zeros and ones that the hardware processor recognizes. A typical tool used in this phase is the as assembler. We can produce an object file with the command as f2.s -o f3.o.

The object file is not executable yet. It does not contain enough information to specify where to find the implementation of the printf function, for example. In the next step of the compilation process we change this file so that the addresses of functions defined in external libraries become visible. Each operating system provides programmers with a number of libraries that can be used together with the code that they create. A special program, the linker, can find the addresses of functions in these libraries, thus filling in the blank addresses in the object file. Different operating systems use different linkers. A typical tool, in this case, is ld or collect2. For instance, in order to produce the executable program on a Mac OS running Leopard, we can use the command collect2 -o f4.exe -lcrt1.10.5.o f3.o -lSystem.
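
To see why an object file is not executable on its own, consider the hypothetical two-file program sketched below; the file names and the cube function are ours, chosen only for illustration. When main.c is compiled, the resulting main.o contains a call to cube but only a placeholder where cube's address should be. It is the linker that fills in this placeholder when it combines main.o with cube.o, just as it fills in the address of printf from the C library.

/* cube.c, compiled separately into cube.o */
int cube(int x) {
 return x * x * x;
}

/* main.c, compiled separately into main.o */
#include <stdio.h>

int cube(int x); /* declaration only: the address is unknown until link time */

int main() {
 printf("%d\n", cube(2));
 return 0;
}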

At this point we almost have an executable file, but our linked binary program must undergo one last transformation before we can see its output. All the addresses in the binary code are relative. We must replace these addresses with absolute values, which point correctly to the targets of the function calls and other program objects. This last step is the responsibility of a program called the loader. The loader dumps an image of the program into memory and runs it.



Source: Wikibooks, https://en.wikibooks.org/wiki/Introduction_to_Programming_Languages/Compiled_Programs
Creative Commons License This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
