Scheduling in Manufacturing Systems: The Ant Colony Approach

Read this section. It makes the case that production systems are similar to ant colonies because ants continually learn from one another and find ways to become more efficient at the task at hand. As you read, pay special attention to the scheduling problems that arise.

3. Adaptation of the neural method to scheduling problems

3.2. Description of the algorithm

After the input data (the system specification) are entered, the algorithm constructs a neural network whose structure, and the number of neurons composing it, depend upon the size and complexity of the problem instance. We will call the part of the net allocated to a single task an 'area'.

Constraints are introduced into the network as connections, created only between neurons corresponding to values of variables that cannot occur simultaneously.

The algorithm operates by switching on appropriate neurons in each area of the network so as to satisfy the constraints imposed by the input data.

The algorithm proceeds as follows:

  1. Allocate random values to consecutive variables.
  2. Network relaxation:
     a. calculate the weighted sum of the inputs of all neurons;
     b. switch on the neuron with the highest input value;
     c. return to (a) or, if nothing has changed, exit the relaxation.
  3. If there are connections (constraints) between switched-on neurons, decrease each weight between two switched-on neurons by 1 and return to relaxation (step 2), as sketched in the code below.
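To make this cycle concrete, here is a minimal Python sketch of the relaxation and weight-decrease loop. The weight matrix `w`, the `areas` index lists and all function names are our own illustrative assumptions, not the paper's; for simplicity each area switches on a single winning neuron, whereas the algorithm described here switches on as many neurons as the task needs quanta.

    import numpy as np

    def relax(state, w, areas, max_sweeps=100):
        """Repeat local updates until no neuron changes (or the sweep cap is hit)."""
        for _ in range(max_sweeps):
            changed = False
            for area in areas:                      # one area per task
                inputs = w[area] @ state            # weighted input sums
                winner = area[int(np.argmax(inputs))]
                if not state[winner]:
                    state[area] = 0                 # switch the rest of the area off
                    state[winner] = 1
                    changed = True
            if not changed:                         # relaxation has settled
                break
        return state

    def solve(state, w, areas, connections, max_rounds=1000):
        """Alternate relaxation with weight decreases on violated constraints."""
        for _ in range(max_rounds):
            state = relax(state, w, areas)
            violated = [(a, b) for a, b in connections if state[a] and state[b]]
            if not violated:
                return state                        # every constraint is satisfied
            for a, b in violated:                   # strengthen each violated constraint
                w[a, b] -= 1.0
                w[b, a] -= 1.0
        return None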

The algorithm starts by assigning the weight −1 to all connections; then a start solution is generated by giving random values to the subsequent variables. This happens as follows: for each task, i.e. in each area of the net, as many neurons are switched on as are needed for that task to be completed; the remaining neurons in that area are switched off. The resulting solution contains many contradictions, indicated by switched-on neurons between which connections exist.

Therefore, the next step of the algorithm is the relaxation process, whose objective is to satisfy the maximum number of constraints. The goal is a state in which the number of pairs of switched-on neurons joined by a negative-weight connection is as low as possible. When switching on the neurons with the highest input value, three cases may occur in each area (sketched in code after the list):

  • If there is a single neuron with the highest value in the area, it is switched on; the remaining ones are switched off.
  • If there are several such neurons and one of them is already switched on, nothing changes and it remains switched on.
  • If there are several such neurons and none of them is switched on, one of them is switched on at random; the remaining ones are switched off.
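The selection rule for a single area can be sketched as follows; `inputs` holds the computed input sums for the area's neurons and `current` is the index of the neuron currently switched on there (or None). The names are ours, for illustration only.

    import random

    def select_winner(inputs, current):
        """Pick the neuron to switch on in one area, per the three cases above."""
        best = max(inputs)
        candidates = [i for i, v in enumerate(inputs) if v == best]
        if len(candidates) == 1:
            return candidates[0]          # a unique maximum wins outright
        if current in candidates:
            return current                # keep the previously switched-on neuron
        return random.choice(candidates)  # otherwise break the tie at random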

The relaxation process finishes when the subsequent step does not bring any change. If all the requirements are then met, i.e. no two connected neurons are switched on together, a feasible solution has been found. If that is not the case, the algorithm has reached a local minimum; the weight of each connection between two switched-on neurons is then decreased by 1, so its absolute value increases. This strengthens the 'interaction force' of the violated constraint, which decreases the chance of the same neurons being switched on again in the relaxation process, to which the algorithm returns in order to find a feasible solution.

After a certain number of iterations the network should satisfy all the constraints; provided a feasible solution exists, it should be found. One more point is worth noting: the relaxation process may enter a state in which changes keep occurring, so that it would never terminate. This is handled by interrupting the relaxation after a certain number of steps.

The search for a solution consists of two stages. In the first, described by the algorithm above, the steps performed lead to a feasible solution for the given specification. Once such a solution is found, the objective function is optimized: the value of a chosen criterion is changed (here, decreased) and the search for a feasible solution is repeated. Each subsequent search aims at a solution under tighter constraints, since the criterion value is sharper. Two criteria are considered. One is a cost function, where for a given time bound we search for the cheapest solution; the other is a time function, where for a given cost bound we search for the quickest solution. The run of the algorithm thus seeks solutions for smaller and smaller values of the selected criterion. If the algorithm cannot find a feasible solution for the most recently tightened criterion value, it returns to the previous value for which a feasible solution was found and modifies it by a smaller amount.

For instance, if the algorithm has found a feasible solution for a cost bound of 10 but cannot find one for a cost bound of 9, it tries to find a solution for cost 9.5, and so on. The program therefore never finishes on its own: it keeps trying to find a better solution in the sense of the chosen criterion. The user/designer of the system can interrupt its work at any moment, once the current solution given by the algorithm is considered satisfactory.
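A hedged sketch of this criterion-tightening loop is given below. `find_solution` stands in for the relaxation search under a given criterion bound; the paper's algorithm runs until the user interrupts it, so the tolerance-based stop here is our own addition.

    def optimize(find_solution, start_bound, step=1.0, tolerance=0.01):
        """Tighten the criterion bound after every feasible solution; back off on failure."""
        best_bound = start_bound          # last bound with a known feasible solution
        bound = start_bound - step
        while step >= tolerance:
            if find_solution(bound) is not None:
                best_bound = bound        # feasible: tighten the criterion further
                bound -= step
            else:
                step /= 2.0               # infeasible: retreat halfway toward best_bound
                bound = best_bound - step
        return best_bound

Starting from a feasible bound of 10 with step 1, a failure at 9 halves the step and retries at 9.5, matching the example above.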

When the time criterion is minimized, optimization proceeds on two levels. On the first, consecutive neurons on the right side of the task part of the network are connected to the always-on 'one' neuron; fewer and fewer quanta are thus available to the scheduling algorithm, which moves the critical line to the left and shortens the schedule. On the second, the individual time quantum itself is shrunk, so that at each step a single neuron represents a smaller and smaller span of time.
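A small sketch of the first mechanism, under an assumed representation in which weights are kept in a dictionary keyed by neuron pairs and `ONE` names the always-on neuron (both assumptions are ours):

    def tighten_critical_line(w, tasks, critical_line, ONE):
        """Move the critical line one quantum left by forbidding the rightmost column."""
        critical_line -= 1                        # one quantum fewer is now available
        for i in tasks:
            w[(ONE, (i, critical_line))] = -1.0   # the column on the line becomes contradictory
        return critical_line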

The task part:

Each area corresponds to one task (Fig. 3). For each area in turn, the best possible setting of the task is selected. Which setting 'wins' at a given stage in a given area is determined by the sum of the inputs of the neurons in the setting, i.e. the setting that introduces the smallest number of contradictions wins. Moreover, if the previously chosen setting is among the set of best settings found, it is kept.

A neuron at position [i, k] corresponds to the presence of task 'i' on a processor at moment 'k'. Between these neurons there are suitable inhibitory connections (−1.0).

If, for example, task 1 must be performed before task 2, there are inhibitory connections (denoting contradictions) between all neuron pairs [1, k], [2, m] with k >= m. If task 8 enters the system at moment 2, the 'one' neuron (a neuron whose output is permanently 1.0) is connected to neurons [8, 0] and [8, 1], making them permanently contradictory and guaranteeing that the final solution contains no quantum of task 8 at moment 0 or 1.

We also take critical lines into account; these are time limits that no allocated task may exceed. Connecting the 'one' neuron then applies to the neurons on the right side of the network beyond the critical line.
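The task-part constraints above can be wired as in the following illustrative sketch. The dictionary-of-pairs weight store, the (i, k) neuron naming, the horizon K and the ONE index are our own assumptions rather than notation from the paper.

    def add_precedence(w, i, j, K):
        """Task i before task j: inhibit every pair [i, k], [j, m] with k >= m."""
        for k in range(K):
            for m in range(k + 1):                # all m such that k >= m
                w[((i, k), (j, m))] = -1.0

    def add_release_time(w, i, t, ONE):
        """Task i enters the system at moment t: forbid all quanta before t."""
        for k in range(t):
            w[(ONE, (i, k))] = -1.0               # ONE is always on, so [i, k] is contradictory

    def add_critical_line(w, i, deadline, K, ONE):
        """No quantum of task i may lie on or beyond the critical line."""
        for k in range(deadline, K):
            w[(ONE, (i, k))] = -1.0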


Figure 3. The task part for scheduling problems.


Figure 4. The resource part for scheduling problems.

The resource part:

Before selecting the quanta positions in the areas, the algorithm has to calculate the inputs of all the neurons. The neurons of the resource part also contribute to these inputs, since the number of resources and the places remaining in them affect which setting is going to 'win' at a given stage of the computation. Thus, before the algorithm places a task, it calculates the input values of the neurons in the resource part. The neuron [r, i, k] is switched on if at moment 'k' the resource 'r' is overloaded (too many tasks are using it), or if it is not overloaded but placing a part of task 'i' at the moment defined by 'k' would result in overloading. Neurons of the resource part (Fig. 4) thus signal, through their connections, that the resource would be overloaded if a part of the task were placed at moment 'k'; therefore, the neurons of the resource part are connected to the inputs of the task part.

When the resource-part neuron "resource 'r' is overloaded if task 'i' is placed at moment 'k'" is switched on, its signal (1.0) is transferred through a weight of −1.0 to the corresponding neuron in the task part, producing a negative input impulse (−1.0 × 1.0) that registers as a contradiction.

In other words, it 'disturbs' the input-computing function ('compute_in'), discouraging it from placing the task at moment 'k'. In this way, successive illegal situations (constraints) are implemented in the network.

Each neuron [r, i, k] of the resource part is connected with neuron [i, k] of the task part, so the possibility of the task occupying a given moment while overloading a resource is 'inhibited'.
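A sketch of the resource-part test and its inhibitory contribution to the task part is given below; `usage[r][k]` (current load at moment k), `capacity[r]` and `demand[i][r]` are assumed bookkeeping structures that the paper does not name.

    def resource_neuron_on(usage, capacity, demand, r, i, k):
        """True iff neuron [r, i, k] is on: r is, or would become, overloaded at k."""
        if usage[r][k] > capacity[r]:
            return True                                   # already overloaded at moment k
        return usage[r][k] + demand[i][r] > capacity[r]   # placing task i would overload it

    def resource_penalty(usage, capacity, demand, resources, i, k):
        """Total -1.0 * 1.0 contribution to the input of task-part neuron [i, k]."""
        return -1.0 * sum(
            resource_neuron_on(usage, capacity, demand, r, i, k)
            for r in resources
        )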

Example:

Let us assume that there are five operations: A, B, C, D, E.

These operations should be allocated to a certain number of processors so that only one operation is performed on each processor at any given moment. The task part works as follows (a letter denotes a switched-on neuron, the sign '-' a switched-off one).

The algorithm allocates (at moment 0) the fragment DDDDDD; it adds a new processor (the first) and places the fragment on it:

DDDDDD-----------

Allocation -C: at this moment (1) there is no room on the first processor, so the algorithm adds the next processor and allocates the operation:

DDDDDD-----------

-C---------------

Allocation BBB: there is room on the second processor:

DDDDDD-----------

-C-BBB-----------

Allocation AAAA: there is no room at quantum 4, so the algorithm adds a third processor and allocates:

DDDDDD-----------

-C-BBB-----------

----AAAA---------

Allocation EEEEE: there is room on the first processor:

DDDDDD---EEEEE---

-C-BBB-----------

----AAAA---------

Allocation AA: there is room on the second processor:

DDDDDD---EEEEE---

-C-BBB----AA-----

----AAAA---------

Allocation BB: there is room on the second processor:

DDDDDD---EEEEE---

-C-BBB----AA-BB--

----AAAA---------

The resulting schedule on the processors is as follows:

P1:DDDDDD---EEEEE---

P2:-C-BBB----AA-BB--

P3:----AAAA---------
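The trace above follows a first-fit rule: each fragment goes on the first processor whose slots are free over the fragment's interval, and a new processor is opened when none fits. The sketch below reproduces the trace; the (label, start, length) fragment list is read off the example, and the first-fit reading itself is ours rather than stated by the paper.

    def first_fit(fragments, horizon=17):
        """Place each (label, start, length) fragment on the first processor that fits."""
        processors = []
        for label, start, length in fragments:
            span = range(start, start + length)
            for row in processors:
                if all(row[k] == '-' for k in span):
                    break                         # room found on an existing processor
            else:
                row = ['-'] * horizon             # no room anywhere: open a new processor
                processors.append(row)
            for k in span:
                row[k] = label
        return [''.join(row) for row in processors]

    schedule = first_fit([('D', 0, 6), ('C', 1, 1), ('B', 3, 3), ('A', 4, 4),
                          ('E', 9, 5), ('A', 10, 2), ('B', 13, 2)])
    for n, row in enumerate(schedule, 1):
        print(f"P{n}:{row}")                      # prints the P1-P3 rows shown above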

Computational complexity of the neural algorithm for task scheduling

The algorithm yields correct solutions for problems with known polynomial algorithms and may also be used for NP-complete problems. The complexity of one computation step may be estimated as follows:

i \cdot \left(1 + p \cdot k + k \cdot (1 + k \cdot p) \cdot m + k \cdot r \cdot i \cdot p\right) + p

Where:

  1. i – Number of tasks.
  2. p – Number of processors.
  3. k – Number of time quanta.
  4. m – Number of all precedence dependencies between tasks.
  5. r – Number of resources.
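For a sense of scale, consider a small hypothetical instance (values chosen purely for illustration): i = 10 tasks, p = 3 processors, k = 17 time quanta, m = 5 dependencies and r = 2 resources. Then

10 \cdot \left(1 + 3 \cdot 17 + 17 \cdot (1 + 17 \cdot 3) \cdot 5 + 17 \cdot 2 \cdot 10 \cdot 3\right) + 3 = 10 \cdot 5492 + 3 = 54923

elementary operations per computation step.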

The greatest growth in complexity comes from increasing the number of tasks and the number of time quanta. The maximum number of processors and the number of precedence dependencies in the input graph also have a strong effect on computation time. This is a pessimistic estimate; in practice the real complexity may be somewhat smaller, but proportional to it. The algorithm itself is convergent, i.e. it generates better and better solutions step by step.