Process Models in Design and Development

Read this article. It provides an overview of planning models. Pay particular attention to Figure 1, which gives a global visual overview of planning models. Then review Figures 2-17 for more in-depth visual representations of planning processes.

Macro-level MS/OR models

The final category of models concerns computational or mathematical studies of factors governing processes on the macro-level.

Fig. 17

The first group of models in this category considers the overlapping of two consecutive project stages or tasks. These models are classified as macro-level because they focus on managerial decisions without representing the numerous tasks in a process flow. Much work on this topic was inspired by Krishnan et al., who study how preliminary transfer of information from an upstream stage, such as product design, allows a downstream stage, such as production design, to be started early. Because it is only an estimate of the final value, the preliminary information will be subject to one or more updates, each of which causes downstream rework (Fig. 17). This is modelled using two curves: one defining the evolution of the upstream task's output towards its final value, and another defining how the downstream task's sensitivity to changes increases over time. Krishnan et al. develop optimal overlapping strategies considering the forms of the two curves.

Loch and Terwiesch further analyse the two-stage overlapping situation, focusing on the communication that enables overlapping. Their model assumes that holding meetings to communicate more frequently during the overlap period reduces the impact of iteration, because each change released by the upstream task requires more work to be redone the later it is dealt with, since more of the dependent work will have been completed; however, meetings also take time. Optimal policies for overlapping are derived algebraically under these assumptions. Joglekar et al. assume that each of the two overlapping tasks generates 'design performance' at a fixed rate while also reducing the performance generated by its partner, causing rework to regain the prior level. They show algebraically how the relative rates of performance generation and the coupling strength between the tasks determine the optimal overlap. Again focusing on two tasks, Roemer and Ahmadi investigate the relationship between overlapping and crashing, i.e., increasing work intensity to reduce duration at the expense of greater effort. They conclude that these approaches should be considered together and that the intensity of work should follow a particular pattern to minimise the rework caused by overlapping.

The models described above incorporate many simplifying assumptions that keep the algebra tractable. Other researchers study similar issues using Monte Carlo simulation, which allows more complex problems involving more factors and variables to be studied. For instance, the model developed by Bhuiyan et al. focuses on how sequentially dependent process phases can be overlapped to reduce development time at the risk of causing iteration at the phase exit review. They show that this risk can be mitigated by increasing the degree of interaction between engineering functions within each phase, although this causes more iteration within the phases.
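
The core trade-off in these overlapping models can be illustrated with a small Monte Carlo sketch in the spirit of the studies above; it is not any of the published formulations. Everything in it (the simulate_overlap function, the update_rate and impact parameters, and the linear rework rule) is an illustrative assumption: upstream changes arrive at random during the overlap window, and each one costs more rework the further the downstream stage has already progressed.

```python
import random

def simulate_overlap(overlap_fraction, upstream_dur=10.0, downstream_dur=10.0,
                     update_rate=0.5, impact=4.0, seed=0):
    """Monte Carlo sketch of overlapping two dependent stages.

    The downstream stage starts before the upstream stage finishes.
    Upstream design changes arrive at random during the overlap window;
    each change forces downstream rework that grows with the share of
    downstream work already completed (the 'sensitivity' idea).
    All parameter values and functional forms are illustrative assumptions.
    """
    rng = random.Random(seed)
    downstream_start = (1.0 - overlap_fraction) * upstream_dur
    rework = 0.0
    t = downstream_start
    while True:
        t += rng.expovariate(update_rate)   # time of the next upstream change
        if t >= upstream_dur:               # upstream finished, no more changes
            break
        progress = min((t - downstream_start) / downstream_dur, 1.0)
        rework += impact * progress         # later changes cost more rework
    return downstream_start + downstream_dur + rework  # project finish time

# Average finish time for a few overlap levels.
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    runs = [simulate_overlap(f, seed=s) for s in range(2000)]
    print(f"overlap={f:.2f}  mean finish={sum(runs) / len(runs):.2f}")
```

With these illustrative numbers, the mean finish time is shortest at around 50% overlap: less overlap forgoes time savings, while more overlap is largely consumed by rework.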

Second, some researchers take an MS/OR approach to analyse the situations in which different macro-level process structures are appropriate. For instance, Bhattacharya et al. study when a flexible process, in which the design specification evolves through repeated user feedback, is justified, considering that this may increase product attractiveness and thus sales, but leaves less time to optimise the design, which may result in higher production costs. Several factors that should influence the choice of process structure are studied, including market uncertainty, the firm's appetite for risk, and the value of information that can be gained from customer feedback.

Loch et al. consider whether testing of design alternatives should be done in parallel (as in set-based concurrent engineering, SBCE), allowing quick convergence to a solution, or sequentially, which allows learning from each test to inform the next in a process of iterative improvement. Their model shows that parallel testing is most useful if the cost of tests is low or the time required to complete each test is significant, and if the tests are effective in revealing information about the designs; a simple illustration of this trade-off is sketched below.

Suss and Thomson develop a discrete-event simulation model called the Collaborative Process Model (CoPM) that represents an engineering design process on three levels: a stage-gate structure; the activities and their interdependencies within each stage; and the actors or teams that carry out the activities. Among other insights, Suss and Thomson use their model to show that Scrum (an iterative and incremental development approach in which each iteration involves a short period of intense communication followed by a design review) is more effective than a traditional staged process in cases of high uncertainty within the process.
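
The parallel-versus-sequential testing trade-off studied by Loch et al. can be made concrete in the same way. The sketch below is not their model; the success probability p0, the learning increment, and the batch size are illustrative assumptions. It contrasts testing a batch of alternatives simultaneously (fast, but with no learning between tests) against testing one alternative at a time (slower, but each failure raises the odds that the next attempt succeeds).

```python
import random

def sequential(p0, learning, test_time, test_cost, rng, max_tests=20):
    """Test one design alternative at a time; each failed test teaches
    something, so the success probability rises for the next attempt."""
    p, time, cost = p0, 0.0, 0.0
    for _ in range(max_tests):
        time += test_time
        cost += test_cost
        if rng.random() < p:
            break                       # acceptable design found
        p = min(p + learning, 1.0)      # learning between tests
    return time, cost

def parallel(p0, batch, test_time, test_cost, rng, max_rounds=20):
    """Test a batch of alternatives simultaneously each round; faster,
    but no learning is possible between tests within a round."""
    time, cost = 0.0, 0.0
    for _ in range(max_rounds):
        time += test_time
        cost += batch * test_cost
        if any(rng.random() < p0 for _ in range(batch)):
            break                       # at least one alternative passed
    return time, cost

def average(strategy, runs=5000, **kwargs):
    """Average elapsed time and test cost over many simulated projects."""
    rng = random.Random(42)
    results = [strategy(rng=rng, **kwargs) for _ in range(runs)]
    mean_time = sum(t for t, _ in results) / runs
    mean_cost = sum(c for _, c in results) / runs
    return mean_time, mean_cost

for test_cost in (0.2, 5.0):            # cheap tests vs expensive tests
    seq = average(sequential, p0=0.3, learning=0.2, test_time=2.0,
                  test_cost=test_cost)
    par = average(parallel, p0=0.3, batch=6, test_time=2.0,
                  test_cost=test_cost)
    print(f"test_cost={test_cost}: sequential time/cost = {seq[0]:.1f}/{seq[1]:.1f}, "
          f"parallel time/cost = {par[0]:.1f}/{par[1]:.1f}")
```

With these numbers, parallel testing roughly halves the elapsed time but uses about three times as many tests, a premium that matters only when individual tests are expensive, which echoes the conclusion above.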