Results and discussion

Lessons learned

The development of our methodology and its application to the real-world retail business led to several lessons learned. From a business management perspective, we observe the following:

Modelling goals and defining drivers and KPIs (i.e., creating the cause-effect decision model) not only helps to document the known aspects of the business but also helps to clarify unknown factors that might be driving goal accomplishment. Validation of the model through interviews with decision makers ensures that the data and KPIs included are indeed relevant to the business. This can have a great impact, especially for small businesses whose initial goals might not have been clearly defined.

Even though modelling the indicators helps define the required information and the relationships between variables, we are still unsure about where to draw the line between the data that should be shown in the model and the data maintained in source systems (e.g., databases or BI reports). We still need to explore how to strike the appropriate balance, so that important information is not omitted from the model used for decision making while avoiding the inclusion of so much data that the decision-making environment becomes cluttered. We believe, however, that getting feedback on the right balance is facilitated by the use of graphical goal models with rapid evaluation feedback, as provided by GRL strategies, which give more insight into cause-effect relationships than conventional BI reports do. Note, however, that the goal-oriented view introduced here is complementary to what is currently found in BI tools, not a substitute.

Defining relationships between the model elements without historical data is difficult. In some cases, managers themselves are not aware of the linkages because they have not had the historical data available to create cause-effect models. In such cases, we first create the models using industry standards or "educated guesses" and then use successive iterations of the methodology steps to improve the cause-effect decision model.

The ability to adjust the range of acceptable values for a KPI is useful for registering risk. For example, one might establish a wide range of acceptable values for an objective that carries a high level of risk, such as expected sales for a new product. Conversely, lower-risk objectives, such as sales of well-established products, might have a narrower range of acceptable values.
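
As a minimal illustration (a Python sketch with hypothetical numbers, assuming the usual GRL convention of mapping a KPI's current value onto a [-100, 100] satisfaction scale via its target, threshold, and worst-case values), the same shortfall from the target is penalized far less under a wide, high-risk range than under a narrow, low-risk one:

    # Illustrative only: hypothetical values, not actual jUCMNav code.
    # Assumes target > threshold > worst (higher values are better).
    def kpi_satisfaction(current, target, threshold, worst):
        """Map a KPI value onto [-100, 100] by linear interpolation."""
        if current >= target:
            return 100.0
        if current <= worst:
            return -100.0
        if current >= threshold:
            return 100.0 * (current - threshold) / (target - threshold)
        return 100.0 * (current - threshold) / (threshold - worst)

    # Risky objective (new product): wide acceptable range.
    print(kpi_satisfaction(current=80, target=100, threshold=50, worst=0))   # 60.0
    # Established product: narrow range; the same 20-unit shortfall
    # from the target is penalized much more heavily.
    print(kpi_satisfaction(current=80, target=100, threshold=90, worst=70))  # -50.0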

The ability to have a cohesive notation and modelling environment that allow one to model the organization's goals, KPIs, decisions, situations, and processes, together with simple integration of the models with BI tools, can help with the introduction and adoption of the suggested approach.

From a technical point of view, we have learned that:

Our new extensions to GRL and the new formula-based algorithm provide a great deal of flexibility for model evaluation, especially when combined with standard goal satisfaction evaluation, hence offering the best of both worlds. However, our new algorithm still has room for improvement, especially when it comes to using other intentional elements (e.g., goals) as contributors to KPIs. We have gained limited experience with this idea by considering situations as inputs to KPIs, but this type of modelling may be useful in other circumstances that require further investigation. In addition, recent results on the use of constraint-based programming for solving the satisfaction problem in GRL models have the potential to allow optimizations to be computed and answers to be provided to questions such as "if I want to reach this KPI satisfaction level or current value, tell me what should be done in sub-KPIs given all the constraints present in the goal model".
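
To illustrate the kind of inverse query that such constraint-based approaches could answer, the following minimal sketch uses an off-the-shelf SMT solver (Z3) on a hypothetical two-KPI model; the KPI names, formula, and bounds are invented for illustration and are not taken from our case study or from the cited work:

    # Hypothetical model: find sub-KPI values that let the parent KPI reach a
    # desired level, subject to the constraints captured in the goal model.
    from z3 import Optimize, Real, sat

    online_sales = Real('online_sales')   # sub-KPI (k$)
    store_sales = Real('store_sales')     # sub-KPI (k$)
    total_sales = Real('total_sales')     # parent KPI, defined by a formula

    opt = Optimize()
    opt.add(total_sales == online_sales + store_sales)  # formula-based link
    opt.add(online_sales >= 0, online_sales <= 300)     # channel capacity
    opt.add(store_sales >= 400)                          # contractual minimum
    opt.add(total_sales >= 1000)                         # desired KPI level
    opt.minimize(online_sales)                           # prefer the cheaper channel
    if opt.check() == sat:
        m = opt.model()
        # One admissible answer, e.g., online_sales = 0 and store_sales = 1000.
        print(m[online_sales], m[store_sales])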

Creating different versions of a model in different iterations and keeping them consistent for comparison purposes can be painful with current tool support. Saving separate files for each version of the model quickly becomes a maintenance issue that requires a better technical solution. Very recent additions to jUCMNav, however, provide potentially useful features in this context, including URN model comparisons, strategy evaluation differences, contribution overrides, as well as the export of strategy results (e.g., for historical data) and the import of strategy definitions as Comma-Separated Value (CSV) files.
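
As a simple illustration of how such CSV exports could be exploited outside the tool, the short Python sketch below aggregates exported KPI evaluations across iterations for comparison; the file name and column layout are hypothetical and do not reflect the actual jUCMNav export format:

    # Illustrative post-processing of exported strategy results (hypothetical format).
    import csv
    from collections import defaultdict

    history = defaultdict(dict)  # KPI name -> {model iteration: evaluation}
    with open('strategy_results.csv', newline='') as f:
        for row in csv.DictReader(f):
            history[row['kpi']][row['iteration']] = float(row['evaluation'])

    for kpi, evaluations in sorted(history.items()):
        print(kpi, dict(sorted(evaluations.items())))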