Proactive Supply Chain Performance Management with Predictive Analytics
Predictive Supply Chain Performance Management Model
Performance
management of complex business networks such as supply chains requires a
unified approach that comprises different management models,
technologies, and tools. This section introduces an integrated supply
chain PM model which incorporates the supply chain modelling method and
business intelligence technologies such as data warehousing and data
mining. It is based on the integrated supply chain intelligence model
for collaborative planning, analysis, and monitoring.
The main elements and the structure of the supply chain PM model are shown in Figure 2.
Figure 2 Predictive supply chain performance management model.

The
basis of the model is the supply chain modelling method which enables
modelling of supply chain processes, relationships, metrics, best
practices, and other relevant elements. The output of this stage is a supply
chain process model that serves as input for data warehouse design.
First,
based on the process model, data from various sources is extracted,
cleaned, and transformed in order to accommodate requirements for KPI
design, multidimensional analysis, and data mining models. The following
step is construction of OLAP cubes with proper dimensions and measures.
The OLAP schema is the basis for the design of supply chain KPIs, which measure
progress toward predefined goals. A KPI typically provides a visual
representation of a metric over time, rather than just displaying the
numbers.
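The cube-construction step can be illustrated with a minimal sketch: fact rows are aggregated over two dimensions (month and supplier) into a measure, and a roll-up along one dimension yields an aggregated view. The field names (`month`, `supplier`, `cost`) and the values are hypothetical, not taken from the paper.

```python
from collections import defaultdict

# Illustrative fact rows after extraction and cleaning (names are hypothetical).
facts = [
    {"month": "2024-01", "supplier": "A", "cost": 120.0},
    {"month": "2024-01", "supplier": "B", "cost": 80.0},
    {"month": "2024-02", "supplier": "A", "cost": 150.0},
]

# Build a minimal "cube": one measure (total cost) over two dimensions.
cube = defaultdict(float)
for f in facts:
    cube[(f["month"], f["supplier"])] += f["cost"]

# Roll up along the supplier dimension to get total cost per month.
per_month = defaultdict(float)
for (month, _supplier), cost in cube.items():
    per_month[month] += cost

print(dict(per_month))  # {'2024-01': 200.0, '2024-02': 150.0}
```

A real OLAP server precomputes such aggregates across many dimension combinations; the sketch only shows the dimensional idea.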
The next step, which includes data mining, is the key step
toward predictive performance management. Here, historical performance
(KPI) data is used to make predictions about future performance. This
information is then delivered to decision makers via a special BI web
portal in the form of web reports, charts, scorecards, dashboards, or
notifications. Alternatively, prediction information can be used in
supply chain simulation models for analyzing different scenarios and
risks. Abukhousa et al. used simulation models as an analysis tool
for predicting the effects of changes to existing healthcare supply
chains and as a design tool to predict the performance of new systems
under varying sets of input parameters or conditions.
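As a stand-in for the data mining stage, the predictive step can be sketched with a simple least-squares trend fitted to a historical KPI series and extrapolated one period ahead. The series values are invented for illustration; a real deployment would use proper forecasting or mining models rather than a straight line.

```python
# Hypothetical monthly history of a KPI (e.g. perfect order fulfillment rate).
history = [0.91, 0.92, 0.90, 0.93, 0.94, 0.95]

# Ordinary least squares fit of value against period index.
n = len(history)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# Forecast the KPI for the next (so far unobserved) period.
forecast = intercept + slope * n
print(round(forecast, 3))
```

The forecast value would then feed the BI portal or a simulation scenario, as described above.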
The
final step is taking appropriate actions to resolve problems and make
adjustments to strategy and plans. These actions can be made based on
more detailed reporting, data exploration, specific expert systems,
simulation, and data mining models. One such intelligent software
solution for integrated and interactive supply network design,
evaluation, and improvement has been developed. It consists of three
modules designed for knowledge-based supply network modelling,
rule-based simulation executions, and intelligent assessment and
improvement.
The following subsections describe the main elements of the proposed predictive supply chain PM model.
Supply Chain Process Model
The
starting point is the SCOR process model, which provides a library of
supply-chain-specific processes, relationships, metrics, and
best practices. The SCOR process model contains a standard name for each
process element, a notation for the process element, a standard
definition for the process element, performance attributes that are
associated with the process element, metrics that are associated with
the performance attributes, and best practices that are associated with
the process.
All process metrics are aspects of a performance
attribute. The performance attributes for any given process are
characterized as either customer-facing (reliability, responsiveness,
and flexibility) or internal-facing (cost and assets).
Top-level
metrics are the calculations by which an implementing company can
measure how successful it is in achieving its desired positioning
within the competitive market space. Lower level calculations (levels 2
and 3 metrics) are generally associated with a narrower subset of
processes. For example, delivery performance is calculated as the total
number of products delivered on time and in full, based on the commit
date. Additionally, even lower-level metrics (diagnostics) are used to
diagnose variations in performance against plan. For example, a company
may wish to examine the correlation between the request date and the
commit date.
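The delivery performance example above can be made concrete with a small sketch: an order counts toward the metric only if it is delivered no later than the commit date and in full. The order records and field names are hypothetical.

```python
from datetime import date

# Hypothetical order lines (field names are illustrative).
orders = [
    {"commit": date(2024, 3, 1), "delivered": date(2024, 3, 1), "ordered": 100, "shipped": 100},
    {"commit": date(2024, 3, 5), "delivered": date(2024, 3, 7), "ordered": 50,  "shipped": 50},
    {"commit": date(2024, 3, 9), "delivered": date(2024, 3, 8), "ordered": 30,  "shipped": 25},
]

def delivery_performance(rows):
    """Share of orders delivered on time (by commit date) and in full."""
    hit = sum(1 for r in rows
              if r["delivered"] <= r["commit"] and r["shipped"] >= r["ordered"])
    return hit / len(rows)

# Only the first order is both on time and in full.
print(delivery_performance(orders))
```

A diagnostic metric would drill one level lower, e.g. correlating request dates with commit dates across the same rows.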
Each process from the process model has its
related metrics, best practices, and inputs and outputs. All metrics
follow the same template which consists of the following
elements:
(i) name,
(ii) definition,
(iii) hierarchical metric structure,
(iv) qualitative relationship description,
(v) quantitative relationship (optional, if calculable),
(vi) calculation,
(vii) data collection.
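The metric template above maps naturally onto a simple record type, sketched here as a dataclass whose fields mirror the seven elements; the sample instance uses the SCOR "Perfect Order Fulfillment" metric, with the field contents paraphrased for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricDefinition:
    """One metric described using the template above (illustrative sketch)."""
    name: str
    definition: str
    hierarchy: list                            # hierarchical metric structure (child metric names)
    qualitative_relationship: str
    quantitative_relationship: Optional[str]   # optional, only if calculable
    calculation: str                           # formula, here as an expression string
    data_collection: str                       # where the underlying data is gathered

pof = MetricDefinition(
    name="Perfect Order Fulfillment",
    definition="Percentage of orders delivered on time and in full, "
               "damage free, and with accurate documentation",
    hierarchy=["% Orders Delivered in Full",
               "Delivery Performance to Customer Commit Date"],
    qualitative_relationship="Driven by delivery, condition, and documentation sub-metrics",
    quantitative_relationship=None,  # not directly calculable from the sub-metrics alone
    calculation="perfect_orders / total_orders * 100",
    data_collection="Order management system",
)
print(pof.name)
```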
Based on the SCOR process model, we have created the
SCM metamodel (Figure 3), which enables the creation of any supply chain
configuration and is the basis for further modelling. The metamodel is
normalized and contains all SCM elements such as processes, metrics,
best practices, inputs, and outputs. It also incorporates business logic
through relationships, cardinality, and constraints.
Figure 3 SCM metamodel.

The metamodel
is extended with additional entities to support supply network
modelling. That way, processes, metrics, and best practices can be
related to a specific node and tier in the supply network. With this
metamodel, processes at different levels can be modelled, thus providing
a more detailed view of supply chain processes and metrics. The metamodel
database contains the standard SCOR metrics but also enables the definition
of custom metrics, as well as metrics at lower levels (i.e., Level 4,
workflows, or Level 5, transactions).
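The network extension of the metamodel can be sketched as follows: processes and their metrics are attached to a node at a given tier, which makes tier- or node-scoped queries straightforward. The node names and metric assignments are hypothetical; the process identifiers follow the SCOR naming style.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkNode:
    name: str
    tier: int  # e.g. 0 = focal company, 1 = direct suppliers, ...

@dataclass
class ProcessInstance:
    scor_process: str   # SCOR-style process name
    node: NetworkNode
    metrics: list       # metric names tracked for this process at this node

focal = NetworkNode("Manufacturer", tier=0)
supplier = NetworkNode("Component Supplier", tier=1)

instances = [
    ProcessInstance("sD1 Deliver Stocked Product", focal, ["Perfect Order Fulfillment"]),
    ProcessInstance("sS1 Source Stocked Product", supplier, ["Order Fulfillment Cycle Time"]),
]

# Query: which metrics are tracked at tier 1 of the network?
tier1_metrics = [m for p in instances if p.node.tier == 1 for m in p.metrics]
print(tier1_metrics)  # ['Order Fulfillment Cycle Time']
```

In the actual metamodel database these would be normalized tables with foreign keys and cardinality constraints rather than in-memory records.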
The developed SCM metamodel
enables flexible modelling and creation of different supply chain
configurations (models). These models are the basis for the construction
of data warehouse (DW) metadata (measures, dimensions, hierarchies, and
KPIs).
DW and OLAP KPI Modeling
A user who wants to retrieve information directly from a data source, such as an ERP database, faces several significant challenges.
(i) The contents of such data sources are frequently very hard to understand, being designed with systems and developers instead of users in mind.
(ii) Information of interest to the user is typically distributed among multiple heterogeneous data sources.
(iii) Whereas many data sources are oriented toward holding large quantities of transaction level detail, frequently the queries that support business decision making involve summary and aggregated information.
(iv) Business rules are generally not encapsulated
in the data sources. Users are left to make their own interpretation of
the data.
In order to overcome these problems, we have
constructed the unified dimensional model (UDM). The role of a UDM
is to provide a bridge between the user and the data sources. A UDM is
constructed over one or more physical data sources. The user issues
queries against the UDM using a variety of client tools.
Construction
of the UDM as an additional layer over the data sources offers a clearer
data model, isolation from the heterogeneous data platforms and formats,
and improved performance for aggregated queries. The UDM also allows
business rules to be embedded in the model. Another advantage of this
approach is that a UDM does not require a data warehouse or data mart. It is
possible to construct a UDM directly on top of OLTP (online
transaction processing) systems and to combine OLTP and DW systems
within a single UDM.
In the UDM it is possible to define cubes,
measures, dimensions, hierarchies, and other OLAP elements from the DW
schemas or directly from the relational database. This makes it possible
to deliver BI information to business users even without a previously built
DW, which can be very useful given that a supply chain may contain many
nonintegrated data sources that require time to connect, integrate, and
design into a data warehouse.
The flexibility
of the UDM is also manifested in the fact that tables and fields can be
given names and descriptions that are understandable to the end user,
while unnecessary system fields are hidden. This metadata is used
throughout the UDM, so all the measures, dimensions, and hierarchies
that are created from these table fields will use the new names.
Definitions
of all UDM elements are stored as XML (Extensible Markup Language)
files. Each data source, view, dimension, or cube definition is stored
in a separate XML file. For dimensions, these files contain data about
tables and fields which store dimension members. OLAP cube definition
files also contain information on how the preprocessed aggregates will
be managed. This metadata approach enables centralized management of the
dimensional model for the entire supply chain and provides an option
for model integration and metadata exchange.
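Storing a dimension definition as XML can be sketched with the standard library; the element and attribute names below (`Dimension`, `SourceTable`, `KeyField`, `NameField`) are illustrative, not the actual server schema.

```python
import xml.etree.ElementTree as ET

# Write a hypothetical dimension definition as an XML document.
dim = ET.Element("Dimension", name="Supplier")
ET.SubElement(dim, "SourceTable").text = "dim_supplier"
ET.SubElement(dim, "KeyField").text = "supplier_id"
ET.SubElement(dim, "NameField").text = "supplier_name"

xml_text = ET.tostring(dim, encoding="unicode")

# Reading the definition back shows why XML metadata supports centralized
# management and exchange across the supply chain.
parsed = ET.fromstring(xml_text)
print(parsed.get("name"), parsed.findtext("SourceTable"))  # Supplier dim_supplier
```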
Measures are one of
the basic UDM elements. Measures are the primary information that
business users require in order to make good decisions. Some of the
measures that can be used for the global supply chain analysis and
monitoring are as follows:
(i) reliability:
(1) perfect order fulfillment,
(ii) responsiveness:
(1) order fulfillment cycle time,
(iii) agility:
(1) upside supply chain adaptability,
(2) downside supply chain adaptability,
(3) overall value at risk,
(iv) cost:
(1) total cost to serve,
(v) asset management efficiency:
(1) return on supply chain fixed assets,
(2) return on working capital.
During the design, for each measure we need to define
the following properties:
(i) name of the measure,
(ii) what OLTP field or fields should be used to supply the data,
(iii) data type (money, integer, or decimal),
(iv) formula used to calculate the measure (if there is
one).
The next step is to cluster measures into measure groups.
The measure groups are an integral part of the UDM and the cube. Each
measure group in a cube corresponds to a table in the data source view.
This table is the measure group's source for its measure data.
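The measure-group structure can be sketched as a mapping in which each group carries its source table and its measure definitions (name, source field, data type, optional formula), mirroring the design properties listed above. All table, field, and group names here are hypothetical.

```python
# Illustrative measure groups; each group binds to exactly one source table.
measure_groups = {
    "Deliveries": {
        "source_table": "fact_delivery",
        "measures": [
            {"name": "Perfect Order Fulfillment",
             "field": "is_perfect",
             "type": "decimal",
             "formula": "SUM(is_perfect) / COUNT(*)"},
        ],
    },
    "Costs": {
        "source_table": "fact_cost",
        "measures": [
            {"name": "Total Cost to Serve",
             "field": "cost",
             "type": "money",
             "formula": None},  # plain sum, no derived formula
        ],
    },
}

# Each measure group corresponds to one table in the data source view.
for group, spec in measure_groups.items():
    print(group, "->", spec["source_table"])
```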
The supply
chain process model (built using the SCM metamodel) can be used as the
basis for defining measures and measure groups because it provides the
relationships between business processes and metrics, the metric
hierarchies, definitions, quantitative and qualitative descriptions, and
a description of the possible data sources that provide data for the
calculations.
Companies often define key performance indicators,
which are important metrics used to measure the health of the business.
An OLAP KPI is a server-side calculation meant to define a company's most
important metrics. These metrics, such as net profit, asset
utilization, or inventory turnover, are frequently used in dashboards or
other reporting tools for distribution at all levels throughout the
supply chain.
The UDM allows such KPIs to be defined, enabling a
much more understandable grouping and presentation of data. A key
performance indicator is a collection of calculations, associated
with a measure group in a cube, that is used to evaluate
business success. Typically, these calculations are a combination of
multidimensional expressions (MDX) or calculated members. KPIs also have
additional metadata that provides information about how client
applications should display the results of the KPI's calculations. The
use of OLAP-based KPIs allows client tools to present related measures
in a way that is much more readily understood by the user.
Table 1 lists common KPI elements and their definitions.
OLAP KPI structure.
Term | Definition |
Goal | An MDX numeric expression that returns the target value of the KPI. |
Value | An MDX numeric expression that returns the actual value of the KPI. |
Status | An MDX expression that represents the state of the KPI at a specified point in time. The status MDX expression should return a normalized value between −1 and 1. |
Trend | An MDX expression that evaluates the value of the KPI over time. The trend can be any time-based criterion that makes sense in a specific business context. |
Status indicator | A visual element that provides a quick indication of the status for a KPI. The display of the element is determined by the value of the status MDX expression. |
Trend indicator | A visual element that provides a quick indication of the trend for a KPI. The display of the element is determined by the value of the trend MDX expression. |
Display folder | The folder in which the KPI will appear to the user when browsing the cube. |
Parent KPI | A reference to an existing KPI that uses the value of the child KPI as part of the KPI's computation. |
Current time member | An MDX expression that returns the member that identifies the temporal context of the KPI. |
Weight | An MDX numeric expression that assigns a relative importance to a KPI. If the KPI is assigned to a parent KPI, the weight is used to proportionally adjust the results of the KPI value when calculating the value of the parent KPI. |
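The interplay of the table's elements (value, goal, status, trend, weight, parent KPI) can be sketched in plain code. In a real system these are MDX expressions evaluated by the OLAP server; the linear normalization of status into [−1, 1] below is one simple choice, not the server's prescribed behavior, and all numbers are invented.

```python
def status(value, goal):
    """Normalize value relative to goal into [-1, 1]; 1 means at or above goal."""
    return max(-1.0, min(1.0, 2.0 * value / goal - 1.0))

def trend(history):
    """+1 improving, -1 declining, 0 flat, based on the last two periods."""
    if len(history) < 2 or history[-1] == history[-2]:
        return 0
    return 1 if history[-1] > history[-2] else -1

# Child KPIs whose weighted statuses roll up into a parent KPI.
children = [
    {"value": 0.95, "goal": 0.98, "weight": 2.0, "history": [0.93, 0.95]},
    {"value": 0.80, "goal": 0.90, "weight": 1.0, "history": [0.85, 0.80]},
]

total_weight = sum(c["weight"] for c in children)
parent_status = sum(status(c["value"], c["goal"]) * c["weight"]
                    for c in children) / total_weight

print(round(parent_status, 3),
      trend(children[0]["history"]),
      trend(children[1]["history"]))
```

Client tools would render the status and trend values through the status and trend indicators described in the table.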
Figure 4 shows a part of the return on supply chain fixed assets KPI defined in the OLAP server.
Figure 4 MDX KPI definition.

Besides
the aforementioned benefits, the UDM also serves as a basis for
data mining model design because it provides a consolidated,
integrated, aggregated, and preprocessed data store.