Optimal Policies for a Finite-Horizon Production Inventory Model


Technical Preliminaries

This section summarizes the results of Benkherouf and Gilding that are needed to tackle the problem considered in this paper. Proofs of the results are omitted; interested readers may consult Benkherouf and Gilding.

Consider the problem

\mathbf{P}: \operatorname{TC}\left(t_{1}, \ldots, t_{n} ; n\right)=n K+\sum_{i=1}^{n} R_{i}\left(t_{i-1}, t_{i}\right),   (3.1)

subject to

0=t_{0}<t_{1}<\cdots<t_{n}=H.   (3.2)


It was shown in Benkherouf and Gilding that, under some technical conditions, the optimization problem (P) has a unique optimal solution, which can be found by solving a system of nonlinear equations derived from the first-order optimality conditions. To be precise, set t_{0}=0 and t_{n}=H, and ignore the remaining constraints in (3.2).
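
For illustration, the structure of (P) can be made concrete in a few lines of Python; the cost function R(i, x, y), the setup cost K, and the numbers in the usage example are placeholders, not quantities from the model analysed here.

def total_cost(t, K, R):
    """Objective (3.1): t = [t_0, t_1, ..., t_n] with t_0 = 0 and t_n = H, per (3.2);
    R(i, x, y) stands for the cycle cost R_i(x, y) and K is the fixed setup cost."""
    n = len(t) - 1
    return n * K + sum(R(i, t[i - 1], t[i]) for i in range(1, n + 1))

# Usage with a purely hypothetical cycle cost R_i(x, y) = (y - x)**2:
# total_cost([0.0, 0.3, 0.7, 1.0], K=5.0, R=lambda i, x, y: (y - x) ** 2)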

Write

S_{n}:=\sum_{i=1}^{n} R_{i}\left(t_{i-1}, t_{i}\right).   (3.3)


Assuming that the R_{i} are twice differentiable, then, for fixed n, solving (P) subject to (3.2) reduces to minimizing S_{n}.

Using the notation \nabla for the gradient, setting \nabla \mathrm{TC}\left(t_{1}, \ldots, t_{n} ; n\right)=0 gives

(\nabla \mathrm{TC})_{i}=\left(\partial R_{i}\right)_{y}\left(t_{i-1}, t_{i}\right)+\left(\partial R_{i+1}\right)_{x}\left(t_{i}, t_{i+1}\right)=0, \quad i=1, \ldots, n-1.   (3.4)
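
As a purely illustrative sketch, the left-hand sides of (3.4) can be evaluated numerically for a candidate set of breakpoints; here R(i, x, y) is again a hypothetical stand-in for R_i, and the partial derivatives are approximated by central differences.

def stationarity_residuals(t, R, h=1e-6):
    """Left-hand sides of (3.4) at the interior points t_1, ..., t_{n-1};
    t = [t_0, ..., t_n] and R(i, x, y) is a hypothetical cycle cost."""
    def dR_dx(i, x, y):
        return (R(i, x + h, y) - R(i, x - h, y)) / (2 * h)
    def dR_dy(i, x, y):
        return (R(i, x, y + h) - R(i, x, y - h)) / (2 * h)
    n = len(t) - 1
    return [dR_dy(i, t[i - 1], t[i]) + dR_dx(i + 1, t[i], t[i + 1]) for i in range(1, n)]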


Two sets of hypotheses were put forward.

Hypothesis 1. The functions R_{i} satisfy, for i=1, \ldots, n and y>x,

(1) R_{i}(x, y)>0,

(2) R_{i}(x, x)=0,

(3) \left(\partial R_{i}\right)_{x}(x, y)<0<\left(\partial R_{i}\right)_{y}(x, y),

(4) \left(\partial_{x} \partial_{y} R_{i}\right)(x, y)<0.
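
These sign conditions can be spot-checked numerically for a candidate cost function. The sketch below is only a sanity check at sample points, not a verification of the hypothesis; R(x, y) stands for a single hypothetical R_i.

def check_hypothesis_1(R, points, h=1e-5, tol=1e-8):
    """Spot-check conditions (1)-(4) at sample points (x, y) with y > x,
    using central differences for the partial derivatives."""
    ok = True
    for (x, y) in points:
        dRx = (R(x + h, y) - R(x - h, y)) / (2 * h)
        dRy = (R(x, y + h) - R(x, y - h)) / (2 * h)
        dRxy = (R(x + h, y + h) - R(x + h, y - h)
                - R(x - h, y + h) + R(x - h, y - h)) / (4 * h * h)
        ok = ok and R(x, y) > 0            # condition (1)
        ok = ok and abs(R(x, x)) < tol     # condition (2)
        ok = ok and dRx < 0 < dRy          # condition (3)
        ok = ok and dRxy < 0               # condition (4)
    return ok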


Hypothesis 2. Define

\begin{aligned}
&\mathscr{L}_{x} z=\partial_{x}^{2} z+\partial_{x} \partial_{y} z+f(x) \partial_{x} z \\
&\mathscr{L}_{y} z=\partial_{y}^{2} z+\partial_{x} \partial_{y} z+f(y) \partial_{y} z
\end{aligned}   (3.5)


then there is a continuous function f such that \mathscr{L}_{x} R_{i} \geq 0, \mathscr{L}_{y} R_{i} \geq 0 for all i=1, \ldots, n, and \left(\partial R_{i}\right)_{y}+\left(\partial R_{i+1}\right)_{x}=0 on the boundary of the feasible set.
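
To make the definitions in (3.5) explicit, the operators can be formed symbolically; the sketch below uses SymPy with a purely hypothetical cost R and leaves the function f unspecified.

import sympy as sp

x, y = sp.symbols('x y')
R = (y - x) ** 2        # purely hypothetical cycle cost
f = sp.Function('f')    # the continuous function f of Hypothesis 2, left unspecified

L_x = sp.diff(R, x, 2) + sp.diff(R, x, y) + f(x) * sp.diff(R, x)   # \mathscr{L}_x R
L_y = sp.diff(R, y, 2) + sp.diff(R, x, y) + f(y) * sp.diff(R, y)   # \mathscr{L}_y R
print(sp.simplify(L_x), sp.simplify(L_y))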


The next theorem shows that, under the assumptions in Hypotheses 1 and 2, the function S_{n} has a unique minimum.

Theorem 3.1. The system (3.4) has a unique solution subject to (3.2). Furthermore, this solution is the optimal solution of (3.1) subject to (3.2).

Recall that a sequence S_{n} is convex in n if

S_{n+2}-S_{n+1} \geq S_{n+1}-S_{n}.   (3.6)

This is equivalent to

\frac{1}{2}\left(S_{n}+S_{n+2}\right) \geq S_{n+1}.   (3.7)


Theorem 3.2. If s_{n} denotes the minimum objective value of (3.1) subject to (3.2) and R_{i}(x, y)=R(x, y), then s_{n} is convex in n.

Based on the convexity of s_{n}, the optimal number of cycles n^{*} is given by

n^{*}=\min \left\{n \geq 1: s_{n+1}-s_{n}>0\right\} .   (3.8)
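
Assuming a routine s(n) that returns the minimum cost s_n for a given n (for instance, via the univariate search described at the end of this section), the rule (3.8) amounts to a simple forward scan, as in the sketch below.

def optimal_number_of_cycles(s, n_max=1000):
    """Stopping rule (3.8): return the first n >= 1 with s(n+1) - s(n) > 0.
    s(n) is assumed to return the minimum objective value s_n."""
    for n in range(1, n_max):
        if s(n + 1) - s(n) > 0:
            return n
    return n_max  # safeguard against no sign change within the scan limit

Convexity of s_n guarantees that the first n at which the difference turns positive is indeed optimal.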


Now, to solve (3.4), start with i=n-1:

\left(\partial R_{n-1}\right)_{y}\left(t_{n-2}, t_{n-1}\right)+\left(\partial R_{n}\right)_{x}\left(t_{n-1}, H\right)=0.   (3.9)

Assuming that t_{n-1} is known, t_{n-2} can be found uniquely as a function of t_{n-1}. Repeating this process for i=n-2, \ldots, 1 yields t_{n-3}, \ldots, t_{0} as functions of t_{n-1}. The remaining requirement t_{0}=0 then determines t_{n-1}, so the search for the optimal solution of (3.4) can be conducted using a univariate search method.
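
A minimal sketch of this procedure is given below, assuming SciPy's brentq root finder, a hypothetical cost function R(i, x, y), and central-difference partial derivatives; the root-finding brackets are assumptions and would need problem-specific care in practice.

from scipy.optimize import brentq

def optimal_breakpoints(n, H, R, h=1e-6, eps=1e-9):
    """Univariate search for the solution of (3.4) with n >= 2 cycles:
    guess t_{n-1}, recover t_{n-2}, ..., t_0 by the backward recursion,
    then adjust t_{n-1} until the recursion returns t_0 = 0."""

    def dR_dx(i, x, y):
        return (R(i, x + h, y) - R(i, x - h, y)) / (2 * h)

    def dR_dy(i, x, y):
        return (R(i, x, y + h) - R(i, x, y - h)) / (2 * h)

    def backward(t_last):
        t = [0.0] * (n + 1)
        t[n], t[n - 1] = H, t_last
        for i in range(n - 1, 0, -1):
            # Solve (3.4) at index i for t_{i-1}, with t_i and t_{i+1} already known.
            g = lambda s: dR_dy(i, s, t[i]) + dR_dx(i + 1, t[i], t[i + 1])
            t[i - 1] = brentq(g, -H, t[i] - eps)  # bracket is an assumption
        return t

    # Choose t_{n-1} in (0, H) so that the recursion ends exactly at t_0 = 0.
    t_last = brentq(lambda s: backward(s)[0], eps, H - eps)  # bracket is an assumption
    return backward(t_last)

Each inner call solves one equation of (3.4) for t_{i-1}; the outer call adjusts t_{n-1} until t_{0}=0, which is exactly the univariate search described above.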