Module for Linear Programming - The Simplex Method

 

Background for Linear Programming

    Linear programming is an area of linear algebra in which the goal is to maximize or minimize a linear function  f  of  n  variables  x_1, x_2, ..., x_n  on a region  R  whose boundary is defined by linear inequalities and equations.  In this context, when we speak of a "linear function of n variables" we mean that  f  has the form
    
        
f(x_1, x_2, ..., x_n) = c_1 x_1 + c_2 x_2 + ... + c_n x_n.

The region  R  is a convex polytope.  If all the vertices of  R  are known, then the maximum of  f  will occur at one of these vertices.  

    The solution can be constructed using the simplex method, which is attributed to George Dantzig (1914-2005), who was born in Portland, Oregon.  The simplex method starts at the origin and follows a path along the edges of the polytope to the vertex where the maximum occurs.  The history of the development of the simplex method is summarized in the article "An Interview with George B. Dantzig: The Father of Linear Programming" by Donald J. Albers and Constance Reid, The College Mathematics Journal, Vol. 17, No. 4 (Sep. 1986), pp. 292-314, JSTOR.  

 

Definition (Convex Polytope).  In two dimensions a convex polytope is a region that is the intersection of a finite set of half-planes (the general idea of a convex polygon).  In three dimensions a convex polytope is a solid region that is the intersection of a finite set of half-spaces (the generalized idea of a convex polyhedron).  The generalization to n dimensions is called a polytope.

 

Example 1.  Two students, Ann and Carl, work x and y hours per week, respectively.  Together they can work at most 40 hours per week.  According to the rules for part-timers, Ann can work at most 8 hours more than Carl, but Carl can work at most 6 hours more than Ann.  There is an extra constraint  [Graphics:Images/LinearProgrammingMod_gr_11.gif].  Determine the region  R  for these constraints.
1 (a).  If Ann and Carl earn $15 and $17 per hour, respectively, then find their maximum combined income per week.
1 (b).  If Ann and Carl earn $17 and $15 per hour, respectively, then find their maximum combined income per week.
1 (c).  If Ann and Carl both earn $16 per hour, then find their maximum combined income per week.

Solution 1 (a).

Solution 1 (b).

Solution 1 (c).
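
For readers following along in Mathematica, the three parts can be checked against the built-in Maximize command.  The sketch below is written for this discussion and is not one of the module's subroutines; it encodes only the three constraints stated in the problem together with x ≥ 0 and y ≥ 0 (the extra constraint, which appears only as an image above, is omitted).

    Maximize[{15 x + 17 y, x + y <= 40 && x <= y + 8 && y <= x + 6 && x >= 0 && y >= 0}, {x, y}]   (* part 1 (a) *)
    Maximize[{17 x + 15 y, x + y <= 40 && x <= y + 8 && y <= x + 6 && x >= 0 && y >= 0}, {x, y}]   (* part 1 (b) *)
    Maximize[{16 x + 16 y, x + y <= 40 && x <= y + 8 && y <= x + 6 && x >= 0 && y >= 0}, {x, y}]   (* part 1 (c) *)

Each call returns the maximum combined weekly income together with rules giving the hours x and y at which it is attained.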

 

Standard Form of the Linear Programming Problem

    The standard form of the linear programming problem is to maximize a linear function  z = f(x_1, x_2, ..., x_n)  of  n  variables  x_1, x_2, ..., x_n.  
    
(1)     Maximize   

    
z = f(x_1, x_2, ..., x_n)   
    
    
f(x_1, x_2, ..., x_n) = c_1 x_1 + c_2 x_2 + ... + c_n x_n     where     the coefficients  c_j  are given constants  for  j = 1, 2, ..., n.  

(2)     Subject to the m constraints  

    
a_i1 x_1 + a_i2 x_2 + ... + a_in x_n ≤ b_i     where     b_i ≥ 0  for  i = 1, 2, ..., m.  

(3)     With the primary constraints   x_j ≥ 0   for   j = 1, 2, ..., n.  

    The coefficients  a_ij  and  c_j  can be any real number.  It is often the case that these coefficients are positive, but zero or negative values can occur.
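
For instance, part 1 (a) of Example 1 fits this pattern with  n = 2  variables and  m = 3  constraints (the extra constraint of Example 1 is not reproduced above, so it is left out of this illustration):

        Maximize   z = 15 x + 17 y
        subject to   x + y ≤ 40,   x - y ≤ 8,   -x + y ≤ 6,
        with   x ≥ 0   and   y ≥ 0.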

 

Setting up the Extended Simplex Tableau

    For computational purposes, we construct a tableau.  The first m rows consist of the coefficient matrix A, the m×m identity matrix I, and the column vector b.  In the (m+1)st row of the tableau the first n elements are the coefficients  -c_1, -c_2, ..., -c_n,  which are called the coefficients of the augmented objective equation.  The remainder of the bottom row is filled in with zeros.  An extra column on the right will be used in the decision process in solving for the variables.

    

The extended tableau has the form

              x_1    x_2   ...   x_n    s_1   s_2   ...   s_m  |   b
    row 1     a_11   a_12  ...   a_1n    1     0    ...    0   |  b_1
    row 2     a_21   a_22  ...   a_2n    0     1    ...    0   |  b_2
     ...       ...    ...  ...    ...   ...   ...   ...   ...  |  ...
    row m     a_m1   a_m2  ...   a_mn    0     0    ...    1   |  b_m
    -----------------------------------------------------------------
    obj.     -c_1   -c_2   ...  -c_n     0     0    ...    0   |   0

   

    The non-negative variable  s_i  is called a slack variable and is the amount that the linear combination  a_i1 x_1 + a_i2 x_2 + ... + a_in x_n  falls short of the bound  b_i.  Its purpose is to change an inequality into an equation, i.e. we have

(4)    a_i1 x_1 + a_i2 x_2 + ... + a_in x_n + s_i = b_i   for rows   i = 1, 2, ..., m   in the tableau.    
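
For example, the first constraint of Example 1,  x + y ≤ 40,  becomes the equation  x + y + s_1 = 40  with  s_1 ≥ 0,  where the slack  s_1  is the number of the 40 available hours that go unused.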

    The goal of the simplex method is to exchange some of the columns of 1's and 0's of the slack variables into columns of 1's and 0's of the decision variables.  
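
As an illustration, the constraints  x + y ≤ 40,  x - y ≤ 8,  -x + y ≤ 6  and the objective  z = 15 x + 17 y  of part 1 (a) of Example 1 (with the unreproduced extra constraint again omitted) give the initial tableau

              x     y    s_1   s_2   s_3  |   b
    row 1     1     1     1     0     0   |  40
    row 2     1    -1     0     1     0   |   8
    row 3    -1     1     0     0     1   |   6
    --------------------------------------------
    obj.    -15   -17     0     0     0   |   0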

An explanation of the extended tableau is given in the link below.

Simplex Method Details  

 

Algorithm (Simplex Method).  To find the maximum of  z = f(x_1, x_2, ..., x_n)  over the convex polytope R.  

(i)    Use non-negative slack variables  s_1, s_2, ..., s_m  and form a system of equations and the initial tableau.

(ii)   Determine the exchange variable, the pivot row, and the pivot element.  
       The exchange variable  x_p  is chosen in the pivot column  p  where the bottom-row coefficient is the most negative.  
       The pivot row  q  is chosen where the minimum ratio  b_i / a_ip  occurs among all rows with  a_ip > 0.  
       The pivot element is  a_qp  (the entry in row q and column p).

(iii)  Perform row operations to zero out the elements in the pivot column  p  above and below the pivot row  q.

(iv)  Repeat steps (ii) and (iii) until there are no negative coefficients in the bottom row.  
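
The following Mathematica function is a minimal sketch of these four steps, written for this discussion; it is not one of the module's subroutines.  It assumes the problem is already in standard form (maximize c.x subject to a.x ≤ b with x ≥ 0 and every entry of b non-negative), makes no attempt to handle degenerate ties, and returns only the maximum value of the objective; the name simplexSketch and the unboundedness message are illustrative.

    simplexSketch[a_, b_, c_] :=
     Module[{m, n, t, pc, ratios, pr},
      {m, n} = Dimensions[a];
      (* initial tableau [a | I | b] with the objective row {-c, 0, ..., 0, 0} at the bottom *)
      t = ArrayFlatten[{{a, IdentityMatrix[m], Transpose[{b}]},
          {{-c}, ConstantArray[0, {1, m}], {{0}}}}];
      Catch[
       While[Min[t[[-1, 1 ;; n + m]]] < 0,
        (* step (ii): pivot column = most negative coefficient in the bottom row *)
        pc = First[Ordering[t[[-1, 1 ;; n + m]], 1]];
        (* step (ii): pivot row = smallest ratio b_i/a_ip among rows with a_ip > 0 *)
        ratios = Table[If[t[[i, pc]] > 0, t[[i, -1]]/t[[i, pc]], Infinity], {i, m}];
        If[Min[ratios] === Infinity, Throw["the objective is unbounded on the region"]];
        pr = Position[ratios, Min[ratios]][[1, 1]];
        (* step (iii): scale the pivot row, then zero out the rest of the pivot column *)
        t[[pr]] = t[[pr]]/t[[pr, pc]];
        Do[If[i != pr, t[[i]] -= t[[i, pc]] t[[pr]]], {i, m + 1}]];
       (* step (iv) finished: the maximum of the objective is in the lower right corner *)
       t[[-1, -1]]]]

For the data of part 1 (a) of Example 1 (without the extra constraint) the call would be  simplexSketch[{{1, 1}, {1, -1}, {-1, 1}}, {40, 8, 6}, {15, 17}].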

 

Computer Programs  Linear Programming  

Mathematica Subroutines (Simplex Method).  To find the maximum of  z = f(x_1, x_2, ..., x_n)  over the convex polytope R.  
Activate the following four cells.

[Graphics:Images/LinearProgrammingMod_gr_131.gif]
[Graphics:Images/LinearProgrammingMod_gr_132.gif]
[Graphics:Images/LinearProgrammingMod_gr_133.gif]
[Graphics:Images/LinearProgrammingMod_gr_134.gif]

Warning.  The above subroutines are for pedagogical purposes to illustrate the simplex method.  They will not work in certain cases when some  b_i = 0,  and in cases where more than one of the  b_i  is set equal to zero in one step of the iteration (degenerate cases).  For complicated problems one of the built-in Mathematica functions  ConstrainedMin, ConstrainedMax, or LinearProgramming  should be used.
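
For reference, here is a minimal sketch of that last suggestion, assuming the data have been entered as a coefficient matrix a, a right-hand-side list b, and an objective list c as in the standard form above.  The built-in LinearProgramming[c, m, b] minimizes c.x subject to m.x ≥ b and x ≥ 0, so a standard-form maximization with "≤" constraints is solved by negating the data.

    xOpt = LinearProgramming[-c, -a, -b];   (* vertex of the region where the maximum occurs *)
    zMax = c . xOpt                         (* maximum value of the objective *)

For the data of part 1 (a) of Example 1 this is  LinearProgramming[{-15, -17}, {{-1, -1}, {-1, 1}, {1, -1}}, {-40, -8, -6}].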

 

Example 2.  Consider the region [Graphics:Images/LinearProgrammingMod_gr_137.gif] in the plane defined by the inequalities:  

        [Graphics:Images/LinearProgrammingMod_gr_138.gif]   

2 (a).  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_139.gif]   over the region [Graphics:Images/LinearProgrammingMod_gr_140.gif].  
2 (b).  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_141.gif]   over the region [Graphics:Images/LinearProgrammingMod_gr_142.gif].  

Solution 2 (a).

Solution 2 (b).

 

Example 3.  Consider the region [Graphics:Images/LinearProgrammingMod_gr_289.gif] in the plane defined by the inequalities:  

        [Graphics:Images/LinearProgrammingMod_gr_290.gif]  

3 (a).  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_291.gif]   over the region [Graphics:Images/LinearProgrammingMod_gr_292.gif].  
3 (b).  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_293.gif]   over the region [Graphics:Images/LinearProgrammingMod_gr_294.gif].  

Solution 3 (a).

Solution 3 (b).

 

Example 4.  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_365.gif]   subject to the constraints:

        
[Graphics:Images/LinearProgrammingMod_gr_366.gif]  

Solution 4.

 

Example 5.  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_402.gif]   subject to the constraints:

        
[Graphics:Images/LinearProgrammingMod_gr_403.gif]   

Solution 5.

 

Example 6.  Find the maximum of   [Graphics:Images/LinearProgrammingMod_gr_439.gif]    subject to the constraints:

        [Graphics:Images/LinearProgrammingMod_gr_440.gif]  

Solution 6.

 

Research Experience for Undergraduates

Linear Programming  Internet hyperlinks to web sites and a bibliography of articles.  

 

Download this Mathematica Notebook Linear Programming-Simplex Method

(c) John H. Mathews 2005