
I want to find an approximation to $\cos(x)$. I formulated the problem as a linear optimization problem as follows:

$$ \min \sum_{i=1}^{M} e_i $$ subject to:

$-(a_0 + a_1 x_i + \dots + a_N x_i^{N}) - e_i \leq -\cos(x_i)$

and

$a_0 + a_1 x_i + \dots + a_N x_i^{N} - e_i \leq \cos(x_i)$.
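(Together, the two constraints enforce

$$ e_i \geq \left| a_0 + a_1 x_i + \dots + a_N x_i^{N} - \cos(x_i) \right|, $$

so minimizing $\sum_i e_i$ minimizes the total absolute approximation error over the sample points.)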

The points $x_i$ are $M$ equally spaced points in an interval $[a, b] \subseteq [0, 1]$ (in the code below, $a = 0.1$ and $b = 1$). I tried to solve it using Python as follows:

from scipy.optimize import linprog
import numpy as np

def construct_matrix_sup(M, N, a, b, points):
    # Rows encode -(a_0 + a_1 x_i + ... + a_N x_i^N) - e_i <= -cos(x_i):
    # columns 0..N hold the (negated) polynomial terms, columns N+1..N+M hold -e_i.
    matrix = np.zeros((M, M + N + 1))
    for i in range(M):
        for j in range(M + N + 1):
            if j >= N + 1:
                matrix[i, j] = -1 if j - (N + 1) == i else 0
            else:
                matrix[i, j] = -points[i] ** j
    return matrix

def construct_matrix_inf(M, N, a, b, points):
    # Rows encode a_0 + a_1 x_i + ... + a_N x_i^N - e_i <= cos(x_i):
    # columns 0..N hold the polynomial terms, columns N+1..N+M hold -e_i.
    matrix = np.zeros((M, M + N + 1))
    for i in range(M):
        for j in range(M + N + 1):
            if j >= N + 1:
                matrix[i, j] = -1 if j - (N + 1) == i else 0
            else:
                matrix[i, j] = points[i] ** j
    return matrix

def construir_vetor_c(M, N):
    # Cost vector: zero cost on the coefficients a_j, unit cost on each error e_i.
    c = np.zeros(M + N + 1)
    c[N + 1:] = 1
    return c


M = 1000
N = 20
a = 0.1
b = 1

points = np.linspace(a, b, M)

A_sup = construct_matrix_sup(M, N, a, b, points)

A_inf = construct_matrix_inf(M, N, a, b, points)

b_sup = -np.cos(points)

b_inf = np.cos(points)

A = np.concatenate((A_sup, A_inf), axis=0)

b = np.concatenate((b_sup, b_inf))

c = construir_vetor_c(M, N)

result = linprog(c, A_ub=A, b_ub=b, method='revised simplex')

I devised functions to generate the matrix $A$ and the cost vector $c$, then employed the simplex method to solve the linear program subject to $Ax \leq b$, where $x^{T} = (a_0, \dots, a_N, e_1, \dots, e_M)$. But the solver only finds the coefficient $a_0$; all the other coefficients are zero.

Is there something wrong with my problem formulation or simplex implementation?

JJMae
  • You might look up the Remez algorithm. – Robert Israel Apr 19 '24 at 19:18
  • @m-stgt The objective function and constraints are linear in the decision variables $a_j$ and $e_i$. The original (unconstrained) problem is to minimize the (nonlinear) sum of absolute errors, and the formulation given is a standard linearization. – RobPratt Apr 21 '24 at 18:28
  • I recommend printing out A, b, and c for a smaller instance to debug. – RobPratt Apr 21 '24 at 18:33
  • Look at the default values on the bounds. The default may be that the variables are non-negative. See https://math.stackexchange.com/questions/4787995/simple-linear-programming-problem-with-dual-objective#comment10183342_4787995 – Marc Dinh Apr 26 '24 at 07:42
  • I identified the mistake: I was assuming that all decision variables were nonnegative, when in fact both the coefficients and the errors are unconstrained. I corrected it and managed to make the approximation. – Felipe Oliveira Apr 26 '24 at 17:29
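Following up on the bounds issue raised in the comments: `scipy.optimize.linprog` defaults every variable to bounds `(0, None)`, which silently forces the polynomial coefficients to be nonnegative. Below is a minimal sketch of the corrected call with explicit bounds (free coefficients, nonnegative errors), on a smaller illustrative instance (the degree, grid, and `method='highs'` solver here are my choices, not from the question):

```python
import numpy as np
from scipy.optimize import linprog

# Small instance for illustration: degree-6 fit on [0, 1].
M, N = 50, 6
points = np.linspace(0.0, 1.0, M)

# Vandermonde block for the polynomial coefficients a_0..a_N.
V = points[:, None] ** np.arange(N + 1)   # shape (M, N+1)
I = np.eye(M)

# Stacked constraints:  V a - e <= cos(x)  and  -V a - e <= -cos(x).
A_ub = np.block([[V, -I], [-V, -I]])
b_ub = np.concatenate([np.cos(points), -np.cos(points)])

# Objective: minimize sum of e_i (zero cost on the coefficients).
c = np.concatenate([np.zeros(N + 1), np.ones(M)])

# The key fix: coefficients are free, errors are nonnegative.
bounds = [(None, None)] * (N + 1) + [(0, None)] * M

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
coeffs = result.x[:N + 1]
max_err = np.max(np.abs(V @ coeffs - np.cos(points)))
print(coeffs)
print(max_err)
```

With the bounds made explicit, the solver recovers a full set of coefficients rather than just $a_0$, consistent with the fix described in the last comment.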
