Questions tagged [optimal-control]

Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. (Reference: http://en.m.wikipedia.org/wiki/Optimal_control)

The method is largely due to the work of Lev Pontryagin and Richard Bellman.

1067 questions
37
votes
1 answer

A System of Matrix Equations (2 Riccati, 1 Lyapunov)

Setup: Let $\gamma \in (0,1)$, ${\bf F}, {\bf Q} \in \mathbb R^{n\times n}$, ${\bf H}\in \mathbb R^{n\times r}$, and ${\bf R}\in \mathbb R^{r\times r}$ be given and suppose that ${\bf P}$, ${\bf W}$, ${\bf X}\in \mathbb R^{n\times n}$, and ${\bf…
mzp
  • 2,000
14
votes
2 answers

Time-optimal control to the origin for two first-order ODEs - Trying to take control as we speak!

I want to find the time-optimal control to the origin of the system: $$\dot{x}_1 = 3x_1+ x_2$$ $$\dot{x}_2 = 4x_1 + 3x_2 + u$$ where $|u|\leq 1$. I ran straight into the problem at full strength, hit it with all I have got: $\begin{pmatrix} \dot{x}_1 \\…
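A sketch of the standard Pontryagin setup for this system (one common route, not necessarily the asker's): with $A = \begin{pmatrix} 3 & 1 \\ 4 & 3 \end{pmatrix}$, $b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and running cost $1$, the Hamiltonian and adjoint are
$$H = 1 + p^\top (Ax + bu), \qquad \dot p = -A^\top p, \qquad u^*(t) = -\operatorname{sgn}\big(b^\top p(t)\big) = -\operatorname{sgn}\big(p_2(t)\big),$$
so the time-optimal control is bang-bang; since the eigenvalues of $A$ are the real values $1$ and $5$, at most one switch occurs.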
14
votes
1 answer

Fastest curve from $p_0$ to $p_1$

I'm trying to solve a problem in path planning: Given points $p_0$ and $p_1$ and vectors $v_0$ and $v_1$, find a function $p(t)$ s.t. $p(0) = p_0$, $p(T) = p_1$, $p'(0) = v_0$ and $p'(T) = v_1$ which minimizes $T$ (or $p^{-1}(x_1)$) given the…
13
votes
2 answers

Proof of shortest path avoiding ball

I have read in a number of places that the shortest path between two points $a,b\in \mathbb{R}^2$ that avoids a disk $D$ between them (by "between" I mean the disk intersects the line $a-b$) is of the form: travel along a tangent line to $D$ that…
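For reference, if the disk has centre $c$ and radius $r$, the tangent-arc-tangent candidate described in the question has length (assuming both tangent segments lie on the same side of the disk)
$$L = \sqrt{|a-c|^2 - r^2} + \sqrt{|b-c|^2 - r^2} + r\left(\angle(a-c,\, b-c) - \arccos\frac{r}{|a-c|} - \arccos\frac{r}{|b-c|}\right),$$
where the bracketed term is the angle of the arc between the two tangency points.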
13
votes
2 answers

What is the difference between optimal control and robust control?

What is the difference between optimal control and robust control? I know that optimal control has the controllers: LQR (state feedback controller), LQG (state feedback observer controller), LQGI (state feedback observer integrator…
euraad
  • 3,052
  • 4
  • 35
  • 79
12
votes
2 answers

Optimality — Hamilton-Jacobi-Bellman (HJB) versus Riccati

Most of the literature on optimal control discusses Hamilton-Jacobi-Bellman (HJB) equations for optimality. In dynamics, however, Riccati equations are used instead. Hamilton-Jacobi-Bellman equations are also used in reinforcement learning. Are there any…
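A short connection between the two, sketched for the infinite-horizon LQR problem $\dot x = Ax + Bu$ with $J = \int_0^\infty (x^\top Q x + u^\top R u)\,dt$: trying the quadratic ansatz $V(x) = x^\top P x$ in the stationary HJB equation
$$0 = \min_u \left[\, x^\top Q x + u^\top R u + (2Px)^\top (Ax + Bu) \,\right]$$
gives $u^* = -R^{-1}B^\top P x$ and reduces the HJB equation to the algebraic Riccati equation
$$A^\top P + P A - P B R^{-1} B^\top P + Q = 0,$$
so in the LQR special case the Riccati equation is exactly what the HJB equation becomes.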
10
votes
0 answers

Why no Forward Dynamic Programming in stochastic case?

Dynamic programming usually works "backward" - start from the end, and arrive at the start. This works both when there is and when there isn't uncertainty in the problem (e.g. some noise in the state). The backward DP algorithm is then (for the case…
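For context, the backward recursion the question refers to is (in standard stochastic DP notation, with disturbance $w_k$):
$$J_N(x) = g_N(x), \qquad J_k(x) = \min_{u \in U_k(x)} \mathbb E_{w_k}\!\left[\, g_k(x,u,w_k) + J_{k+1}\big(f_k(x,u,w_k)\big) \right], \quad k = N-1,\dots,0.$$
Roughly, a forward pass would instead need a cost-to-arrive at each state, which is harder to define when the reached state is random.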
9
votes
1 answer

Equivalence of Lyapunov equation, matrix inequality and algebraic Riccati equation versions of the Positive Real Lemma

My question is about the equivalence of three different versions of the positive real lemma. I would like to set up the question by first stating the definition of a positive real transfer function and one version of the positive real lemma. My…
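For orientation, one common statement of the matrix-inequality version (sign conventions vary between references, and the usual minimality assumptions apply): $G(s) = C(sI - A)^{-1}B + D$ is positive real if and only if there exists $P = P^\top \succ 0$ with
$$\begin{pmatrix} A^\top P + P A & P B - C^\top \\ B^\top P - C & -(D + D^\top) \end{pmatrix} \preceq 0.$$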
8
votes
2 answers

How to explain Lagrange multipliers to a lay audience?

So I will be giving a seminar to a scientifically mature lay audience (think bio/social science undergrad level). I have been told that I should count on less than half the audience to have experience with calculus. I think I can explain the basic…
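A minimal worked example of the kind that tends to land with such an audience (a suggestion, not from the question itself): maximize the area $f(x,y) = xy$ of a rectangle with fixed perimeter $g(x,y) = 2x + 2y = 20$. Setting $\nabla f = \lambda \nabla g$ gives
$$y = 2\lambda, \quad x = 2\lambda \;\Rightarrow\; x = y = 5,$$
i.e. the square; the multiplier $\lambda$ then measures how much extra area one more unit of perimeter buys.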
8
votes
1 answer

Role of the weight matrix $M$ in $x^T M u$ in the LQR cost function

I wonder what the role of the weight matrix $M$ is in the performance index $$J = \int_0^{t_f}{\left( x^T Q x + u^T R u + x^T M u \right) \mathrm d t}$$ for an optimal control problem where $$\dot x=Ax+Bu$$ where $u$ is the design variable.…
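One standard way to see the role of $M$ (sketched for the cost exactly as written, with a single cross term $x^\top M u$ rather than the more common $2x^\top N u$): substituting $u = v - \tfrac12 R^{-1} M^\top x$ removes the cross term and leaves an ordinary LQR problem,
$$J = \int_0^{t_f} \left( x^\top \big(Q - \tfrac14 M R^{-1} M^\top\big) x + v^\top R v \right) \mathrm d t, \qquad \dot x = \big(A - \tfrac12 B R^{-1} M^\top\big) x + B v,$$
so $M$ effectively shifts both the state weight and the drift matrix.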
8
votes
1 answer

Starting with Calculus of Variations and Optimal Control Theory

I want to study the calculus of variations. I understand this to be a "more advanced version" of calculus, in the sense that we maximize functionals (functions of functions) by choosing a particular function, rather than maximizing a function by…
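For concreteness, the basic object the questioner describes: stationary points of a functional $J[y] = \int_a^b L(t, y, \dot y)\,dt$ satisfy the Euler-Lagrange equation
$$\frac{d}{dt}\frac{\partial L}{\partial \dot y} - \frac{\partial L}{\partial y} = 0;$$
e.g. $L = \dot y^2$ gives $\ddot y = 0$, i.e. straight lines.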
8
votes
1 answer

Is the controllability Gramian always positive definite?

I am trying to understand the balanced truncation algorithm and have some trouble distinguishing between the controllability matrix and the controllability Gramian. If my understanding is correct, a linear time-invariant system $\dot x(t) = Ax(t) + Bu(t)$ is…
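For reference, this is the distinction the question is after: the controllability matrix is $\mathcal C = \begin{pmatrix} B & AB & \cdots & A^{n-1}B \end{pmatrix}$, while the finite-horizon controllability Gramian is
$$W(t) = \int_0^{t} e^{A\tau} B B^\top e^{A^\top \tau}\, d\tau,$$
and $W(t) \succ 0$ for every $t > 0$ exactly when $\operatorname{rank}\mathcal C = n$, i.e. when $(A,B)$ is controllable; for Hurwitz $A$, the infinite-horizon Gramian solves the Lyapunov equation $AW + WA^\top + BB^\top = 0$.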
7
votes
0 answers

Kalman Filter with correlated measurement noise derivation

I have made great efforts on the derivation, and the results are really close but I am still missing the last step. If someone can help that'd be great! Problem setup Consider this modified Kalman Filter, where $x_k$ are the states and $z_k$ are the…
7
votes
1 answer

Optimal control

Consider the growth equation: $ \dot{x} = tu $, with $x(0)=0$ and $x(1)=1$, and with the cost function: $ J= \int_0^1 u^2 dt $. Show that $u^*=3t$ is a successful control, with $x^*=t^3$ and $J^*=3$ the corresponding trajectory and cost. If $u=u^*…
Natalie
  • 329
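A sketch of the verification (one standard route, not necessarily the intended one): with Hamiltonian $H = u^2 + p\,t\,u$, the conditions $\partial H/\partial u = 2u + pt = 0$ and $\dot p = -\partial H/\partial x = 0$ force $u(t) = ct$ for some constant $c$; then
$$x(t) = \int_0^t c\,s^2\, ds = \tfrac{c}{3}t^3, \qquad x(1) = 1 \;\Rightarrow\; c = 3, \qquad J = \int_0^1 (3t)^2\,dt = 3,$$
which recovers $u^* = 3t$, $x^* = t^3$, and $J^* = 3$.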
7
votes
0 answers

All equivalent inverse LQR problems

Inspired by this question I wondered if it is possible to fully parameterize the inverse optimal control problem. So given a stabilizing state feedback policy $$ u(t) = -K\,x(t), \tag{1} $$ for a linear time invariant state space…
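For background, the classical handle on such inverse problems is Kalman's return-difference identity: one common statement is that for $K = R^{-1}B^\top P$ with $P$ solving the algebraic Riccati equation,
$$\big(I + K(-sI - A)^{-1}B\big)^{\!\top} R\, \big(I + K(sI - A)^{-1}B\big) = R + B^\top(-sI - A^\top)^{-1} Q\, (sI - A)^{-1} B,$$
which, in the single-input case, yields Kalman's frequency-domain test for when a given gain is optimal for some $Q \succeq 0$, $R \succ 0$.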