One of my hobby projects is a real-time optimal flight controller for Kerbal Space Program rockets, and it also uses CasADi! It's an absolutely fantastic software package that makes it dead simple for anyone to set up and solve NLP (nonlinear programming) problems. Indeed, a lot of hard work goes into making an online solver that can run in a reasonable time. I already have launch working quite well, but precision landing has proven more difficult.
For the initial trajectory, I use collocation (https://en.wikipedia.org/wiki/Collocation_method) to encode the physics constraints. For updates, I use the previous solution as the initial guess for the updated trajectory. In practice this seems to work quite well, but there are still some issues I need to iron out. Sometimes the trajectories it provides are unstable in the sense that if you slightly overshoot, it will generate a radically different solution. I believe the solution here will be to turn some of the constraints in my solver into soft goals instead.
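For anyone curious what that looks like in CasADi, here's a minimal sketch (not my actual formulation): trapezoidal collocation on a toy 1D double-integrator "landing" problem using the Opti stack, with the terminal position turned into a soft goal via a penalty term, and the previous solution fed back in as the initial guess for a re-solve. The dynamics, bounds, and penalty weight are all illustrative placeholders.

```python
import casadi as ca

N = 40          # collocation intervals
T = 10.0        # fixed time horizon [s]
dt = T / N
g = 9.81

opti = ca.Opti()
pos = opti.variable(N + 1)   # altitude
vel = opti.variable(N + 1)   # vertical velocity
u   = opti.variable(N + 1)   # thrust acceleration at each node

target = opti.parameter()    # desired landing altitude

# Trapezoidal collocation: the averaged state derivative over each interval
# must match the state difference across that interval.
for k in range(N):
    acc_k  = u[k] - g
    acc_k1 = u[k + 1] - g
    opti.subject_to(pos[k + 1] == pos[k] + dt * 0.5 * (vel[k] + vel[k + 1]))
    opti.subject_to(vel[k + 1] == vel[k] + dt * 0.5 * (acc_k + acc_k1))

opti.subject_to(opti.bounded(0, u, 30))   # illustrative thrust limits

# Boundary conditions: start at 100 m, come to rest at the end.
opti.subject_to(pos[0] == 100)
opti.subject_to(vel[0] == 0)
opti.subject_to(vel[N] == 0)

# Soft terminal-position goal: penalize the miss distance instead of
# enforcing pos[N] == target as a hard constraint.
miss = pos[N] - target
opti.minimize(ca.sumsqr(u) + 1e3 * miss**2)

opti.solver("ipopt")
opti.set_value(target, 0.0)
sol = opti.solve()

# Warm start the next solve from the previous trajectory, e.g. after the
# target (or the vehicle state) has shifted slightly.
opti.set_initial(pos, sol.value(pos))
opti.set_initial(vel, sol.value(vel))
opti.set_initial(u, sol.value(u))
opti.set_value(target, 2.0)
sol2 = opti.solve()
```

The idea with the soft goal is that a small overshoot only moves the penalty term a little, rather than making the old solution infeasible and forcing the solver onto a completely different trajectory.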
It's difficult for me to say whether this is evidence that SpaceX definitely uses optimal control for their flight control, since we would naturally expect any reasonably efficient control algorithm to produce a similar trajectory. After all, if a solution is 99% efficient, how different can it really look from a trajectory that takes only ~1% more time than the optimum? However, I would not be surprised if they do that - I'm just wondering how they test it, since my setup seems so finicky!
I’ve been using program blocks in Space Engineers to play with some parameters as well. I still need to install the orbital mechanics mod though. The math is incredibly hard if you only have basic training, even with a simple model, but it’s fun to figure out.
Hats off for your project and what you’re trying to do!