I do not think the divisions are so sharp. Many formulations in RL can be seen as implicitly defining a particular (set of) differential equations. One can do something similar for certain simple GAs and write them as the equations of an evolutionary game, where stable strategies are good solutions to the objective. The RL differential equations also fall into this class.
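As a rough illustration of the evolutionary-game view (my own toy sketch, not anyone's formal result): a simple GA over a fixed set of candidates can be approximated by replicator dynamics, where each candidate's population share grows with its fitness advantage and the stable points concentrate mass on high-fitness solutions. The candidates and objective below are made up for illustration.

    import numpy as np

    # Replicator dynamics: x_i' = x_i * (f_i - average fitness).
    # "Strategies" are candidate solutions; fitness is a toy objective
    # peaked near 0.9, so the dynamics should concentrate on 0.95.
    candidates = np.array([0.1, 0.4, 0.7, 0.95])   # hypothetical solutions
    fitness = -(candidates - 0.9) ** 2              # toy objective to maximize

    x = np.full(len(candidates), 1.0 / len(candidates))  # uniform start
    dt = 0.01
    for _ in range(10_000):
        avg = np.dot(x, fitness)
        x += dt * x * (fitness - avg)   # Euler step of the replicator equation
        x = np.clip(x, 0.0, None)
        x /= x.sum()                    # keep x a probability distribution

    print(dict(zip(candidates, np.round(x, 3))))  # mass ends up on 0.95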
Another viewpoint is to note that John Holland, the inventor of GAs, was more interested in applying GAs within classifier systems than in studying them as objects in their own right. His work on the bucket brigade algorithm, a type of TD-learning used for credit assignment in complex reinforcement learning scenarios, was first-rate and sadly still receives too little attention. In that setting, GAs were a search operator focused on exploration, while the bucket brigade handled credit assignment. While GAs can be shown to be capable of adaptation as well as exploration, their inventor really meant them to be part of a bigger whole.
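To make the credit-assignment point concrete, here is a heavily simplified, hypothetical rendering of a bucket-brigade-style strength update (my own toy sketch, not Holland's full classifier system, which also has matching, a message list, and a GA): each active rule pays a bid out of its strength to the rule active at the previous step, and external reward is added to the currently active rule, so payoff propagates backward along chains of rules much like TD-learning.

    # Toy bucket-brigade-style credit assignment (illustrative only).
    BID_FRACTION = 0.1

    def bucket_brigade_step(strength, current_rule, previous_rule, reward):
        """One update: the active rule pays a bid to its predecessor
        and collects any external reward."""
        bid = BID_FRACTION * strength[current_rule]
        strength[current_rule] -= bid
        if previous_rule is not None:
            strength[previous_rule] += bid    # credit flows backward along the chain
        strength[current_rule] += reward       # external payoff goes to the active rule
        return strength

    # Hypothetical episode: rules fire in a chain, only the last step is rewarded.
    strength = {"r1": 10.0, "r2": 10.0, "r3": 10.0}
    chain = [("r1", None, 0.0), ("r2", "r1", 0.0), ("r3", "r2", 5.0)]
    for current, previous, reward in chain:
        strength = bucket_brigade_step(strength, current, previous, reward)
    print(strength)  # over repeated episodes the reward would work its way back to r1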
In fact, DeepMind's problem formulation for AlphaStar (StarCraft) can be seen as fitting into the learning classifier system framework, where the learners are themselves quite powerful neural networks. See: https://deepblue.lib.umich.edu/bitstream/handle/2027.42/2777...
Yeah, I agree and see where they're coming from, I just wasn't sure whether both RL and GAs had been generalized into the same framework. Generalizing them to differential equations is interesting.
I guess you could say that there is an objective function to maximize, and instead of thinking about generations you could consider the same individuals over and over again... That's my intuition of the similarity they might have in mind, but I agree with you that GAs are not RL.
> Genetic algorithms are considered as part of reinforcement learning
Uh, by whom? Sure, there are similarities, but GAs != RL.