The ideas seem quite related. A common reservoir computing setup involves learning a linear map from a black-box dynamical system (which can be a feedforward network if we really want it to be) to some output. From a brief look, the only significant distinction I see is that in the reservoir models I've seen, the input and output sizes are the same.
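To make the "linear map from a black box" point concrete, here is a minimal echo state network sketch: a fixed random recurrent reservoir drives a state sequence, and only a linear readout is trained (by ridge regression). The toy task, sizes, and scaling constants are my own assumptions for illustration, not from any specific source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: predict a slightly phase-shifted sine from a sine input.
T = 500
t_axis = np.linspace(0, 20, T)
u = np.sin(t_axis)          # input sequence
y = np.sin(t_axis + 0.1)    # target sequence

# Fixed random recurrent reservoir: this is the untrained "black box" dynamics.
n = 100
W_in = rng.uniform(-0.5, 0.5, size=n)
W = rng.uniform(-0.5, 0.5, size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

x = np.zeros(n)
states = np.empty((T, n))
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)  # reservoir update; weights stay fixed
    states[t] = x

# The only trained part: a linear readout, fit in closed form by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n), states.T @ y)

pred = states @ W_out
mse = float(np.mean((pred[100:] - y[100:]) ** 2))  # skip an initial washout period
print(mse)
```

The same readout-only training scheme is what extreme learning machines use, except their hidden layer is a random feedforward map with no recurrence, so there is no state carried between inputs.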
https://en.wikipedia.org/wiki/Extreme_learning_machine "Extreme learning machines are feedforward neural networks"
https://en.wikipedia.org/wiki/Reservoir_computing "The reservoir consists of a collection of recurrently connected units"
So, no: one is feedforward, the other is recurrent.