No; it can reason backwards over things it finds in context, just not over things trained into the model's weights. If the training data contains lines A, B, C, there's no association in the model running back from C to B. I don't think this can be solved by better reasoning.
A proposed solution I saw recently was to feed every training document in backwards as well as forwards.
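Roughly, the idea is just to double the training set with token-reversed copies, so next-token prediction also sees C → B → A orderings. A minimal sketch of that augmentation (the `tokenize` stand-in and the toy corpus are placeholders I made up for illustration, not anything from the actual proposal):

```python
def tokenize(text: str) -> list[str]:
    # Stand-in for a real tokenizer (e.g. BPE); whitespace split just for illustration.
    return text.split()

def augment_with_reversals(documents: list[str]) -> list[list[str]]:
    """Return each document's token sequence both forwards and backwards,
    so the model also learns associations in the C -> B -> A direction."""
    examples = []
    for doc in documents:
        tokens = tokenize(doc)
        examples.append(tokens)                   # original order: A B C
        examples.append(list(reversed(tokens)))   # reversed order: C B A
    return examples

corpus = ["line A . line B . line C"]
for example in augment_with_reversals(corpus):
    print(" ".join(example))
```

Whether that actually fixes reverse recall in practice, or just doubles the compute bill, is a separate question.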