The problem I have with this approach is that models seem to get confused by prescriptive information, such as print "it worked" or a comment stating the intended behavior of the code (behavior the code doesn't actually have), as opposed to descriptive information: generating priors and comparing them to actual output to yield confirmation, refutation, or surprise.
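To make the distinction concrete, here's a minimal sketch (a hypothetical illustration with made-up names, not anything from the original approach): the prescriptive check just asserts the intent, while the descriptive check states a prior and compares it to what actually happened.

```python
def process(xs):
    # hypothetical function under test
    return [x * 2 for x in xs]

# Prescriptive: states the desired intent, regardless of what the code did.
print("it worked")

# Descriptive: form a prior, run the code, compare, and report
# confirmation or surprise.
prior = [2, 4, 6]           # expected output, stated before running
actual = process([1, 2, 3])
if actual == prior:
    print(f"confirmed: {actual}")
else:
    print(f"surprise: expected {prior}, got {actual}")
```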
I think double checking is better than not, but without the ability to really "know" or reason, it feels a bit like adding one more hull layer to the Titanic in an effort to make it unsinkable.