This whole debate is always so bewildering. Programming paradigm fanboys get into heated arguments about which model is the "best" one, but actual computer science research uses myriad different models of computation, usually endeavoring to select the one that is most convenient for the given purpose.
Sometimes that could mean using the lambda calculus, particularly in the study of language theory and type systems. Other times that could mean some sort of black-box model, such as when proving lower bounds for solving problems using specific operations (see e.g. the sorting lower bound). Yet other times, such as when establishing the ground zero of some new variety of computational hardness, I can't think of many models more suitable to cut into pieces and embed into the substrate of another problem than those based on Turing machines.
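For concreteness, the sorting lower bound is a textbook instance of such a black-box argument. Model any comparison sort as a binary decision tree whose leaves are the n! possible output orderings; the tree's height, i.e. the worst-case number of comparisons, is then forced to be large:

```latex
% A binary tree of height h has at most 2^h leaves, and the tree
% must distinguish all n! input orderings, so
2^h \ge n!
  \;\Longrightarrow\;
h \ge \log_2 n! = \sum_{k=1}^{n} \log_2 k
  \ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n).
```

Nothing in this argument needs Turing machines or lambda terms; the black box of "comparisons only" is exactly the right granularity for the question being asked.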
> Programming paradigm fanboys get into heated arguments about which model is the "best" one
My original comment certainly reads that way, but my intent was really to point out that it doesn't make sense to privilege the Turing machine model in the study of computation. I wrote more about why in this comment: https://news.ycombinator.com/item?id=27334163
Well, computing science studies programming paradigms, so defining them and analyzing what makes them suitable for which purposes is squarely within its scope.
As I said above, it may very well be that the best use for Turing machines is in mathematical proofs, where the efficiency of the computation is not a concern.
Really, the best use of all the computation models we're discussing here is in mathematical reasoning. If you're looking to "create real working programs," then a better basis is probably going to be some combination of actual industry-grade programming languages and actual CPU architectures.
This response might come off as a little facetious, but seriously, I think the idea of "founding" industrial computing languages/platforms upon theoretical research models of computation misunderstands the relationship between theory and practice. There is a relationship, for sure: research on these models usually does aim to translate into real-world implications somehow, but your functional programming language is not the literal lambda calculus.
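To make that last point concrete: a language like Haskell can embed the lambda calculus's encodings directly, yet what it actually ships is native integers, a compiler, and a runtime the pure calculus knows nothing about. A minimal sketch of Church numerals, assuming only GHC's RankNTypes extension (names like `Church` and `toInt` are just illustrative):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral encodes n as "apply f to x, n times",
-- exactly as in the pure untyped lambda calculus.
newtype Church = Church (forall a. (a -> a) -> a -> a)

zero :: Church
zero = Church (\_f x -> x)

suc :: Church -> Church
suc (Church n) = Church (\f x -> f (n f x))

add :: Church -> Church -> Church
add (Church m) (Church n) = Church (\f x -> m f (n f x))

-- Escape hatch into the machine's world: the calculus has no Int.
toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))  -- prints 3
```

The encoding type-checks and runs, but GHC compiles it down to closures and machine arithmetic; the calculus is the language's design ancestor, not its runtime.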
Certainly practical programming languages are not a one-to-one implementation of a theoretical model, but these models do give rise to families of related languages that stay close to one another and remain distinct from languages based on a different model.
Each time a new theoretical model is devised to capture a particular class of programming problems, new languages emerge that make it easier to build practical systems for those problems.
And it is worth keeping track of which models are good for which problems. So no, theoretical models are not good just for doing math with; they also guide practical usage.