What if there is a way to write lambda calculus programs that makes beta reduction "easier" in that sense? We are used to reading things like ((λ y . y) (λ y . y)), but there may be another notation with better properties. If there were, we wouldn't even have to use it: the mere fact that it exists would justify using beta-reduction counts as a measure of complexity.
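
For concreteness, here is a minimal sketch (in Haskell; all names are my own) of the usual named-variable notation, in which a single beta step has to walk the entire body to perform the substitution:

    data Term = Var String | Lam String Term | App Term Term

    -- Naive substitution of s for x; capture-avoidance is omitted
    -- to keep the sketch short.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if y == x then s else Var y
    subst x s (Lam y b) = if y == x then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One beta step at the root: ((\x. b) a) reduces to b[x := a].
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam x b) a) = Just (subst x a b)
    betaStep _                 = Nothing

Note that subst visits every node of the body, so in this notation a single step is linear in the size of the term, not constant.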



But this is clearly not a property of any particular notation; it's a property of the lambda calculus itself (and of the way beta reduction is defined). If you were to make it a constant-time operation in the length of the term, it literally wouldn't be the lambda calculus anymore.
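
To make that concrete (reusing the Term and betaStep from the sketch above): a single beta step can copy an arbitrarily large argument, so any notation that writes the result out in full cannot perform it in constant time.

    -- (\x. x x) duplicates whatever it is applied to.
    dup :: Term
    dup = Lam "x" (App (Var "x") (Var "x"))

    -- betaStep (App dup big) == Just (App big big):
    -- one step, but the output contains two copies of `big`.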

Which actually makes me curious: what sort of change did you have in mind?


If you could quickly translate a lambda calculus expression into a program written in some other model of computation (one that is easier to simulate on a Turing machine) and back, then you could do the complete reduction in the other model, using the lambda calculus only for input and output. To perform the entire reduction you would still have to scan the expression in O(n), but the total cost wouldn't necessarily be O(n * number of reduction steps). If the number of steps required in the other model were, say, polynomially related to the number of beta reductions, then beta-reduction count would be justified as a measure of time.
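
One concrete instance of this idea (my own sketch, not necessarily what anyone above had in mind; the names DB, Closure and krivine are made up, and it assumes closed terms): translate the expression to de Bruijn indices and run it on a Krivine machine, a standard abstract machine for call-by-name reduction. Each beta step becomes a single "pop" transition, with argument pushes and variable lookups as bookkeeping; no transition rescans the whole expression.

    data DB = Ix Int | L DB | A DB DB   -- lambda terms with de Bruijn indices

    data Closure = Closure DB [Closure] -- a term paired with its environment

    -- Call-by-name weak head reduction. Each transition pushes an
    -- argument, pops one into the environment, or looks a variable up.
    krivine :: DB -> Closure
    krivine t = go (Closure t []) []
      where
        go (Closure (A f a) env) stack     = go (Closure f env) (Closure a env : stack)
        go (Closure (L b)   env) (c:stack) = go (Closure b (c : env)) stack
        go (Closure (Ix n)  env) stack     = go (env !! n) stack  -- assumes closed terms
        go c [] = c  -- a lambda with no pending arguments: weak head normal form

For the ((λ y . y) (λ y . y)) example from upthread, krivine (A (L (Ix 0)) (L (Ix 0))) reaches the closure for (λ y . y) in three transitions.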



