With the insane decision to use a Turing-complete language, the contract's behavior is undecidable. You don't even know whether the contract will halt.
Limited "gas" (execution time) isn't the solution, because the lesson of the Halting Problem isn't that a program might not terminate. Even if the program halts within a finite time (or "gas"), the behavior of the program on the current input is still undecidable.
Any serious attempt at writing "smart contracts" should demonstrate their understanding of the science of insecurity[1] and limit the contract language's complexity to deterministic context-free.
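To make the point concrete, here's a toy sketch in plain Python (not the real EVM; `metered_run` and the buggy withdrawal are invented for illustration) of a metered interpreter that guarantees halting yet says nothing about whether the result is correct:

```python
# Toy illustration: a gas-metered interpreter. Halting is guaranteed by
# the meter, but nothing checks the result against a specification.

def metered_run(program, state, gas=1000):
    """Run `program` (a list of (step_fn, cost) pairs) until done or out of gas."""
    for step, cost in program:
        if gas < cost:
            raise RuntimeError("out of gas")  # halting is guaranteed...
        gas -= cost
        state = step(state)
    return state  # ...but the result may still be wrong

# A "withdraw" that halts quickly but is buggy when balance == amount:
def buggy_withdraw(amount):
    def step(balance):
        # BUG: should be `balance >= amount`; the program halts either way
        return balance - amount if balance > amount else balance
    return step

print(metered_run([(buggy_withdraw(50), 10)], state=50))  # 50: silently failed
```

The program stays well under the gas limit and halts, yet on this input it does the wrong thing, which is exactly the failure mode gas cannot catch.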
Using a Turing-complete language does not mean that all possible programs written in that language are undecidable. As a trivial example, if a particular program doesn't have loops or recursion, or if it loops for a fixed number of times, it quite obviously will halt. More generally, it's possible to write simple code with clear and obvious effects.
If it were really that bad to use Turing-complete code, then smart contracts would be the least of our problems, because Turing-complete code runs our airplanes, medical equipment, and nuclear reactors.
Obviously all EVM programs halt, otherwise the Ethereum network would be completely ruined immediately. You already know why they halt: there's a gas limit.
Ethereum programs aren't Turing complete. They always halt. Turing complete programs don't generally halt. That's the definition.
You should demonstrate your understanding by using these terms correctly!
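For what it's worth, the gas argument can be sketched as a toy interpreter (hypothetical, not the actual EVM; `run_with_gas` is invented for illustration) in which even a would-be infinite loop is forced to halt:

```python
# Toy sketch: gas forces halting even for a step function that never finishes.
def run_with_gas(step_fn, state, gas):
    while True:
        if gas == 0:
            return ("reverted", state)   # execution cut off: always halts
        gas -= 1
        done, state = step_fn(state)
        if done:
            return ("ok", state)

# Analogue of `while True: pass`: never signals done, so only gas stops it.
status, _ = run_with_gas(lambda s: (False, s), state=0, gas=100)
print(status)  # "reverted"
```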
This whole argument about Turing completeness is a huge red herring.
The Parity multisig attack was an extremely simple programming error that has nothing to do with Turing completeness.
That people put their trust in such a contract despite its source code being obviously wrong points to the real problem.
The real problem right now is the lack of convenient tools and established practices for formal verification. If you think formal verification is impossible for EVM, you're wrong and you fundamentally misunderstand computation.
I've seen this criticism over and over again on internet forums: "EVM is so stupid, it can't be formally verified because it's Turing complete."
It's totally wrong: EVM isn't Turing complete, and it doesn't matter.
Look at "The Art of Computer Programming." It's full of Knuth's proofs of the correctness of algorithms written in assembler and imperative code. Look at any proof of correctness of a sorting algorithm.
OBVIOUSLY you can prove the correctness of programs written in Turing complete languages. You just look at the specification of the language, formulate a specification of your program's intended behavior, and then prove that the code matches the specification. This is how formal verification works, regardless of Turing-completeness.
To clarify, what is the impact of Turing completeness on formal verification? It means that for some ARBITRARY program, you cannot ALWAYS prove that it halts.
But even with some non-Turing complete formalism, you obviously cannot automatically prove the correctness of an arbitrary program; that's just nonsense. You could be certain that the program halts, sure, but how valuable is that? Ethereum already guarantees that all programs halt using the very simple gas mechanism.
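The spec-then-prove workflow can be shown in miniature (exhaustive checking over a tiny input space, which is a check rather than a mechanized proof; all names here are invented for illustration):

```python
# Sketch of the spec-vs-code idea: state what the program must do,
# then check the code against that statement.
from itertools import permutations

def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def meets_spec(f, xs):
    """Spec: output is ordered and is a permutation of the input."""
    ys = f(list(xs))
    ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    same_multiset = sorted(xs) == sorted(ys)
    return ordered and same_multiset

# Check the spec on every permutation of a small input space.
assert all(meets_spec(insertion_sort, p) for p in permutations([3, 1, 2, 2]))
```

A real proof generalizes this over all inputs (by induction, or in a tool like Isabelle), but the shape is the same: specification first, then an argument that the code satisfies it.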
> You already know why they halt: there's a gas limit.
Did you actually read my post? I'm not claiming halting is a problem; it's merely an example of undecidability. The "gas limit" doesn't help in the cases where the program does halt with unexpected behavior.
> Turing complete programs don't generally halt. That's the definition.
Where did you get this nonsense? See [1] for the actual definition.
> attack was an extremely simple programming error
Yes, which is why it's stupid to use a language whose grammar class puts automatic verification out of reach. If trivial mistakes like this happen in production code, how do you expect to find the subtle bugs caused by the complex interactions possible in recursively enumerable grammars?
> prove the correctness of programs
You can do that for some programs, but not all. The point about grammars is there are classes of grammar that allow automatic verification. Using a recursively enumerable grammar instead of a regular or context-free grammar demonstrates a severe misunderstanding of either language theory, or secure/safe engineering practices.
int_19h's description[2] of some of the insane parts of Solidity suggests a PHP-style fractal of bad design[3], not a platform for proving correctness.
If you can't prove the correctness of your program, you shouldn't run it. Why are you concerned with proving the correctness of arbitrary programs? Can you give an example of an Ethereum-like program that can't be proven correct?
Here's how reasonable Ethereum contract development should happen. You construct the contract, either in some high level language or in bytecode. Then you prove, using some reasonable formalism (TLA+, Isabelle, paper, symbolic execution, or whatever), that the contract does what it should do. Then people who want to trust the contract look at this proof.
This is not at all hindered by the EVM being quasi Turing complete, because at no point do you have the question "How can I mechanically verify halting of an arbitrary program?"
> With the insane decision to use a Turing-complete language, the contract's behavior is undecidable.
OTOH, if the language isn't Turing-complete, there will almost certainly be things you'll want a contract to do that it can't. You can have generality or decidability, but not both.
If the language is Turing-complete, there are definitely things you'll want in a contract (such as its execution halting in finite time) that you won't be able to enforce. When code is the literal definition of the contract, non-Turing-complete is the way to go, because you can't reason about your code otherwise.
Turing-complete means you cannot write an algorithm to prove termination in every terminating case, but it doesn't mean you can't prove termination in particular cases, and it certainly doesn't mean you can't reason about your code.
Based on other comments here there certainly may be questionable decisions in the design of Solidity but that doesn't mean Turing-completeness is one of them.
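Proving termination in a particular case usually means exhibiting a variant: a non-negative measure that strictly decreases on every iteration. A toy sketch with Euclid's algorithm:

```python
# Turing-completeness rules out a *universal* halting decider, but this
# particular loop provably terminates: the variant b is a natural number
# that strictly decreases each iteration, and a well-founded descent on
# the naturals cannot continue forever.

def gcd(a, b):
    while b > 0:
        assert a % b < b  # runtime check of the paper-proof obligation
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```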
I'm sorry, but we have six decades of fundamental security research showing that it is essentially impossible for a sufficiently complex system to be secure.
The idea that we will ever stop needing human arbiters is laughable and pure hubris.
It's a matter of scope. You can build a secure currency exchange protocol: I have yet to hear of a technical issue with BTC that threatens its raison d'être.
But when you're making a distributed, general purpose, scriptable contracting platform at some point you ought to consider trimming the core way down.
Maybe if the contracts had more constrained semantics ETH programming would be less scary, at the cost of some bells and whistles or more verbose syntax.
> No, being Turing complete makes it impossible to be 100% secure and trustworthy, by definition.
Being Turing-complete means it's possible to write some contracts/programs for which certain features are undecidable by analysis.
It doesn't mean that a program which could also be written in a more limited language loses anything: every property that could be proven about it in the non-Turing-complete language can still be proven when it's written in the Turing-complete one.
> It is possible to have trusted contracts, iff their logic is (mathematically) proven.
Trustworthiness of programs (including “smart contracts”, which are only loosely analogous to traditional contracts) is about whether the actual people using them get what they expect out of their behavior. Mathematical proofs of formal properties may be useful to that in some cases, but are not generally sufficient.
> It is possible to have trusted contracts, iff their logic is (mathematically) proven.
Suppose a non-Turing-complete language were used instead. Now, imagine a contract that is highly complex. If a layperson decides to trust that contract without reading the code, does it matter whether the language used to implement the contract was Turing-complete or not?
I'd argue that trust of contracts is pragmatically more of a social concept than a computational concept, since at scale the vast majority of contract users would make their trust decision about a contract based on non-technical factors.
> If a layperson decides to trust that contract without reading the code, does it matter whether the language used to implement the contract was Turing-complete or not?
Um, yes? Literally the point of weaker languages in this context is that they can be statically provable.
These are good points. What if the default behavior of a contract was to reset the world to its previous state if the program does not terminate within 30 seconds?
Arguably unclear termination semantics would be a good reason not to trust a contract. Ideally trust mechanisms would exist such that closed source contracts were trustable via a pure insurance-based mechanism.
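The timeout-and-rollback idea can be sketched like this (toy Python; a step budget stands in for the 30-second wall clock, and `run_transactional` is invented for illustration):

```python
# Toy sketch of transactional execution: snapshot the state, run under a
# step budget, and revert to the snapshot if the budget is exhausted.
import copy

def run_transactional(step_fn, state, max_steps):
    snapshot = copy.deepcopy(state)  # "previous state of the world"
    for _ in range(max_steps):
        done, state = step_fn(state)
        if done:
            return ("committed", state)
    return ("reverted", snapshot)    # budget exhausted: restore snapshot

# A step that keeps mutating and never finishes gets rolled back:
status, world = run_transactional(lambda s: (False, s + ["junk"]), [], 10)
print(status, world)  # reverted []
```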
> if the program does not terminate within 30 seconds?
As I attempted to say in my previous comment, undecidability[1] isn't limited to deciding whether a program will halt (or even the weaker question of whether it will halt within a given finite time). Recursively enumerable[2] languages, i.e. those that require a Turing machine to recognize (which includes any Turing-complete language), have an intrinsic complexity. This isn't a "work harder to solve it" level of complexity; some behavior will always be undecidable.
You might experience this problem in the traditional form of the "Halting Problem" where the question "Will this program halt?" may be answered with "I don't know, it's still running" in addition to "yes" or "no". However, this level of complexity can also cause unexpected output due to subtle interactions inside the program that are difficult to predict from the source. Some behaviors are provably unknowable.
You can avoid this problem by using a simpler class of language. Regular languages (which "regular expressions" match[3]) are safe, as are deterministic context-free[4] languages. You may find the lack of loops limiting.
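To illustrate the trade-off, here is a toy loop-free contract language (entirely invented for illustration): every program is a finite expression tree, so evaluation takes at most one step per node and termination holds by construction.

```python
# A tiny loop-free expression language. No loops, no recursion in user
# programs: every program is a finite tree, so evaluation always halts.

def evaluate(expr, env):
    """expr is a nested tuple: ("lit", n), ("var", name),
    ("add", e1, e2), or ("if", cond, then, else)."""
    op = expr[0]
    if op == "lit":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    if op == "add":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == "if":
        branch = expr[2] if evaluate(expr[1], env) else expr[3]
        return evaluate(branch, env)
    raise ValueError(f"unknown op {op!r}")

# "pay a bonus of 10 if the balance is nonzero" as a loop-free expression:
prog = ("if", ("var", "balance"),
              ("add", ("var", "balance"), ("lit", 10)),
              ("lit", 0))
print(evaluate(prog, {"balance": 5}))  # 15
```

Its expressive poverty is the point: you give up loops, and in exchange every question about running time is trivially decidable.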
What if Ethereum offered a non-Turing-complete subset of Solidity (with formal verification) that could optionally be used in smart contracts? Would this cause you to view the platform differently?
Not GP, but personally I would not trust any system built by a bunch of people building computational contracts who only learned what Turing completeness implies after-the-fact. I'd think they were marketeers willing to do or say anything to sell their snake oil.
[1] https://media.ccc.de/v/28c3-4763-en-the_science_of_insecurit...