My view is quite unconventional, but I believe computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description).
Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).
Of course, there are larger sets, but they mostly serve as a theater (just like countable infinity is just a theater of computer science).
Finite sets in the rest of mathematics are sometimes considered uninteresting, but not so in computer science, where only very small sets (roughly up to 32 elements, small enough that all their subsets can be checked by brute force on a typical computer) are considered uninteresting.
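To make that cutoff concrete, here is a rough sketch (in Haskell; the function name subsetsOf is invented for the example) of why ~32 elements is about where brute force stops being easy: subsets of an n-element set correspond to n-bit masks, so there are 2^n of them to enumerate.

    import Data.Bits (testBit)

    -- Every subset of an n-element set is an n-bit mask, so there are
    -- 2^n subsets; for n up to ~32 that is still enumerable by brute force.
    subsetsOf :: [a] -> [[a]]
    subsetsOf xs =
      [ [x | (i, x) <- zip [0 ..] xs, testBit (mask :: Integer) i]
      | mask <- [0 .. 2 ^ length xs - 1] ]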
A good example is number theory, which philosophically considers all natural numbers, regardless of size, to be the same kind of object. In computer science, the (admittedly fuzzy) magnitude of numbers plays a much bigger role.
> I believe computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description).
This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.
> Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).
Define "most". Do it in a way that makes real and complex analysis and topology (and probably many other branches) the smaller part of mathematics.
Most importantly though, my problem with this kind of discussion is that the question itself is meaningless. Not everything can be classified into neat "X is a Y" relationships. Not everything needs to be classified into such relationships. Even if the discussion reached a consensus, that consensus would be meaningless. Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.
> Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.
I assumed the implication here is that CS, like math, is considered by many not to be a science, but rather a field of construction based on logic. The obvious problem with calling computer science a science is that it isn’t fundamentally based on measuring empirical evidence of a natural process. Maybe that still lands in the ‘OK, but so what?’ category; on the other hand, this has been much discussed re: math, and it may be useful to clarify in what ways CS is not employing the scientific method.
> it isn’t fundamentally based on measuring empirical evidence of a natural process.
What leads you to say this? If computation is in some sense the construction of certain forms of mathematics, is computer science not then the empirical study of computers (the objects which instantiate the math) and computation (the process of instantiation)? Of course there is abstract theory as well, but that's just as true in physics.
Newell and Simon had some thoughts: "We build computers and programs for many reasons.
We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about... the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature."[0]
The fact that digital computation is a new process doesn't make it "unnatural", it might be argued; some also contend computation takes place not merely in digital computers but much more generally, in which case the distinctions between computer science/cognitive science/physics blur.
Agree with your broader point, though. I'm not aware of any consensus on the epistemological or ontological status of computer science, or on its relation to the other sciences. It seems (to me) subject to many of the same philosophical questions that dog mathematicians, re: discovery vs. invention, the uncertain reality of various abstractions, generalizability, etc
Likewise agree that consideration of the methods employed in computer science can be fruitful, in particular if the goal is not so much to establish once and for all which category CS falls most naturally into, but simply to stimulate critical thought about the fundamental questions
I meant natural in the sense of originating from nature, specifically as opposed to something built by people. The fact that digital computation is a synthetic construction of humans is what makes it “unnatural”; that it’s new is just a byproduct of humans having invented it recently.
I’d agree there are ways that we can observe computation as a scientist and form hypotheses and perform experiments, especially if, for example, I write a program I don’t fully understand and don’t know how to predict the behavior of, or maybe much more commonly when I observe software written by other people.
Thinking about the analogy to telescopes, the implication is that computers are an instrument for measuring something. Telescopes measure things about planets and stars, physical things that occur in nature. But what exactly do computers measure if they’re to be considered a measuring device? It’s fun to think of a computer being a physical device that measures pure logic; we can physically observe something that doesn’t occur in nature.
On the other hand, I’m hesitant to not draw some kind of line between CS and the hard sciences like physics, chemistry, biology, because there seem to be real differences between them. (I was going to point out examples, but realized it’s fundamentally tricky to nail down and I’d be setting a trap for myself. ;)) Yes I agree the philosophy of where CS lands, and what CS really is, does land in the same ambiguous camp as mathematics (probably because CS and math both truly are in the same category of abstract logic, not directly tied to physical observations.) Maybe more useful and abstract tools are more difficult to categorize precisely because they are used as part of all the sciences and arts...
> This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.
To me, that is not that surprising, although it's a good point.
My view has to do with the history of mathematics. People were fascinated with infinities a long time before they considered that large but finite systems could also be interesting. I think applications of mathematics, mainly geometry and physics, are responsible too.
The development of a more finitist taste in problems (somebody else mentioned constructivism, which I think is fitting) came with the practical need to do computations and develop algorithms.
So I am not that surprised that one of the early forays into the theory of computation was through the lens of the infinite, rather than the finite.
> Most importantly though, my problem with this kind of discussion is that the question itself is meaningless.
Of course, I forewarned that it's just my view, and you're free to ignore it.
Look, part of why I mention it is that it seems rather surprising to me: I would consider infinite structures to be more complicated, in some sense; yet, in the history of mathematics (which lately includes CS, as the study of the large but finite), these were studied first. There was nothing to prevent the Ancient Greeks (or Euler) from discovering, say, the lambda calculus, or how to sort efficiently. It seems, though, that in many fields we progress from more complicated to simpler methods, in some way. But I think it's partly precisely because the finite is often considered uninteresting by mathematicians that it was overlooked. And that's the philosophical point I am trying to emphasize: different fields of math perceive (in the way they treat them) the same structures differently, and I gave an example with natural numbers. Another example is the notion of set cardinality; in most areas of mathematics people only care about the countable/uncountable distinction.
> “Not everything can be classified into neat "X is a Y" relationships.”
I’m with you on skepticism of the x-is-y relationship; however, I read the comment as comparing math versus computing academics.
Then the so-what answer would be informative for the neophyte or youth who is interested in computing but struggles with mathematics instruction. Right? That’s a real thing in education.
In fact I find this to be the principal benefit of online MOOC courses. You can compare styles of instruction, and pedagogy from major universities from across the US (and internationally).
> computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description)
Another way to put this is that computer science deals with mostly constructive mathematics (more precisely, mathematics that uses intuitionistic logic, the kind that is natural to most programmers and computer scientists anyway). For instance, when you prove the fundamental theorem of arithmetic, you actually can translate that into an algorithm for factorizing numbers into products of powers of primes. And the converse holds too, an algorithm is a proof! If you can give me an algorithm that, given a number n, always produces a prime bigger than n, then that actually witnesses the infinitude of primes.
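As a small illustration of that last point, here is a sketch in Haskell (the naive trial division and the names isPrime/primeAbove/factorize are mine, just for the example): a total function returning a prime above n is literally a witness of the infinitude of primes, and the constructive content of the fundamental theorem of arithmetic is an actual factorization procedure.

    -- Naive primality test by trial division (inefficient, but total).
    isPrime :: Integer -> Bool
    isPrime k = k >= 2 && and [k `mod` d /= 0 | d <- [2 .. k - 1]]

    -- Given n, produce a prime strictly greater than n. The totality of
    -- this search is exactly Euclid's theorem: a prime always exists in
    -- (n, n! + 1], so the search cannot run forever.
    primeAbove :: Integer -> Integer
    primeAbove n = head [p | p <- [n + 1 ..], isPrime p]

    -- The constructive content of the fundamental theorem of arithmetic:
    -- an actual factorization into primes, e.g. factorize 12 == [2,2,3].
    factorize :: Integer -> [Integer]
    factorize n = go n 2
      where
        go 1 _ = []
        go m d
          | d * d > m      = [m]
          | m `mod` d == 0 = d : go (m `div` d) d
          | otherwise      = go m (d + 1)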
Constructive methods are everywhere in CS. For instance, to prove a proposition P, it's possible in classical math to say "assume (not P) is true, then derive a contradiction, hence P"; however, that would be really unnatural in CS! You never hear "I want to show an algorithm to solve P exists; let's suppose it's not computable, ... contradiction!", because you don't end up with an algorithm at all (what you proved instead was that it's impossible for an algorithm not to exist, without saying what it is). Likewise, if you want to show some number/program/data structure has the properties you care about, you almost always give the description explicitly.
For more information, I'd say that type theory is a great intersection of math and computer science in a way that's quite accessible to programmers, since we're already used to this kind of thinking, even if we weren't explicitly taught it.
Although the standard proof of the incomputability of the halting problem assumes it is computable and then reaches a contradiction. And that's more or less the central proof of everything to do with computability.
That's still constructive. This can be confusing because there are actually two kinds of contradiction proofs:

    to show ¬P, assume P, then derive a contradiction (false, ⊥)

i.e., ¬P := P -> ⊥. This is actually just fine and constructive; it's just how you prove a negation.

OTOH, there's another kind of contradiction proof:

    to show P, assume ¬P, then derive a contradiction (false, ⊥)

For this to be a valid strategy we need (¬P -> ⊥) -> P; written as an equivalence, P <-> (¬P -> ⊥). But if we unfold the definition of negation, we have that

    P <-> ((P -> ⊥) -> ⊥)

The left-to-right direction holds both classically and constructively, but the right-to-left direction is double negation elimination, which is equivalent to the law of the excluded middle, AKA classical reasoning.
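Here is the same distinction sketched in Lean 4 (relying only on the core Classical namespace):

    -- Proof by negation: ¬P is *defined* as P → False, so assuming P
    -- and deriving False is perfectly constructive.
    example (P : Prop) (h : P → False) : ¬P := h

    -- Genuine proof by contradiction: getting P out of ¬P → False is
    -- double-negation elimination, which requires classical reasoning.
    example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h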
Sometimes, the strategy of proving ~P by showing how P entails absurdity is called "proof by negation". The phrase "proof by contradiction" is then reserved for the strategy of proving P by showing how ~P entails absurdity.
On that usage, constructivists are happy with all proofs by negation, but not generally happy with proofs by contradiction.
I find relevance logic much more intuitive than intuitionistic logic. Intuitionistic logic still has some weird assumptions that make it fit a lookup table, like classical logic.
This is an interesting view that captures a substantial portion of computer science, including programs and proofs, with a short and clear criterion.
However, a key drawback of this view is that it undersells the linguistic aspect of computer science which is manifested in the search for suitable programming languages.
I think it is justified to regard the design of programming languages as a core area of computer science, and that design space is not finite.
Programming languages are not full linguistics, at least not yet. We focus primarily on syntax, semantics, and pragmatics. All of this, though, is firmly rooted in mathematics, defining grammars as expressions and mathematical relationships. This then enables formal mathematical proofs where we can reason about outcomes.
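As a toy illustration of that (a Haskell sketch; the Expr type and eval are invented for the example): the grammar becomes a datatype, and the semantics becomes an ordinary function whose properties we can prove.

    -- Abstract syntax: the grammar of a tiny expression language.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- Semantics: a total function from syntax to values. A claim like
    -- "eval (Add a b) == eval (Add b a)" is now an ordinary mathematical
    -- statement, provable by structural induction.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b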
I don't know if the search you mention really exists, or whether it is more an optimization of language features to the problem space.
Perhaps in a future where we're able to program our machines by having a conversation with them...
This may indicate that the sampled programmers did not program the way it can be done, for example, with declarative languages: primarily as a linguistic activity where we describe what we know about the task.
I also like the description in Programming as Theory Building by Peter Naur:
Thanks for the articles. I definitely don't do it the same way I process language. To use the same example: I describe a task unambiguously, which makes it a translation from written text to memory regions or execution paths. It gets more concrete than just what I know about the task. A better parallel would be to writing mathematical formulas.
When I process language, I believe, more of my brain is engaged in empathy, in trying to match context, in allowing ambiguity, and in storing ideas and concepts to be disambiguated at a later point.
Yeah, I remember that I started reaching my first CS successes once I knew enough of the mathematics and the properties of the hardware to "think like a machine", and I was trying to explain to some of my peers who were still struggling how to do a more step-by-step execution in their minds.
In hindsight it's clear that you don't tell a story when you program a computer, or at least not the way I'd describe my day or teach my kids about a phenomenon. It feels more like unrolling a tape and executing a recipe in the simplest way you can, after you've built a mental model of the machine. You mimic an actor, maybe, rather than imagine an event?
I have a very different experience; when I code, it feels similar to mapping out a conversation or constructing a narrative, and this is how I teach it to my students.
I've met people who view it as mathematics or philosophy or a bunch of other things; in the end, I think there is a wide range of valid mental models for how computers function and therefore how to communicate with them.
Did they compare identical information expressed as written language vs. programming language, or did they compare messages that contain logic (expressed via programming languages) vs. messages that don't contain logic (expressed via written languages)?
A substantial portion of theoretical computer science.
There are huge swaths of the field that deal with things like machine learning or distributed systems or computer architecture that don't really fall under that categorization.
When you've disposed of those, read the less well-known paper _Programming as Theory Building_ by Peter Naur (see https://pages.cs.wisc.edu/~remzi/Naur.pdf for a link).
Are there computer science papers that could fit within mathematics? Yes. Are there important computer science papers that clearly don't? Also yes.
I agree (as somebody else pointed out) these are more about software engineering (and I don't see a contradiction in being both; many people we consider physicists today were also engineers in their time).
Although I will point out, there is https://en.wikipedia.org/wiki/Structured_program_theorem which is a mathematical statement. Whether to actually follow that result when building software is a matter of engineering taste, but the theorem tells you, at the very least, you can do so.
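For a feel of the folklore construction behind that theorem, here is a sketch (Haskell used purely as notation; the labels and the toy program are invented): arbitrary goto-style control flow is simulated by a single loop dispatching on an explicit program counter.

    -- Labels of an imaginary goto-style flowchart program.
    data Label = Start | Loop | Halt

    -- One loop plus a case dispatch (sequence, selection, iteration)
    -- simulates the arbitrary jumps; this toy program sums 1..n.
    sumTo :: Int -> Int
    sumTo n0 = go Start n0 0
      where
        go Start n acc = go Loop n acc               -- goto Loop
        go Loop  n acc
          | n <= 0     = go Halt n acc               -- goto Halt
          | otherwise  = go Loop (n - 1) (acc + n)   -- goto Loop
        go Halt  _ acc = acc                         -- stop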
Let's see. Dijkstra introduced his famous shortest-path graph search algorithm. Knuth wrote THE classic algorithms book. Peter Naur is best known for his work on parsing (he is the N in BNF notation).
Even if you try to draw a distinction between computer science and software engineering, most of the people writing those papers were pretty squarely on the computer science side.
But I don't draw that distinction. ALL of them were tenured professors of computer science. ALL of them were published in journals of computer science. There is no valid basis on which you can draw an artificial line between computer science and non-computer science, and put those outside of computer science. Doubly not if your goal is to say that computer science is a subset of mathematics rather than its own thing.
Your claim (as you explain elsewhere) seems to be that because discipline of software engineering exists and is related to computer science (and there is a fuzzy boundary between the two), we cannot think of computer science as a subfield of mathematics.
I honestly don't see how that follows. Does the fact that writing LAPACK required software engineering mean that numerical linear algebra is not a part of mathematics?
My basis for grouping them together is the subjects under study and the research methods used. You can always claim that two different fields are different, but I think making arguments for some relationship is more useful.
My claim is not that software engineering is related to computer science, it is that software engineering IS PART of computer science. And being part of it, means that computer science is not merely an applied subfield of mathematics. A truth that mathematicians and computer scientists accepted decades ago when computer science split off from mathematics into its own department.
As for your example of LAPACK, "The existence of evening, my dear Boswell, does not mean that day is the same as night." Numerical linear algebra is on a boundary between mathematics and computer science. There is a large boundary between math and computer science. But then again the same can be said of, say, math and physics. But by the time you're worried about how to use caching effectively, and make that work in a distributed computation, you're pretty far on the computer science side.
Also your idea of grouping them because of research methods is problematic at best. For example machine learning research has a highly experimental flavor that does not fit in mathematics at all. Quantum computing contains a lot that is closer to a field of applied physics than of mathematics. And so on.
On what basis do you claim that software engineering is not part of computer science? Those papers were all written by professors of computer science and published in journals of computer science. All were cited many times by other papers written by other professors of computer science in other journals of computer science.
Software engineering is taught as a part of the field that's called computer science, but the original question was about whether the field called computer science is actually a science.
I guess the difference in that regard would be that (software) engineering doesn't necessarily follow the formal scientific method of formulating a hypothesis and then testing it experimentally. That would, in some sense, make it distinct from science. The same would practically apply to many areas in computer science that are studied experimentally, not just software engineering.
To elaborate on that a little, (software) engineering does build on experience and empiricism, as well as analytical thinking, but in practice it may be more in the form of "lessons learned" than in the form of hypothesis testing.
That doesn't make it any less valuable, or even any less valid as an academic area of research. It just, in some sense, makes it possibly distinct from the sciences.
Of course there are also problems in CS that can actually be studied with the scientific method, but I think amelius might have meant that publications such as Dijkstra's and Knuth's papers on goto would be more in the "lessons learned" category than in the "results from the scientific method" category. They would thus not really make computer science a science even though they're not really in the "CS as a part of math" category either.
I don't think there's necessarily a disagreement here. You're talking about "how" to approach problems and he's talking about "what" the problem space is.
At Dartmouth College, computer science started as a branch of the Mathematics Department. It was eventually made its own department (located in a former clinic for the mentally ill, Sudikoff Laboratory, which some see as ironic). Now Dartmouth is building a new addition to the engineering building (Thayer School of Engineering), and the Math and Computer Science departments will move there. I believe the thinking is that those departments collaborate so much that having them located in the same complex will be helpful.

Much of computer science research is more about applications of computers than research into computing itself, though some pure computing theory is still being worked on, and everything is becoming cross-disciplinary across many fields. One CS project I worked on was designing a smartwatch for health apps. It involved electronics engineering, operating system development, health applications development, sensor data processing, medical research on senior frailty, gesture UI design, UI usability research, secure wireless data transport and collection, and more. It was run by a CS prof, though. If you go into CS you can end up working on anything and everything. Another CS robotics project involved electronically herding cattle. Where do cattle fit into CS theory? As end users of computers? :-)
At Harvey Mudd College, when I was there in the 80s, the Computer Science department was part of the Biology department: they had one prof who did CS exclusively, one who did Biology exclusively, and one who did both. The “real” departments, Mathematics, Physics, Chemistry and Engineering, all offered the possibility of doing a CS option within them, but one of the cool things about Mudd of that era (I’m not sure how much of this has survived since then) was that you didn’t really do a specialized degree in Math, Physics, etc., but got a good broad education across the discipline.
This is good thinking and I am sad to hear that it is unconventional.
And now I'll contradict the part of what you said about finite structures: At a theoretical level, the field deals with infinite structures, namely the Turing Machine tape and infinite time. We use finite Computing Machines to simulate a finite section of that tape in a finite time.
Maybe we should call it Turing Machine Science, because we use computers to study the behavior of programs in Turing Machines, just as astronomers use telescopes to study the behavior of atoms in stars and particle physicists use particle accelerators to study the behavior of elementary particles. We will never touch those particles, stars, or Turing Machines, but we can know them, hence the science.
Software engineering is like using one's understanding of the emissions of the sun to design better solar panels. Very practical, but you don't use your understanding of gravity-driven fusion reactors every day.
When you mentioned infinite structures, I thought you'd bring up the idea that our goal to automate often pits us against problems defined as collections of infinitary instances. I don't think the potential infinity of tapes and running times poses as much of a concern by comparison.
Maybe computer science is about giving, to borrow a little bit from Hilbert, finitary representation to infinitary structures. (Finitary representation with other properties of interest, such as tractability and whatnot, of course.)
Nope. Turing machines are arbitrary and rather unmathematical. The lambda calculus is a much better computational formalism, more mathematically grounded and oriented, with far more direct practical applications.
This reads like total nonsense to me. Why would Turing machines be 'arbitrary and unmathematical'? What's 'unmathematical' about them? They can be formally and precisely described and I don't know of any mathematician who wouldn't accept TMs as a sound definition.
Turing machines are arbitrary in that they don't derive from some mathematical basis. You could easily reformulate the abstraction in many different ways, replacing "tape" or "read/write head" with other quasi-physical concepts that don't normally appear in mathematics. Those concepts are spurious, provided only for their analogy to real-world machines.
You can provide formal descriptions for many things - the Perl programming language, for example. I would also call that language unmathematical. You can study such objects mathematically, but they are essentially external objects of study which one is using mathematics to make more tractable.
In contrast, Curry and Howard discovered direct correspondences between logic and lambda calculus. For example, intuitionistic natural deduction is isomorphic to typed lambda calculus. The internal languages of Cartesian closed categories are lambda calculi.
If Church hadn't discovered lambda calculi, they would have eventually been discovered via one of these correspondences. A few more examples:
* Girard-Reynolds System F as a common language for both second-order propositional logic and polymorphic lambda calculus
* higher-order logic and Girard's System Fω
* inductive types as algebraic data types
* necessity in modal logic and staged computation
* possibility in modal logic and monadic types for effects
* The λI calculus corresponds to relevant logic.
* The local truth (∇) modality in Grothendieck topology or the equivalent "lax" modality (◯) of Benton, Bierman, and de Paiva (1998) correspond to CL-logic describing "computation types".
As such, lambda calculi are inextricably embedded in mathematics and logic, and their core features (ignoring superficial choices such as syntax) are a discovery, rather than an invention. We can't change the equivalences described above, they are facts that arise from the formalisms developed to address subjects like logic and categorical analysis. The same is not true of Turing machines.
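To make the proofs-as-programs reading concrete, a minimal Haskell sketch (the names are mine, and an empty Void type stands in for ⊥):

    -- An uninhabited type playing the role of falsity ⊥.
    data Void

    -- Modus ponens: a total term of this type is a proof of
    -- A -> (A -> B) -> B.
    modusPonens :: a -> (a -> b) -> b
    modusPonens x f = f x

    -- Triple negation implies single negation, constructively:
    -- ¬¬¬A -> ¬A, with ¬A encoded as A -> Void.
    tripleNeg :: (((a -> Void) -> Void) -> Void) -> (a -> Void)
    tripleNeg f x = f (\g -> g x)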
> They can be formally and precisely described and I don't know of any mathematician who wouldn't accept TMs as a sound definition.
Just because something can be formally and precisely described doesn't necessarily mean it's (aesthetically) "mathematical". Another example that comes to mind now is the notion of an applicative functor (Applicative in Haskell) — sure, you can give the categorical definition for it but it's quite an oddly specific structure (if anyone reading this knows where they arise in non-FP contexts, I'd really like to know!).
So for Turing machines, maybe a question we might ask is, "how can we ensure Turing machines always terminate?" This is a much harder question than say, imposing a type system on untyped lambda calculus which unconditionally makes all terms terminate, while still retaining the ability to do recursion and useful work.
A good yardstick to judge the "mathematicality" of a construction often amounts to looking at its compositional properties. That is, could you re-use the concepts over and over again, and could variants of the structure be uniformly described? While Turing machines and lambda calculus are both equivalent and can be both formally described, one does win over the other in terms of having a simpler, more compositional structure.
This is quite subjective, and in any case it feels remarkable that both Turing and Church came up with their constructions to describe computation formally!
> Just because something can be formally and precisely described doesn't necessarily mean it's
> ... (aesthetically)
The 'aesthetics' part is an interesting argument I didn't see, and yes, I agree that a particular concept arising in different areas of math is 'aesthetically pleasing'.
> ... "mathematical"
That's the part I have a problem with, mathematical objects are (more or less) exactly those that can be precisely and formally described. "Lambda calculus is a more elegant mathematical object" is not a statement I have a problem with, "Turing machines have a lesser status as mathematical entities", on the other hand, is sort of weird.
> So for Turing machines, maybe a question we might ask is, "how can we ensure Turing machines always terminate?" This is a much harder question than say, imposing a type system on untyped lambda calculus which unconditionally makes all terms terminate, while still retaining the ability to do recursion and useful work.
You can bound the number of steps or the size of the tape :)
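In code, that bounding trick is just a fuel parameter; a generic sketch in Haskell (the names are mine, and the step function is left abstract):

    -- Run any step function with a fuel bound: this always terminates,
    -- at the cost of sometimes answering "out of fuel" instead of a
    -- final state.
    data Result s = Halted s | OutOfFuel s

    runBounded :: Int -> (s -> Maybe s) -> s -> Result s
    runBounded fuel step s
      | fuel <= 0 = OutOfFuel s
      | otherwise = case step s of
          Nothing -> Halted s                       -- machine halted
          Just s' -> runBounded (fuel - 1) step s'  -- one more step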
That's not a "well, actually", the point I'm making is that other computational models (TMs, pointer machines, RAMs, counter machines) are more natural formalisms to think about some problems. Sure, logic/proof theory is intimately related to lambda calculi and it would be extremely unnatural to formulate the same ideas using Turing machines, but for things like analysis of algorithms, complexity theory or numerical analysis it would be similarly unnatural to use lambda calculus as the computational model instead.
They certainly aren't lesser forms of math, even though one might find them less aesthetically pleasing.
> ... it feels remarkable that both Turing and Church came up with their constructions to describe computation formally!
It is. It completely blew my mind when I first learned about that fact. If computation can be exactly described by different formalisms resulting in the exact same set of computable functions, then it must be a very fundamental feature of how "things" work!
> That's the part I have a problem with, mathematical objects are (more or less) exactly those that can be precisely and formally described. "Lambda calculus is a more elegant mathematical object" is not a statement I have a problem with, "Turing machines have a lesser status as mathematical entities", on the other hand, is sort of weird.
I agree as well. I don't think Turing machines have any lesser status (it's literally equivalent to lambda calculus, after all!)
> but for things like analysis of algorithms, complexity theory or numerical analysis it would be similarly unnatural to use lambda calculus as the computational model instead.
That's a good point as well. Because of the nature of hardware, neither lambda calculus nor Turing machines are very good fits, so RAMs and pointer machines essentially take over.
> I don't think Turing machines have any lesser status (it's literally equivalent to lambda calculus, after all!)
"Equivalence" has a specific meaning here, though. The term "Turing tarpit" was invented to point out the limits of that equivalence, and any attempts to define computations for Turing machines quickly run into that.
The sense in which I consider Turing machines to be "rather unmathematical" is closely related to this. You can write useful mathematical proofs in lambda calculus - as I've pointed out elsewhere, there are automated proof assistants based on this fact. There's no such equivalent for Turing machines. Theoretically, there could be, due to Turing equivalence, but in practice, no-one wants to exploit that.
Given one formalism that has actively been used for such mathematical purposes, and another that has been actively avoided, I call the latter rather unmathematical by comparison to the former.
People can quibble with my word choice, but it's describing a real, measurable phenomenon, the effects of which have played out predictably over the last 70 years.
Why would you think that? I believe I’ve read that even Church himself said that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about. I’m sure one can prove everything proved for Turing machines for lambda calculus, but I disagree with your statement that it is more mathematically grounded. It may be true in a syntactic sense, but definitely not in terms of mathematical structure.
> I believe I’ve read that even Church himself said that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about.
I'd be interested in a source for this.
Lambda calculi are used as the basis for several functional programming languages, which seems to argue against the idea that they're less easy to reason about.
They're also used as the basis for proof assistants such as Coq. Coq is based on the calculus of constructions, which is a typed lambda calculus. Again, this would be a mystifying choice if lambda calculi are difficult to reason about.
> I disagree with your statement that it is more mathematically grounded.
"In his review of Turing’s work, Church himself acknowledged the superiority of Turing’s analysis of effectiveness, saying:
computability by a Turing machine … has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately. (Church 1937a: 43)"
Moreover, Gödel himself (as well as other mathematicians) found Turing machines and Turing's thesis to be more persuasive:
"Gödel also found Turing’s analysis superior. Kleene related that Gödel was unpersuaded by Church’s thesis until he saw Turing’s formulation:
According to a November 29, 1935, letter from Church to me, Gödel “regarded as thoroughly unsatisfactory” Church’s proposal to use λ-definability as a definition of effective calculability. … It seems that only after Turing’s formulation appeared did Gödel accept Church’s thesis. (Kleene 1981: 59, 61)
Gödel described Turing’s analysis of computability as “most satisfactory” and “correct … beyond any doubt” (Gödel 1951: 304 and *193?: 168)."
For what it's worth, I do think you've started an interesting discussion!
You're allowed to have an opinion that you prefer lambda calculus over LCMs (logical computing machines -- a better term for Turing machines that Turing favoured) for conceptualizing and reasoning about computation mathematically.
Myself, I find Conway's game of life to be the superior framework over both lambda calculus and Turing's Logical Computing Machine :-P
However, those quotes are not saying "that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about." I still consider that statement to be false, and I've substantiated that in my comments.
The quotes from Church and Gödel are saying that Turing's formalism was the more helpful in making the case that it had captured the notion of effective calculability. That's understandable - it's much like e.g. Cantor's diagonal, in that it makes its subject very concrete.
But that doesn't tell us anything about the usability of the formalism as an actual mechanism for computation, or analysis of computation. In that respect, lambda calculus has proved far more useful, as shown by the examples I mentioned, and many more. Another example is denotational semantics, which maps programming language semantics to lambda calculus.
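As a tiny example of that mapping (a Haskell sketch; the toy command language is invented): each command denotes a function on states, i.e., a term of a typed lambda calculus.

    -- A toy imperative language over a single integer cell.
    data Cmd = Incr
             | Seq Cmd Cmd
             | While (Int -> Bool) Cmd

    -- Denotational semantics: each command *is* a function from states
    -- to states. (The While case may denote a partial function.)
    denote :: Cmd -> (Int -> Int)
    denote Incr        = \s -> s + 1
    denote (Seq a b)   = denote b . denote a
    denote (While p c) = \s -> if p s
                                 then denote (While p c) (denote c s)
                                 else s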
In fact, one of the inventors of denotational semantics, Dana Scott, developed the first non-trivial model of the lambda calculus, in terms of complete lattices. That model addressed the concreteness issue, albeit decades later, in the early 1970s. It's possible Gödel, Church etc. might still have preferred the Turing tape as an intuition-friendly proof for effective calculability, but it would no longer be possible to reasonably make Gödel's "thoroughly unsatisfactory" claim about the lambda calculus.
Coming back to denotational semantics, I'm pretty sure no-one has ever provided a non-trivial language semantics in terms of a Turing machine - and if you wanted to do that, one of the easiest ways to do it would probably be to implement a compiler from lambda calculus to Turing tape, and generate the Turing representation automatically from that.
More generally, my position could be simply refuted with counterexamples of tractable Turing machine solutions to any of the various problems that lambda calculus has been used to address, like programming language implementations, programming language semantics, duals of logical systems, and proof assistants. To my knowledge, no such examples exist, and the reason for that is because what I'm saying is a fact, not an opinion.
> Myself, I find Conway's game of life to be the superior framework over both lambda calculus and Turing's Logical Computing Machine :-P
I'll agree that the game of life is only slightly less useful than the Turing machine as a model of computation!
> The quotes from Church and Gödel are saying that Turing's formalism was the more helpful in making the case that it had captured the notion of effective calculability. That's understandable - it's much like e.g. Cantor's diagonal, in that it makes its subject very concrete.
This is what I primarily meant with my controversial statement, sorry I didn’t articulate it well enough. Of course lambda calculus is very useful and has plenty of applications, never meant to say otherwise!
I think you've demonstrated far more familiarity with the subject matter than me (and my own understanding is admittedly pretty basic), and just wanted to say I really appreciate your thoughtful response. You're right that those quotes don't really fit for what you were asking for evidence of.
I also agree that the absence of such a refutation is a pretty solid argument that your position is simply a fact and not an opinion as I argued.
Turing machines are terribly inefficient, though. They may be easier to reason about than lambda calculus, but not good for practical computing purposes. Their value was in proving that logical and mathematical reasoning could be mechanized with an automatic device, something that had not been clear until then.
Computing science cares a lot about building efficient processes. Thus to create real working programs, a better basis is a combination of lambda calculus for defining mathematical structures and (Von Neumann based) agent-based models for defining stateful processes.
Modern programming languages are evolving to be capable of representing either model, to adapt themselves to the style more suited to the problem at hand.
Efficiency is not important at all when you talk about a mathematical proof. Do we really care about a proof using induction taking n or n^2 operations?
When the goal is seeing whether a given statement holds true, minimizing the complexity of the proof is what matters most.
> Typically CS cares about the process to find a result, and not just its value nor the possibility or impossibility to find it.
Typically, programmers care about the process to find the result. Some programmers are computer scientists. You cannot conclude that all computer scientists are programmers, so your statement is based on a logical fallacy.
Computer science still has computational complexity theory and analysis of algorithms as sub-fields, with complexity classes and Big O notation. These very much care about the process to find results, which was the basis for my statement :-) The fallacy is in your interpretation of what I said.
There are results in theoretical computer science that don't care about the efficiency in the process; the most essential are the computability theorems exploring what parts of mathematics can be computed, and what we mean by computation anyway.
But the most essential part, the halting problem, was resolved long ago, so the lion's share of applications of computing science is in finding practical ways to use computers, which again needs to take efficiency into account.
> Efficiency is not important at all when you talk about a mathematical proof. Do we really care about a proof using induction taking n or n^2 operations?
Efficiency and tractability of representation is also an issue, though, and mathematicians (and programmers) do care about that, a great deal. That's why the lambda calculus has been used as the basis for proof assistants such as Coq, whereas Turing machines have not.
This whole debate is always so bewildering. Programming paradigm fanboys get into heated arguments about which model is the "best" one, but actual computer science research uses myriad different models of computation, usually endeavoring to select the one that is most convenient for the given purpose.
Sometimes that could mean using the lambda calculus, particularly in study of language theory and type systems. Other times that could mean some sort of black box model, such as when proving lower bounds for solving problems using specific operations (see e.g. the sorting lower bound). Yet other times, like when establishing the ground-zero of some new variety of computational hardness, I can't think of many more suitable models to cut up into pieces and embed into the substrate of some other problem than those based upon Turing machines.
> Programming paradigm fanboys get into heated arguments about which model is the "best" one
My original comment certainly reads that way, but my intent was really to point out that it doesn't make sense to privilege the Turing machine model in the study of computation. I wrote more about why in this comment: https://news.ycombinator.com/item?id=27334163
Well, computing science studies programming paradigms; so defining them and analyzing what makes them suitable for what purposes is pretty much within its scope.
As I said above, it may very well be that the best usage for Turing machines is using them in mathematical proofs; where the efficiency of the computation is not a concern.
Really the best usage of all the computation models we're discussing here is using them in mathematical reasoning. If you're looking to "create real working programs," then a better basis is probably going to be some combination of actual industry-grade programming languages and actual CPU architectures.
This response might come off as a little facetious, but seriously, I think the idea of "founding" industrial computing languages/platforms upon theoretical research models of computation misunderstands the relationship between theory and practice. There is a relationship for sure, the research on these models usually does want to translate into real-world implications somehow, but your functional programming language is not the literal lambda calculus.
Certainly practical programming languages are not a one-to-one implementation of a theoretical model, but these models do create families of related languages that keep a close relationship and are separated from languages based on a different model.
Each time a new theoretical model is created to represent a particular programming problem, entirely new languages are created to ease the practical approaches of building systems for the underlying problem.
And it is worth keeping track of which models are good for which problems. So no, theoretical models are not just good for doing math with; they are also good for guiding practical usage.
"Computing science" is more commonly used as an alternative, but yes, that approach makes much more sense than focusing on some particular abstraction.
I should also clarify that I didn't intend to argue that lambda calculus should be the only way of understanding computation, but rather that it doesn't make sense to privilege the Turing machine model, as the comment I was replying to suggested.
The Turing machine model is arbitrary, and it is unmathematical in the sense I've described in my comment linked above. A different culture (or species!) would be likely to come up with a different computational machine-like model, but any culture that develops formal logics would be likely to discover the lambda calculus as a consequence of that.
That might be true if there were no more natural mathematical representations of computation. But as I've pointed out in another comment (https://news.ycombinator.com/item?id=27334163), there is such a representation - the lambda calculus.
There are large and finite structures in mathematics that would be very odd to call computer science. Take group theory and, say, the monster group [1].
I believe the core difference between comp-sci and other math is in taking into account the efficiency of the representation and/or transformation.
Thus defining the Monster group is discrete mathematics, but not computer science (or not directly). Finding parts of the Monster group that can be represented and manipulated by computers would be computer science.
Subject boundaries are arbitrary, but they have a practical implementation in terms of university CS departments.
Some CS departments came out of math departments, and some universities have a "Department of Mathematics and Computer Science." The theoretical side of CS does seem like a branch of mathematics.
Other CS departments came out of EE departments, and some universities (MIT, Berkeley) have a "Department of Electrical Engineering and Computer Science." The practical side of CS does seem like a branch of engineering.
A number of stand-alone CS departments (Stanford) still seem to be part of the school of engineering, so my vote is for CS as an engineering discipline with a theoretical basis in mathematics (perhaps like information systems or signal processing.) I also like the idea of Caltech's "Computing and Mathematical Sciences," as it seems to bring a lot of computing and applied math under one roof.
Before there was a CS major at many universities you would major in mathematics if you wanted to work with computers.
John Kemeny majored in mathematics and taught “Finite Mathematics” while a professor. He saw the application of BASIC not as a new field but as a way to simplify computers so that it wasn’t only mathematicians and scientists who could program them.
Dartmouth is an example of a CS department coming out of the Math department rather than the EE department.
I tend to think that basic (though probably something like Python/numpy rather than BASIC proper) programming fits well into many math courses. The fact that modern TI calculators can run Python seems to mesh nicely with this. (And classic calculators with BASIC also have a long history.)
Math courses often introduce algorithms for arithmetic and algebraic computation, so some coverage of algorithms as a concept seems to fit as well.
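For instance, Euclid's algorithm is already ordinary curriculum mathematics; a one-function sketch in Haskell:

    -- Euclid's algorithm: the classic example of an algorithm that is
    -- taught as plain mathematics.
    euclidGcd :: Integer -> Integer -> Integer
    euclidGcd a 0 = abs a
    euclidGcd a b = euclidGcd b (a `mod` b)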
> BASIC not as a new field but as a way to simplify computers so that it wasn’t only mathematicians and scientists who could program them.
This is a fantastic vision. Of course mathematicians and scientists also benefit from user-friendly languages like BASIC or Python. But the idea that computer programming (and computing in general) could be helpful to undergraduates majoring in humanities and social sciences, and perhaps to the public at large, was probably still a fairly radical idea in the 1960s! Trying to make that happen by creating a programming language that first-year students could learn in an afternoon (or so) was/is a remarkable step toward making that vision real.
Computer science research is mix of (at least) mathematical, engineering, and scientific traditions that varies by subfield. Among the subfields I'm most familiar with, theoretical algorithms are mostly mathematical, while algorithm engineering is closer to engineering. Data compression has both mathematical and engineering work, while bioinformatics is a mix of all three traditions.
It's not always clear what is computer science and what isn't. For example, you can find both mathematicians and computer scientists in theoretical computer science. In bioinformatics, you often see similar work from people with CS and life sciences backgrounds.
Computer science has two origins: it was born from electronic engineers who studied ways to build the physical devices required for computation, and from mathematicians who studied the best ways to represent the desired computations as configurations of such devices. This double soul still lives on as the distinction between hardware and software (though the line can be blurred, with programmable microchips like FPGAs or the direct implementation of algorithms in hardware in ASICs).
However, there's a general and unified way to define anything computing-related, unifying both traditions: seeing computer science as studying the automatic processing of symbols. I.e., anything that the human mind treats as a symbol with a meaning, that can be represented in physical devices and transformed into a different set of symbols through mechanized processes.
This definition widens the scope of comp.sci beyond its origins in representing calculations of physics and maths, to include other fields typically taught in the degree but that people don't think of as computer science: natural language processing, design of viable user interfaces (human-computer interaction), design of adequate programming languages apt for different problems.
The most general view of this definition encompasses disciplines far removed from engineering, but which are nonetheless used to expand its frontiers: Aristotelian ontology was used to invent object-oriented programming, and is still used to explore the semantic web; semiotics is used to explore what ideas can be represented and processed automatically, and the best ways to build user interfaces tailored to the way we think.
Yes. Indeed in much of the world outside the USA there isn't even such a label as "computer science"; the stuff that's taught as "computer science" in the USA is taught under some sort of "math" label.
I studied it under the name of "Informatik" (Germany), which seems to be the most commonly used word in Europe.
The Wikipedia article for "Computer Science" has relevant information, quote (some line breaks added):
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested.
In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek.
Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
In Portugal, there is some time to spend on such matters during the 5 years that an Informatics Engineering degree takes (certified by the Engineering Order).
Since Bologna, it got levelled down to a Masters, but most still take the 3 + 2 years, so as not to be left behind the older generation that had 5 + 2 for a Masters.
So there is plenty of time to go down into the subjects that the USA considers computer science.
However, for those that want to really go down the rabbit hole of computer science as seen in the USA, the appropriate degree is Applied Maths into Computing.
As seen in the USA? European CS degrees are generally more comprehensive because we do all the humanities already in high school, which leaves more time for math, and it is more common for people to continue with a masters degree.
> A good example is number theory, which philosophically considers all natural numbers, regardless of size, to be the same kind of object. In computer science, the (admittedly fuzzy) magnitude of numbers plays a much bigger role.
Somewhat similarly, I have a degree in number theory / modal logic systems that was issued by the philosophy department at my university. The concepts are similar to a CS degree except you get really good at describing the problem because philosophy degrees usually involve a lot of writing.
Legitimate computer science should be about assembly and C for the most part: knowing mathematically how the 1s and 0s are output through the machine's circuits.
Nowadays it's just an antiquated term for programmer. The average C programmer, I'd wager, is above 40 years of age, whereas the average Python or web dev must be pushing their late 20s.
I think it really depends on what part of math and what part of computer science you're talking about. Certainly the foundations of computation use finite objects from countable sets... But there is more to CS than Turing machines and encodings of numbers.
For example: Computer vision, machine learning, data science, cryptography, etc are all rife with infinite objects!
Proof assistant software and SMT solvers can also prove things about infinite mathematical structures from number theory, ZFC, topology, etc.
Pure mathematics also cares about finite algorithms. Every proof is a finite sequence of deductions on finite objects from a countable set... e.g., a computer program!
Other examples include: computing bounds, integrals, roots of polynomials, divisors, bases, fundamental groups, etc. Pure math is full of computation!
tl;dr the line between math and cs is extremely fuzzy.