
You could represent all the input at different levels as numbers, e.g. all the EM waves hitting our eyes, and all the physical output from our body also as numbers; then whatever inside us maps that input to that output is what you would consider the lookup table.


What are the dimensions of the input and output spaces involved in this idealization? In the case of a neural network there is no idealization. The network is software; it's a number. Its inputs and outputs are all bounded and can be expressed as a table of bounded tuples.
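
To make the "table of bounded tuples" point concrete, here is a toy Python sketch (purely illustrative, not any real network): a fixed function with one 8-bit input and one 8-bit output can be tabulated exhaustively.

    def tiny_net(x: int) -> int:
        # stand-in for any fixed numerical function with bounded, discrete I/O
        return (3 * x + 7) % 256

    # The function's entire behaviour fits in a finite lookup table.
    lookup_table = {x: tiny_net(x) for x in range(256)}

    assert all(lookup_table[x] == tiny_net(x) for x in range(256))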


You could pick a very large number, based on a reasonable estimate of the processing capacity a human has, that represents all the significant physical interactions on a human body over a certain amount of time. Then take the output over a certain amount of time, namely all the movements of the body.

If you wanted to focus on thoughts alone, you might want to skip a few layers/systems and give input directly to whatever causes thoughts to happen.

All particles and their interactions could also be represented as numbers. It just depends on the level at which we do this, and what kind of complex logic is required at that level.


Ok, so give me a concrete number.


5.1536672454... could be the approximate strength of a nervous signal of some sort in a human body, in some unit of measurement.


I think the OP is right. All the input to a human brain can be expressed as numbers: at any given time a specific radiation, vibration, or chemical reaction is hitting our "sensors", and by the laws of physics this is just numbers (in relative terms, since the brain does not know absolute values).

Our output (mechanical motion and vibrations) is also fully quantifiable, thus numbers.

One giant lookup table.
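
As a rough sketch of how continuous sensory input becomes numbers once sampled and quantized (the sample rate, bit depth, and signal here are illustrative assumptions, not physiological values):

    import math

    SAMPLE_RATE = 1000      # samples per second (assumed)
    LEVELS = 256            # 8-bit quantization (assumed)

    def sample_vibration(t: float) -> float:
        # stand-in for some physical signal hitting a "sensor"
        return math.sin(2 * math.pi * 5 * t)

    samples = [
        round((sample_vibration(n / SAMPLE_RATE) + 1) / 2 * (LEVELS - 1))
        for n in range(SAMPLE_RATE)  # one second of input
    ]
    print(samples[:10])  # the "input" is now just a list of bounded integers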


Then provide some concrete numbers for solar radiation as a lookup table. You guys are confusing abstraction and idealization with what it means to be a thinking person. No such abstraction or idealization is happening with software. The software really is just a number; there is no idealization or abstraction involved when I claim that GPT is a sequence of bits representing a numerical function.
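
As a rough illustration of the "software is just a number" claim (the byte string below is a stand-in, not real GPT weights or a real binary):

    # Any sequence of bits -- a compiled binary, a weights file -- can be
    # read as a single integer. A short byte string stands in for it here.
    program_bytes = b"\x7fELF\x01\x02\x03"   # stand-in, not a real binary

    as_integer = int.from_bytes(program_bytes, byteorder="big")
    print(as_integer)               # the whole "program" as one number
    print(as_integer.bit_length())  # how many bits that number occupies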


You can't really have it both ways: being reductionist when it comes to computers (it's just a finite set of numbers, so there is no reasoning), but not permitting the same line of argument for humans (it's just a finite set of particles).

At any rate, this is an ages-old discussion in philosophy, so most likely we are not going to settle this in a Hacker News thread.


Abstraction is a property of a description of a thing, not the thing itself. In reality, what we call "GPT" is the highly organised behaviour of many electrons, probably distributed across many computers, each with extremely complex hardware of various kinds, etc etc. Calling it a sequence of bits representing a numerical function is a choice of description - an abstraction, even!

In this case it's a good description, because it correlates with the GPT in reality quite well. But they are not the same thing.



