Can a Human See a Single Photon? (1996) (ucr.edu)
195 points by adenadel on Nov 12, 2017 | 91 comments



Would there be any interest in a series I've been thinking of putting together about all the fantastically counterintuitive ways your eye fools you without you even noticing? Example:

https://media.giphy.com/media/3o7btPOMufN5FziFWg/giphy.gif

The fact that colors influence colors around them means that there are far more than 256^3 viewable colors: the spatial axis is like a fourth channel.

Another fun one: take a paper plate and poke a hole in it, then hold a toothpick in front of the hole, from the bottom, pointing upwards... it'll look like it's coming down from the top! It's very hard to explain in words, but basically the hole acts as a pinhole camera, so everything appears upside-down. Your brain tries to compensate and flips things right side up, but it can't quite do that in the toothpick-and-hole scenario. So when you push the toothpick up from the bottom, it looks like it's coming down from the top.

Lastly: https://i.imgur.com/epXYhJd.png If you cover up the top half of the image, the bottom looks gray. But the strawberries look red.

A fun puzzle is, how do you see an individual pixel? Can you think of any common object that would help you do this? (No magnifying lenses, for example, and using a phone is sort of cheating.) Answer below.

...

Answer: pluck a hair out of your head and put it on your screen. The line along the edge of the hair will turn into a stairstep pattern, which is just another way of saying you can see individual pixels.


Perhaps a nitpick, but 256^3 doesn't really represent anything about human perception-- it's just how we store colors in computer memory. The human eye can perceive a far wider range of colors than any screen can (currently) represent. It's just as valid (and probably more accurate) to use floating point to store color (rather than bytes), and in many high-fidelity applications this is actually how it's done.

The bit about colors "looking different" in different contexts also isn't really an "extra axis". The example you give is just your brain attempting to separate out the illumination from the scene colors. The range of actual colors you can perceive isn't increased by this.

Fundamentally human color perception is three-dimensional, because humans have three classes of color receptors on their retinas. There are also rods in the retina (the cells that are color-insensitive and detect only brightness), and in principle they could inform color perception, but this does not seem to be the case-- the intensity of light only shifts things around in the original three-dimensional space.

There are also a very, very few people (women only) with genetic mutations that give them four different color receptors in their retina. However, there is no evidence that their brains can take advantage of this extra information and allow them to perceive an extra "axis" of color. It is much more likely that the "mutant" receptors are lumped in with the stimulus of one of the others, which will shift their perception of color, but not expand it.

In short: There are three axes of color perception. Optical illusions don't change that.


The retina itself does various forms of processing, some of which is about eliminating noise, but some of which is involved in relatively complex things like edge detection. The nerve bundles connected to the retina do further processing. The visual cortex itself does a lot of processing, then the conscious part of the brain does even more. There's recompositing and temporal correlation, motion-compensation, prediction, edge detection, lighting/shadow compensation & color perception, threat assessment, pattern recognition, object recognition, face recognition and that's just scratching the surface.

Even transmission of color information isn't a simple RGB signal. It appears the eye transmits differential signals (such as B-Y) and other information as well.
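As a loose analogy (this is video-style color differencing, not the actual retinal code), here's a sketch of how a luminance-plus-differences encoding carries the same information as RGB:

  # Loose analogy only: video-style color differencing, not the retinal code.
  # A luminance-ish channel plus two difference channels carry the same
  # information as R, G, B, but as "brightness + color differences".
  def to_opponent(r, g, b):
      y = 0.299 * r + 0.587 * g + 0.114 * b   # luma weights from ITU-R BT.601
      return y, b - y, r - y                  # (Y, B-Y, R-Y)

  print(to_opponent(200, 120, 40))   # a warm orange: high Y, negative B-Y, positive R-Y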

There are multiple layers of processing that go on in our visual system. You can't reduce it to three axes. The eye is not a camera, doesn't work like a camera, and can't be simplistically thought of as if it were a camera.


All of those things about visual processing are true.

> You can't reduce it to three axes.

This is my claim: It is possible to represent all possible visual stimuli with: f(direction, time) -> (a, b, c). That is what I mean by "vision is three dimensional".

It is not possible to add another channel "d" to the (a,b,c) tuple which is not redundant; so anything you add would not be an extra axis or increase the dimensionality of the perception space.

The brain/retina may generate lots of extra "channels" from processing this (moving) three-valued image, but those extra channels can't be stimulated independently of each other; they depend entirely on those original three degrees of freedom as a function of space and time. In that sense they aren't really "axes", because they aren't degrees of freedom. That's what I mean by "there isn't an extra channel."
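To make that claim concrete, here's a tiny Python sketch (the three "cone" curves are made-up Gaussians, not real LMS sensitivities): however complicated the incoming spectrum, it collapses to a single 3-tuple once it passes through three receptor types.

  import numpy as np

  # Made-up Gaussian "cone" sensitivities (illustrative only, not real LMS curves).
  wavelengths = np.linspace(400, 700, 301)   # nm

  def cone(peak, width=40):
      return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

  L, M, S = cone(565), cone(535), cone(445)

  def perceive(spectrum):
      # project an arbitrary spectral power distribution onto three receptor responses
      return tuple(np.trapz(spectrum * c, wavelengths) for c in (L, M, S))

  # Two very different spectra...
  flat = np.ones_like(wavelengths)                      # broadband "white"
  lines = sum(np.exp(-0.5 * ((wavelengths - nm) / 2.0) ** 2) for nm in (450, 540, 610))

  print(perceive(flat))    # a 3-tuple
  print(perceive(lines))   # another 3-tuple: the spectrum's full detail is gone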

If you agree with that, then we don't disagree and we are just arguing semantics. :)


To be fair, the receptors are VERY time dependent too. Stare at the top bar for ~20 seconds and then move your eyes down just a bit. You'll see an echo image of the opposite color (colorblindness depending). This shows that even a 'little' time spent saturating the retina will affect the later images.

As other people mention, the retina does a LOT of pre-processing before any 'signal' gets sent. The very first synapse from the receptor is heavily modified by horizontal and bipolar cells in a time-dependent reactive manner. The 'images' we 'see' are nothing like what the retina 'sees'. People with cataracts will commonly have very advanced occlusions before thinking there may be a problem, due to the incredible amount of processing the retina itself does. They really cannot 'see' that they cannot 'see'.

So, trying to reduce vision to a color mapping is just so ... computational. Biology is not like that all the time. It's not a 'map' like with sound and hearing (where tonotopy is semi-conserved throughout brain processing). With vision, things are used as 'information' almost instantly, but for certain by the end of the LGN into V1. You can't really backtrack the data through the system once it hit V1, it self-modifies the memristance of the synapses too quickly to chase the action potentials again. So thinking of things as 'red' isn't what the brain wants to do, it's not matrix algebra. It's 'grandmother' and 'tiger' and 'that-look-in-her-eye'.


It’s funny that you introduce time in your equation, because it shows how your view is incomplete. In our visual system, you cannot reduce to three dimensions without creating a recursive relation. It should be more like:

Ret_t(a, b, c) = f(Ret_{t-1}(a, b, c), current_stimulus)

Since it is a recurrence, we cannot reduce to three dimensions.
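A toy version of that recurrence (the adaptation constant is arbitrary, purely illustrative): the same gray stimulus produces a different response depending on the previous state, which is the afterimage effect mentioned upthread.

  # Toy model: the response at time t depends on the previous state, not just
  # the current stimulus, so the same input can look different after adaptation.
  def step(prev_state, stimulus, adapt=0.1):
      # drift the adaptation state toward the stimulus, respond to the difference
      state = tuple(p + adapt * (s - p) for p, s in zip(prev_state, stimulus))
      response = tuple(s - st for s, st in zip(stimulus, state))
      return state, response

  state = (0.0, 0.0, 0.0)
  for _ in range(50):                               # stare at a saturated "red" field...
      state, _ = step(state, (1.0, 0.0, 0.0))
  state, response = step(state, (0.5, 0.5, 0.5))    # ...then look at neutral gray
  print(response)   # roughly (-0.45, 0.45, 0.45): the gray looks shifted toward cyan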


Couldn't you use this same line of thinking to argue that only one dimension is required? Our eyes may generate 3 channels, but don't they depend entirely on the frequency of light received which is one dimensional?


Light is not perceived at a single frequency. It is a spectrum. That’s why you can see non-spectral colors like pink.


OK, well, are there any colours which are not representable with two frequencies? That is still less than three dimensions.


White.


Isn't that representable with a mixture of blue and yellow light?


You can pick a blue and a yellow and a proportion at which to mix them which will produce white, but for any two primaries you pick, there are a multitude of colors you can't represent. For example, if your basis is blue-yellow as you suggest, then you can't represent red or green (and an infinity of other colors off-axis).

Fundamentally the chromaticity diagram[1] is 2D (with a third axis for intensity). You need to mix at least three points to span the space of the plane.

[1] https://dotcolordotcom.files.wordpress.com/2012/08/anatomy-o...
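Here's that geometric point as a quick sketch (the chromaticity coordinates are rounded, and this particular "yellow" was chosen to be complementary to the "blue"): mixtures of two primaries only reach points on the segment between them, while three primaries span a triangle.

  import numpy as np

  # Approximate xy chromaticity coordinates (illustrative)
  blue   = np.array([0.15, 0.06])
  yellow = np.array([0.425, 0.47])   # chosen to be complementary to this blue
  red    = np.array([0.64, 0.33])
  green  = np.array([0.30, 0.60])
  white  = np.array([1/3, 1/3])

  def in_span(target, primaries):
      # can `target` be written as a convex combination of `primaries`?
      P = np.vstack([np.array(primaries).T, np.ones(len(primaries))])
      t = np.append(target, 1.0)
      w, *_ = np.linalg.lstsq(P, t, rcond=None)
      return bool(np.allclose(P @ w, t) and np.all(w >= -1e-9))

  print(in_span(white, [blue, yellow]))      # True: these two do mix to white
  print(in_span(red,   [blue, yellow]))      # False: off the blue-yellow line
  print(in_span(white, [red, green, blue]))  # True: inside the RGB triangle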


You can represent all colors on three axes. If you choose each primary color so that it stimulates exactly one of your three color receptors, it is possible. However, there are no wavelengths of light with that property, so in real life, it's not possible to produce a monitor that can show you all colors you can see (with only three primaries, that is).

You can see the various compromises that people have developed. sRGB has real primary colors (your monitor is probably using them right now), but there are many colors that it can't convince your eye to see. (There are also colors you can see that don't actually exist as a single wavelength of light; magenta for example.) Adobe RGB picks different primaries and you can see more colors. DCI-P3 picks still different primaries and gives you even more colors. But in the end, there aren't three points you can pick that will give you all colors.

You can pick more than three primaries to get more colors. CMYK for printing is an example of this.

If you want to mathematically represent all colors, that's easy. CIExyz does that, but the "primaries" can't even be called that because they have no relationship to human anatomy.

This Wikipedia article is a good starting off point for understanding color perception:

https://en.wikipedia.org/wiki/CIE_1931_color_space
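A concrete illustration of that (the matrix below is the standard linear XYZ-to-sRGB conversion; the color-matching values for 520 nm are the rounded CIE 1931 numbers): a pure spectral green comes out with negative sRGB components, i.e. no mix of the sRGB primaries can reproduce it.

  import numpy as np

  # Linear XYZ -> sRGB conversion matrix (D65 white point).
  XYZ_TO_SRGB = np.array([
      [ 3.2406, -1.5372, -0.4986],
      [-0.9689,  1.8758,  0.0415],
      [ 0.0557, -0.2040,  1.0570],
  ])

  # CIE 1931 color-matching values for a pure 520 nm spectral green (rounded).
  xyz_520nm = np.array([0.0633, 0.7100, 0.0782])

  print(XYZ_TO_SRGB @ xyz_520nm)
  # ~[-0.93, 1.27, -0.06]: the negative R and B mean no sRGB pixel value
  # can reproduce this color; it lies outside the monitor's gamut.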

(As for computer use, I tried a wide gamut display once and it's a disaster. 99.99% of all content is sRGB, so you might as well have your monitor using sRGB primaries. The metadata situation is too broken for browsers or the OS to know what color space an image is actually in -- displaying an sRGB image with Adobe RGB primaries makes images unreasonably vibrant. Displaying an Adobe RGB image with sRGB primaries looks ridiculously washed-out. Adding to the fun is that browsers like Chrome will convert so that things look right... but assume display colorspace if there is no metadata in the image. And all the photo and art sharing websites I checked a couple months ago actually delete that metadata, so if you compose your images in a colorspace other than sRGB, most people will see the wrong colors. It sucks. I took up black-and-white photography. Everything can display that :)

I would say I'm surprised that there are no monitors that use more than three primaries to display color... but the reality is that even wide-gamut 3 primary color is so broken that it's probably not even worth trying. You could see more colors, but nobody else would see the right colors. Why even bother, I guess.


Don't forget purple, which doesn't exist in the rainbow. It's red and blue, which average to green but we see it as purple. Which validates the three-axis idea: purple is when the red and blue axes are dominant. But it's still peculiar.


'red and blue' averages into green? Yeah, that's not really how our eyes perceive light.

We see the colors that we see because certain wavelengths of light stimulate certain cones in our retina, which makes our brains hallucinate in color, basically.

We see green when the short-wavelength-sensitive, medium-wavelength-sensitive, and long-wavelength-sensitive cones are all stimulated. A single wavelength of light around 520nm will stimulate all of these cones and satisfy the requirement for green to be seen. In fact this is usually the wavelength of light we'll find in a green laser.

But there can be multiple different wavelengths of light in combination to make the eye perceive green as well. And that's what's happening when we see purple. When we see purple (magenta), we're never, ever, seeing just one wavelength. Because as we've said, there is no such thing as a magenta wavelength. It's not in the rainbow.

So take red-wavelength light, and blue-wavelength light, with the key absence of green-wavelength light, and we'll see purple/magenta. But green-cone stimulating light must be absent or we will not see magenta.

It's pretty interesting how what we see is not necessarily what's out there.

Another interesting one is gray. There is no "gray" on the rainbow. Any time we're seeing gray, we're actually seeing a bunch of colors at once. Which is weird because gray appears colorless. But depending on the colors present that make us hallucinate that grey, there could be as few as 3 colors or even millions. We just don't see them because they're stimulating our cones in such a way as to neutralize each other. So under natural lighting, an aluminum macbook will be grey and colorless. But walk inside a room that only has red lighting (so no white lights) and that gray colorless macbook is now a colorful red.
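The MacBook example as a toy sketch (three coarse "bands" stand in for a full spectrum): what reaches the eye is the illumination multiplied by the surface reflectance, so a neutral surface takes on whatever color the light has.

  # bands: [blue, green, red] -- a crude 3-band stand-in for a spectrum
  gray_macbook = [0.5, 0.5, 0.5]   # reflects every band equally

  daylight = [1.0, 1.0, 1.0]       # broadband light
  red_room = [0.0, 0.0, 1.0]       # only red light present

  def reaches_eye(illumination, reflectance):
      return [i * r for i, r in zip(illumination, reflectance)]

  print(reaches_eye(daylight, gray_macbook))   # [0.5, 0.5, 0.5] -> balanced, looks gray
  print(reaches_eye(red_room, gray_macbook))   # [0.0, 0.0, 0.5] -> only red left, looks red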

Anyways, what we perceive has never been a literal transcription of what's actually present. The brain takes in stimuli from the outside world, but it alone ultimately decides what we see. It shows us different things under different conditions. Different lighting conditions, under the influence of drugs, colors next to other colors, light arranged in certain textures, whether we're malnourished, whether we're daydreaming with our eyes wide open, whether we've just finished staring at an intensely bright light or a colored wall and are now looking at something else, all of it influences the colors that we actually end up seeing. The brain always has the final say in what we perceive, not the outside world. We're basically hallucinating 24/7, all the while thinking what we see is what actually is.


Optical illusions reveal that we have more than three axes of color perception. The first gif and the last photo are examples of this, and the Renaissance artists had a firm appreciation of it. (See DaVinci's journals, specifically the chapters on color.)

Part of the reason I want to put together this series is to rekindle this knowledge. It seems very much "lost": your comment represents the status quo of color science, but if you were an ambitious artist in 1450, you'd run experiments and determine that much of what we take as fact is actually quite a lot more complicated in practice.

The eye, and especially our perception of colors, is so complicated that entire tomes barely scratch the surface. And Munsell's discoveries were only made in 1900, barely over a hundred years ago. There's still a lot more to discover.

To be clear: it's true that our three receptors imply three axes of color perception. But when you assemble an image as a whole, the entire portrait results in an experience quite different than any individual color.

EDIT: That the perception of color is influenced by surrounding colors is true, but that this adds a "fourth channel" is [citation needed].

One of the most important aspects of color science is that you have to be willing to believe the possibility that some strange ideas are true.

In this case, I can dispel that illusion, but only if you're open minded to it:

Consider a painting. Why choose a certain shade of yellow? To produce an effect.

The above images demonstrate that where you put that shade of yellow causes a very different effect.

Now, if you accept as an axiom that the only reason to choose a color is to produce an effect, then that means all three primary colors (red, green, blue) are different axes in your ability to cause an effect. But the fact that the arrangement of colors causes different effects means that the spatial arrangement is a fourth "lever" that you can use to change the experience. That implies it's accurate to call this phenomenon a fourth channel.

(I'd post this edit as a reply, but HN isn't having it right now.)

One interesting area of science to investigate is the frequency spectrum of natural images. Our brains are tuned to see certain frequencies more than others, e.g. blades of grass. And the reason colors produce different effects depending on where they are in relation to each other is to help us resolve different shapes in an image.

http://web.mit.edu/torralba/www/ne3302.pdf

https://www.cs.cmu.edu/~efros/courses/LBMV07/presentations/0...


If you have a one-dimensional line, you can use a one-dimensional value x to represent a position on it. If you add a second input, y, and instead say that your position on the line is x+y, the line is still one dimensional, even though y changes how the value x is perceived. (Because the point x,y can still be represented with a single value z where z=x+y.)

There are three colour receptors, so colour is 3 dimensional to us. That the surrounding context shifts where that 3d point falls in the colour space doesn't create more colours, it just shifts the perceived colour around the colour space. In your strawberry example, the gray and red already exist in the 3d colour space. The context did not create these, it just shifted our perception a little off the point in the colour space that the pixel value alone would have placed it.

Similarly, if I wear sunglasses, the darkness doesn't add another dimension, it just shifts all the values a bit in a direction that makes them darker.

Note: I'm not actually arguing that there are only 3 dimensions of colour (although our three receptors would suggest it), as I don't know enough about it. I'm just pointing out a flaw in your logic.


The statement I take objection to is this:

> The fact that colors influence colors around them means that there are far more than 256^3 viewable colors: the spatial axis is like a fourth channel.

That the perception of color is influenced by surrounding colors is true, but that this adds a "fourth channel" is [citation needed].


I'm not sure I buy that it means there are more absolute viewable colors there, but rather there are merely different interpretations of the same colors. We do have fairly strong confirmation that our brain doesn't interpret color absolutely but can adjust based on luminance (see: all examples of shadow illusions) and patterns (see: https://www.youtube.com/watch?v=mf5otGNbkuc), perhaps that's what is meant by a "fourth channel"?


The simplest example of this is that in a watercolor class I was told to make shadows bluer than the object, not just blacker. And indeed it looks much more natural than a black shadow, and I think it's this effect of juxtaposing colors.


The sky is blue and the sun is yellow. Subtract the sun and you are left with only blue light from the sky.

...which explains why indoors, shadows /do/ tend to be colorless: http://www.camilleprzewodek.com/uploads/1/5/9/2/15923552/639...


I’ve referred to DaVinci as the father of CG, based on his formalizing the perspective projection. This bit about color adds to my thesis.


> Lastly: https://i.imgur.com/epXYhJd.png If you cover up the top half of the image, the bottom looks gray. But the strawberries look red.

The strawberries even look reddish to me if I crop the image down to just one of the red-appearing parts of one strawberry and zoom that to a reasonable size. Looking at the RGB values, I see that the exact gray (128,128,128) parts of the strawberries are surrounded by areas that are slightly different, usually in that they have a slightly smaller red component and maybe a little more green and blue, e.g., something like (118, 129, 129).

If I zoom in enough that I don't see those very slightly less red regions, the strawberries seem gray with no hint of red.

This works even when 80% of the visible image is pure gray in the center, with edge areas having slightly reduced red or slightly increased blue or green. I wonder why it works that way, instead of the other way around, which would be that the large central gray would be seen as gray, with the edges seen as being bluish/greenish?

I wonder if it has something to do with the distribution of colors in nature? Offhand, I can't think of many natural things that are a pure gray, so maybe we favor an interpretation that has the whole image having some color over one that has a large, unnatural gray area?
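For what it's worth, plugging the example values from above into a quick check shows which way the cue points:

  center   = (128, 128, 128)   # an exactly-gray strawberry pixel
  surround = (118, 129, 129)   # a typical neighboring pixel

  print(tuple(c - s for c, s in zip(center, surround)))
  # (10, -1, -1): relative to its surround the gray patch is the "reddest"
  # thing in the neighborhood, which may be the cue the brain latches onto.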


Several years ago I went to a James Turrell exhibit at the LA County Museum of Art. He had this one piece called "dark spaces" that is kind of like the experiment where you go sit in a dark (i.e. lightproof) room for like 10 or 15 minutes and then you start to see this 'blob' in front of your eyes where he's leaked in a really small amount of light. Anyway if you like weird things with light he has a lot of them. Since he uses light as his medium you can't get too much of a sense of them from a photo, you kind of have to see them in person.

http://jamesturrell.com/work/type/dark-space/

http://www.lacma.org/art/exhibition/james-turrell-retrospect...


There is a book called "Vision Science" (by Stephen Palmer) that is full of interesting stuff about how the brain interprets visual information.

One thing I liked (there were lots): you probably know that your eyes move around with jerky motions called saccades. But if our eyes move in jerks, why don't we observe motion blur (like a camera would have) while our eyes are moving? It's not that our eyes aren't affected by motion blur: the answer is that our brains are wired so that when our eyes are moving, we don't really see anything at all. You just kind of discard the input you get while your eyes are jumping around.

https://en.wikipedia.org/wiki/Saccadic_masking


> there are far more than 256^3 viewable colors

This is trivially true: the gamut of sRGB does not cover all possible colours. See this diagram: https://en.wikipedia.org/wiki/SRGB#/media/File:Cie_Chart_wit.... As examples, you can't accurately represent true (spectral) violet or darker/deeper but fully saturated yellows in sRGB.

> Lastly: https://i.imgur.com/epXYhJd.png If you cover up the top half of the image, the bottom looks gray. But the strawberries look red.

To me, the strawberries look red, the plate is a desaturated cyan, and the meringue is a very desaturated yellow, whether I cover the top half of the image or not.


>Lastly: https://i.imgur.com/epXYhJd.png If you cover up the top half of the image, the bottom looks gray. But the strawberries look red.

For me covering the BOTTOM half makes the bottom look grey. Otherwise it looks slightly red.


So the brain has its own adversarial examples, it's not just GANs that can be fooled.


"Would there be any interest in a series I've been thinking of putting together about all the fantastically counterintuitive ways your eye fools you without you even noticing?"

Yes


> fantastically counterintuitive ways your eye fools you without you even noticing

It does not have to be "fooling" when in fact it's just part of the processing pipeline.

> The fact that colors influence colors around them means that there are far more than 256^3 viewable colors: the spatial axis is like a fourth channel.

No it's not: it may well be that the function applied to the perceived colours, while it takes neighbours as an input, still outputs the same 256^3 colours.


> No it's not: it may well be that the function applied to the perceived colours, while it takes neighbours as an input, still outputs the same 256^3 colours.

Kinda like those post-processing shader effects (like blur etc) that look at neighbouring pixels to output a 256^3 colour value for the center pixel. The pixel value never leaves the 256^3 range, but its value is adjusted based on what surrounds it.
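Roughly like this, sketched as a box blur (illustrative only): each output pixel is a function of its neighbours, yet every output value still lands in the same 0..255 range.

  import numpy as np

  # 3x3 box blur: output depends on neighbours but stays within 0..255.
  img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint16)

  blurred = np.zeros_like(img)
  for dy in (-1, 0, 1):
      for dx in (-1, 0, 1):
          blurred += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
  blurred //= 9

  print(img.min(), img.max(), blurred.min(), blurred.max())   # all within 0..255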


Let me up the ante: we can do more than just detect single photons.

A pioneering study at the interface of biology and physics found that isolated rod photoreceptors of frogs are sensitive to differences in a light source's statistical properties, e.g. coherent (laser) vs. thermal (sun, light bulb) vs. sub-Poissonian (resonance fluorescence of quantum dots) light statistics:

"Measurement of Photon Statistics with Live Photoreceptor Cells" (2012) [1]

"The results indicate differences in average responses of rod cells to coherent and pseudothermal light of the same intensity and also differences in signal-to-noise ratios and second-order intensity correlation functions"

It seems Lorentz's 1911 hypothesis was right: the boundaries of our perception are set by basic laws of physics, and we reach the limits of what is possible.

===

[1] [pdf] https://physics.aps.org/featured-article-pdf/10.1103/PhysRev...


The eyesight of the mantis shrimp is similarly crazy in its complexity, and it's in an invertebrate. Evolution seems to pick out some crazy stuff when it comes to animals near the water-air interface. For instance, they are able to detect circularly-polarized light across multiple octaves of light with a single receptor, covering all the Stokes vectors. I've done a fair bit of optics and I have NO idea how to begin constructing a quarter-wave plate that would cover multiple octaves. So crazy.

https://en.wikipedia.org/wiki/Mantis_shrimp#Eyes


I remember learning that from this video:

https://www.youtube.com/watch?v=F5FEj9U-CJM


I can't find what this Lorentz hypothesis of 1911 is, but surely you are claiming too much. Physical entities can detect X-rays, but humans haven't evolved that ability (except, recently, externally/collectively).

The idea that our perceptual apparatus is optimal, rather than a contingent product of a particular evolutionary path, is easy to refute -- but maybe that isn't what you mean by "reach[ing] the limits of what is possible."


OK, by way of acknowledgement of splittingTimes' downvoted clarification, it seems the phrase "the boundaries of our perception are set by basic laws of physics, and that we reach the limits of what is possible" comes verbatim from a text in which the context is clearly the detection of photons. So it wasn't intended to be a claim of optimality for the human perceptual apparatus in general.


We can reach maximum optimization of whatever physical property we use. For example, our brain and its super efficiency vs. a supercomputer.


But humans had millions of years to evolve.

The computer is only 70 years old. Give it 100 years and we will make it as efficient as a human brain, possibly even better.

Evolution is a slow process. Non-sentient trial and error. We'll beat that easily, given time.


> However, neural filters only allow a signal to pass to the brain to trigger a conscious response when at least about five to nine arrive within less than 100 ms. If we could consciously see single photons we would experience too much visual "noise" in very low light, so this filter is a necessary adaptation, not a weakness.

Wonder if this is related to what I experience. Visual snow - basically seeing static in your vision, especially at night. I sometimes don't notice it for weeks, but then I just pick up on it again and can't stop noticing. It's still hard for me to believe that it's not normal, given that it happens with any camera ever built, but apparently not many people experience it.


Visual Snow is a condition which very few people are aware of (including doctors). I still remember the day I started experiencing the snow. It was a sudden trigger. I couldn't stop noticing it for a year, mainly because I was scared that I was going to go blind. I went to at least 4 or 5 doctors and no one found any problem with my eyes, and I often got ridiculed by friends for making up something which I apparently didn't have according to the doctors. I came to know about visual snow a year later from the Internet, and I realised that many other people have this and they had a similar experience to mine.

https://www.youtube.com/watch?v=f34R3GC5I5k https://en.wikipedia.org/wiki/Visual_snow

If you are a billionaire you can support the fundraiser for finding a cure to Visual Snow, which has only reached 1/5th of its goal even after 40 months :(

https://www.gofundme.com/visual-snow


Two thoughts, in order:

1) This isn't normal? I get this plus an auditory analogue (imagine an impossibly high pitched sound like crickets chirping) which I've always figured was just what my 'noise floor' sounded like.

2) Why would you see this as something to cure? (Edit: Assuming it's something you only see at very low light levels - if you get it all the time even in bright light then that would be pretty bad.) It's just what you get when your visual system's auto-gain tries to amplify darkness. I'm not sure what else you'd expect.


I see it all the time! A couple of years ago I could look at the sky in the morning or evening and enjoy the clouds. Going to the beach and watching the sunset were my favourites. Now it's a pain to do both. I can see star-like particles flickering in the sky, and floaters. It's also uncomfortable to look at the monitor if the brightness is high. Because of all this, visual snow patients also suffer from depression. My vision used to be like watching a movie on a Full HD TV. Now it's like watching a movie on an old crappy TV with bad signal reception.


That's odd, in that I've probably noticed visual snow-like artifacts ever since I was a child, but it never bothered me even a tiny bit.

Couple of personal observations:

- Most of the time I only notice snow or artifacts if I'm consciously looking for them or don't have anything else conscious occupying the brain (e.g. staring at a wall out of boredom, or closing my eyes and still paying attention to visual input)

- Once you start looking for visual artifacts, you'll see them everywhere. Right now if I stare at my ceiling in dim lighting, I see little multicolored 'heat wave' patterns roiling about as my visual sensory system goes into overdrive to extract more signal than there actually is. I also notice a slight 'ringing' halo around bright objects. But as soon as I try to do anything at all, my brain apparently decides that other things are more important and actively filters all this stuff out

Don't discount the possibility that you're putting yourself in a vicious cycle here:

perception of something wrong -> heightened subconscious threat processing (your brain starts looking for a problem in your visual perception) -> more conscious awareness of visual artifacts -> perception of something wrong

The way to break that cycle is to just worry about more important matters, and it'll either go away by itself or you'll stop caring.


> "an auditory analogue (imagine an impossibly high pitched sound like crickets chirping)"

This is tinnitus, something I've suffered from since childhood and it has gotten worse recently after attending a concert and standing too close to the speakers. It does indeed feel like a "noise floor"; in a silent room it becomes overbearing.


Just don’t pay attention to it. Visual snow is like floaters; both are perfectly normal. Some people notice them and then become obsessed; others barely ever notice them.


If you have not experienced it first hand then believe me it's no obsession. It changes your life completely.

Please refer to my comment here. https://news.ycombinator.com/item?id=15681108


it's not a disease, it's a normal aspect of vision


Could you elaborate on this?


everyone sees visual noise, especially in low-light


When you have Visual Snow you see it all the time.


yes, all the time, but more in low-light, that's normal - vision is noisy


I have constant ringing in my ears, but I'm not sure if this is like the common tinnitus everyone is describing.

It's something that gets much louder if I clench my teeth or stretch my body. I also get the occasional ringing in the ear like everyone else does every now and then, lasting a few seconds. That one is much clearer, lower in pitch, comes from one of the ears and sounds more like a sine wave. What I hear feels like it comes from nowhere or from the middle of my head and sounds more like many sine waves phasing and changing at around 12khz. It sounds very "soft" or detailed and I've tried to reproduce it on the computer but it's very difficult as it easily sounds very harsh. (Another way for me to describe it, in terms of computer graphics, is as if it's super-super-sampled.)

I've read that everyone can also hear ringing when placed in a quiet enough room, but some people can hear it all the time. (like me but it's not really a problem)

So everyone has visual noise and tinnitus but to different degrees, and when it becomes a problem for the person it's classified as a syndrome?


I can hear this ringing all the time, even during a conversation or on the street. I tend to not be aware of it, but when I am, it is ringing loud and clear.

I too tried to describe it - like a waterfall, like the sound of a cathode ray TV set, like the sound of pressured gas escaping from a hole, or just pure sonic pressure. The location of the sound is not in either of the ears but inside the head. When I yawn or when I breathe out slowly through the mouth it becomes more intense.

I find it comforting and don't mind it. I even used it as support in meditation (it's called the "nada resonance" in yoga). It's just with me, and has always been. When I found out that not everyone hears it, I was quite surprised.


Interesting - are you stating that not many people experience the snow, or that not many people go for weeks without noticing it?

I’ve always experienced it, and remember playing as a kid squinting my eyes really hard to make random colors and patterns appear (the latter often looking like a grey 3D grid deforming over time)


I'm pretty sure this is normal, but just something most people aren't aware of and it's hard to make people aware of.

I think it's a pretty common thing with hallucinogens like LSD where after the effects wear off people notice the normal visual noise they didn't before and it can bother them.

I see it all the time and also notice it more at night too.


Looking at the gofundme page posted, this seems a bit more than just visual noise.

I was diagnosed with a mild version of schizophrenia and symptoms vary on a weekly or monthly basis. When it's at its peak, my night vision gets really noisy, when I close my eyes it's like I can still "see" based on the sound I hear and when I look at a bright light and look away it's like the motion repeats itself 0.5 seconds later.

And so when I'm more "normal" I have none of these issues. Sure I have some visual noise but it's not comparable to what the gofundme page describes.


It's likely that what you're noticing is an artifact of how visual perception works. Even in pitch dark, your brain is still trying to do pattern matching, and because that process of pattern matching results in what we 'see', you are unlikely to perceive perfect darkness.


I thought it was an artifact of how our vision system works, not input to that system.


Maybe you're seeing Prisoner's Cinema :

https://en.m.wikipedia.org/wiki/Prisoner%27s_cinema


Visual Snow is a completely different condition

https://en.wikipedia.org/wiki/Visual_snow


I see this. I always assumed everyone did...


Article is from 1996.

Another study was done last year and they also found humans can probably see single photons, though the study is limited as its sample size is n=3.

https://www.nature.com/news/people-can-sense-single-photons-...


That Nature article and the OP are in complete agreement. Neither says humans can visually experience single photons.


That's not the actual paper. This is: https://www.nature.com/articles/ncomms12172


A low subject count is a big problem in a lot of psych studies, but much less so in psychophysics. They typically do thousands of trials on each subject and the variance between subjects is usually extremely small.


> can probably see single photons

Photons don't exist in the first place. Science has moved on and modern models don't assume the existence of actual particles. It's only electromagnetic waves.


That's not quite right. They still come in discrete packets. It makes sense to call these photons, and I've been told it's better to just think of them using statistics rather than to use something like waves or particles.

Not that I understand quantum mechanics.


Ok, tell that to the Standard Model. [1]

Like it or not, in modern physics, "photon" refers to a particular type of discrete lump of momentum and energy. It is a thing you can definitely have one of.

[1] https://en.wikipedia.org/wiki/Standard_Model


Then it's not a particle with zero mass, as often defined.


How do the waves have inertia then? We know for a fact that photons exert force on the surface they strike.


"Science has moved", "Photons don't exist".

How do you live with that?


They ran each subject for 2400 trials. Was that decided in advance? This really needs a registered replication.


Under ideal conditions humans can see 9th-magnitude stars. We tried under a real sky, and got 8.1-magnitude stars at 80% probability.

Not sure how it translates to photon counts.
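A very rough back-of-the-envelope translation, with two assumed constants that are only order-of-magnitude (roughly 10^6 photons/s/cm^2 in the visual band for a magnitude-0 star, and a ~7 mm dark-adapted pupil):

  import math

  PHOTONS_MAG0 = 1.0e6        # photons / s / cm^2 for a mag-0 star (very approximate)
  PUPIL_DIAMETER_CM = 0.7     # fully dark-adapted pupil, ~7 mm

  def photons_per_second(magnitude):
      flux = PHOTONS_MAG0 * 10 ** (-0.4 * magnitude)        # 5 magnitudes = factor 100
      pupil_area = math.pi * (PUPIL_DIAMETER_CM / 2) ** 2   # ~0.38 cm^2
      return flux * pupil_area

  print(photons_per_second(8.1))   # on the order of a couple hundred photons per second
  print(photons_per_second(9.0))   # on the order of a hundred photons per second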



If you like these kinds of writeups, you might like the parent page, which has many such interesting articles from the Usenet Physics FAQ.

http://math.ucr.edu/home/baez/physics/index.html


John Baez is a brilliant guy.

And he's also the cousin of Joan Baez, which makes me appreciate both of them even more.


There is a well-known (I heard about it in the late 70's) problem with these types of experiments. The assumption is that if the subjects' "guesses" statistically match the assumed hit rate of photons then there must be a cause and effect. This experiment seems to make no attempt to eliminate the possibility that this is completely random.


  > This experiment seems to make no attempt to eliminate
  > the possibility that this is completely random.
Psychophysicists use two-alternative forced-choice experiments precisely to eliminate the possibility of the result being "completely random". Indeed, for this experimental design, given sufficient trials, it is quite easy to do hypothesis testing and compute confidence intervals for the hypothesis that the subjects can perform the task better than chance.

You are correct, of course, that this doesn't prove that they see the photons. Maybe the experimental apparatus was poorly made and accidentally made a noise when the photons were flashed. Or maybe someone cheated and told them on which trials the photons would be delivered. Or perhaps they simply have ESP. But the one thing that we can easily have confidence about is when an effect differs from random chance.
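For instance, here's a sketch of that hypothesis test using a normal approximation to the binomial (the 51.6% hit rate over 2400 trials is roughly what the 2016 single-photon paper reports, if I remember right):

  from math import erf, sqrt

  # Probability of doing at least this well by pure guessing in a
  # two-alternative forced-choice task (normal approximation to the binomial).
  def p_value_at_least(k_correct, n_trials, p_chance=0.5):
      mean = n_trials * p_chance
      sd = sqrt(n_trials * p_chance * (1 - p_chance))
      z = (k_correct - 0.5 - mean) / sd          # continuity correction
      return 0.5 * (1 - erf(z / sqrt(2)))

  n = 2400
  k = int(0.516 * n)
  print(p_value_at_least(k, n))   # ~0.06: better than chance, but marginal enough
                                  # that asking for a registered replication is fair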


This was the 1942 experiment cited in the OP:

"The subjects were asked to respond "yes" or "no" to say whether or not they thought they had seen a flash. The light was gradually reduced in intensity until the subjects could only guess the answer.

"They found that about 90 photons had to enter the eye for a 60% success rate in responding. Since only about 10% of photons arriving at the eye actually reach the retina, this means that about 9 photons were actually required at the receptors. Since the photons would have been spread over about 350 rods, the experimenters were able to conclude statistically that the rods must be responding to single photons, even if the subjects were not able to see such photons when they arrived too infrequently."

In other words, they lowered the intensity of the light until the subjects' guesses were no better than random, then concluded that 9 photons were spread over 350 rods.

The 1972 experiment assumed about 6 photons were entering the retina.


  > "until the subjects guesses were no better than random"
Crucially, no: They analyzed performance at 60% and made their argument based on the photons delivered at that light intensity. If your objection is about that statistical argument (" ... the photons would have been spread over about 350 rods"), such an indirect method was necessary because the experimenters didn't have an apparatus that could emit single photons in 1942. The 2016 article mentioned upthread [1] improves on this by providing the simple and direct experiment: emit single photons and check if the subjects can beat a coin-toss over large numbers of trials.

1. https://www.nature.com/articles/ncomms12172


Have you ever heard of the Birthday Paradox? With 23 people in a class, the odds of two having the same birthday are 1-(364/365)^(23x22/2) ≈ 50%. Well, the odds of two out of nine photons hitting the same rod out of 350 are 1-(349/350)^(9x8/2) = 9.79%, so there's your 10% improvement over a wild guess.

Factor in that two photons should have a much higher chance of hitting neighboring rods, which would substantially increase the chance of a hit (at the synaptic level), and the question really should be "Why is it so much worse than random?"
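Spelling out both calculations, with the exact product form added for comparison:

  # Pairwise approximation used in the comment above.
  def p_collision_approx(n_items, n_bins):
      pairs = n_items * (n_items - 1) // 2
      return 1 - ((n_bins - 1) / n_bins) ** pairs

  # Exact: 1 - probability that every item lands in a distinct bin.
  def p_collision_exact(n_items, n_bins):
      p_distinct = 1.0
      for i in range(n_items):
          p_distinct *= (n_bins - i) / n_bins
      return 1 - p_distinct

  print(p_collision_approx(23, 365), p_collision_exact(23, 365))   # ~0.50, ~0.51
  print(p_collision_approx(9, 350),  p_collision_exact(9, 350))    # ~0.098, ~0.098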


I agree the authors should have used the improved odds calculation you mention. However, doing so would only explain the missing 10% if two photons hitting a single rod led to 100% correct response rate, and it seems unlikely for the response to be so nonlinear. Anyway with such an experiment, we don't know why the subjects are above chance, and can only make inferences. As I point out above, you really need single photon emission to know for sure.

I went back to the original paper [1] and the authors' conclusions are much more measured than the précis we are all discussing. From page 838:

  ... the range of 54 to 148 quanta at the cornea
  becomes an upper limit of 5 to 14 quanta actually 
  absorbed by the retinal rods. [ed: I presume this 
  is where the figure of 9 comes from]
    3. This small number of quanta, in comparison to
  the number of rods (500) involved, precludes any
  significant two quantum absorptions per rod [ed: oops],
  and means that in order to produce a visual effect,
  one quantum must be absorbed by each of 5 to 14 rods
  in the retina.
    4. Because this number of individual events is so
  small, it may be derived from an independent statistical
  study of the relation between the intensity of a light
  flash and the frequency with which it is seen. Such
  experiments give values  of 5 to 8 for the number of
  critical events involved at the threshold of vision. 
1. http://www.cns.nyu.edu/~david/courses/perceptionGrad/Reading...


All of human reasoning and understanding comes from matching our observations statistically with potential causes.

You may heat up a tea kettle a thousand times with a flame and conclude that the flame is heating the tea kettle, but perhaps every time the flame was too small and too far and it was in fact another source of energy hitting it from the environment warming it.

Our understanding of the world comes from substantial enough correlation that it moves into the realm of cause-and-effect. Though admittedly we often assume this too quickly.


But you have to be able to have a control. I heard Barbara Sakitt speak around 1976-77. She was the author of the widely cited "Counting Every Quantum" (1972) and she basically said that her original methodology was flawed. Unfortunately, no one cites her later work and I don't have access to the full text of "Information received from unseen lights" (1976) which I think was the paper she was presenting.


I would never have guessed this result, because it seems like a vast overoptimization for light detection: in what circumstances would a human being need to see a single photon? And if those circumstances are rare, then what is the cost (in fitness, energy, etc) that we pay to maintain that ability?


Rods can respond to individual photons; that is pretty much accepted. The issue is what we can perceive. The first nerve synapse screens out about 50% of isolated photons (ones that are not correlated with signals from other rods), which has been estimated from modeling. That means some of those singular photons are getting through (perceived) and can be isolated from the background noise through careful experiments.

I don't think we evolved to see single photons, but that we evolved to be very sensitive to light while reducing the noise and that sweet spot means that we can quite possibly, under ideal conditions, see individual photons slightly more than we think we see non-existent photons.


Humans can see single gamma photons, even with closed eyes :-)



When we are able to guess, e.g. not having to be 100% accurate, all our sensors become superhuman.


An aside, but an interesting one: biophotons... were new to me. https://www.ncbi.nlm.nih.gov/pubmed/?term=biophoton


Very nice write up on the experiment




