It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons in a given time is Poisson distributed, giving you noise according to the Poisson distribution.
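To put rough numbers on the shot-noise point, here's a minimal sketch (the photon counts are made up; the only point is the sqrt(N) scaling):

    import numpy as np

    # Shot-noise sketch: photon arrivals are Poisson distributed, so the standard
    # deviation of the count is sqrt(mean), and SNR grows only as sqrt(N).
    rng = np.random.default_rng(0)
    for mean_photons in (10, 100, 10_000):
        counts = rng.poisson(mean_photons, size=100_000)  # many identical pixels
        snr = counts.mean() / counts.std()
        print(f"mean={mean_photons:6d}  SNR~{snr:7.1f}  sqrt(mean)={mean_photons**0.5:7.1f}")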
At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.
Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.
Given that this is a military device, I'd assume the sensor is chilled.
Hang out with astronomers, they do this stuff all the time: Poisson noise, dark current, readout noise, readout time, taking thousands of shots and stacking them while accounting for all of those factors.
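For anyone curious, the stacking step is roughly this kind of averaging; a toy sketch with made-up scene and noise values:

    import numpy as np

    # Toy stacking sketch: averaging K noisy exposures of a static scene cuts the
    # combined shot + read noise by roughly sqrt(K). Numbers below are assumed.
    rng = np.random.default_rng(1)
    scene = np.full((64, 64), 5.0)   # assumed mean photons per pixel per frame
    read_noise = 2.0                 # assumed read noise, electrons RMS

    def expose():
        return rng.poisson(scene) + rng.normal(0.0, read_noise, scene.shape)

    single = expose()
    stacked = np.mean([expose() for _ in range(1000)], axis=0)
    print("single frame pixel std:", single.std())
    print("1000-frame stack std  :", stacked.std())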
DSLRs have come with dark current removal for a few years now. Another fun fact: the sensor used in Canon's G15/G16 has a quantum efficiency of over 60%, meaning that more than 60% of the photons reaching the sensor generate a signal. Most cameras have sensors with QEs between 15 and 25% (but much larger sensors, so in the end a better SNR than the G16).
I should hope that they subtract off the dark current; astronomers have been doing that since the first time they used a CCD. Dark current is a big limitation when trying to see dim things.
That's a much higher QE than the good old days, wow.
I knew it was over 60%, but I misremembered the G15 and G16 as having the same sensor. Welp: 77% QE for the G16 and 67% for the G15. There are others with higher QE, though in general those are used in smaller sensors.
DSLRs, while having lower QE numbers, have much larger sensors, and hence better image quality.
According to Wikipedia "in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon." https://en.wikipedia.org/wiki/Quantum_efficiency
In some sense, this is useless. A sensor that counts 50% of incoming photons twice each has more shot noise than a sensor that counts each photon once.
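A quick way to see this with made-up numbers: doubling each detected photon doubles the signal and the noise together, so the SNR is set by the number of independent detections, not by the reported count.

    import numpy as np

    # Sensor A counts every incident photon once; sensor B detects only half of
    # them but reports each detection twice. Same mean signal, worse SNR for B.
    rng = np.random.default_rng(2)
    incident = 100  # assumed mean photons per pixel

    a = rng.poisson(incident, 1_000_000)            # all photons, counted once
    b = 2 * rng.poisson(incident // 2, 1_000_000)   # half the photons, counted twice

    print("A: SNR =", a.mean() / a.std())   # ~ sqrt(100) = 10
    print("B: SNR =", b.mean() / b.std())   # ~ sqrt(50) ~= 7, despite the same mean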
I would take this list with a grain of salt.
There are several entries with a QE in the nineties (e.g. the Leica C with 95%), which seems implausible to me. What's more, further down the list are some entries way over 100% (e.g. the Nikon D2X with 476%).
Can't edit now, but dark current removal isn't as widespread as you think it should be. Most cameras don't do it at all; instead, when the exposure time goes over a threshold (say, longer than 1 s), the camera takes a second photograph with the shutter closed and then subtracts that dark frame from the original picture, which is still in memory. This is quite useful actually, but not the same quality as dark current removal/suppression.
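In case it helps, that shutter-closed trick is just a per-pixel subtraction, something like the sketch below (the function name and dtypes are my own illustration, not any particular camera's firmware):

    import numpy as np

    # Long-exposure dark-frame subtraction as described above: take a second
    # exposure with the shutter closed and subtract it pixel by pixel.
    def subtract_dark(light_frame: np.ndarray, dark_frame: np.ndarray) -> np.ndarray:
        corrected = light_frame.astype(np.int32) - dark_frame.astype(np.int32)
        return np.clip(corrected, 0, None)  # clamp at zero where the dark frame overshoots

    # Note: this removes the fixed pattern (hot pixels, amp glow) but adds the dark
    # frame's own shot noise, so it's not the same as suppressing dark current
    # at the source (e.g. by cooling the sensor).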
As for those mentioning the over-100% QE numbers: I didn't gather those; all I can say is that a few years back that list was accepted as generally reasonable measurements.
Besides that, I've read of some imaging techniques which can form images with a really low number of photons (for example [0]), but those generally require a special setup (infrared light, lasers, that sort of thing). So yeah, those QEs do seem fishy, but it could also be the result of a photon affecting more than one photosite (or maybe the marketing team hearing about QE and finding a way to raise that number through "smart counting").
One of the professors in my former department was an astronomer from the US Naval Observatory. He's applying techniques he developed (along with other techniques astronomers use) to get better resolution MRI images.
Theoretical limits are probably very high. For one, collecting area can be pretty large. Obviously you don't want a square meter lens, but a foot-wide one would probably be acceptable for a specialized camera.
Second, nocturnal animals like howls see very well in the dark. Granted, they don't see colors but they still show that in theory seeing in the night is physically impossible.
PS: regarding the collecting area, something I've been wondering about for a while: isn't the number of photons a lens can capture proportional to the square of the surface, and not the surface itself? I know that sounds counter-intuitive, but with interference, quantum mechanics and all that, I checked the math once and I could not see where I was wrong.
> Second, nocturnal animals like howls see very well in the dark.
True, but that's because as far as we know they evolved on a very low light world orbiting a red dwarf star. They're not actually nocturnal on their homeworld.
They also supplement their incredible night vision with sonar that's in the audible range for humans, hence their name.
I don't know what you mean by "surface", but the number of photons collected depends on the area of the lens. No quantum mechanics or interference involved.
I did mean area. And I thought the number of photons was proportional to the square of the area because the probability amplitude is proportional to the area. Therefore the probability should be proportional to the square of the area, shouldn't it?
It also seemed to make sense to me otherwise we would not need to build large telescopes, we could just build lots of small ones and fuse the images.
No, the probability is proportional to the area, not the amplitude. If a sensor of size A sees N photons, a sensor of the same size next to it sees another N. Fusing the two sensors together gives 2N, not 4N. Otherwise, you violate energy/momentum conservation.
We do build small(er) ones and fuse the images (using interferometry), that's what many large telescopes are these days. In the radio, we've been mostly doing it that way since 1978 (VLA).
... we use interferometers because we want the added resolution, not just the additional photons. If you don't want extra resolution, you can just add up the images from all of the smaller telescopes.
Light is a wave and a particle, and if you are getting wildly different answers from thinking about it as a wave and as a particle, and you're looking at a macro and not micro scale, then you're doing it wrong. That's why I answered your wave question with a photon count answer.
> That's why I answered your wave question with a photon count answer.
But how do you count photons without using probability amplitudes? If you count them by using a classical reasoning of photons being small particles falling from the sky, I'd say you're doing it wrong, because photons are not classical particles.
CCDs count photons in a particular fashion, and it happens to involve individual photons doing things. You might think of it as photons knocking electrons off of atoms, but it's actually semiconductors with a narrow bandgap, so CCDs work at much lower energies than needed for ionizing radiation.
If the probability amplitude is proportional to the area, it doesn't matter if that term already encompasses a length² term. QM tells you then that the probability should be proportional to the square of that amplitude, even if that means it would encompass a length^4 term.
Not sure why I cannot reply to your other comment.
Anyway, astronomy has nothing to do with the scarcity of photons.
If you take a picture of M31 you have a relative abundance of photons, most of them coming from extended structure rather than point sources.
If you look at any planet, globular cluster, or nebula it is the same.
Only when you are observing single stars do you have that problem. And single-star observations are for sure not the whole of astronomy.
HN requires a certain delay before responding, to encourage people to think first.
Many distant things, like galaxies, are very dim. M31 is almost visible to the naked eye, which is not dim to an astronomer. You might want to study astronomy before having strong opinions about it.
Seriously are you asking me to "study" astronomy?
I was doing astronomical research (planetoids, variable stars, supernovae) 20-something years ago.
And M31 is not "almost" visible to the naked eye. It is perfectly visible unless you live under a light-polluted sky.
The objects I captured were, I think, just below 17th magnitude.
As I said before, extremely dim objects cannot be seen simply by increasing the aperture, because that helps only for point sources.
The Hubble Space Telescope has captured some of the dimmest objects in the universe, with a "very small" mirror compared to some much bigger ones on this planet.
I'm aware HST has a relatively small diameter (2.4m), but how much of its resolving power (for lack of a better phrase) can be attributed to it being above the atmosphere? I've also heard that modern adaptive optics systems do a good job of mitigating atmospheric effects (turbulence?) for ground based telescopes -- I wonder if a ground-based "Hubble deep field" image could be generated?
Most of HST's resolving power is due to it being outside the atmosphere.
Adaptive optics have been improving over the last 20 years, if I recall correctly, but even with the advances in lasers for generating better artificial guide stars and in segmented mirrors for better real-time atmospheric correction, they still can't do anything about the loss of transmission, especially at wavelengths outside the visible-light window.
The fact that a "small" diameter in nearly ideal conditions can give better results than much bigger apertures on Earth is still a testament that aperture alone can't do miracles, especially when there is a lot of noise.
And don't forget that HST optics are flawed.
Before the servicing mission that added the correction we were relying on software adjustments.
And even with that huge handicap the results were nothing short of amazing.
That is valid only for point sources.
Stars are not points when seen through a telescope, because of Airy disks, atmospheric scattering, and imperfections in the optics.
If you have perfect optics, you are outside the atmosphere, and the only limit is the Airy disk, then yes, for point sources you don't have that problem.
Now, are you saying that a valley is a point source?
If you look back, I made a statement about the number of photons collected. I totally understand that camera people, who have a shitload of photons available most of the time, don't think about them like astronomers do. But if it's very dark, it becomes astronomy.
Erratum: instead of "they still show that in theory seeing in the night is physically impossible.", read "they still show that in theory seeing in the night is NOT physically impossible."
The resulting brightness has nothing to do with the lens size.
The important factors are the focal ratio and the sensor size.
Obviously the sensor should be the appropriate size to receive all the focused light, otherwise it's just wasted.
C'mon HN, this guy is right: you can't apply Gauss's law with a sphere when you have photons coming in from every angle.
However I'm not sure whether the total number of photons collected is the right thing to be measuring. Aren't there serious aberrations at high magnifications in practice?
And aren't there, y'know, glowing objects in the night sky? Stars, I believe? The moon? I don't know how many photons there are, but a single green photon carries a minuscule 376 zeptojoules, and I don't think my eye responds to anything below the picojoule range. Counting photons and looking for the information limit seems a little extreme in this, er, light.
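For what it's worth, that 376 zJ figure checks out from E = hc/λ; here's a quick sanity check (the exact wavelength is my pick to match the quoted number):

    # Sanity check on the "376 zeptojoule" figure: E = h * c / wavelength.
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    wavelength = 528e-9  # green light, m (value chosen to match the quoted figure)

    energy = h * c / wavelength
    print(energy)                # ~3.76e-19 J
    print(energy / 1e-21, "zJ")  # ~376 zJ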
I wonder whether you could use this technology for medical imaging if you shield the camera from any light that isn't being transmitted through the body. The possibility of recovering color information is exciting.
Surely 'nothing' is an exaggeration. In the limit of the lens size going to zero, the brightness also goes to zero. (Ah, you're also scaling other things in your mind I guess, so then this is not true.)
If you keep the setup such that all the light that comes through the lens will get to the sensor, and keep the sensor the same size, a larger lens would mean a higher brightness, right? Maybe in practice you're limited by other factors, but 'nothing' can't be right. Especially when we're talking about theoretical limits.
Edit: I think what you're saying is that the lens doesn't matter for the brightness if you scale the focal length and sensor size in such a way that the brightness stays the same. Which is obviously true, but if you leave out the last part, it isn't.
A lens with a smaller focal length will produce a projected image smaller than the same lens with a longer focal length.
A lens with a focal length of 1 m and an aperture of 50 cm will produce a projected image 4 times bigger in area than a lens with a focal length of 50 cm and an aperture of 25 cm.
So even though the amount of light captured in the first case is 4 times greater, the effective brightness is exactly the same, because it is spread over an area 4 times bigger than with the second lens.
For this reason the only way to increase the brightness is to reduce the focal ratio, simply increasing the aperture maintaining the same focal ratio won't help.
Obviously if you have an aperture of size 0 then all the light is wasted because you can't have a sensor with size zero.
You can see this empirically in the real world: the abysmally small lenses in smartphones have the same brightness as the much bigger lenses in SLR cameras or refractor telescopes.
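To turn the 1 m / 50 cm example above into a tiny calculation (the function name and numbers are just for illustration):

    import math

    # Image-plane brightness scales as (light gathered) / (image area); the image
    # scale grows linearly with focal length, so only the focal ratio matters.
    def relative_brightness(aperture_m: float, focal_length_m: float) -> float:
        light_gathered = math.pi * (aperture_m / 2) ** 2   # ~ aperture area
        image_area = focal_length_m ** 2                   # ~ focal length squared
        return light_gathered / image_area

    print(relative_brightness(0.50, 1.00))  # f/2, 1 m focal length
    print(relative_brightness(0.25, 0.50))  # f/2, 0.5 m focal length -> same number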
An important difference between animal (and our) eyes and camera sensors is that the biological systems encode intensity logarithmically in their signaling, not linearly.
Two pulses encode a signal that is a constant factor brighter than a one-pulse signal, not one fixed increment higher.
You see this limitation in the video as well. In that CCD video, the one with some vision and color but huge amounts of noise, I feel you can see that it's clamping to some limited dynamic range. Increasing its range would improve things a lot.
I can see in a dark lab with my EMCCD camera. It doesn't look as good as in this video, but they got maybe a 3x improvement compared to what I can do at present, with minimal effort.
On the manufacturer's website there is a comparison between their X27 and the competition. EMCCD takes a fair second place in that contest. It has more noise than the X27, but what you can get is still remarkable.
It's not enough to do crazy quantum stuff in the camera, the light source would have to be similarly manipulated in a quantum way as well.
My understanding is that the human eye has a surprisingly high quantum efficiency (about 12 to 30 % of photons are detected), and that the reason night looks dark is because there really aren't that many visible spectrum photons around.
My guess is that this camera is physically enormous.
Edit: Apparently, it's really small! I am dumbfounded. Then again, I guess it could have a hundred times the area of a human pupil and still be pretty small.
Humans can actually see well in the dark once adjusted, but generally without colour (due to using rods rather than cones in the eye). The colour is probably what makes this video appear incredible to us, as it's not how we are used to perceiving darkness (even with, e.g., night vision).
As to the technical question: according to this, you need about 90 photons for a 60% chance of a human response.
> It's not enough to do crazy quantum stuff in the camera, the light source would have to be similarly manipulated in a quantum way as well.
Not entirely true. Assuming an incoherent source, conventional optics, and a well-defined frame (i.e. you measure for a time t and the source is unchanging for the whole time), you have a bunch of modes of incoming light that would hit each pixel. Measuring these modes in the photon number basis isn't optimal. The ideal measurement is probably something quite different. It may also depend on your exact assumptions about the source.
Ultimately you want to count photons to produce an image. And if it's an incoherent source, I don't think there's anything else you can do with it.
With coherent light it can be useful to have the light interact with matter in a way that depends on the phase of the light. But if the light is incoherent then that won't yield anything useful. And this still doesn't beat the shot noise limit - your phase measurements still come from counting photons subject to shot noise.
Then there's polarisation. Two polarisation states. But really that's just like saying there are two kinds of photons. Both polarisation states are subject to the number-phase uncertainty principle.
I think that's it. There's the number and phase, which are conjugate variables. You can make number more accurate by squeezing phase, but only if you control the light source. And then there's the two polarisation states.
And the EM field simply doesn't have any more degrees of freedom than that, so I think that's it unfortunately.
It's number-phase squeezing or nothing if you want to count photons more accurately than shot noise.
You are missing something though: using entangled detectors you can surpass the diffraction limit. Googling "quantum internet and diffraction limit" would bring up some more information. Either way, this applies to resolution, not to sensitivity (which is what you seem to be covering).