Neywiny's comments | Hacker News

As a Gen Z who was a shut-in long before COVID, I disagree. I knew plenty of people who loved going out before, during, and after lockdown. I'd guess only a fraction of people who liked going out before found the joy of staying home. But likewise, I'm sure there were some who found it miserable at home and after lockdown vowed to be more outgoing.


So the problem with his solution is that he needed a solution to solve a problem?


It works, but it's not going to be very robust without a carrier.


Feels like an honor to be able to ask. I've appreciated your work for a few years now, especially on the Apollo gear restoration.

Anyway, a question on the ones you thought were wrong (I think "just" the trig functions): is there a running system you could use to confirm your findings? Especially considering your previous post on the Pentium division bug, maybe they got this wrong too?


A running system won't really help me since I'm sure it will give the right answer. I need to know what's happening internally, which remains hidden in a running processor.


If this is your website, a heads-up that it doesn't render well in my browser (Firefox on Android): I believe I have dark mode enabled, and the text is still black, but on a very dark background.


Apropos.


Looks like at least Chromium has Web Bluetooth. Could that work?


Maybe there's a misconception, but analog != IQ. Some have IQ interfaces, but the signal is already digitized and you need to put in the work.
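
To illustrate the "work" involved, here's a minimal numpy sketch (not tied to any particular radio; every parameter here is made up for the example) of going from real-valued analog samples to complex IQ baseband by mixing and filtering:

    import numpy as np

    # Made-up parameters, purely illustrative
    fs = 1_000_000          # sample rate: 1 MHz
    f_c = 100_000           # carrier: 100 kHz
    f_m = 1_000             # message tone: 1 kHz
    t = np.arange(0, 0.01, 1 / fs)

    # A real-valued "analog" capture, as an ADC would see it:
    # an amplitude-modulated carrier
    rf = (1 + 0.5 * np.cos(2 * np.pi * f_m * t)) * np.cos(2 * np.pi * f_c * t)

    # Digital downconversion: mix with a complex exponential at f_c
    iq = rf * np.exp(-2j * np.pi * f_c * t)

    # Crude low-pass (moving average) to reject the image at 2*f_c
    kernel = np.ones(64) / 64
    i = np.convolve(iq.real, kernel, mode="same")
    q = np.convolve(iq.imag, kernel, mode="same")

    # Amplitude and phase fall out of I/Q; the raw real samples
    # alone don't give you phase
    envelope = 2 * np.hypot(i, q)
    phase = np.unwrap(np.arctan2(q, i))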


Tbh, Sheets has gotten a lot better over the years. It's no longer hard to autofill and do other things that used to be impossible or too obscure to figure out. I'll stick with the free platform for things that are just for me.


NVMe runs over PCIe. I'd be more inclined to believe they're playing games with their voltage levels to lower power consumption on mobile/embedded (not based on anything, but I wouldn't be surprised). Or, if you're then going through an M.2 adapter, something to do with that.


Despite a number of what look like copy-paste articles, I see no actual examples of the pictures this lens takes. Maybe we'll see some at CES, but until then these feel like clickbait.


official link: https://www.nikon.com/company/news/2024/1219_01.html

still no photos, but they say more info will come during CES


I assume there are no photos because the actual images on the sensor will look like gibberish - it'll effectively be two different images overlaid, and look like a total mess.

However, feed that mess into AI, and it might be able to use it to see both wide and far.


If CAT scan imagery exists, I suppose this sort of image processing shouldn't be impossible to do, though I readily admit I have no idea what the logic behind it might look like without some sort of wavelength-based filtering that would make a photographer shudder.


Tomographic reconstruction is in principle pretty straightforward (Radon transform; see the sketch after this comment). MRI reconstruction is much (much) more involved, fwiw, though that's RF rather than optics.

I don't think wavelength filtering will help you here, as you don't control the input at all. Some sort of splitter in the optical chain might, but you'd be halving your imaging photons with all that entails. Or you can have e.g. a telephoto center and a wide fringe. It's an interesting idea.
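
For what it's worth, a minimal filtered-back-projection sketch of that "straightforward in principle" claim, using scikit-image's stock phantom (the library calls are standard; the scale and angle count are arbitrary):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import iradon, radon, rescale

    # Standard tomography test image
    image = rescale(shepp_logan_phantom(), 0.5)

    # Forward Radon transform: line-integral projections over many
    # angles, roughly what a CT scanner measures
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=theta)

    # Filtered back projection inverts it
    reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

    rms = np.sqrt(np.mean((reconstruction - image) ** 2))
    print(f"FBP RMS error: {rms:.3f}")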


The intended output is, in essence, a wide-angle photo with much greater detail in the center, as if that portion had been taken by a telephoto lens and composited on top the way smartphones do it, but with none of the parallax offset smartphones normally have to deal with.

The processed result would be quite uninteresting to look at: a wide-angle photo where the outer portion is much blurrier than the center (see the sketch below). Considering the current intended application, the picture would probably be pretty mediocre.

This is very specific technology that solves very specific problems (and might one day make its way to smartphones), but not something I'd expect to produce glamor shots right now.
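
A toy Pillow sketch of that kind of output (the file names are placeholders, not anything Nikon has published): blur a wide frame to stand in for the low-detail outer portion, then paste a sharp center crop on top, with no parallax offset to correct:

    from PIL import Image, ImageFilter

    wide = Image.open("wide.jpg")          # placeholder inputs
    tele = Image.open("tele_center.jpg")   # sharp crop covering the center

    # Blur the wide frame to mimic the blurrier outer portion
    result = wide.filter(ImageFilter.GaussianBlur(radius=4))

    # Paste the sharp center back in, dead center: no offset to correct
    w, h = result.size
    cw, ch = tele.size
    result.paste(tele, ((w - cw) // 2, (h - ch) // 2))
    result.save("composite.jpg")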


As a robotics engineer interested in computer vision and optics, I’d actually like to see the unprocessed image. I can imagine what it would look like, but would appreciate simply seeing it.

As a photographer who worked at a newspaper in college for a few years, I just think including an image makes sense in an article about a novel lens.


This is Nikon (and Mitsubishi). It will really work, but they hold their cards close to their chest, and embargo sample images (they don't like low-quality images getting out). They probably plan something splashy for CES.

CES should be interesting this year.


You are right.


I mean, a lens is only a third of the camera, the other two parts being the sensor and the ISP. A lens doesn't produce a photo by itself.

This would be like someone announcing a new RAM innovation, and people asking what its Cinebench or Geekbench score is.


A lens is a BIG part of the final image you get. So much so that the common advice in most photography forums is that, within a given price bracket, you should buy the best lens you can find and an okay camera. Camera tech, especially in large dedicated full-frame and APS-C units, has plateaued since 2018, and most cameras from that period take exceptionally good pictures, even by today's standards. Thus, lens availability, price, and quality, as well as AF tracking, are what fundamentally differentiate modern cameras.

EDIT: I got pulled into the discussion without reading the article. The lens is for industrial uses.


You're missing that this is not designed as a tool for photographers; rather, it comes out of a collaboration with Mitsubishi aimed at better situational awareness for vehicle operators. The headline doesn't mention this, but it's impossible to miss in the article.


In the context of the GP, I think the point still stands though which is roughly: “the lens matters a lot”.

Without knowing more about the optics, it’s hard to know how much of a role the sensor/ISP play in the innovation, but those are well established and widely capable across both photographic and industrial use cases.

Very curious to learn more about this and whether it might eventually find its way into traditional cameras.


Sure, I guess. But the whole discussion is so devoid of subject-matter knowledge that it's like trying to argue the pros and cons of different bowling balls in terms of how well they pair with Brie.

Nikon is an optics company that's also made cameras for a long time, and then very nearly didn't; before the Z mirrorless line took off, the company's future as a camera manufacturer was seriously in doubt. But even a Nikon that had stopped making cameras entirely after the D780 would still be an optics company. There is no serious reason to assume the necessity of some sensor/ISP "special sauce" behind the novel optics announced here to make the system work. And considering where Nikon's sensors actually come from, if there were more than novel optics involved here, I'd expect to see Sony also mentioned in the partnership.

Of course that's not to say photographic art can't be made with commercial or industrial equipment; film hipsters notwithstanding, pictorialism in the digital era has never been more lively. But I would expect this to fall much in that same genre of "check out this wild shit I did with a junkyard/eBay/security system installer buddy find", rather than anything you'd expect to see on the other end of a lens barrel from a Z-mount flange.


I couldn't tell from the article: is it for human eyeballs or for computers?

If it's for eyeballs it would be nifty to know what kind of image displays both kinds of information at once.

If it's for computers, what is the advantage over two cameras right next to each other? Less hardware? More accurate image recognition? Something else?


These are questions for their CES presentation next week, not for me.


I would expect a big photo at okay resolution containing an area of much higher resolution (the telephoto part). That special area can be cropped later to obtain a much bigger photo with all the detail a tele lens would bring.


I recommend reading the article if you haven’t already as it mentions this is for vehicles, there isn’t a mention of photographers.


This is not a lens for photographers, it's an industrial piece of technology...


Technically speaking you do not need a lens to capture an image (see pinhole cameras) BUT for most applications a lens is a necessity. It is the first part of the image capture pipeline and has a huge influence over the final image.


Aren't there some cameras that have swappable lenses? Does Nikon know anything about those?


Yes. And yes. What’s your point? The use case mentioned is AI/driver assist for vehicles.


Agreed. I'm in the market for an IMU and thought "ah, the MPU-6050, I've heard about this one a lot, even recently" - and it's obsolete. This is typical at the consumer/Adafruit/SparkFun/AliExpress level, where they have countless old stock of cheap proto boards to sell, but if you're designing a whole new thing from scratch, that's inexcusable.

