Hacker News

Interesting how little, judging from my recent reading of machine vision review literature, the industry's standard approach to process-oriented vision tasks has changed in 35 years. Already we see high-contrast binarization of the image (reduction to black and white), a feedback loop, ring lighting, and task-specific object classification. Of course there are newer techniques, but many linear processes incorporating machine vision still take the same general approach.
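The binarization step mentioned above can be sketched in a few lines. This is a hedged, minimal illustration in pure Python (a real pipeline would use a library call such as OpenCV's `cv2.threshold`); the image data and threshold value are invented for the example.

```python
# Minimal sketch of the classic binarization (thresholding) step:
# every 8-bit grayscale pixel at or above the threshold becomes white (255),
# everything below becomes black (0). The image here is a hypothetical
# list-of-rows stand-in for a real image array.
def binarize(image, threshold=128):
    """Reduce a grayscale image to a black-and-white (binary) image."""
    return [[255 if px >= threshold else 0 for px in row] for row in image]

gray = [
    [ 10, 200,  40],
    [250,  90, 130],
]
print(binarize(gray))  # [[0, 255, 0], [255, 0, 255]]
```

In practice the threshold is often chosen adaptively (e.g. Otsu's method) rather than fixed, but the output is the same kind of binary mask that downstream classification stages consume.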



I was thinking that as well; I had a project that needed computer vision, and I decided to hack together a prototype to hand to the real experts. I used OpenCV, but with techniques I had learned in the early 90s at university, on Sun SPARCstations in C. When I was done I handed it over for the 'real' implementation, and that team told me it was done just as they would have done it. Of course they knew many tricks to make it more optimal, but like you say, the basics have not changed much. Do the modern neural nets need this level of preprocessing too?


I used OpenCV for the first time around 2 years ago, and I asked one of my professors if he had a good CV text I could use to quickly learn the basic techniques. The book was from the late 90s, but the algorithms it covered were basically the same as those in the OpenCV API. It was quite surprising to me that not much had been added to the state-of-the-art since the publication of the textbook.

For those curious, I was writing some code for 2D object distance estimation for use in an undergrad robotics competition. My team ended up losing, but I think we could have done better had I not started reading up on CV and writing code only a week before the deadline. I'm surprised we even got past the first round!





