So, as I understand it, this technique rests on a number of assumptions (a sketch of the core loop follows the list):

- You know the exact parameters used to render the text

- You can render new text with the exact same parameters

- The pixelated image hasn’t been ruined by color quantization or other destructive compression
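A minimal sketch of that core loop, assuming Pillow >= 9.1 and assuming the pixelation was a simple block average (both are guesses on my part; every name and parameter below is illustrative):

    from PIL import Image, ImageDraw, ImageFont

    def render_and_pixelate(text, font_path, font_size, size, block=8):
        # Render the candidate text with the assumed parameters.
        img = Image.new("L", size, color=255)
        draw = ImageDraw.Draw(img)
        font = ImageFont.truetype(font_path, font_size)
        draw.text((0, 0), text, font=font, fill=0)
        # Pixelate the same way the target presumably was: block-average
        # down, then scale back up with nearest-neighbour.
        small = img.resize((size[0] // block, size[1] // block),
                           Image.Resampling.BOX)
        return small.resize(size, Image.Resampling.NEAREST)

    def block_distance(a, b):
        # Mean absolute pixel difference between two same-sized
        # grayscale images; lower means a closer match.
        pa, pb = a.tobytes(), b.tobytes()
        return sum(abs(x - y) for x, y in zip(pa, pb)) / len(pa)

You'd render each candidate string, pixelate it identically, and keep whichever candidate minimizes the distance to the target image.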



I could see someone assembling a corpus of sample text images covering the fonts commonly available on major operating systems, and a version of this tool that brute-forces all of them to find the best match.
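A hypothetical brute-force pass over such a corpus might look like the following, reusing the helpers sketched above; the font list here is invented for illustration, not a real corpus:

    # Invented (font file, point size) candidates -- a real corpus would
    # enumerate the stock fonts shipped with each major OS.
    CANDIDATE_FONTS = [
        ("DejaVuSans.ttf", 14),
        ("Arial.ttf", 14),
        ("Consolas.ttf", 13),
    ]

    def best_font_match(target, text, block=8):
        # Render `text` under every candidate font and keep the closest.
        scored = []
        for path, size in CANDIDATE_FONTS:
            try:
                candidate = render_and_pixelate(text, path, size,
                                                target.size, block)
            except OSError:  # font file not present on this machine
                continue
            scored.append((block_distance(target, candidate), path, size))
        return min(scored) if scored else None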

It would also be interesting to see how well it works when there are differences in font rendering or compression, as you say. I wonder if it might still be close enough to make a partial match in some cases.
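One guess at how partial matching could work (not anything the tool actually does, as far as I know): when rendering or compression differs, a whole-image score may fail even though parts of the image line up, so you could score each pixelation block separately and surface the regions that do match:

    def per_block_scores(target, candidate, block=8):
        # Distance per pixelation block; low-scoring blocks are candidate
        # partial matches even when the overall image disagrees.
        w, h = target.size
        scores = []
        for y in range(0, h, block):
            for x in range(0, w, block):
                box = (x, y, min(x + block, w), min(y + block, h))
                scores.append((box, block_distance(target.crop(box),
                                                   candidate.crop(box))))
        return sorted(scores, key=lambda s: s[1])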



