
First off, this is really impressive. After Opera and Microsoft dropped their engines and adopted Blink, and Mozilla gave up on Servo, I've become increasingly worried about the future of the open web. Kudos for trying to take matters into your own hands, and for getting this far with your project.

Now for the nitpicking. From the FAQ:

> For example the HTTP code has no implementation of features that can be used for tracking (such as ETags).

True, ETags can be used for precise client tracking (just like a cookie with a unique client ID); but they are also useful for caching resources client-side, thus reducing data usage, server load, client processing, etc.

Since the browser/backend is already using a whitelist approach, I would like to suggest optional support for ETags for websites that the user decides to trust.
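To make the tracking concern concrete, here's a minimal sketch (names are illustrative, not FixBrowser/FixProxy code) of how a server can repurpose an ETag as a persistent client ID: the server mints a unique ETag on first contact, and on revalidation the browser echoes it back via If-None-Match, so it behaves exactly like a cookie.

```python
import uuid

def handle_conditional_get(if_none_match):
    """Return (etag, status_code) for a GET on a 'cacheable' resource.

    Hypothetical tracking server logic: the ETag doubles as a client ID.
    """
    if if_none_match:
        # Returning visitor: the echoed ETag is a stable identifier the
        # server can log and correlate. Responding 304 keeps it cached
        # by the browser, so the same ID comes back on the next visit.
        return if_none_match, 304
    # First visit: hand out a fresh, unique "validator".
    return '"%s"' % uuid.uuid4().hex, 200
```

A middle ground, as suggested above, would be honoring ETags only for whitelisted sites, or clearing them together with cookies.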

Also, unless FixBrowser/FixProxy becomes relevant enough to show up on the pie chart alongside Chrome, Firefox, and Safari, individual users can be easily fingerprinted based on e.g. IP ranges and the mere fact that the client behaves differently. This is an uphill battle, but I'm glad that efforts like this even exist.



> and Mozilla gave up on Servo

They didn't "give up" on Servo. Servo was always intended as a test bed for Firefox engine technologies. There was just some weird false hope in the OSS community that it would become some new "super browser".

They integrated what they were looking for into Quantum, and then the community forked the browser portion and continued development (albeit without Mozilla sponsorship).


They fired the developers and later gave the project to the Linux foundation. Maybe "give up" is not a perfect fit. But they "abandoned" it for sure.


How is what you said different from what I said? Other than your unrealized personal expectations and emotionally-loaded pedantry.


I'm not disagreeing with you; I just wanted to add another point:

Tracking is built into HTTP/browsers, even without JS. It just happens server-side rather than on the client, which makes it harder for third parties to aggregate the information across domains (e.g. Google, Meta, etc.).

For example, loading images/tracking pixels on hover (via CSS) to track mouse movements.
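As a concrete sketch of that technique (the URL is illustrative, not a real tracker), a stylesheet alone is enough to report hovers:

```css
/* No-JS interaction tracking: hovering the element makes the browser
   fetch the background image, pinging the server with an "event". */
.buy-button:hover {
  background-image: url("https://tracker.example/pixel?event=hover-buy");
}
```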

The only way to make tracking impossible is to allow only pure HTML with a tiny subset of CSS for styling, like a Markdown browser.

While that's a valid technology choice, I'm not sure how many people would ever use such a thing; it'd be incompatible with the vast majority of websites.

The only ones that come to my mind would work even better as RSS/Atom feeds, so, from my perspective, it'd have more potential if it just created these feeds by parsing the websites. But you'd still have to pre-render them with a browser on a server somewhere for the sites that actually require JS / SPAs without hydration.

And once you're there, you're already in a market with multiple options.




