Even lower-hanging fruit than optimising compression is choosing the right image format in the first place. Put simply (very simply, I might add): if it's an illustration, use PNG; if it's a photo, use JPEG. Once we're over that hurdle, we can go on to managing compression techniques and attempting advanced formats such as SVG and WebP.
All too often I’ve seen people encoding 8MP photos as PNGs and wondering why they’re all 50MB in size.
Next on the hit list is outputting images at the right size. If the image is never going to be shown wider than 500px, save a copy that's 500px wide. WordPress has this feature built in, so learn to use it if that's your flavour of CMS. Your bounce rate will go down, I guarantee it.
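For anyone doing this outside WordPress, here is a minimal sketch of that "save a right-sized copy" step, assuming Python with Pillow; the filenames and the 500px cap are purely illustrative.

```python
# Minimal sketch: save a width-limited copy of an image with Pillow.
# Assumes `pip install Pillow`; paths and the 500px cap are illustrative.
from PIL import Image

MAX_WIDTH = 500

def save_resized_copy(src_path: str, dst_path: str, max_width: int = MAX_WIDTH) -> None:
    im = Image.open(src_path)
    if im.width > max_width:
        new_height = round(im.height * max_width / im.width)
        im = im.resize((max_width, new_height), Image.LANCZOS)
    im.save(dst_path, quality=85, optimize=True)

save_resized_copy("hero-original.jpg", "hero-500w.jpg")
```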
This is not as surprising as it sounds to a web developer. For high-quality photography purposes JPEG is bordering on useless, even more so if you aren't using MozJPEG or Guetzli to get images with reduced artifacts. Even for the most basic uses, like sharing a photo with friends and family, this can be a problem. For example, I sent out an album of travel photos and a family member bought me a poster-size print of one of them for Christmas ... using the 3200x2400 JPEG I had posted to Google Photos. So while JPEG makes sense if you're a web designer using stuff like big hero or background images, there are a lot of ways in which it simply doesn't cut it for most of us.
Fortunately, and _I can't emphasize this enough_, we're now in a situation where the major browsers support WebP or can have fast decoding support easily added. WebP is based on the I-frame encoder of VP8, and for lossy encoding it represents a several-generations improvement over JPEG. And it's here today. Sure, we'll all be using AVIF in a HEIF container one of these days, but that isn't relevant to people building systems today.
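As a rough illustration of how little it takes to produce WebP alongside existing JPEGs, here is a hedged sketch using Pillow's WebP support; the quality and method values are arbitrary, and cwebp from libwebp would do the same job.

```python
# Sketch: re-encode a JPEG source as lossy WebP with Pillow.
# Assumes Pillow was built with WebP support; quality/method values are illustrative.
from PIL import Image

def to_webp(src_path: str, dst_path: str, quality: int = 80) -> None:
    im = Image.open(src_path).convert("RGB")
    # method=6 is the slowest/highest-effort libwebp setting exposed by Pillow.
    im.save(dst_path, "WEBP", quality=quality, method=6)

to_webp("photo.jpg", "photo.webp")
```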
"For high quality photography, I (and butteraugli) believe that JPEG is actually better than WebP."
That's interesting. Of course, the subjective part of that is one person's take, and the "objective" part of it is pointless because the whole point of Guetzli (the jpeg encoder) is to try to maximize the Butteraugli score, so saying that WebP gets a lower score is not significant.
Personally, WebP looks a lot better to me in the direct tests of equal file size that I've seen. It even looks better than Pik, which is Google's experimental successor to Jpeg that also uses Butteraugli.
And it would be odd, to say the least, if a codec from the early nineties could beat a modern one on Intra-frame coding, which has been a subject of immense research over the years.
Like MP3 encoders, JPEG encoders have only gotten better over the years; perhaps they have fewer bugs or make fewer compromises for compression? Also, there are newer standards in the JPEG family, including JPEG 2000 and JPEG XR, etc. Plus many other alternatives: https://developers.google.com/web/fundamentals/performance/o...
So my advice is to encode in multiple formats to achieve the broadest browser support and the best image quality/size trade-off that you're willing to allow. That said... it does sort of bug me that every couple of years we have to revisit which codecs we're using because the implementations keep marching on...
> For high quality photography purposes jpeg is bordering on useless
That's the sort of faux-professional hyper-contrarianism that is "bordering on detrimental".
I just checked Magnum, a somewhat professional photo agency. They use JPEG. So do the NYT, flickr, and probably the vast majority of professional and totally unprofessional websites.
It seems there is some use left in the format. Not being able to print to billboard sizes doesn't diminish this. Because (a) that use case is a rounding error, and (b) WebP is still not safe for such purposes – TIFF is.
TIFF is just a container. It doesn't say anything about the encoding or compression (much like AVI).
That you find the web still using hardcore old-style JPEGs when they were replaced by better formats long ago also doesn't mean that JPEG is really a suitable way to store or share photos - especially when your display is good hi-DPI stuff. Also - do you find GIFs technologically OK for sharing animations today when there is stuff like H.265...?
IE and Safari do not support WebP, Edge didn't until Windows 10 1809 (November), Firefox didn't until 65 (January), and Firefox ESR and Firefox for Android still don't.
Polyfilling WebP support just for browsers that support Web Assembly but not WebP seems pretty weighty when it's very straightforward to not exclusively use WebP.
The <picture> element is great but using it means not, as you wrote, "Use WebP! For everything!" Maybe you meant "Use WebP! For everything! With Fallbacks!"
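If it helps, here is a rough sketch of what "WebP with fallbacks" can look like when the markup is templated server-side. Python is used only for illustration; the generated <picture> markup, with a WebP <source> and a JPEG <img> fallback, is the point, and the same-basename naming convention is an assumption.

```python
# Sketch: emit <picture> markup with a WebP source and a JPEG fallback.
# Browsers without WebP support ignore the <source> and use the <img>.
def picture_tag(basename: str, alt: str) -> str:
    return (
        "<picture>"
        f'<source srcset="{basename}.webp" type="image/webp">'
        f'<img src="{basename}.jpg" alt="{alt}">'
        "</picture>"
    )

print(picture_tag("hero", "A sunset over the harbour"))
```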
So your experience invalidates mine? I have endlessly seen website media folders throughout my 14-year career filled with PNG photographs uploaded by unknowing content editors through a CMS. This isn't their fault; they're writers and know no better. CMS tools should be set up by the developers to automatically convert to the right format.
Cases on the flip side include developers using JPEG for everything: 100KB heavily artifacted illustrations used throughout web layouts, which would be infinitely better as clean 10KB PNGs.
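One pragmatic way to automate that choice in a CMS upload hook is to encode both ways and keep whichever file is smaller. A hedged sketch with Pillow follows; it is only a starting point, since it costs an extra encode and ignores artifact quality, and the quality setting is arbitrary.

```python
# Sketch: encode an upload as both PNG and JPEG and keep the smaller result.
# Crude heuristic: photos usually win as JPEG, flat illustrations as PNG.
import io
from PIL import Image

def best_encoding(im: Image.Image) -> tuple[str, bytes]:
    png_buf, jpg_buf = io.BytesIO(), io.BytesIO()
    im.save(png_buf, "PNG", optimize=True)
    im.convert("RGB").save(jpg_buf, "JPEG", quality=85, optimize=True)
    if png_buf.tell() <= jpg_buf.tell():
        return "png", png_buf.getvalue()
    return "jpeg", jpg_buf.getvalue()

fmt, data = best_encoding(Image.open("upload.png"))
print(fmt, len(data))
```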
No, it doesn't invalidate yours, but I'm with him. I've never seen that, either, and I cannot imagine how that could happen "often" (without doubting your experience).
Because cameras and phones give you JPEGs. The common user doesn't shoot RAW. I cannot see any not-highly-technical user convert JPEGs to anything else.
True, I've seen a lot of this on some clients' CMSs. A classic case would be someone uploading a screenshot of an image taken on their Mac, which by default (like many OSes, anyway) outputs a PNG.
And are you sure they were all real PNGs? You know, the displayed suffix means nearly nothing nowadays; you can serve a JPEG image as a ".png" file with an "image/png" MIME header and the browser won't even raise an eyebrow, it will just process the JPEG properly.
I occasionally archive some imagery from the web, and while viewing it in IrfanView I am frequently confronted with the prompt "This image is JPG with .png extension, would you like to rename it?", without which I'd probably have no idea how frequently this discrepancy occurs.
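Checking the actual container rather than the extension is trivial if you ever need to audit a media folder. A small sketch with Pillow; the folder name is illustrative.

```python
# Sketch: report files whose extension doesn't match the real image format.
from pathlib import Path
from PIL import Image

def mismatched_extensions(folder: str):
    for path in Path(folder).glob("*.png"):
        with Image.open(path) as im:
            if im.format != "PNG":   # e.g. a JPEG served under a .png name
                yield path, im.format

for path, real in mismatched_extensions("media"):
    print(f"{path} is actually {real}")
```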
> All too often I’ve seen people encoding 8MP photos as PNGs...
And now apparently it's some CMS that was doing that... which doesn't make much sense either. Either way this is just such a patently and obviously dumb thing to do that it makes me doubt your experience is anywhere close to being statistically significant.
When I see this it's usually someone getting an image from a stock image site. They provide PNG because they don't know how it's going to be used, and the end user doesn't know enough to understand that's a problem, so you get a 4MB background image as a PNG that could be 98% smaller without any great effort.
One thing people don't think about with some of these optimizers is that they will strip the color profile and all metadata. The caveat here is that you can lose color accuracy/range with the former, and vital licensing information with the latter.
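If you do the re-encoding yourself, carrying the profile over is a couple of lines. A hedged Pillow sketch; whether you also want to keep the EXIF block (where copyright/licensing fields usually live) is your call, and the quality setting is illustrative.

```python
# Sketch: re-save a JPEG while carrying over the ICC profile and EXIF block.
from PIL import Image

def resave_keeping_metadata(src_path: str, dst_path: str) -> None:
    im = Image.open(src_path)
    icc = im.info.get("icc_profile")   # None if no embedded profile
    exif = im.info.get("exif")         # raw EXIF bytes, if present
    kwargs = {"quality": 85, "optimize": True}
    if icc:
        kwargs["icc_profile"] = icc
    if exif:
        kwargs["exif"] = exif
    im.save(dst_path, "JPEG", **kwargs)

resave_keeping_metadata("photo.jpg", "photo-optimized.jpg")
```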
I wrote and maintain imagemin-webpack-plugin for optimizing images during the build process for webpack-based javascript projects. It works using imagemin and the various plugins for it, which themselves are just small wrappers around most image optimizers.
I had a landing page with 37 high-quality images spread across 5 pages. I used Squash at first to bring the aggregate size of the images down from ~550MB to ~320MB and, voilà, my bounce rates started going down. A few months later I tried out Cloudinary and the conversion rates improved, since the biggest bottleneck of the landing page was the images and now they were loading smoothly. IMHO it's one of the low-hanging fruits that is worth solving.
I assume by "high quality" your parent comment is referring to photo imagery on a photography oriented site. Even an 18 MP camera (which some phones have nowadays) will generate single images over 100 MB if they are not compressed at all.
Notice that I said "not compressed at all". My Canon T5i is a few hundred dollars, and when I convert the (compressed) raw to an uncompressed TIFF or PNG, it's over 100MB.
It's high by any standards. A 60 MB webpage would take 5 seconds to load on a 100 Mbps home broadband connection, which is well above the median in the United States. On most mobile connections a 60 MB webpage is going to be borderline unusable.
My grandfathered $140/month mobile plan has 14GB of data; 320MB is 70% of a day's bandwidth, or 2% of the month's.
If I wanted to add another GB I could accrue $100 in overages ($0.10/MB), or pay another $25/month. But Bell will charge for another month in advance if you change the current month's plan.
Strangely enough it was only $5/month to up it from 13GB to 14GB :<
All this despite being able to download at 7MB/s; I could blow my cap in half an hour, and another half hour would cost $1,400 if I didn't up the plan.
Edit: I use my cellphone's data heavily, but 20, 40, 50GB cable internet caps are common with a lot of people not understanding what that means for streaming video / downloading pictures.
Adding more info to the post: all of those images were procedurally generated stereo 360-degree photos taken directly from Unity (exported at 5K or 4K resolution). I had built a PoC for a game store of sorts that allowed you to view 360-degree images of games (via Ansel). So the image sizes were humongous, and since the landing page served as a usable demo at the time, we did not compress the images at first.
It's maybe not a "good" thought, but it brings to mind the idea of using this technique for a proprietary format (webp with a header change, say) to make images only work on your own website.
Typical users wouldn't be able to view downloaded images; they could only see them through your site.
I'm not sure on the details of WebAssembly, but if you can obfuscate the workings then this technique becomes stronger. Can WebAssembly be delivered compiled?
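For what it's worth, the header change itself is trivial: a WebP file is a RIFF container with the ASCII form type "WEBP" at byte offset 8, so "breaking" and "unbreaking" it is just a byte swap. A purely illustrative sketch; the custom tag is hypothetical, and a client-side (e.g. WASM) decoder would apply the reverse step before handing the bytes to a real WebP decoder.

```python
# Sketch: turn a WebP file into a "proprietary" blob by rewriting the RIFF form type.
# Layout: bytes 0-3 are "RIFF", 4-7 are the chunk size, 8-11 are "WEBP".
CUSTOM_TAG = b"MYIM"  # hypothetical four-character code

def disguise(webp_bytes: bytes) -> bytes:
    assert webp_bytes[:4] == b"RIFF" and webp_bytes[8:12] == b"WEBP"
    return webp_bytes[:8] + CUSTOM_TAG + webp_bytes[12:]

def undisguise(blob: bytes) -> bytes:
    assert blob[8:12] == CUSTOM_TAG
    return blob[:8] + b"WEBP" + blob[12:]
```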
> The ability for PJPEGs to offer low-resolution ‘previews’ of an image as it loads improves perceived performance – users can feel like the image is loading faster compared to adaptive images.
My understanding was that progressive JPEGs actually feel slower, since users are less sure when the image has finished loading, and so they're best avoided in most cases.
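Either way, opting in or out of progressive encoding is a single flag if you control the encode step. A minimal Pillow sketch; the quality value is illustrative.

```python
# Sketch: save baseline vs. progressive variants of the same JPEG with Pillow.
from PIL import Image

im = Image.open("photo.jpg")
im.save("photo-baseline.jpg", "JPEG", quality=85, optimize=True)
im.save("photo-progressive.jpg", "JPEG", quality=85, optimize=True, progressive=True)
```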
This is an excellent guide! A must read for web developers.
I am really glad we invested in automated image optimization where I work. We run a couple of large real-estate websites and we store tens of millions of images and process thousands a day. Optimizing all of them from the start was one of the best things we did.
When an image gets uploaded, we re-encode it with MozJPEG and WebP and then create thumbnails in five different sizes and upload them all to S3. They get served through a CDN. We initially did the re-encoding and scaling on the fly and then cached the results forever, but MozJPEG is really slow, so we changed the system to pre-process everything.
When we first implemented this two years ago, Chrome was the only browser that supported WebP. This investment really paid off as all major browsers now support WebP. The website loads really quickly and images rarely give us problems.
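Not the parent's actual code, but a rough sketch of that kind of upload-time pipeline, assuming Pillow for resizing/encoding and boto3 for the S3 upload. The widths, bucket name and key layout are made up, and MozJPEG would slot in where plain Pillow JPEG encoding stands in here.

```python
# Sketch of an upload-time pipeline: resize into several widths, encode each as
# JPEG and WebP, and push the results to S3. All names and sizes are illustrative.
import io
import boto3
from PIL import Image

THUMB_WIDTHS = [320, 640, 960, 1280, 1920]
BUCKET = "example-images"  # hypothetical bucket
s3 = boto3.client("s3")

def process_upload(image_id: str, data: bytes) -> None:
    original = Image.open(io.BytesIO(data)).convert("RGB")
    for width in THUMB_WIDTHS:
        im = original.copy()
        im.thumbnail((width, width * 10), Image.LANCZOS)  # cap width, keep aspect ratio
        for fmt, ext, content_type in [("JPEG", "jpg", "image/jpeg"),
                                       ("WEBP", "webp", "image/webp")]:
            buf = io.BytesIO()
            im.save(buf, fmt, quality=80)
            s3.put_object(Bucket=BUCKET,
                          Key=f"{image_id}/{width}.{ext}",
                          Body=buf.getvalue(),
                          ContentType=content_type)
```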
It's great to know you have implemented WebP in your pipeline.
Do check out on-the-fly optimisation tools like https://www.gumlet.com. You can save a huge amount in server costs as well as developer and devops team time. Basically, all image operations are done in the cloud and all images are served via a super fast CDN, at a cheaper price than a plain CDN too.
Sort of. We run on Heroku, so we indirectly run on AWS.
We're heavy users of Python and we're using the excellent Pillow-SIMD [1] library to do most of the heavy lifting. We made our own builds to link it to MozJPEG instead of libjpeg and include libwebp.
I have been making Optimage, which is currently the only tool that can automatically optimize images without ruining visual quality [1]. It is also the new state of the art in lossless PNG compression.
I have raised a number of issues [2] with this guide. It’s been over a year and they still have not been addressed [3].
Your benchmark is not a legitimate comparison because it does not compare files of equal size. See https://kornel.ski/en/faircomparison for an explanation of the problems with your methodology. The files your closed source tool generates in this benchmark are 56% larger than those created by ImageOptim! Have your program generate smaller files and then post the results.
And noticeably degraded if you compare those images with the originals. Optimage does apply chroma subsampling, the major winner here, when it makes sense.
My goal is automatic image optimization with predictable visual quality, i.e. images have to remain authentic to originals.
> your closed source tool
FYI, ImageOptim API is closed source and way more expensive if that was your point.
> Your benchmark is not a legitimate comparison
If you have a better one,
> then post the results
I did post mine. So why are you taking them out of context and ignoring the others, e.g. the lossless compression results?
Please note that this guide mostly deals with optimizing for network transfer size. As is almost always the case with optimization, there are multiple axes you should be aware of. In this case, a lot of the techniques presented negatively impact CPU usage and, by extension, battery life on mobile devices.
EDIT:
I am not an expert on licensing but it does look like everything is in order from a licensing perspective (the original content is CCA3 and to me it looks like things are credited properly).
Would like to add a practice that may be helpful for someone: I store an uploaded image as an array of objects in the DB. Each object contains the format (JPEG, WebP, etc.), name, size, hash and other pieces of information about the transformed image. The frontend app chooses which one to render according to the browser (use WebP in Chrome, for example) and the UI element.
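To make that concrete, here is a sketch of what such variant records and the selection step might look like. The field names, values and selection rule are made up, not a real schema; in practice the format negotiation can also be driven by the Accept request header.

```python
# Sketch: per-upload variant records, plus picking one by supported formats.
variants = [
    {"format": "webp", "name": "cat-800.webp", "width": 800, "bytes": 54_210, "hash": "ab12"},
    {"format": "jpeg", "name": "cat-800.jpg",  "width": 800, "bytes": 88_431, "hash": "cd34"},
    {"format": "jpeg", "name": "cat-400.jpg",  "width": 400, "bytes": 31_002, "hash": "ef56"},
]

def pick_variant(variants, accepts_webp: bool, target_width: int):
    candidates = [v for v in variants
                  if v["width"] >= target_width
                  and (accepts_webp or v["format"] != "webp")]
    # Smallest file among the candidates large enough for the UI element.
    return min(candidates, key=lambda v: v["bytes"]) if candidates else None

print(pick_variant(variants, accepts_webp=True, target_width=800))
```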
Recently I've tested just compressing images as 1 frame in an av1 video with ffmpeg and it seems to work surprisingly well. Browsers seem to automatically display the first frame of the video without needing to play.
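For the curious, a hedged sketch of that kind of ffmpeg invocation, wrapped in Python. The flags are one reasonable combination for a single-frame libaom-av1 encode; quality and filenames are illustrative, and browser playback behaviour varies.

```python
# Sketch: encode a single image as a one-frame AV1 video with ffmpeg (libaom-av1).
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "photo.png",
    "-c:v", "libaom-av1",
    "-crf", "30", "-b:v", "0",   # constant-quality mode for libaom
    "-pix_fmt", "yuv420p",
    "-frames:v", "1",
    "photo.mp4",
], check=True)
```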
https://www.gumlet.com is one of the services which takes all of this into account and makes all the essential image optimisations available via a nice and tidy API.
Are there plans for a self-hosted option? Some businesses might be scared of being so dependent on Gumlet's uptime for their mission-critical image processing.
Gumlet seems to cater towards processing images on the fly, which is what a lot of your competitors also do. It might be interesting to also cater towards using Gumlet as a service to pre-process images as part of a pipeline.
I am just thinking aloud, I have no real basis for these ideas at the moment. Just my two cents.
I wish you the best of luck and I'll keep Gumlet in mind, the homepage looks very sleek.
That's a good idea. We will consider it if we find appropriate customers for the same. Meanwhile, for uptime, we provide 99.9% uptime SLA which we have been able to maintain since our launch.
Am I able to give an arbitrary URL to an image for processing/display in the browser? I’ve been using a self hosted option called Imaginary [0] but am getting tired of self hosting it.
You can indeed do it. We call that a "web proxy" source. It's easy to set up and get started. Please try it out and ping us in customer chat if you need any help.
Cloudinary is nice, but it gets expensive really fast on a high traffic site that's image heavy and bandwidth optimized. For example, a single image might have 5-6 versions on top of the original -- low quality thumbnail, high quality thumbnail, default, default 2x retina, default 3x retina. Then activate their webp service and perhaps offer another image aspect ratio. Each is a conversion that counts against your quota. Now you've got 15+ (con)versions and you're shelling out hundreds of dollars a month.
Cloudinary, Imgix, Uploadcare (which I'm working for) and others save you money because you don't have to develop and maintain these moving parts. You have to constantly check what's happening with browsers, which formats/encoders are available, which are the best, etc.
It's the classic build vs. buy dilemma. In the majority of cases it's much more cost-effective to buy.
BTW, Uploadcare doesn't charge for file processing at all, only for CDN traffic. So you can create as many image variants as you need.