I do see value. The value of "lossless" is overrated - users and web devs want clear pictures, transparency and small file sizes first and foremost. Even on desktop I'm sick of PNG: I take a screenshot, it's 2MB; I transcode it to JPEG and it's only 200KB (10 times less, not bad eh?) with no visual difference. Take another ~30% off and you get like 140KB as WebP. Plus don't forget WebP will get transparency in the future, which makes it so much better than JPEG _and_ PNG in a lot of cases, so to me it's _very_ attractive.
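For the curious, here's roughly what that transcode looks like with Pillow (a sketch only - it assumes a Pillow build with WebP support, and the filenames and quality settings are made up):

```python
from PIL import Image

shot = Image.open("screenshot.png").convert("RGB")  # decode the big PNG to raw pixels
shot.save("screenshot.jpg", quality=85)             # re-encode as JPEG: roughly 10x smaller
shot.save("screenshot.webp", quality=85)            # re-encode as WebP: smaller still
```

Note that `convert("RGB")` drops any alpha channel, which is fine for an opaque screenshot but not for images that actually use transparency.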
Problem is, Microsoft will most likely either not support it at all, or only several years from now.
The moment someone suggests any type of change, it gets shot down. This is why Linux is going nowhere: every time someone tries to innovate, everyone else complains, but when someone makes a decision that makes little sense, nobody does.
On the other hand, the x264 devs delivered some real results (http://x264dev.multimedia.cx/?p=541), and as it seems, VP8 cannot even beat JPEG - a 20-year-old codec - at the moment.
Sure, Google promises to deliver improvements and new features like transparency, but they also promised to improve WebM, and nothing has happened in the last four weeks. I will believe them when I see results.
A new image container without lossless compression (you know, there ARE people who use it) and without animation is just plain stupid anyway. The gain over JPEG and GIF is too small.
A lot of what is in H.264 is perfectly free. The vast majority of it, in fact, is made up of techniques that NOBODY has a claim to.
Actually, no again, that is not how compression works. If you suggest changes, please prove that they make sense. Google's "proof" was downloading a million COMPRESSED still images from the web, compressing them again with VP8, and then claiming the result was smaller.
Well, it HAS to be.
The "recompression" actually begins with a DECOMPRESSION. The RAW image, (i.e. AFTER decompression) is then compressed with the new system. The second compression doesn't gain anything from the first one -- they are NOT cumulative. I.e., in some cases, a "zip of a zip" might be smaller than the first zip. This is not the case here, since an image can only be compressed from raw.
Now here's the funny part of this:
Lossy compression can actually have some seriously bad effects that crop up with multiple recompressions -- especially if you change the encoding scheme. You know how a photocopy of a photocopy degrades in appearance? Well, it degrades even MORE when you change the encoding scheme... like taking a PICTURE of a PHOTOCOPY, getting a print, and photocopying it. That would end up REAL ugly.
If you are starting with a degraded image and want to keep it NOT SIGNIFICANTLY WORSE, you need to compress VERY VERY LIGHTLY! Again, if you're changing encoding schemes, the effects become more pronounced, which means the file gets BIGGER STILL!!!
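You can watch this generation loss happen yourself. Here's an illustrative sketch (mine, not from the thread - the filename and quality value are arbitrary) that re-encodes the same image repeatedly and tracks PSNR against the original; alternating between two different lossy codecs would make the decay even steeper:

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two uint8 RGB arrays (higher = closer)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("original.png").convert("RGB")
ref = np.asarray(original)

img = original
for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)   # lossy encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")       # decode back to raw for the next pass
    print(f"gen {generation}: PSNR = {psnr(ref, np.asarray(img)):.2f} dB")
```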
Here's the worst part of it... that x264 page is STARTING with a DEGRADED IMAGE, and subjecting it to three encoding schemes, two of which are the same as the one that initially degraded it!
And then, of course... this guy cheated with JPEG by applying jpegcrush! Sorry, but NO -- that is not allowed! You need to compress that JPEG more to get the file size down, not apply cheats to one but not the other! Similar cheats are equally possible for VPX, but this guy isn't offering that advantage. You want fair results? Perform a fair test!
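A fair test means comparing quality at EQUAL file size. Here's one sketch of how that could look (my illustration, with Pillow; the filename and 50KB budget are made up, and it assumes the budget is reachable at the lowest quality setting):

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def encode_at_size(img, fmt, target_bytes, lo=1, hi=95):
    """Binary-search for the highest quality whose output fits in target_bytes."""
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format=fmt, quality=q)
        if buf.tell() <= target_bytes:
            best, lo = buf.getvalue(), q + 1   # fits: try higher quality
        else:
            hi = q - 1                         # too big: go lower
    return best

src = Image.open("test.png").convert("RGB")
ref = np.asarray(src)
for fmt in ("JPEG", "WEBP"):
    data = encode_at_size(src, fmt, 50_000)    # same 50KB budget for both codecs
    decoded = np.asarray(Image.open(io.BytesIO(data)).convert("RGB"))
    print(f"{fmt}: {len(data)} bytes, PSNR {psnr(ref, decoded):.2f} dB")
```

Same size budget for both, no per-codec tricks - then the PSNR (or better, a perceptual metric like SSIM) tells you something meaningful.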