Idea for smaller JPEG files?
What if, instead of just taking the diff of the first DCT coefficient against the previous block's, you did that with the entire DCT result, and then encoded it with a Huffman alphabet that gives smaller codes to values closer to zero in absolute terms, but is otherwise unsigned? Would that, theoretically speaking, give a lower file size on most files?
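To make the idea a bit more concrete, here's a rough Python sketch of what I mean. The cost model and helper names are made up just for illustration (real JPEG uses a size-category-plus-extra-bits scheme), so treat it as a back-of-the-envelope estimate, not an encoder:

```python
# Sketch of the proposed scheme (not real JPEG code): instead of differencing
# only the DC coefficient, difference every coefficient of a block against the
# same coefficient in the previously coded block, then estimate the cost with
# a code whose length grows with |value|.  Block layout and the cost model are
# assumptions for illustration only.

def bit_cost(value: int) -> int:
    """Length of a hypothetical code that favours values near zero."""
    magnitude_bits = abs(value).bit_length()      # 0 for 0, 1 for +-1, ...
    return 1 + 2 * magnitude_bits                 # size prefix + sign/extra bits

def cost_plain(blocks):
    """Cost of coding each 64-coefficient block independently."""
    return sum(bit_cost(c) for block in blocks for c in block)

def cost_differential(blocks):
    """Cost when every coefficient is differenced against the previous block."""
    prev = [0] * 64
    total = 0
    for block in blocks:
        total += sum(bit_cost(c - p) for c, p in zip(block, prev))
        prev = block
    return total

# Two blocks carrying an identical "texture" pattern cost almost nothing
# under the differential scheme.
texture = [50, -12, 7, 0, 3] + [0] * 59
blocks = [texture, list(texture)]
print(cost_plain(blocks), cost_differential(blocks))   # 188 158
```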
Re:Idea for smaller JPEG files?
Well, I think differentially coding the DC coefficients is justified by the high correlation between the DC values (the first value of the DCT) of neighbouring blocks; that is, you can often say that the average value of one block is very close to the average value of its neighbours. That way you gain some extra compression from the Huffman coding, as you explained.
I don't think differentially encoding the higher-order DCT coefficients will gain you a lot; if it did, they would have added it to the JPEG specification. Do you have any theoretical proof for that?
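For reference, baseline JPEG really does code DC this way: each block's quantised DC coefficient is sent as the difference from the DC of the previously coded block of the same component (simple DPCM). A minimal sketch of that step, with variable names of my own choosing:

```python
# JPEG DC prediction: DPCM against the previous block's DC of the same
# component.  The predictor starts at 0 (and is reset at restart markers).
# The real codec then maps each difference to a Huffman size category plus
# extra bits; that part is omitted here.

def dc_differences(dc_values):
    pred = 0
    diffs = []
    for dc in dc_values:
        diffs.append(dc - pred)
        pred = dc
    return diffs

# Neighbouring blocks usually have similar averages, so the differences
# cluster around zero and get short Huffman codes.
print(dc_differences([120, 118, 121, 119, 60, 62]))
# -> [120, -2, 3, -2, -59, 2]
```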
Ozgun.
Re:Idea for smaller JPEG files?
Ozgunh82 wrote: I don't think differentially encoding the higher-order DCT coefficients will gain you a lot; if it did, they would have added it to the JPEG specification.
In any standard there's usually a bit that isn't perfect, or it would never become a final specification. Every spec I've seen so far has at least one point you could improve on, but doing so would mean breaking the spec.
Ozgunh82 wrote: Do you have any theoretical proof for that?
No, it's just a hunch. My reasoning went like this: in an image, any repeating texture will map to a near-constant pattern in each DCT'd square, so if you encode the block-to-block diff rather than the blocks themselves, textures get compressed further. I'm not sure whether common files contain enough continuous texture to compensate, but I think it would improve compression on files with lots of changes and slightly enlarge files with few, which would (I think) mainly push files toward a more common size rather than one that depends strongly on the content.
It's all still a hunch; I need to figure out how a DCT works exactly before implementing it, let alone improving it.
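For what it's worth, here's a tiny test of that texture intuition. The naive DCT is written out by hand only to keep it self-contained, and the stripe pattern is arbitrary:

```python
import math

# Two 8x8 blocks containing the same repeating pattern transform to identical
# DCT coefficients, so a block-to-block difference of the whole coefficient
# set is exactly zero.  Naive O(n^4) DCT-II, for illustration only.

def dct_8x8(block):
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# Two blocks carrying the same stripe texture.
stripe = [[255 if y % 2 else 0 for y in range(8)] for _ in range(8)]
a, b = dct_8x8(stripe), dct_8x8(stripe)

diffs = [b[u][v] - a[u][v] for u in range(8) for v in range(8)]
print(max(abs(d) for d in diffs))   # 0.0 -- nothing left to encode
```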
Re:Idea for smaller JPEG files?
There are programs that compress JPEG files by about 20%.
I think they just replace the RLE and Huffman stages with better algorithms, like LZMA.
The problem is that the resulting files are no longer JPEG files.
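Just to show the trade-off, here's a crude experiment; the filename is a placeholder. Note this only runs the finished file through LZMA, which gains far less than the dedicated recompressors, since those first undo the Huffman/RLE stage and re-code the raw quantised coefficients:

```python
import lzma

# Crude check of how much a general-purpose compressor can still squeeze out
# of a finished JPEG.  "photo.jpg" is a hypothetical input file.  Compressing
# the already entropy-coded bytes usually only shaves off a few percent;
# proper recompressors get around 20% by re-coding the coefficients.

path = "photo.jpg"
with open(path, "rb") as f:
    data = f.read()

packed = lzma.compress(data, preset=9)
print(f"original: {len(data)} bytes, lzma'd: {len(packed)} bytes "
      f"({100 * len(packed) / len(data):.1f}%)")

# The result is an .xz container, not a JPEG any more -- exactly the
# drawback pointed out above.
with open(path + ".xz", "wb") as f:
    f.write(packed)
```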
Re:Idea for smaller JPEG files?
This mathematics is beyond my understanding, but my non-expert opinion is: don't use JPEGs!
Re:Idea for smaller JPEG files?
NotTheCHEAT wrote: This mathematics is beyond my understanding, but my non-expert opinion is: don't use JPEGs!
Do you have any motivation for this?
Re:Idea for smaller JPEG files?
Probably ideology-motivated because JPEGs are patent-covered.
Every good solution is obvious once you've found it.
Re:Idea for smaller JPEG files?
Solar wrote: Probably ideology-motivated because JPEGs are patent-covered.
IIRC, JPEG wasn't patent-covered. There was discussion about one patent possibly applying to it.
Re:Idea for smaller JPEG files?
The company was Forgent; the patent number is 4,698,672. There has been proof of prior art, but the patent itself wasn't challenged until recently.
Every good solution is obvious once you've found it.
Re:Idea for smaller JPEG files?
From TFA:
The patent describes a single-pass digital video compression system which implements a two-dimensional cosine transform with intraframe block-to-block comparisons of transform coefficients without need for preliminary statistical matching or preprocessing.
Each frame of the video image is divided into a predetermined matrix of spatial subframes or blocks. The system performs a spatial domain to transform domain transformation of the picture elements of each block to provide transform coefficients for each block. The system adaptively normalizes the transform coefficients so that the system generates data at a rate determined adaptively as a function of the fullness of a transmitter buffer. The transform coefficient data thus produced is encoded in accordance with amplitude Huffman codes and zero-coefficient runlength Huffman codes which are stored asynchronously in the transmitter buffer. The encoded data is output from the buffer at a synchronous rate for transmission through a limited-bandwidth medium. The system determines the buffer fullness and adaptively controls the rate at which data is generated so that the buffer is never completely emptied and never completely filled.
That would match JPEG, kind of, yes... but not specifying the two-dimensionality would have made it match pretty much every lossy compression scheme in use today.
It was filed 2 Oct 1987, so it will expire in 2007 anyway. MPEG started in 1988, so it might even apply. If you live in the US, that is.