
Idea for smaller JPEG files?

Posted: Wed Oct 05, 2005 3:55 am
by Candy
What if, instead of taking the diff of only the first coefficient of the DCT with the previous block's, you did that with the entire DCT result, and then encoded it with a Huffman alphabet that gives shorter codes to values closer to zero in absolute terms, but is otherwise unsigned? Would that, theoretically speaking, give a lower file size on most files?
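
Something like this, as a rough sketch (the helper names and the signed-to-unsigned mapping are made up for illustration, nothing from the JPEG spec):

Code:
# Toy sketch: difference EVERY quantised DCT coefficient against the
# same coefficient in the previous block, then map signed diffs to
# unsigned symbols so that values closer to zero get smaller symbols
# (which a Huffman alphabet could then give shorter codes).

def to_unsigned(v):
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * v if v >= 0 else -2 * v - 1

def diff_all_coefficients(blocks):
    # blocks: list of 64-entry lists of quantised DCT coefficients.
    prev = [0] * 64
    symbols = []
    for block in blocks:
        for i in range(64):
            symbols.append(to_unsigned(block[i] - prev[i]))
        prev = block
    return symbols

# Two blocks of a smooth texture barely change, so the second block
# becomes mostly zeros, exactly what a "closer to zero = shorter
# code" alphabet rewards.
b1 = [50, 3, -2, 1] + [0] * 60
b2 = [52, 3, -1, 1] + [0] * 60
print(diff_all_coefficients([b1, b2])[64:72])  # [4, 0, 2, 0, 0, 0, 0, 0]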

Re:Idea for smaller JPEG files?

Posted: Fri Oct 21, 2005 12:19 pm
by Ozguxxx
Well, I think differentially coding the DC coefficients is justified by the high correlation between the DC values (the first value of the DCT) of neighboring blocks; that is, you can often say that the average value of one block is very near the average value of its neighbors. That way you gain some extra compression through the Huffman encoding, like you explained.
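
For reference, here is a minimal sketch of the DC prediction JPEG already does (the category-plus-amplitude-bits split is how the spec codes the difference; the function names are made up):

Code:
# Minimal sketch of JPEG's DC prediction: each block's DC coefficient
# is coded as the difference from the previous block's DC, emitted as
# a Huffman-coded size category plus that many raw amplitude bits.

def dc_category(diff):
    # Category = number of bits needed to represent |diff|
    # (category 0 means the diff is exactly zero).
    return abs(diff).bit_length()

def predict_dc(dc_values):
    prev = 0
    coded = []
    for dc in dc_values:
        diff = dc - prev
        coded.append((dc_category(diff), diff))
        prev = dc
    return coded

# Neighboring blocks have similar averages, so after the first block
# the diffs (and their categories) stay small:
print(predict_dc([120, 123, 121, 121]))
# -> [(7, 120), (2, 3), (2, -2), (0, 0)]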

I do not think that differentially encoding the higher-order DCT coefficients will gain you much, because if it did, they would have added it to the JPEG specification. Do you have any theoretical proof for that?

Ozgun.

Re:Idea for smaller JPEG files?

Posted: Sat Oct 22, 2005 9:48 am
by Candy
Ozgunh82 wrote: I do not think that differentially encoding the higher-order DCT coefficients will gain you much, because if it did, they would have added it to the JPEG specification.
In any standard there's usually a bit that isn't perfect, or it would never become a final specification. Every spec I've seen so far has at least one point you could improve on, but doing so would require breaking the spec.
Do you have any theoretical proof for that?
No, it's just a hunch. My reasoning went like this: if you have an image, any texture will give a roughly constant mapping to each DCT'd square, so if you encode the diff rather than the blocks themselves, textures will be compressed further. I'm not sure whether common files contain enough continuous texture to compensate, but I think it'll improve compression on files with lots of changes and slightly enlarge files with few, which will (I think) mainly push files toward a more common size rather than one dependent on the file content itself.

All still a hunch; I need to figure out how a DCT works exactly before implementing it, let alone improving it. Just a hunch :)

Re:Idea for smaller JPEG files?

Posted: Sun Oct 23, 2005 4:10 am
by octavio
There are programs that compress JPEG files by about 20%. I think they just replace the RLE and Huffman stages with better algorithms like LZMA. But the problem is that these files are no longer JPEG files.
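
As a toy sketch of that idea (a real recompressor would first undo the JPEG Huffman stage so the stronger coder has actual redundancy to work with; this just wraps the raw bytes, and the file names are hypothetical):

Code:
# Toy sketch of recompressing a JPEG with a stronger coder. Wrapping
# the raw bytes like this gains little, since Huffman output is
# already near-random, but it shows the catch: the result is no
# longer a JPEG and needs its own decompressor to view.

import lzma

def repack(jpeg_path, packed_path):
    with open(jpeg_path, "rb") as f:
        data = f.read()
    with open(packed_path, "wb") as f:
        f.write(lzma.compress(data, preset=9))

def unpack(packed_path, jpeg_path):
    with open(packed_path, "rb") as f:
        data = lzma.decompress(f.read())
    with open(jpeg_path, "wb") as f:
        f.write(data)

# repack("photo.jpg", "photo.jpg.xz")
# unpack("photo.jpg.xz", "photo_restored.jpg")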

Re:Idea for smaller JPEG files?

Posted: Tue Nov 22, 2005 11:28 pm
by NotTheCHEAT
This mathematics is beyond my understanding, but my non-expert opinion is: don't use JPEGs!

Re:Idea for smaller JPEG files?

Posted: Wed Nov 23, 2005 2:27 am
by Candy
NotTheCHEAT wrote: This mathematics is beyond my understanding, but my non-expert opinion is: don't use JPEGs!
Do you have any motivation for this?

Re:Idea for smaller JPEG files?

Posted: Wed Nov 23, 2005 3:07 am
by Solar
Probably ideology-motivated because JPEGs are patent-covered.

Re:Idea for smaller JPEG files?

Posted: Wed Nov 23, 2005 3:21 am
by Candy
Solar wrote: Probably ideology-motivated because JPEGs are patent-covered.
IIRC, JPEG wasn't patent-covered. There was discussion about one patent possibly applying to it.

Re:Idea for smaller JPEG files?

Posted: Wed Nov 23, 2005 3:56 am
by Solar
The company was Forgent; the patent number is 4,698,672. There has been proof of prior art, but the patent itself wasn't challenged until recently.

Re:Idea for smaller JPEG files?

Posted: Wed Nov 23, 2005 5:19 am
by Candy
Solar wrote: The company was Forgent; the patent number is 4,698,672. There has been proof of prior art, but the patent itself wasn't challenged until recently.
From TFA:
The patent describes a single-pass digital video compression system which implements a two-dimensional cosine transform with intraframe block-to-block comparisons of transform coefficients without need for preliminary statistical matching or preprocessing.

Each frame of the video image is divided into a predetermined matrix of spatial subframes or blocks. The system performs a spatial domain to transform domain transformation of the picture elements of each block to provide transform coefficients for each block. The system adaptively normalizes the transform coefficients so that the system generates data at a rate determined adaptively as a function of the fullness of a transmitter buffer. The transform coefficient data thus produced is encoded in accordance with amplitude Huffman codes and zero-coefficient runlength Huffman codes which are stored asynchronously in the transmitter buffer. The encoded data is output from the buffer at a synchronous rate for transmission through a limited-bandwidth medium. The system determines the buffer fullness and adaptively controls the rate at which data is generated so that the buffer is never completely emptied and never completely filled.
That would match JPEG, kind of, yes... but not specifying the two-dimensionality would have made it match pretty much every lossy compression scheme in use today.

It was filed 2 Oct 1987, so it'll expire in 2007 anyway. MPEG started in 1988, so the patent might even apply to it. If you live in the US, that is.