dc.description.abstract |
The increased reliance on images makes the significance of image compression more pronounced than ever. A wide range of present-day applications deal with the transmission of large amounts of image data, which makes image compression a practical necessity for efficient handling. For the last two decades, image compression has most often been performed using the popular "time-frequency" wavelet transform. The Discrete Wavelet Transform (DWT) is a type of wavelet transform in which the signal is decomposed into discretely sampled wavelets. A number of wavelet-based image compression techniques, such as the Embedded Zero-tree Wavelet (EZW) transform and Set Partitioning in Hierarchical Trees (SPIHT), are used to attain better PSNR and compression ratios. EZW, a computationally simple and very effective technique, was an embedded compression algorithm of its time that operates on the DWT and predicts the absence of significant information by exploiting self-similarity across scales. However, it lacked insight into coefficient position, did not cater for intra-band correlation, and its performance with a single embedded file was not pronounced. Improvements over EZW came with the introduction of SPIHT, which is again a fully embedded codec algorithm. It uses the principles of partial ordering by magnitude, set partitioning by significance of the coefficient magnitudes, self-similarity across scales, and ordered bit-plane transmission.
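The decomposition and significance test described above can be illustrated with a minimal sketch: a one-level 2D Haar DWT (the simplest wavelet; real codecs typically use longer filters such as CDF 9/7), followed by a SPIHT-style significance check of the detail sub-bands against a threshold. The image, filter choice, and threshold value are illustrative assumptions, not taken from this work.

```python
import numpy as np

def haar_dwt2_level(img):
    """One level of a 2D Haar DWT: returns (LL, LH, HL, HH) sub-bands.
    Illustrative only; production codecs use longer biorthogonal filters."""
    a = img.astype(float)
    # Transform along rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform along columns of each result to get the four sub-bands.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

# A smooth 8x8 gradient: its detail sub-bands are near zero, so a
# significance pass against a threshold (as in SPIHT) marks few coefficients.
img = np.add.outer(np.arange(8), np.arange(8)) * 4.0
LL, LH, HL, HH = haar_dwt2_level(img)
T = 16.0  # hypothetical significance threshold
significant = sum(int(np.sum(np.abs(b) >= T)) for b in (LH, HL, HH))
print(significant)  # 0: no detail coefficient is significant at this threshold
```

For this smooth input all energy concentrates in the LL band, which is exactly the sparsity that EZW and SPIHT exploit across scales.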
SPIHT encodes the transformed coefficients according to their significance relative to a given threshold. Statistical analyses have shown that the output bit-stream of SPIHT contains long runs of zeroes that can be compressed further; therefore SPIHT is not recommended as the sole means of compression. To this end, additional compression is achieved by employing various entropy encoding schemes. One entropy encoding scheme that is concatenated with SPIHT for further compression is Huffman encoding. This research is motivated by the need for a viable solution offering fast transmission and reduced storage space. It concentrates on saving a comparatively larger number of bits without compromising image quality by combining two encodings, Set Partitioning in Hierarchical Trees and Huffman coding. This is done by making deft use of Huffman encoding where it yields optimal results, saving more bits and thereby reducing both the storage space and the transmission time. |
en_US |
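The entropy-coding stage described in the abstract can be sketched as follows: a zero-dominated bitstream (standing in for SPIHT output; the stream here is hypothetical example data, not produced by an actual SPIHT coder) is grouped into fixed-size symbols and Huffman-coded, showing how the long runs of zeroes yield extra bit savings.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code from a {symbol: frequency} mapping.
    Returns {symbol: bitstring}. Assumes at least two distinct symbols."""
    # Heap entries: [weight, tie-breaker, {symbol: partial codeword}].
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)  # two least-frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], i, merged])
        i += 1
    return heap[0][2]

# Hypothetical SPIHT-like output dominated by zeroes.
stream = "0" * 90 + "1" * 10
# Group bits into 3-bit symbols so Huffman coding can exploit the zero runs.
symbols = [stream[k:k + 3] for k in range(0, len(stream), 3)]
code = huffman_code(Counter(symbols))
coded_len = sum(len(code[s]) for s in symbols)
print(coded_len, "bits vs", len(stream), "raw")  # 38 bits vs 100 raw
```

The all-zero symbol is by far the most frequent, so it receives a one-bit codeword, which is where the additional compression over the raw SPIHT bitstream comes from.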