ECE 510

Data Compression
Offered: Fall and Spring

Graduate Course Information

==============================================

ECE 510 - Data Compression

Credits: 3.00 

UA Catalog Description: http://catalog.arizona.edu/allcats.html 

Course Assessment:

Homework: 5 – 7 assignments

Exams:  1 Midterm Exam, 1 Comprehensive Final Exam 

Grading Policy:

Typically: 15% Homework, 40% Midterm Exam, 45% Final Exam.

Course Summary:

Theory and practice of data compression with applications to image and video compression. Huffman coding, run-length coding, arithmetic coding, transform theory (KLT, DCT, wavelets), quantization, rate control, scalable compression, and compression standards.

Prerequisite(s): 
ECE 503 or equivalent
Textbook(s): 

“JPEG2000: Image Compression Fundamentals, Standards and Practice,” by D.S. Taubman and M.W. Marcellin, Kluwer Academic Publishers, 2002.

Course Topics: 

1.     Image Compression Overview (Chapter 1) – Read on your own.

2.     Entropy and Coding Techniques (Chapter 2) – We will discuss entropy as a measure of information. Shannon’s noiseless source coding theorem will be used to show that data can be compressed to achieve a data rate arbitrarily close to the entropy and that further compression without loss is not possible. Practical coding schemes that approach the entropy will be introduced, including: Elias coding, Huffman coding, Golomb coding, and arithmetic coding.
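
As a small illustration of the entropy bound and of Huffman coding (one of the schemes listed above), the following Python sketch computes the entropy of a hypothetical four-symbol source and builds a Huffman code for it; the probabilities are made up for illustration and are not from the text.

import heapq
from math import log2

# Hypothetical source probabilities (illustrative only).
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}

# Entropy H = -sum(p * log2(p)): the lower bound on average code length, in bits/symbol.
entropy = -sum(p * log2(p) for p in probs.values())

# Huffman coding: repeatedly merge the two least probable groups of symbols,
# prepending a 0 to one group's codewords and a 1 to the other's.
heap = [(p, [s]) for s, p in probs.items()]
heapq.heapify(heap)
codes = {s: '' for s in probs}
while len(heap) > 1:
    p1, group1 = heapq.heappop(heap)
    p2, group2 = heapq.heappop(heap)
    for s in group1:
        codes[s] = '0' + codes[s]
    for s in group2:
        codes[s] = '1' + codes[s]
    heapq.heappush(heap, (p1 + p2, group1 + group2))

avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(f"entropy = {entropy:.3f} bits/symbol, Huffman average length = {avg_len:.3f}")

For this dyadic distribution the Huffman code meets the entropy exactly (1.75 bits/symbol); in general its average length lies within one bit of the entropy, while arithmetic coding can approach the entropy for arbitrary distributions.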

3.     Quantization (Chapter 3) – In many applications, noiseless source coding does not yield enough compression. Popular image and video coding standards such as JPEG and MPEG use noisy source coding to achieve significantly more compression than is possible by noiseless source coding. Quantization is an essential part of noisy source coding. Quantization reduces the precision of data, making them easier to represent. In this chapter we discuss rate-distortion theory, which gives theoretical bounds on the amount of compression possible for a given amount of distortion. Practical quantizers are presented, including: Lloyd-Max scalar quantizers, entropy-coded scalar quantizers, embedded scalar quantizers, deadzone quantizers, vector quantizers, and trellis-coded quantization.
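
As a concrete example of the kind of quantizer discussed in this chapter, here is a minimal NumPy sketch of a uniform deadzone scalar quantizer; the step size, reconstruction offset, and sample values are assumptions chosen only for illustration.

import numpy as np

def deadzone_quantize(x, step):
    # Index q = sign(x) * floor(|x| / step); the zero bin (the "deadzone")
    # is twice as wide as the other bins.
    return np.sign(x) * np.floor(np.abs(x) / step)

def deadzone_dequantize(q, step, delta=0.5):
    # Reconstruct at a fixed offset (here the midpoint, delta = 0.5) within
    # the interval selected by q; zero indices reconstruct to zero.
    return np.sign(q) * (np.abs(q) + delta) * step * (q != 0)

x = np.array([-3.7, -0.4, 0.2, 1.1, 2.9, 7.6])   # made-up sample data
q = deadzone_quantize(x, step=1.0)
x_hat = deadzone_dequantize(q, step=1.0)
print(q)
print(x_hat)
print("MSE (distortion):", np.mean((x - x_hat) ** 2))

Coarser steps give fewer distinct indices (lower rate after entropy coding) at the cost of higher distortion, which is exactly the trade-off that rate-distortion theory quantifies.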

4.     Image Transforms (Chapter 4) – For many data sources, vector quantization is sufficient to achieve the theoretical bounds on performance for noisy source coding. However, vector quantization is too complex for most applications. Data transforms are used to exploit correlation in data sources of interest including speech, images and video. In this way performance approaching that of vector quantization is obtained with orders of magnitude less complexity. This chapter provides a rigorous foundation for the theory of linear block transforms, subband transforms, and filter banks.
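
To make the idea of a linear block transform concrete, the following NumPy sketch builds an 8-point orthonormal DCT matrix directly from its definition and applies it separably to a synthetic 8 x 8 block; the block contents are made up, and the point is only to show perfect reconstruction and energy compaction.

import numpy as np

N = 8
k = np.arange(N)[:, None]          # row index: frequency
n = np.arange(N)[None, :]          # column index: sample
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)            # orthonormal DCT-II basis (rows are basis vectors)

# A smooth, highly correlated block plus a little noise (illustrative only).
rng = np.random.default_rng(0)
x = np.outer(np.linspace(100, 140, N), np.ones(N)) + rng.normal(0, 2, (N, N))

Y = C @ x @ C.T                    # separable 2-D forward transform
x_rec = C.T @ Y @ C                # inverse transform

print(np.allclose(x, x_rec))       # True: orthonormal transforms are invertible
print(np.round(Y[:2, :2], 1))      # most of the energy lands in the low-frequency corner

Because most of the signal energy is packed into a few coefficients, quantizing and entropy coding the transform coefficients is far more efficient than coding the pixels directly.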

5.     Rate Control Techniques (Chapter 5) – When different data have different variances, they should be compressed to different bit rates. This is nearly always the case in transform coding systems. In this chapter, we discuss algorithms for optimally distributing available bit-rate among various data sources, such as transform coefficients. We consider both static and dynamic rate allocation.
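
As a small numerical illustration of static rate allocation, the sketch below uses the classical high-rate formula for independent Gaussian sources, R_i = R_avg + (1/2) log2(sigma_i^2 / geometric mean of the variances); the variances and rate budget are assumed purely for illustration, and in practice any negative allocations would be clipped to zero and the surplus redistributed.

import numpy as np

# Assumed variances of four transform-coefficient sources (illustrative only).
variances = np.array([100.0, 25.0, 4.0, 1.0])
R_avg = 2.0                                    # average budget of 2 bits/sample

geo_mean = np.exp(np.mean(np.log(variances)))  # geometric mean of the variances
rates = R_avg + 0.5 * np.log2(variances / geo_mean)

print(np.round(rates, 2))                      # high-variance sources get more bits
print(rates.mean())                            # the allocation averages back to R_avg

The result assigns roughly 3.66, 2.66, 1.34, and 0.34 bits/sample to the four sources: each doubling of standard deviation earns an extra bit, while the overall average stays at the 2 bits/sample budget.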

6.     Filter Banks and Wavelets (Chapter 6) – In this chapter we extend the discussion in Chapter 4 to wavelet transforms. We begin with a review of classical quadrature mirror filters. We then discuss wavelets and multi-resolution analysis, followed by the construction of wavelets. We conclude with a discussion of lifting, reversibility, and boundary handling.
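
A minimal sketch of the lifting structure discussed in this chapter is shown below: one level of the reversible (integer) Haar transform, written as a predict step followed by an update step. This is not one of the filter pairs used in JPEG2000 (those are the 5/3 and 9/7 wavelets), and the input signal is made up; the point is only to show that lifting steps are trivially invertible, which is what makes reversible (lossless) transforms possible.

import numpy as np

def haar_lift_forward(x):
    # Split into even and odd samples, then predict and update.
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even          # predict: detail (high-pass) coefficients
    s = even + d // 2       # update: approximation (low-pass) coefficients
    return s, d

def haar_lift_inverse(s, d):
    # Undo the lifting steps in reverse order; exact for integer inputs.
    even = s - d // 2
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=even.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 11, 9, 40, 42, 41, 39])            # assumed even-length signal
s, d = haar_lift_forward(x)
print(s, d)
print(np.array_equal(x, haar_lift_inverse(s, d)))         # True: perfectly reversible

Longer filters add more predict/update steps, and boundary handling (symmetric extension) takes care of samples near the signal edges, but the invert-in-reverse-order structure is the same.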

7.     Zero-Tree Coding (Chapter 7) – This chapter details the well-known EZW (embedded zerotree wavelet) and SPIHT (set partitioning in hierarchical trees) algorithms. These algorithms were often used for comparison with the state of the art prior to the advent of JPEG2000. Both algorithms are based on coding trees of transform coefficients induced by the resolution hierarchy of subband/wavelet transforms.
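
Both algorithms rest on significance tests of tree-structured sets of coefficients against a power-of-two threshold. A minimal sketch of such a test is given below; the tree structure and coefficient values are made up, and real EZW/SPIHT coders add bitplane refinement and list management on top of this primitive.

def is_zerotree(coeffs, children, node, threshold):
    # A node roots a zerotree if it and every descendant (in successively
    # finer subbands) are insignificant with respect to the threshold.
    if abs(coeffs[node]) >= threshold:
        return False
    return all(is_zerotree(coeffs, children, child, threshold)
               for child in children.get(node, []))

# Hypothetical tree: coefficient 0 has children 1 and 2, which in turn
# have children 3, 4 and 5, 6 (values chosen only for illustration).
coeffs = {0: 3, 1: -2, 2: 1, 3: 0, 4: 1, 5: -1, 6: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}

print(is_zerotree(coeffs, children, 0, threshold=4))   # True: one symbol codes the whole tree
print(is_zerotree(coeffs, children, 0, threshold=2))   # False: the root itself is significant

Coding an entire insignificant tree with a single symbol is what lets these coders spend very few bits in the early (most significant) bitplanes.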

8.     Selected topics from JPEG2000 (Chapters 8 – 12) – JPEG2000 is based on the wavelet transform and is the latest international standard for image compression. It is used in many applications today. It is part of the medical image transfer syntax standard known as DICOM. It is used to transport data to every digital cinema theatre in the world. We review the functionalities of JPEG2000 and show how the algorithmic ingredients from Chapters 1 – 7 were combined to create the standard.

9.     JPEG (Chapter 19) – The classical JPEG algorithm is arguably the most successful image compression format in history. It is based on the discrete cosine transform and Huffman coding. It is contained in almost all consumer electronic devices and software that deal with images.

10.  JPEG-LS (Chapter 20) – The JPEG-LS algorithm is designed for lossless compression of imagery (in fact, it can perform “near-lossless” compression, defined in a very particular way). It has not found widespread applicability, but is useful for drawing comparisons with state-of-the-art lossless compression systems.

11.  MPEG (Course Notes) – We will briefly review the MPEG family of standards. Some form of MPEG is used in nearly all modern video hardware and software, including digital television (both high definition and standard definition).

Class/Laboratory Schedule: 

Lecture:  150 minutes/week

Prepared by: 
Michael Marcellin
Prepared Date: 
April 2013

University of Arizona College of Engineering