Over the years, advances in computer technology have driven the development and refinement of compression algorithms. In this chapter, we will explore some of the key advances that have shaped the way we compress data today.
In 1952, David Huffman, then a graduate student at MIT, published the Huffman coding algorithm. The algorithm compresses data with variable-length codes, assigning shorter codes to more frequently used symbols and longer codes to less frequently used ones. This simple idea produces optimal symbol-by-symbol codes and has become an essential building block of many compression methods, including JPEG, MP3, and ZIP files.
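To make the idea concrete, here is a minimal sketch of Huffman coding in Python. The function name and the tie-breaking details are our own illustration, not a standard API:

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict[str, str]:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(data)
    # Each heap entry is (frequency, tiebreaker, symbol-or-subtree).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    # Repeatedly merge the two least frequent nodes into a subtree.
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    # Walk the tree, appending "0" for left branches and "1" for right.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

table = huffman_codes("abracadabra")
encoded = "".join(table[c] for c in "abracadabra")
print(table)         # e.g. {'a': '0', 'b': '110', ...}
print(len(encoded))  # 23 bits, versus 88 bits as 8-bit ASCII
```

The frequent symbol 'a' receives a one-bit code while rare symbols like 'c' and 'd' receive longer ones, which is exactly where the savings come from.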
In 1977 and 1978, Abraham Lempel and Jacob Ziv published the LZ77 and LZ78 algorithms; in 1984, Terry Welch refined LZ78 into what is now known as Lempel-Ziv-Welch (LZW). This algorithm works by building a dictionary of sequences as they occur in the data and replacing repeated sequences with shorter codes. The method is particularly effective on text and has been used in popular file formats such as GIF and TIFF.
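Here is a minimal sketch of the compression side of LZW in Python; the decompressor rebuilds the same dictionary on the fly, and the function name is ours:

```python
def lzw_compress(data: str) -> list[int]:
    """LZW: grow a dictionary of seen sequences, emit their codes."""
    # Start with single-character entries (here: the 256 byte values).
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for symbol in data:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate  # keep extending the current match
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code  # learn the new sequence
            next_code += 1
            current = symbol
    if current:
        output.append(dictionary[current])
    return output

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))
# 24 input symbols become 16 output codes; repeats such as "TOBEOR"
# reuse dictionary entries instead of being stored again.
```

Notice that the dictionary is never transmitted: both sides grow it identically from the data itself, which is what makes the scheme practical.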
Alongside these, a family of methods known as predictive coding came into wide use; its roots go back to differential pulse-code modulation (DPCM), developed at Bell Labs in the early 1950s, and it matured through the 1970s with linear predictive coding for speech. This method predicts the value of each data point from the values of previous data points and encodes only the difference between the predicted and actual values. Because these residuals tend to be small and cluster around zero, they can be stored in fewer bits than the raw samples, resulting in a smaller file size. Prediction is central to video formats such as MPEG, which predicts each frame from its neighbors, and to lossless image coding such as the filters in PNG.
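The sketch below shows the idea with the simplest possible predictor, "the next value equals the previous one." Real codecs use far more elaborate predictors, such as motion compensation in video, and the function names here are our own:

```python
def predictive_encode(samples: list[int]) -> list[int]:
    """Encode each sample as its difference from the previous one.
    The prediction is the simplest possible: next == previous."""
    residuals = []
    previous = 0
    for value in samples:
        residuals.append(value - previous)  # store only the "surprise"
        previous = value
    return residuals

def predictive_decode(residuals: list[int]) -> list[int]:
    """Invert the encoding by accumulating the residuals."""
    samples = []
    previous = 0
    for delta in residuals:
        previous += delta
        samples.append(previous)
    return samples

signal = [100, 102, 104, 103, 105, 108, 110]
print(predictive_encode(signal))  # [100, 2, 2, -1, 2, 3, 2]
assert predictive_decode(predictive_encode(signal)) == signal
```

The residuals are small and repetitive, so a subsequent entropy coder such as Huffman coding can represent them in very few bits; this pairing of prediction with entropy coding is the backbone of most modern codecs.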
These advancements in compression have greatly improved our ability to store and transfer large amounts of data efficiently. Without them, our digital world would not be possible. Next, we will take a closer look at how these algorithms are put into practice in our everyday lives.