
Frontiers of Information Technology & Electronic Engineering, 2019, Volume 20, Issue 4. doi: 10.1631/FITEE.1700833

Vector quantization: a review

1. Department of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
2. Center of Network and Computation, Huazhong University of Science and Technology, Wuhan 430074, China

Available online: 2019-06-05


Abstract

Vector quantization (VQ) is a highly effective way to save bandwidth and storage in speech coding and image coding. According to their codebook generation procedures, traditional vector quantization methods can be divided mainly into seven types: tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ. Over the past decade, quantization-based approximate nearest neighbor (ANN) search has developed rapidly, and many methods have emerged for searching images with binary codes in memory on large-scale datasets. Their most impressive characteristic is the use of multiple codebooks, which gives rise to two kinds of codebook: the linear combination codebook and the joint codebook. This may be a trend for the future. However, these methods merely seek a balance among speed, accuracy, and memory consumption for ANN search, and sometimes one of the three suffers. Finding a vector quantization method that strikes a balance between speed and accuracy while consuming a moderate amount of memory therefore remains an open problem.
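To make the multiple-codebook idea mentioned in the abstract concrete, the sketch below illustrates Cartesian product (product) VQ: each vector is split into sub-vectors, one codebook is learned per subspace, and a vector is stored as a short tuple of codeword indices. This is a minimal illustration, not the paper's own implementation; the function names, parameters (`num_subspaces`, `codebook_size`), and the use of NumPy and scikit-learn are assumptions for the example.

```python
# Minimal sketch of Cartesian product (product) VQ.
# Assumes NumPy and scikit-learn; names and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans


def train_product_codebooks(data, num_subspaces=4, codebook_size=16, seed=0):
    """Split vectors into sub-vectors and learn one codebook per subspace."""
    dim = data.shape[1]
    assert dim % num_subspaces == 0, "dimension must split evenly into subspaces"
    sub_dim = dim // num_subspaces
    codebooks = []
    for m in range(num_subspaces):
        sub = data[:, m * sub_dim:(m + 1) * sub_dim]
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=seed).fit(sub)
        codebooks.append(km.cluster_centers_)
    return codebooks


def encode(vectors, codebooks):
    """Quantize each sub-vector to the index of its nearest codeword."""
    num_subspaces = len(codebooks)
    sub_dim = vectors.shape[1] // num_subspaces
    codes = np.empty((vectors.shape[0], num_subspaces), dtype=np.uint8)
    for m, cb in enumerate(codebooks):
        sub = vectors[:, m * sub_dim:(m + 1) * sub_dim]
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        codes[:, m] = dists.argmin(axis=1)
    return codes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.standard_normal((2000, 32)).astype(np.float32)
    books = train_product_codebooks(train, num_subspaces=4, codebook_size=16)
    print(encode(train[:5], books))  # each row: 4 codeword indices per vector
```

Under these illustrative settings, a 32-dimensional float vector (128 bytes) is represented by 4 one-byte indices, which is the kind of speed/accuracy/memory trade-off the abstract refers to: larger codebooks or more subspaces improve accuracy at the cost of memory and search time.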
