William B. Pennebaker's research while affiliated with Columbia University and other places


Publications (39)


MPEG System Syntax
  • Chapter
  • January 2002

Joan L. Mitchell · William B. Pennebaker · Chad E. Fogg · Didier J. LeGall





Variable Quantization
  • January 2002

Variable quantization is the most important parameter for rate control in MPEG. It is also a key to achieving higher picture quality at a given bitrate. The models for variable quantization range from the relatively simple to the quite complex and are characterized by a wide diversity of approaches. All use spatial and/or temporal masking effects, and all are based on heuristic, albeit reasonable, assumptions regarding the structure of the masking function. Impressive reductions in bitrate for a given visible distortion (on the order of 30% or more) from the application of variable quantization are claimed in the literature and in personal communications to the authors [NP77, Gon91]. Nonetheless, given the current incomplete state of research in this area, significant further improvements are quite possible.
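The idea the abstract describes can be illustrated with a small sketch in the spirit of the MPEG-2 Test Model 5 adaptive-quantization step: measure the spatial activity of each block, then raise the quantizer step in busy areas (where masking hides distortion) and lower it in flat areas. The function names, the variance-based activity measure, and the normalization constants below are illustrative assumptions, not the specific models discussed in the chapter.

```python
# Hypothetical activity-based variable quantization, loosely modeled on
# the MPEG-2 TM5 adaptive-quantization step (names and constants are
# illustrative, not taken from the chapter).

def block_activity(block):
    """Spatial activity of a block: the variance of its samples."""
    n = len(block)
    mean = sum(block) / n
    return sum((x - mean) ** 2 for x in block) / n

def adaptive_qscale(base_qscale, activity, avg_activity):
    """Scale the quantizer step up in busy (high-masking) blocks and
    down in flat blocks, keeping the average near base_qscale."""
    # TM5-style normalization maps activity into roughly [0.5, 2.0].
    n_act = (2 * activity + avg_activity) / (activity + 2 * avg_activity)
    return max(1, round(base_qscale * n_act))

blocks = [
    [10] * 64,                             # flat block: zero activity
    [(i * 37) % 255 for i in range(64)],   # busy block: high activity
]
acts = [block_activity(b) for b in blocks]
avg = sum(acts) / len(acts)
qscales = [adaptive_qscale(16, a, avg) for a in acts]
# The flat block is quantized more finely than the busy one.
```

Real encoders refine this with temporal masking and buffer feedback, but the core mechanism is the same: the quantizer scale varies per macroblock as a function of local masking.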






Citations (16)


... The JPEG standard [1] pioneered this technique, employing lossy compression to discard details imperceptible to the human visual system. Following the success of JPEG, video coding standards such as H.261 [2] and MPEG-1/MPEG-2 [3] were introduced to efficiently compress video content for applications like video conferencing and storage media (e.g., VCD/DVD). Over the years, international coding standards mainly pursued high compression performance for human consumption tasks, where minimizing signal reconstruction error is the critical metric for reflecting human perception. ...

Reference:

Unveiling the Future of Human and Machine Coding: A Survey of End-to-End Learned Image Compression
MPEG Video Compression Standard
  • Citing Book
  • January 1996

... A simple model of early vision (visual information processing in the eye and the visual pathways up to roughly the striate cortex) may consist of (1) retinal units (photoreceptors and spatiotemporal filters including interactions between adjacent units; light adaptation is presumed to be located in these units) [11], (2) noisy channels with limited bandwidth (retinal ganglion cells/optic nerve) [22,23], and (3) pooling of adjacent channel outputs at the level of the cortex (a step that has been shown to be essential for understanding the variability in static perimetry and the relationship between perimetric and structural measures of glaucomatous damage) [63-65]. For the foveal increment, the logCS versus log luminance curve showed a vertical shift (Fig. 2A), that is, the difference in logCS between the glaucoma patients and controls was independent of luminance. In terms of the abovementioned model, this implies intact (that is, no impaired light adaptation) retinal units, of which the number may be decreased and/or the connectivity to the brain lost (as opposed to a horizontal shift, which would point to damaged retinal units). ...

Aspects of Visual Perception
  • Citing Chapter
  • January 2002

... The chromosome population for the GAs represents templates that are used in data compression, where the size of data after compression is assigned back to the chromosome. Table II shows a comparison with two major international standards for data compression: Lempel-Ziv (the "compress" command of UNIX) and JBIG (joint bi-level image coding experts group) [21], [22], both available on LSI chips. The EHW chip attained compression ratios for printer images almost twice those obtained by the international standards. ...

An overview of the basic
  • Citing Article

... Finally, the psychovisual redundancy concerns the imperceptible details of the human visual system. Some lossless compression methods that explore coding redundancy are Huffman coding [21], Shannon-Fano coding [22], arithmetic coding [23] and dictionary-based encoding such as LZ78 and LZW [24]. Run-length coding [25], bit plane coding [26] and predictive coding [27] explore interpixel redundancy. ...

DCT-based video compression using arithmetic coding
  • Citing Article
  • June 1990

Proceedings of SPIE - The International Society for Optical Engineering

... A motion vector specifies two features: the direction of movement and the magnitude in terms of pixels per frame. Motion vector information is provided by the MPEG encoder [5]. But as Kasturi et al. pointed out [4], this motion vector information is not reliable, as it is used for motion compensation in video compression rather than for accurately tracking specific objects in the video frames. ...

In Digital Multimedia Standards Series
  • Citing Article
  • January 1997

... Most RAW formats feature some type of lossless compression for the mosaic images. For instance, the Canon, Kodak, Nikon and Adobe digital negative (DNG [15]) formats employ non-adaptive Huffman-based compression or directly apply the lossless mode of the JPEG [17] standard (not to be confused with the JPEG-LS [18] or JPEG 2000 [19] standards). These approaches offer fast implementations, but at the cost of reduced compression ratios compared to state-of-the-art algorithms for mosaic images from the literature. ...

JPEG: Still image data compression standard
  • Citing Book
  • January 1993

... Progressive image transmission [24,306,374] involves sending a low quality image followed by successively more detail, so the picture quality gradually improves as more data is being received. This method is used on the web, for example, so that the user can cancel a page early in the transmission if it is not of interest. ...

Series/1-based videoconferencing system
  • Citing Article
  • January 1983

IBM Systems Journal

Dimitris Anastassiou · Marvin K. Brown · Hugh C. Jones · [...] · Keith S. Pennington