January 2002
·
4 Reads
January 2002
Variable quantization is the most important parameter for rate control in MPEG, and it is also a key to achieving higher picture quality at a given bitrate. Models for variable quantization range from the relatively simple to the quite complex and are characterized by a wide diversity of approaches. All use spatial and/or temporal masking effects, and all rest on heuristic, albeit reasonable, assumptions about the structure of the masking function. Impressive reductions in bitrate for a given visible distortion, on the order of 30% or more, are claimed in the literature and in personal communications to the authors [NP77, Gon91]. Nonetheless, given the current incomplete state of research in this area, significant further improvements are quite possible.
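The masking-based adaptation described above can be sketched in a few lines. The normalization formula below follows the general shape of the MPEG-2 Test Model 5 activity measure, but the function names, the activity statistic, and the clamping range are illustrative assumptions, not taken from this text:

```python
# Sketch of activity-based adaptive quantization in the spirit of
# MPEG-2 Test Model 5 rate control (names are illustrative).

def block_activity(block):
    """Spatial activity: variance of an 8x8 luminance block."""
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum((p - mean) ** 2 for row in block for p in row) / n

def adaptive_qscale(base_q, act, avg_act):
    """Scale the nominal quantizer by normalized activity, so busy
    (high-masking) blocks get coarser quantization. The normalization
    maps activity into roughly [0.5, 2]."""
    n_act = (2 * act + avg_act) / (act + 2 * avg_act)
    return max(1, min(31, round(base_q * n_act)))

flat = [[128] * 8 for _ in range(8)]   # zero activity: little masking
busy = [[(i * 37 + j * 91) % 256 for j in range(8)] for i in range(8)]
avg = (block_activity(flat) + block_activity(busy)) / 2
print(adaptive_qscale(8, block_activity(flat), avg),
      adaptive_qscale(8, block_activity(busy), avg))  # → 4 10
```

Flat blocks (low activity, weak masking) receive a finer quantizer than busy ones, which is exactly the perceptual bet that variable quantization makes.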
... The JPEG standard [1] pioneered this technique, employing lossy compression to discard details imperceptible to the human visual system. Following the success of JPEG, video coding standards such as H.261 [2] and MPEG-1/MPEG-2 [3] were introduced to efficiently compress video content for applications like video conferencing and storage media (e.g., VCD/DVD). Over the years, international coding standards mainly pursued high compression performance for human consumption tasks, where minimizing signal reconstruction error is the critical metric for reflecting human perception. ...
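As a concrete instance of the signal-reconstruction-error metric mentioned above, here is a minimal PSNR computation (the definition is standard; the helper name is my own):

```python
import math

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences: the classic reconstruction-error metric."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    if mse == 0:
        return float("inf")   # identical signals
    return 10 * math.log10(peak ** 2 / mse)

# Reconstruction off by one level per pixel (MSE = 1):
print(round(psnr([100, 110, 120, 130], [101, 109, 121, 129]), 2))  # → 48.13
```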
January 1996
... A simple model of early vision (visual information processing in the eye and the visual pathways up to roughly the striate cortex) may consist of (1) retinal units (photoreceptors and spatiotemporal filters, including interactions between adjacent units; light adaptation is presumed to be located in these units) [11], (2) noisy channels with limited bandwidth (retinal ganglion cells/optic nerve) [22,23], and (3) pooling of adjacent channel outputs at the level of the cortex (a step that has been shown to be essential for understanding the variability in static perimetry and the relationship between perimetric and structural measures of glaucomatous damage) [63-65]. For the foveal increment, the logCS versus log luminance curve showed a vertical shift (Fig. 2A); that is, the difference in logCS between the glaucoma patients and controls was independent of luminance. In terms of the above model, this implies intact retinal units (that is, no impaired light adaptation) whose number may be decreased and/or whose connectivity to the brain is lost (as opposed to a horizontal shift, which would point to damaged retinal units). ...
January 2002
... MPEG-1 is a block-based video compression technique that exploits the discrete cosine transform (DCT) and motion-compensated prediction error to reduce spatiotemporal redundancy in an image sequence [19,20]. In the MPEG-1 standard, each picture is divided into 16 × 16 subimages called macroblocks (MBs). ...
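The per-block transform mentioned above can be illustrated with a naive (unoptimized) 2-D DCT-II over an 8 × 8 block, using the textbook orthonormal definition; real codecs use fast factorizations, and MPEG-1 applies the DCT to 8 × 8 blocks within each 16 × 16 macroblock:

```python
import math

def dct2(block):
    """Naive 8x8 2-D DCT-II (orthonormal textbook form)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = cu * cv * s
    return out

flat = [[100] * 8 for _ in range(8)]
coeffs = dct2(flat)
# For a constant block all energy lands in the DC coefficient:
print(round(coeffs[0][0]), round(abs(coeffs[3][4]), 6))  # → 800 0.0
```

This energy compaction (most of a smooth block's energy in a few low-frequency coefficients) is what makes coarse quantization of the AC coefficients cheap in perceptual terms.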
January 2002
... The chromosome population for the GA's represents templates that are used in data compression, where the size of data after compression is assigned back to the chromosome. Table II shows a comparison with two major international standards for data compression: Lempel-Ziv ("compress" command of UNIX) and JBIG (joint bilevel image coding experts group) [21], [22], both available on LSI chips. The EHW chip attained compression ratios for printer images almost twice those obtained by the international standards. ...
... Finally, psychovisual redundancy concerns details that are imperceptible to the human visual system. Some lossless compression methods that exploit coding redundancy are Huffman coding [21], Shannon-Fano coding [22], arithmetic coding [23] and dictionary-based encoding such as LZ78 and LZW [24]. Run-length coding [25], bit-plane coding [26] and predictive coding [27] exploit interpixel redundancy. ...
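Of the interpixel-redundancy methods listed above, run-length coding is the simplest to sketch; this is a generic symbol/run formulation, not the specific variant of any cited standard:

```python
def rle_encode(seq):
    """Run-length encode a sequence into (symbol, run) pairs.
    Effective whenever interpixel redundancy yields long constant runs,
    e.g. in bi-level or quantized image rows."""
    out = []
    for s in seq:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return [(s, n) for s, n in out]

def rle_decode(pairs):
    """Invert rle_encode exactly (lossless)."""
    return [s for s, n in pairs for _ in range(n)]

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
print(rle_encode(row))  # → [(0, 4), (255, 2), (0, 3)]
assert rle_decode(rle_encode(row)) == row
```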
June 1990
Proceedings of SPIE - The International Society for Optical Engineering
... To give one example for a typical cause of directional artifacts, consider JPEG compression [83]. While this popular [27,53] lossy compression standard does not favor any direction, common parameter choices do leave directional artifacts. ...
January 1993
... A motion vector specifies two features: the direction of movement and the magnitude in terms of pixels per frame. Motion vector information is provided by the MPEG encoder [5]. But as Kasturi et al. pointed out [4], this motion vector information is not reliable, because it is computed for motion compensation in video compression rather than for accurately tracking specific objects in the video frames. ...
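How an encoder obtains such vectors can be sketched with exhaustive block matching under a sum-of-absolute-differences (SAD) cost, one common motion-estimation approach; the function names, frame layout (row-major lists), and search parameters here are illustrative assumptions:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def full_search(ref, cur, bx, by, bs=4, r=2):
    """Exhaustive block matching: find the displacement (dx, dy) within
    a +/-r window of the reference frame that best predicts the current
    block at (bx, by). Returns (dx, dy, best_sad)."""
    target = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = (0, 0, float("inf"))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + bs > len(ref) or x + bs > len(ref[0]):
                continue  # candidate block would leave the frame
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(target, cand)
            if cost < best[2]:
                best = (dx, dy, cost)
    return best

# A bright 4x4 square moves one pixel right between frames.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        ref[y][x] = 200
        cur[y][x + 1] = 200
print(full_search(ref, cur, 3, 2))  # → (-1, 0, 0)
```

Note that the search minimizes prediction error, not object correspondence, which is precisely why such vectors can mislead a tracker, as the passage above observes.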
January 1997
... DC coefficients and AC coefficients are treated separately for entropy coding [16]. JPEG recommends two entropy-coding methods: Huffman coding [27], primarily used by the baseline sequential codec, and arithmetic coding [28]. However, both codecs can be used in all modes of operation. ...
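A minimal sketch of Huffman code construction with a heap follows. Note this is the textbook algorithm over an assumed symbol-frequency map; JPEG itself specifies canonical tables derived from code-length counts rather than building trees this way:

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code (symbol -> bitstring) from symbol counts
    using the classic two-smallest-weights merge."""
    heap = [[w, [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # lightest subtree gets prefix "0"
        hi = heapq.heappop(heap)   # next lightest gets prefix "1"
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Frequent symbols get short codes, rare symbols long ones:
codes = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
print(sorted((sym, len(code)) for sym, code in codes.items()))
```

With these counts the code lengths come out as 1, 2, 3, 3 bits for a, b, c, d respectively, so the 9 input symbols cost 15 bits instead of 18 at a fixed 2 bits each.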
January 1988
... Most RAW formats feature some type of lossless compression for the mosaic images. For instance, the Canon, Kodak, Nikon and Adobe digital negative (DNG [15]) formats employ non-adaptive Huffman-based compression or directly apply the lossless mode of the JPEG [17] standard (not to be confused with the JPEG-LS [18] or JPEG 2000 [19] standards). These approaches offer fast implementations, but at the cost of reduced compression ratios compared to state-of-the-art algorithms for mosaic images from the literature. ...
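The lossless JPEG mode mentioned above is predictive: each pixel is predicted from decoded neighbours and only the residual is entropy-coded. The sketch below is loosely modelled on the "average of left and upper neighbour" predictor (one of the lossless-JPEG predictor choices); the zero padding at the borders is my simplification:

```python
def predict_average(img):
    """Predict each pixel as floor((left + above) / 2) and return
    the residuals. Borders use 0 for missing neighbours (simplified)."""
    h, w = len(img), len(img[0])
    res = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0   # left neighbour
            b = img[y - 1][x] if y else 0   # upper neighbour
            res[y][x] = img[y][x] - (a + b) // 2
    return res

def reconstruct(res):
    """Invert predict_average exactly: integer arithmetic is lossless."""
    h, w = len(res), len(res[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            img[y][x] = res[y][x] + (a + b) // 2
    return img

img = [[10, 12, 13],
       [11, 12, 14],
       [12, 13, 15]]
assert reconstruct(predict_average(img)) == img  # perfectly reversible
print(predict_average(img))
```

Away from the borders the residuals are small (here 1 or 2), which is what makes the subsequent entropy-coding stage effective.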
January 1993
... Progressive image transmission [24,306,374] involves sending a low-quality image followed by successively more detail, so the picture quality gradually improves as more data is received. This method is used on the web, for example, so that the user can cancel a page early in the transmission if it is not of interest. ...
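One simple way to realize the progressive idea is bit-plane ordering: transmit the most significant bit of every pixel first, then successively refine. This is only an illustrative scheme, not the specific mechanism of progressive JPEG or of the cited works:

```python
def bit_planes(pixels, bits=8):
    """Split pixel values into bit planes, most significant first.
    Each additional plane halves the reconstruction error bound."""
    return [[(p >> b) & 1 for p in pixels] for b in range(bits - 1, -1, -1)]

def reassemble(planes, bits=8):
    """Recombine (possibly truncated-and-zero-padded) planes into pixels."""
    vals = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        shift = bits - 1 - i
        for j, bit in enumerate(plane):
            vals[j] |= bit << shift
    return vals

pix = [200, 37, 129, 64]
planes = bit_planes(pix)
# After only the top 4 planes the image is coarse but recognizable:
print(reassemble(planes[:4] + [[0] * 4] * 4))  # → [192, 32, 128, 64]
print(reassemble(planes))                      # → [200, 37, 129, 64]
```

Truncating the stream at any plane boundary still yields a usable approximation, which is exactly the property that lets a user cancel a transfer early.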
January 1983
IBM Systems Journal