Figure 1: The real size of the image is much larger than the screen size.

Context in source publication

Context 1
... in digital imaging have resulted in massively increased image resolutions and bit depths, as well as in new techniques such as 3D imaging and topographical scanning. These new imaging techniques produce enormous quantities of data that need to be adequately managed, handled and exploited. Apart from the sheer volume of data, other advanced features such as colorimetric accuracy, high dynamic ranges (e.g. 16-bit images) and multi-channel output must also be considered. This problem has been particularly acute at the research and restoration centre at the Louvre Museum (the C2RMF - Centre de Recherche et Restauration des Musées de France), where a diverse range of imaging technologies has been applied in order to assist curators, restorers and researchers. In order to view such data, standard visualisation systems that simply transfer and load an entire image into memory are very inefficient. Not only are the scans large, they are also colour calibrated, meaning that colour management must be handled as well. The use of these advanced technologies has limited value if their results cannot be easily used or widely exploited. A key technical challenge is, therefore, how to efficiently disseminate this high quality data to people both inside and outside the research centre.

In order to handle this diversity of extremely large images, the Open Source IIPImage [1] system is used. IIPImage is a client-server system designed for the remote viewing of very high resolution images across the Internet. Its architecture makes it usable even over a slow dial-up connection. The server can be launched as a plug-in that works with a wide variety of web servers, such as Apache, Lighttpd or MyServer, or any other FCGI-enabled HTTP server. Images can be viewed via a feature-rich Java client, a Javascript client embedded within a web page, or as JPEG images dynamically generated at the requested size and resolution by the server [2,3]. The system evolved from a prototype used in several European projects to view high resolution images of works of art across the Internet, and has been greatly extended and restructured to handle the new imaging techniques described later in this paper.

The system is fast and able to work remotely because the client only needs to download the portion of the whole image that is visible on the user's screen at the current viewing resolution (Fig. 1). This also makes it bandwidth and memory efficient, as users do not need to store or handle massive images on their local machine. Only the required parts of the whole image at the desired resolution need be sent. The image tiles at the requested resolution are extracted from the source file by the server, dynamically compressed with JPEG (or losslessly if desired) and sent to the client. The compression level can be controlled by the client to optimize the transmission for the network environment (Fig. 2). This technique makes it possible to view extremely large images of several gigabytes in size in real time over the Internet. In order for the system to be as efficient as possible, the images are stored in a multi-resolution tiled format (Fig. 3), which allows the server to extract regions of the full image at different resolutions very quickly with little processing overhead. Although a proprietary file format such as FlashPix could have been used, or a new one especially developed, it was decided to keep the format as open and interoperable as possible.
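As a concrete illustration of this tile-on-demand mechanism, the short Python sketch below constructs the kind of HTTP requests defined by the IIP protocol: FIF selects the source image on the server, QLT sets the JPEG compression quality (the client-side control mentioned above), and JTL requests a single JPEG tile at a given resolution level. The server address and image path are hypothetical placeholders, and only a handful of the protocol's commands are shown.

```python
import urllib.request

# Hypothetical endpoint and image path, for illustration only.
IIP_SERVER = "http://example.org/fcgi-bin/iipsrv.fcgi"
IMAGE_PATH = "collection/large_painting.tif"

def tile_url(resolution: int, tile_index: int, quality: int = 75) -> str:
    """Build an IIP request for one JPEG tile.

    FIF selects the image, QLT sets the JPEG quality, and JTL asks
    for tile number `tile_index` at pyramid level `resolution`.
    """
    return (f"{IIP_SERVER}?FIF={IMAGE_PATH}"
            f"&QLT={quality}"
            f"&JTL={resolution},{tile_index}")

def fetch_viewport_tiles(resolution, tile_indices, quality=75):
    """Fetch only the tiles covering the part of the image on screen."""
    return [urllib.request.urlopen(tile_url(resolution, t, quality)).read()
            for t in tile_indices]

# For example, the first four tiles of pyramid level 5, at a lower
# quality setting suited to a slow connection:
# tiles = fetch_viewport_tiles(5, [0, 1, 2, 3], quality=50)
```

Because each response is an independent JPEG tile, a client can cache tiles and request higher-resolution or higher-quality versions only as the user zooms or pans.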
The TIFF format is widely used and is flexible enough to permit this kind of structuring and encoding. Multiple resolutions can be stored within a single file in a pyramid-like structure using the standard TIFF multi-page functionality, and each resolution can be tiled. A pyramidal image without compression uses only 33% extra space. However, the tiles can optionally be compressed using either a lossless compression such as Deflate or LZW, or a lossy compression such as JPEG. If space is an issue, JPEG compression is able to reduce the image file size by at least a factor of ten without a significant loss in visual quality [4]. When there are multiple views (e.g. multiple wavelengths or viewing angles), the different images are simply kept as separate files.

The classical 8-bit RGB colour space most commonly used today is insufficient for the richness of data acquired by high resolution cameras such as the CRISATEL [5] camera. Colour images are, therefore, stored in the device-independent CIELab colour space, which is able to describe colours that are outside the gamut of RGB. Furthermore, higher dynamic ranges (16-bit) are required to store the full dynamic range of imaging detail that would be invisible or saturated using only 8 bits. The IIPImage system is flexible enough to handle standard 8-bit RGB images and higher dynamic range 16-bit images, as well as colour spaces such as CIELab and generic multispectral images. Standard monitors are, however, only able to display 8 bits of information per channel per pixel in RGB colour space. In order to visualize 16-bit or CIELab images, the raw data needs to be processed first. In the case of 16-bit images, a contrast control allows the user to navigate within the extra, normally hidden data. For CIELab images, a dynamic conversion is performed by the client into the calibrated RGB space of the monitor. Eventually, it will be possible to use ICC profiles to calibrate the image to the user's monitor.

The IIP (Internet Imaging Protocol), a protocol aimed at Internet-based imaging and optimized for the FlashPix format, was developed in 1997 by the same group that created the FlashPix image format. The tiled pyramidal TIFF format used by us is similar to the FlashPix standard, but is a widely-used open format and is fully compatible with the protocol. The IIP protocol also allows for a certain amount of customisation, which has allowed us to enhance certain features and add panoramic viewing, multispectral handling and surface topographic height handling to it.

One of the advanced technologies used at the C2RMF is the multispectral CRISATEL camera system. Multispectral imaging enables us to accurately evaluate the physical state of the surface of a work of art. Having an accurate spectral record of the painting allows conservators to locate areas that may be invisible to the naked eye (Fig. 4) or that were restored or retouched long ago [6]. By making regular scans of works of art, gradual changes in the physical condition can be monitored and damage addressed quickly. The characterisation of the materials used, the evaluation of the conservation state, the localization of previous restoration interventions and continuous monitoring are among the most important topics in the study and conservation of artworks.

Another important imaging technique in the C2RMF arsenal is colorimetric 3D panoramic imaging. 3D imaging of objects and paintings offers a powerful new analytical tool to conservators, curators and art historians.
High-resolution 3D image data contains information that can be used for display, comparison, measurement and analytical applications. A large collection of statues and objects has been scanned using the ACOHIR system [8]. These consist of a sequence of images of an object on a rotating turntable taken with a standard RGB camera. The turntable has ...
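Stepping back to the tiled pyramidal TIFF storage described above, the fragment below is a minimal sketch of how such a file can be produced with pyvips, the Python binding for the libvips library; the file names are placeholders and the options assume a reasonably recent libvips. Since each pyramid level holds a quarter of the pixels of the level before it, the uncompressed overhead is bounded by the geometric series 1/4 + 1/16 + ... = 1/3, the 33% figure quoted earlier.

```python
import pyvips  # Python binding for the libvips image library

# 'master_scan.tif' is a placeholder for a large source scan.
image = pyvips.Image.new_from_file("master_scan.tif")

# Write a tiled, pyramidal TIFF: each additional page holds the image
# at half the width and height of the previous one, so a server can
# pull any region at any resolution with minimal processing.
image.tiffsave(
    "pyramid.tif",
    tile=True,           # tiles instead of strips
    tile_width=256,      # a common tile size for streamed viewing
    tile_height=256,
    pyramid=True,        # append reduced resolutions as extra pages
    compression="jpeg",  # lossy; "deflate" or "lzw" for lossless
    Q=75,                # JPEG quality factor
)
```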

Similar publications

Article
Full-text available
We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of...
Article
Full-text available
Autophagy is an important homeostatic cellular recycling mechanism responsible for degrading unnecessary or dysfunctional cellular organelles and proteins in all living cells. In addition to its vital homeostatic role, this degradation pathway also involves in various human disorders, including metabolic conditions, neurodegenerative diseases, canc...
Conference Paper
Full-text available
This article presents recent developments in CGWorld - a web based workbench for distributed development of a knowledge base of conceptual graphs, stored on a central server. The initial version of CGWorld met many of the needs that motivated its creation. It had excellent browsing, searching and editing features for a KB of CGs. However the suppo...
Article
Full-text available
Abstract—Users in planned purchasing are influenced by several determinants in their process of product selection. A study involving qualitative research among users from several Asian markets such as China, Hong Kong, Taiwan, Korea, and Thailand was conducted to identify the factors influencing product selection in planned purchasing. The study re...
Chapter
Full-text available
Rangelands consist of heterogeneous forage resources that may be abundant but are nutritionally unbalanced. Forage availability on the rangeland is not of value in itself. Instead, its value results from the selection which the goat is free or not to make and from the animal's motivation to ingest coarse forage. The plant species included in the di...

Citations

... Another solution is polynomial texture maps (PTMs), an extension of conventional texture maps. Approaches such as [14,15] aim to offer high resolution images (greater than the maximum resolution allowed by human vision) and tonal control in wide gamut spaces, such as CIELab. However, the absence of the third dimension limits the visualisation of the mesostructure; large files are generated, which are hard to manage on consumer devices by unskilled operators; and problematic image stitching is required. ...
Article
Colours play a crucial role in the field of architectural heritage. Colour analysis and colour rendition are involved in several critical facets of heritage preservation, conservation and restoration. These aspects are related to accurate documentation and an accurate representation of heritage artefacts and architectural works. The aim of this paper is to describe critical issues and open problems of the processes involved in this field. Documentation is performed in multiple ways, acquiring heterogeneous data ranging from archival images, photographs and drawings using various consumer or professional instruments (e.g., digital cameras and spectrophotometers). The reliability of colour acquisition might be influenced by instrumental reasons (the technology used to acquire colour information), by environmental changes (architectural heritage surveys are often performed outdoors), by morphology (complex architectural objects are characterised by concavities and convexities which complicate the reflection evaluation), or by materials (showing different reflection, porosity and transparency indexes). Identification of materials, such as colorants, pigments and dyes, is also a vital process in the heritage field. Colour information could be used as an approach to the identification of materials, but these methods are still under development, and many issues need to be solved to achieve reliable results. Visualisation techniques for a heritage artefact also present the problem of the correctness of the colour representation. Several problems need to be faced in this context: the reliability of the acquisition, colour management of the rendering software, model complexity, and fragmentation of the devices upon which the model is visualised.
... An interactive visualisation of high-resolution hyperspectral data from the Russian icon has been developed and can be viewed online [32]. This facility uses the IIP-Image software suite [33] to provide interactive online access to a subset of the hyperspectral data. It features an image stack including a colorimetric CIE L*a*b* rendering of dataset B under a D65 illuminant and single frames at various specific wavelengths, which the user can select and smoothly blend between. ...
Article
Full-text available
In a study of multispectral and hyperspectral reflectance imaging, a Round Robin Test assessed the performance of different systems for the spectral digitisation of artworks. A Russian icon, mass-produced in Moscow in 1899, was digitised by ten institutions around Europe. The image quality was assessed by observers, and the reflectance spectra at selected points were reconstructed to characterise the icon's colourants and to obtain a quantitative estimate of accuracy. The differing spatial resolutions of the systems affected their ability to resolve fine details in the printed pattern. There was a surprisingly wide variation in the quality of imagery, caused by unwanted reflections from both glossy painted and metallic gold areas of the icon's surface. Specular reflection also degraded the accuracy of the reconstructed reflectance spectrum in some places, indicating the importance of control over the illumination geometry. Some devices that gave excellent results for matte colour charts proved to have poor performance for this demanding test object. There is a need for adoption of standards for digitising cultural heritage objects to achieve greater consistency of system performance and image quality.
... Basically, data allocation can be a real burden for navigation, while the decoding process may take on the order of 2-5 s, even when decoding a small VS of 9000 × 12,000 pixels. There exist some applications using different JPEG2000 implementations, all of them decompressing data at the server side and leaving the client a purely passive role of receiving the raw decoded information to be displayed, for instance IIPImage [20], Djatoka [21], JVSMicroscope [7] and Web Microscope [22], the latter being a reference in certain academic and clinical institutions. This strategy throws away JPEG2000's high compression rates, since only uncompressed data are transmitted, and it ignores the potential processing improvement at the client side. ...
... The JPEG2000 decompression process is expensive because of the decoding and inverse transforming processes, a fact that has limited the application of JPEG2000 in VM. A partial remedy for this bottleneck has been to assign the processing responsibility to a server, which decodes the codestream and sends the resulting raw data [20,21,7,22]. The client acts as a simple information receiver, the potential client resources are never used, and the communication channel is overloaded by transporting uncompressed data. ...
... Different approaches have attempted to improve the degree of granularity. On the one hand, some works [20,21,7] have developed solutions with a lazy client, that is to say, the client just receives the decoded data to be displayed, but these approaches may increase the server load when attending to several clients. On the other hand, some approaches [23][24][25] have opted for tightly-coupled decoder adaptations that enable decoding of specific RoIs, but these solutions can hardly evolve. ...
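A back-of-the-envelope calculation clarifies why shipping raw decoded data is costly. The sketch below compares the size of an uncompressed 256×256 RGB tile with the same tile kept in JPEG2000 form at an assumed 0.5 bits per pixel, a plausible mid-quality rate; the numbers are illustrative assumptions, not measurements from the cited systems.

```python
# Illustrative bandwidth comparison: server-side decoding (raw tiles
# sent to the client) versus sending the JPEG2000 codestream itself.

TILE_W, TILE_H, CHANNELS = 256, 256, 3   # one RGB tile
BITS_PER_PIXEL_J2K = 0.5                 # assumed JPEG2000 rate

raw_bytes = TILE_W * TILE_H * CHANNELS          # 8 bits per sample
j2k_bytes = TILE_W * TILE_H * BITS_PER_PIXEL_J2K / 8

print(f"raw tile:      {raw_bytes / 1024:.1f} KiB")   # ~192 KiB
print(f"JPEG2000 tile: {j2k_bytes / 1024:.1f} KiB")   # ~4 KiB
print(f"overhead:      {raw_bytes / j2k_bytes:.0f}x more data on the wire")
```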
Article
Real integration of Virtual Microscopy with the pathologist's service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images. Nowadays, mobile devices have real potential for supporting an online pervasive network of specialists working together. However, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI from mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely-coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: memory usage did not exceed 7% of the device capacity, while decoding times were below 200 ms per region of interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios.
... At the heart of the system is the open source IIPImage image server (Pitzalis et al., 2006). IIPImage is a scalable client-server system for web-based streamed viewing and zooming of ultra high-resolution raster images. ...
Article
Full-text available
Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of data sets. It provides access to full precision floating point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch-based and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full precision 32-bit astronomy data.
... The visualization of these images renders significant difficulty in a mobile device. An advanced image visualization server powered by an internet imaging protocol (IIP) and its HTML5-compatible viewer are built in the virtual Linux instance (Pitzalis et al. 2006). The stitched image, when completed by the image analysis service, will be converted into a tiled pyramidal tagged-image file format (TIFF) enabling web access with minimal overhead in mobile devices. ...
Article
Full-text available
Optical imaging techniques have been commonly used in Civil Engineering practice for aiding the archival of damage scenes and, more recently, for image-based damage analysis. However, an evident limitation in current practice is the lack of real-time imaging, computing and analytics capability for team-based visual inspection in a complex built environment. This paper explores a new computing paradigm, called collaborative mobile-cloud computing (CMCC), and proposes a CMCC framework for conducting intelligent civil infrastructure condition inspection. Through software design, this framework synthesizes advanced mobile and cloud computing technologies with three innovative features: (i) context-enabled image collection, (ii) interactive imaging and processing, and (iii) real-time on-demand image analysis and condition analytics. Through field experiments and computational performance evaluation, this paper demonstrates the feasibility of the proposed CMCC framework, which includes verification of real-time imaging, analytics and, particularly, the mobile-cloud computational solution to two representative damage analysis problems considering complex imagery scenes.
... In France, the C2RMF laboratory, connected to the Louvre museum, has digitized more than 300,000 documents taken from French museums, in high resolution (up to 20,000 × 30,000 pixels). The resulting EROS database [149] is for the moment only accessible to researchers whose work is connected with the C2RMF. To widely open the database, the idea is to create a framework to integrate the different security solutions in order to secure the access to the images. ...
Thesis
Full-text available
This habilitation thesis is first devoted to applications related to image representation and coding. While the image and video coding community has traditionally focused on coding standardization processes, advanced services and functionalities have been designed in particular to match content delivery system requirements. In this sense, the complete transmission chain of encoded images now has to be considered. To characterize the ability of any communication network to ensure end-to-end quality, the notion of Quality of Service (QoS) has been introduced. First defined by the ITU-T as the set of technologies aiming at the degree of satisfaction of a user of the service, QoS is now rather restricted to solutions designed for monitoring and improving network performance parameters. However, end users are usually not bothered by pure technical performance but are more concerned about their ability to experience the desired content. In fact, QoS addresses network quality issues and provides indicators such as jitter, bandwidth, loss rate... An emerging research area is then focused on the notion of Quality of Experience (QoE, also abbreviated as QoX), which describes the quality perceived by end users. Within this context, QoE faces the challenge of predicting the behaviour of any end user. When considering encoded images, many technical solutions can considerably enhance the end user experience, both in terms of services and functionalities, as well as in terms of final image quality. Ensuring the effective transport of data and maintaining security while obtaining the desired end quality remain key issues for video coding and streaming. The first parts of my work are then to be seen within this joint QoS/QoE context. From efficient coding frameworks, additional generic functionalities and services such as scalability, advanced entropy coders, content protection, error resilience and image quality enhancement have been proposed. Related to advanced QoE services, such as Region of Interest definition, object tracking and recognition, we further closely studied pseudo-semantic representation. First designed for coding purposes, these representations aim at exploiting textural spatial redundancies at region level. Indeed, research over the past 30 years has provided numerous decorrelation tools that reduce the amount of redundancy across both spatial and temporal dimensions in image sequences. To this day, the classical video compression paradigm locally splits the images into blocks of pixels and processes the temporal axis on a frame-by-frame basis, without any obvious continuity. Despite the very high compression performance of standards such as AVC and the forthcoming HEVC, one may still advocate the use of alternative approaches. Disruptive solutions have also been proposed, and notably offer the ability to continuously process the temporal axis. However, they often rely on complex tools (e.g. wavelets, control grids) whose use is rather delicate in practice. We then investigate the viability of alternative representations that embed features of both classical and disruptive approaches. The objective is to exhibit the temporal persistence of the textural information through a time-continuous description. At last, from this pseudo-semantic level of representation, texture tracking systems, up to object tracking, can be designed. From this technical solution, 3D object tracking is a logical outcome, in particular when considering robotic vision issues.
... From this particular kind of study, others have been initiated (Abas & Martinez, 2002, 2003), including the use of 'fuzzy' classification to recognise crack patterns. This problem has been particularly acute at the research and restoration centre at the Louvre Museum (the C2RMF - Centre de Recherche et Restauration des Musées de France), where a diverse range of imaging technologies has been applied to assist curators, restorers and researchers (Pitzalis et al., 2006; Pauchard et al., 2007). Another interesting research activity in image processing in the restoration sector has involved the visualisation of pigment distributions in paintings using synchrotron K-edge imaging (Krug et al., 2006). ...
Conference Paper
Full-text available
This work aims to deal with the problem of identifying, analysing, and classifying cracks in figurative works. The recent, considerable progress made in techniques for processing visual information has broadened the number of scientific applications in which the display and graphic processing of data play a fundamental role. Cracks consist of many elements; distinguishing and studying them has made it possible to develop a classification that in some cases can also be used as tool to verify a work's authenticity. RESTART, the system presented here, is deemed suitable for use in numerous and varied settings, such as teaching, conservation, study and research. It is able to investigate, study, research and 'restore' the digital, in accordance with the criteria dictated by knowledge. Also of considerable interest is the investigation of a crack based on such characteristics as origin and pathology, and the possibility of analysing the cracks in a fresco. The use of RESTART for such case examples is investigated and proposed.
... In France, the C2RMF laboratory, connected to the Louvre museum, has digitized more than 300,000 cultural items taken from French museums, in high resolution (up to 20,000 × 30,000 pixels). The resulting EROS database [5] is for the moment only accessible to art researchers whose work is directly connected with the C2RMF. The French TSAR project is designed to open the EROS database in a secure, efficient and user-friendly way that involves cryptography and watermarking as well as compression and region-level representation abilities. ...
... A mapping function was used on the metrics' outputs, as recommended by the VQEG Multimedia Test Plan. This mapping function rescales the objective measures to fit the range of the MOS, which in the presented experiment is [1,5], as shown in Table 1. Once the mapping function has been computed for every quality metric, the predicted MOS (often referred to as the MOSp) may then be compared to the observers' MOS. ...
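As an illustration of this kind of mapping, the sketch below fits a monotonic logistic function from raw metric scores onto the [1,5] MOS scale and then compares the predicted MOS (MOSp) with the observers' MOS via a Pearson correlation. The logistic form, the variable names and the sample data are assumptions for illustration, not the exact function or values used in the cited study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic(x, a, b, c, d):
    """Monotonic 4-parameter logistic mapping onto the MOS scale."""
    return a + (b - a) / (1.0 + np.exp(-(x - c) / d))

# Placeholder data: raw scores from one objective metric and the
# corresponding observer MOS values on the [1, 5] scale.
metric_scores = np.array([0.62, 0.71, 0.80, 0.88, 0.93, 0.97])
observer_mos  = np.array([1.4, 2.1, 2.9, 3.6, 4.2, 4.8])

# Fit the mapping, then rescale the metric outputs to predicted MOS.
params, _ = curve_fit(logistic, metric_scores, observer_mos,
                      p0=[1.0, 5.0, metric_scores.mean(), 0.1],
                      maxfev=10000)
mos_p = logistic(metric_scores, *params)

# Compare the predicted MOS with the observers' MOS.
r, _ = pearsonr(mos_p, observer_mos)
print(f"Pearson correlation between MOSp and MOS: {r:.3f}")
```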
... A subjective quality experiment was conducted specifically for LAR-coded images on a set of eight input images, each distorted at five compression rates. Nineteen observers with correct vision were enrolled and had to rate the quality of the distorted images on a [1,5] quality scale. Furthermore, five quality metrics were tested on the 40 distorted images thus obtained. ...
Conference Paper
Full-text available
Quality assessment is of major importance when designing and testing an image/video coding technique. Compression performance is usually evaluated by means of rate-distortion curves. However, the PSNR is commonly employed as the distortion measure. We hereby present a full quality assessment benchmark for the LAR (locally adaptive resolution) coder. We conducted a subjective experiment, where nineteen observers were asked to assess the perceptual quality of LAR coded images under normalized viewing conditions. Furthermore, five objective quality assessment metrics were used in order to determine the most suitable metric for the LAR coder. Finally, both JPEG and JPEG2000 images were generated and assessed during the subjective experiment in order to define the optimal quality metric to be used when comparing the quality of the codecs' output images.
... In France, the C2RMF laboratory, connected to the Louvre museum, has digitized more than 300,000 documents taken from French museums, in high resolution (up to 20,000 × 30,000 pixels). The resulting EROS database [4] is for the moment only accessible to researchers whose work is connected with the C2RMF. In order to widely open the database, two problems have to be resolved. ...
... Five laboratories are involved, namely the C2RMF (Louvre, Paris), IETR (Rennes), IRCCyN (Nantes), LIRMM (Montpellier) and LIS (Grenoble). The C2RMF has developed the EROS (European Research Open System) database storing digital art documentation [4]. Until now, images have been stored in pyramidal TIFF format, involving a storage overhead of 33%. ...
Conference Paper
Full-text available
EROS is the largest database in the world of high resolution art pictures. The TSAR project is designed to open it in a secure, efficient and user-friendly way that involves cryptography and watermarking as well as compression and region-level representation abilities. This paper more particularly addresses the last two points. The LAR codec is first presented as a suitable solution for picture encoding, with compression ranging from highly lossy to lossless. Then we detail the concept of self-extracting region representation, which consists of performing a segmentation process at both the coder and decoder from a highly compressed image, and later locally enhancing the image in a region of interest. The overall scheme provides an efficient, consistent solution for advanced data browsing.
Thesis
The role of museums and libraries is shifting from that of an institution which mainly collects and stores artefacts and works of art towards a more accessible place where visitors can experience heritage and find cultural knowledge in more engaging and interactive ways. Due to this shift, ICT have an important role to play, both in assisting in the documentation and preservation of information, by providing images and 3D models of historical artefacts and works of art, and in creating interactive ways to inform the general public of the significance that these objects have for humanity. The process of building a 3D collection draws on many different technologies and digital sources. From the perspective of the ICT professional, technologies such as photogrammetry, scanning, modelling, visualisation and interaction techniques must be used jointly. Furthermore, data exchange formats become essential to ensure that the digital sources are seamlessly integrated. This PhD thesis aims to address the documentation of works of art by proposing a methodology for the acquisition, processing and documentation of heritage objects and archaeological sites using 3D information. The main challenge is to convey the importance of a 3D model that is "fit for purpose" and that is created with a specific function in mind (i.e. very high definition and accurate models for: academic studies, monitoring conservation conditions over time and preliminary studies for restoration; medium resolution for on-line web catalogues). Hence, this PhD thesis investigates the integration of technologies for 3D capture, processing, integration between different sources, semantic organization of meta-data, and preservation of data provenance.