Figure 3 - uploaded by Vis Madhavan
Camera exposures and timing of the illumination pulses for dual-frame mode of operation. Note that the interframe time cannot be less than the frame-transfer time. 

Source publication
Article
Full-text available
Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. The use of image intensifiers reduces the image resolution and increases the error in applications requiring high-quality images, such as digital image correlation. We report the development of a new type of non-intensified multi-channel camera...

Contexts in source publication

Context 1
... 8. Examples of the pulsing sequence of eight laser pulses at four different wavelengths in order to (a) obtain eight frames at a framing rate of 8 MHz using the four dual-frame 2 MHz cameras, (b) obtain two groups of four frames at a framing rate of 20 MHz each using the four dual-frame 2 MHz cameras, and (c) obtain four pairs of stereo images usable for 3D DIC at a framing rate of 4 MHz using the four dual-frame 2 MHz cameras. Note that the time interval between any two pulses of the same wavelength should be 500 ns, which is the time needed for frame-transfer. Also, though not shown in the figure, it should be realized that the camera corresponding to each of the different wavelengths is triggered slightly before (about 10 ns) the first laser pulse of that wavelength, and the first exposure ends shortly after the laser pulse (similar to that shown in figure 3). ...
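Sequence (a) above can be sketched numerically. The 125 ns interframe spacing and the 500 ns frame-transfer time come from the text; the function and variable names are illustrative, not from the source:

```python
# Sketch of pulsing sequence (a): eight frames at 8 MHz from four dual-frame
# cameras, one wavelength per camera.  Two pulses per wavelength; pulses of
# the same wavelength must be at least one frame-transfer time (500 ns) apart.

FRAME_TRANSFER_NS = 500.0

def schedule_8mhz(n_wavelengths=4):
    interframe_ns = 1e9 / 8e6            # 125 ns between successive frames
    pulses = []                          # (time_ns, wavelength_index)
    for i in range(2 * n_wavelengths):   # two pulses per wavelength
        pulses.append((i * interframe_ns, i % n_wavelengths))
    return pulses

# Each wavelength's two pulses land exactly 500 ns apart, satisfying the
# frame-transfer constraint stated in the caption:
for w in range(4):
    t0, t1 = [t for t, wl in schedule_8mhz() if wl == w]
    assert t1 - t0 >= FRAME_TRANSFER_NS
```

Sequences (b) and (c) follow the same pattern with different inter-pulse spacings (50 ns within a 20 MHz burst, 250 ns between stereo pairs).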
Context 2
... achieved by reverse clocking the charges to a drain), allowing charge to accumulate for the required exposure time, followed by frame-transfer of the image to the storage section [6]. A wide variety of high speed cameras, with framing rates ranging from 1 kHz to 200 MHz, are currently available commercially. In common usage, the term ‘high speed camera’ is used both for cameras capable of capturing image sequences and for single shot cameras. A single shot high speed camera is one capable of capturing a high speed image (i.e., an image with a very short exposure time) that appears to freeze the motion of a moving object; the speed of such a camera simply refers to the inverse of the exposure time. On the other hand, the high speed cameras being discussed here, which are the most common and practical, are those capable of capturing a sequence of high speed images with very short interframe separation. The speed, or frame rate, of such a camera refers to the inverse of the interframe time, while it is naturally understood that, for all practical purposes, the exposure time is less than, or at most equal to, the interframe time. The major limitation on the maximum frame rate that can be achieved using CCD image sensors is imposed by the time needed to read out the captured image(s) from the image sensor. The read-out speeds of most CCD cameras range between 10 and 40 MHz, with 10 MHz being the most typical. For instance, a 10 MHz read-out speed means that, for a 1 megapixel sensor, it would take about 0.1 s to read a full frame (i.e., the framing rate is 10 Hz). With increasing read-out speeds, the read-out noise also increases; therefore, using higher read-out speeds is not necessarily desirable. A variety of techniques can be used to overcome this limitation on the maximum framing rates imposed by the read-out time. 
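The read-out arithmetic in this passage can be checked in a few lines; the 10 MHz read-out speed and 1 megapixel sensor are the article's example values, not specs of any particular camera:

```python
# Full-frame readout time for a CCD: every pixel must be clocked off the
# sensor at the read-out rate, so readout time = pixel count / readout rate.

def full_frame_readout_s(n_pixels: int, readout_hz: float) -> float:
    """Time (seconds) to read the whole frame off the sensor."""
    return n_pixels / readout_hz

# 1-megapixel sensor at a 10 MHz read-out speed:
t = full_frame_readout_s(1_000_000, 10e6)
print(t)        # 0.1  -> about 0.1 s per frame
print(1.0 / t)  # 10.0 -> i.e., a framing rate of only 10 Hz
```

This is exactly why read-out time, not exposure time, caps the frame rate of full-resolution CCD capture.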
One of these techniques relies on reducing the size of the image to be read out (i.e., the pixel resolution) in order to reduce the read-out time and therefore increase the frame rate. Binning (averaging neighboring pixels) and windowing (using a subset of the sensor for image capture and read-out) are two techniques used with high resolution CCDs to reduce the size of the image and consequently increase the frame rate. Another technique that can be used to achieve much higher framing rates is similar to ‘windowing’, in the sense that a subset of the CCD array is used for each image. However, the pixels representing different images are interleaved; each set of pixels representing an image is exposed at one particular time, and instead of reading out each individual image as it becomes available, the images are kept on the CCD until a number of images have been recorded and they are all read out together [8, 9]. Furthermore, it is also possible to increase the frame rate by dividing the CCD into multiple regions which are read out simultaneously through separate read-out sections [10]. High speed cameras using a combination of these techniques can acquire image sequences at frame rates of the order of 1 kHz to 100 kHz. However, these techniques cannot increase the read-out speed by the several additional orders of magnitude that would be required to achieve truly high speed capture of high-resolution images. A new type of imaging sensor known as the in situ storage image sensor (ISIS) is capable of recording 100 consecutive frames with 312 × 260 pixel resolution at a framing rate of 1 MHz [11]. The concept of the ISIS CCD is similar to that of the interline CCD (figure 2(b)), which has a local memory interspersed within the image section, but instead of having a single storage element for each pixel, multiple elements are available. 
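The resolution-for-speed trade of binning and windowing can be illustrated with a rough model; this sketch assumes frame rate is set purely by the number of pixels read out and ignores per-row transfer overheads that real cameras incur:

```python
# Approximate frame rate after n x n binning (or for a reduced window),
# assuming the readout clock is the only bottleneck.

def frame_rate_hz(rows: int, cols: int, readout_hz: float, binning: int = 1) -> float:
    """Full-readout frame rate for the given sensor region and binning."""
    pixels_out = (rows // binning) * (cols // binning)
    return readout_hz / pixels_out

# Full 1000 x 1000 frame at 10 MHz readout: ~10 fps.
# 2x2 binning quarters the data, roughly quadrupling the rate: ~40 fps.
# A 100 x 100 window reads out 100x faster still: ~1000 fps.
print(frame_rate_hz(1000, 1000, 10e6))             # 10.0
print(frame_rate_hz(1000, 1000, 10e6, binning=2))  # 40.0
print(frame_rate_hz(100, 100, 10e6))               # 1000.0
```

Even aggressive windowing only reaches the kHz range, consistent with the text's point that these techniques cannot bridge the remaining orders of magnitude to MHz framing.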
During the image-capturing phase, image signals are transferred to the in situ memory without being read out of the sensor, and the number of obtainable frames in a sequence is equal to the number of storage elements installed in each pixel. The storage elements in this type of sensor occupy 87% of the total area of each pixel, which means that the photosensitive area of each pixel (i.e., the fill factor) is only 13%. Therefore, there are concerns that such a sensor may not be suitable for applications involving PIV or DIC, since there is a high chance that two completely different areas of the field of view will be captured in any two successive frames. The use of short duration illumination pulses permits the operation of frame-transfer or interline CCDs in a special mode known as ‘dual-frame’, wherein two images can be recorded in very quick succession [4]. In this mode of operation, the first image is captured at the time the first illumination pulse is incident on the subject, typically close to the end of the exposure time of the first frame; immediately after the frame is transferred to the storage section, the second illumination pulse is used to expose the subject during the exposure time of the second frame. Figure 3 illustrates the timing of the camera exposures and the illumination pulses in this mode of operation. Note that, though the exposure of the first frame can be controlled by the electronic shuttering technique described previously, the exposure of the second frame continues for the entire time it takes for the first image to be read out from the storage section. However, though the exposure time of the second frame is very long, the illumination pulse duration defines the ‘effective’ exposure time. This approach allows the acquisition of a pair of images, one in the image section and the other in the storage section. 
Thus, in the dual-frame mode of operation the camera can capture two frames in very quick succession, and then has to wait for hundreds of milliseconds (the read-out time of the two frames) before it can capture another pair of images. The minimum interframe separation is limited by the frame-transfer time, which can be as small as 50 ns for the most recent models. Cameras optimized to minimize the frame-transfer time and aimed entirely at this mode of operation are referred to as dual-frame cameras, and a variety of such cameras are available commercially. It should be noted that the illumination should be provided in the form of a pulse that ends before frame transfer begins in order to prevent smearing. Otherwise, smearing during frame-transfer will be significant, since the frame-transfer time is not small compared with the effective exposure time (i.e., the duration of the illumination pulse) and the interframe separation. It should also be apparent that this method of image recording is more suitable for low ambient light conditions, since the actual exposure time of the second frame is relatively long. To realize operation at the minimum interframe time that these cameras allow, namely the frame-transfer time, pulsed illumination has to be provided by sources that are bright enough to adequately expose the CCDs within a time interval that is a small fraction of the frame-transfer time, shortly before and shortly after the transfer of the first frame. Pulsed lasers are increasingly being used as the illumination source; they are capable of providing up to 1 J of illumination within a pulse duration as short as a few nanoseconds. If a single laser head is used to produce a train of pulses at a very high repetition rate, the energy per pulse will be very small and insufficient for many imaging applications. Therefore, when multiple pulses with very short inter-pulse separation are needed, multiple synchronized laser heads are typically used. 
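The anti-smearing constraint described above (pulse 1 must end before frame transfer starts; pulse 2 must start after it completes) can be expressed as a small check. All names and the example times here are illustrative:

```python
# Dual-frame timing check: an illumination pulse overlapping the frame
# transfer interval smears charge as it is shifted to the storage section.

def dual_frame_ok(pulse1_end_ns: float, transfer_start_ns: float,
                  transfer_ns: float, pulse2_start_ns: float) -> bool:
    """True if neither pulse overlaps the frame-transfer interval."""
    transfer_end = transfer_start_ns + transfer_ns
    return pulse1_end_ns <= transfer_start_ns and pulse2_start_ns >= transfer_end

# Short pulses bracketing a 50 ns frame transfer (the fastest quoted above):
print(dual_frame_ok(pulse1_end_ns=95, transfer_start_ns=100,
                    transfer_ns=50, pulse2_start_ns=155))  # True
# A first pulse that runs into the transfer would smear:
print(dual_frame_ok(pulse1_end_ns=120, transfer_start_ns=100,
                    transfer_ns=50, pulse2_start_ns=155))  # False
```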
Dual-frame cameras are usually used in conjunction with dual-pulse (dual-cavity) Nd:YAG lasers for PIV in high speed flows. This type of camera is capable of capturing full-resolution images of good quality; however, the maximum frame rate is limited by the frame-transfer time and the number of obtainable frames is limited to two. In order to capture sequences of more than two full-resolution images at ultra high speeds, multiple cameras combined into multi-channel camera systems are typically used. Multi-channel cameras consist of multiple CCDs or cameras sharing the same viewing axis (using beam splitter(s), a rotating mirror, a rotating prism, etc), which are triggered in very quick succession to capture a sequence of images [6]. By the use of multiple cameras, the frame rate limitation imposed by the read-out time is eliminated and multiple images, corresponding to the number of internal cameras or CCDs, can be recorded. In the most commonly used type of multi-channel camera, known as a gated-intensified camera, the light collected by the objective lens is delivered to the internal CCDs/cameras using a beam-splitter that splits the image into multiple identical images (multiple beam-splitters arranged in a branching configuration can also be used). Each of the internal cameras has an image intensifier comprising a photocathode screen that emits photoelectrons in proportion to the image intensity, a microchannel plate that uses the avalanche effect to amplify the electron current, a scintillator that reforms a visible image, and a CCD optically coupled to the scintillator screen to form an electronic image. The ability to switch the microchannel plate on or off rapidly is used as an external shutter to control the exposure time (exposure times down to about 1.5 ns can be achieved) and the exposure sequence of the CCDs/cameras. The shuttering provided by the intensifiers removes the requirement that the illumination be pulsed. 
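A back-of-the-envelope light budget shows why intensifier gain matters in such beam-split systems; this sketch assumes an ideal lossless N-way split (real splitters add further losses), and the numbers are illustrative:

```python
# Splitting the collected light N ways gives each internal CCD at most 1/N of
# the photons; the intensifier gain multiplies the signal back up.

def light_per_channel(total_photons: float, n_channels: int,
                      intensifier_gain: float = 1.0) -> float:
    """Equivalent signal reaching one channel after split and amplification."""
    return total_photons / n_channels * intensifier_gain

# An 8-channel gated-intensified camera: each CCD sees 1/8 of the light, but
# a gain of 1e3 (a commonly used value per the text) far exceeds the split loss.
print(light_per_channel(1e6, 8, 1e3))  # 125000000.0
```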
Typically, illumination is provided in the form of a single flash of light of duration long enough to capture a sequence of frames. The second function of the intensifier is to amplify the received light (gains up to 10³ are commonly used), which helps reduce the relative importance of read-out noise. Though frame rates in excess of 100 MHz can be achieved with such cameras, the resolution of the images is ...

Similar publications

Article
Full-text available
This work is a culmination of several corresponding studies designed to probe the initiation and reaction of aluminum nanothermite systems. The main diagnostic tool used in this study is a Temperature-Jump/Time-of-Flight Mass Spectrometer (T-Jump/TOFMS), which uses a filament heating method capable of very high heating rates up to 10⁶ °C/s,...
Article
Full-text available
We propose a novel method called compressed sensing with linear-in-wavenumber sampling (k-linear CS) to retrieve an image for spectral-domain optical coherence tomography (SD-OCT). An array of points that is evenly spaced in the wavenumber domain is sampled from an original interferogram by a preset k-linear mask. Then the compressed sensing based o...
Article
Full-text available
Shear wave optical coherence elastography (SW-OCE) is a quantitative approach to assess tissue structures and elasticity with high resolution, based on OCT. Shear wave imaging (SWI) is the foundation of shear wave elasticity imaging (SWEI), which is a quantitative approach to assess tissue structures and pathological status. In order to enhance ela...
Article
Full-text available
On-line monitoring of the quality of laser welding is of interest for many industrial applications. For photodiodes the monitoring strategy usually aims at observing whether the signal exceeds a threshold. This well known technique is mainly based on empirical values and the monitoring system has to be trained for each application. For an improved...

Citations

... In order to realize accurate identification of road damage with a small number of samples, a VGG-19 network model is constructed in MATLAB. The acquired road damage images must first be processed to make the illumination uniform, reduce noise interference, eliminate irrelevant information, restore useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, yielding samples from which road damage parameters can be effectively identified for the VGG-19 network model [28]. ...
Article
Full-text available
In recent years, methods of road damage detection, recognition and classification have achieved remarkable results, but efficient and accurate detection, recognition and classification remain challenging. To address this problem, this paper proposes a VGG-19 model construction method that can be used for road damage detection. The road damage images are processed with digital image processing (DIP) techniques and then combined with an improved VGG-19 network model to study how to improve the recognition speed and accuracy of the VGG-19 road damage model. Based on the performance evaluation indices of the neural network model, the feasibility of the improved VGG-19 method is verified. The results show that, compared with the traditional VGG-19 model, the road damage recognition model proposed in this paper shortens the training time by 79% and the average test time by 68%. In the performance evaluation of the neural network model, the comprehensive performance index is improved by 2.4% compared with the traditional VGG-19 network model. This research helps improve the performance of the VGG-19 road damage identification network model and its fit to road damage.
... Hijazi and Madhavan (2008) developed an ultra high-speed polyvalent system capable of recording a sequence of eight images or four pairs of stereo images at frame rates up to 10⁶ fps to observe chip formation. Using high magnification (0.27 µm·px⁻¹) on a small field of view (350 µm × 250 µm), they were able to determine velocity and shear strain rates. ...
Article
Titanium alloys, largely used for aeronautical applications, are difficult to machine. High cutting forces, chip serration and significant tool wear reflect this poor machinability, limiting productivity. One way of improving the machinability of titanium alloys consists of controlling their microstructure. In the present work, the impact of the microstructure of the Ti5553 alloy on chip formation and cutting forces is investigated. For this purpose, a novel experimental approach is proposed. Orthogonal cutting tests are performed on eight different microstructures, which allows studying the impact of the alpha phase fraction as well as the size and shape of its particles. Also, an original post-processing method based on machine learning provides chip morphological information from images recorded with two high speed cameras. This information is complemented by the cutting forces measured with a dynamometer. In contrast with commonly used approaches, the proposed method is not limited to the formation of a few segments, but uses the full dataset acquired during a test. The results obtained for the different microstructures indicate that no direct link can be established between the cutting forces and hardness, since minimal cutting forces are obtained for microstructures with an intermediate hardness. For microstructures with low hardness, high cutting forces result from a significantly thicker chip; conversely, for the microstructures leading to high hardness, a high flow stress generates high cutting forces. This study also suggests that chip morphology is primarily affected by the alpha phase fraction, while the size and morphology of alpha phase particles have little influence.
... Numerical simulation results are mostly compared with experimental ones at the macroscopic level [10][11][12][13] through (i) force component measurements, (ii) chip morphology and microstructure analysis and (iii) temperature measurement at the tool tip [14][15][16][17]. These remain global quantities and limit the understanding of local phenomena such as strain localization [18][19][20]. The strong interest in understanding local phenomena during chip formation has led researchers to develop dedicated experimental protocols allowing in-process visualization of the material flow in the cutting zone. ...
Article
Full-text available
Orthogonal cutting of Ti-6Al-4V produces segmented chips resulting from localized deformation within the Primary Shear Zone (PSZ). For in situ visualization of the material flow during Ti-6Al-4V chip formation, a new high-speed optical system is proposed. Difficulties arise from the submillimetric size of the cutting zone. Therefore, a dedicated optical system with coaxial illumination is designed, allowing local scale analysis of chip formation. In this paper, the cutting forces were measured, highlighting the unsteady nature of Ti-6Al-4V chip segmentation. For kinematic field measurement, a novel Digital Image Correlation (DIC) method is applied to the images recorded from the cutting zone. It allows the identification of the localized deformation bands. Then, the level of the cumulative strain fields reached in the PSZ was presented and analyzed, and the effect of the rake angle on the strain fields was studied. With a 0° rake angle, the chip segment was found to undergo more deformation than with a 15° rake angle. The accuracy of the measured strain fields was discussed as a function of the main sources of error. In addition, the chip morphology and microstructure were investigated with a scanning electron microscope (SEM). This shows crack opening along the PSZ and extensive material failure near the tool-chip interface, which explains the difficulties of applying DIC to kinematic field measurement during orthogonal cutting. Finally, a correlation between the measured strain fields and the mean value of the chip segment width was made.
... Following the same principle as the previously cited studies, Hijazi and Madhavan [2008] developed a method using four cameras coupled to a microscope (Figure I.20(c)). It is then possible to record either four pairs of images allowing stereo vision, or a sequence of eight closely spaced successive images. ...
Thesis
Titanium alloys are widely used in aeronautical applications, notably for their high specific strength. However, their refractory character makes shaping by material removal difficult, leading to premature cutting-tool wear and degraded surface integrity. While optimizing the microstructure is one route to improving machinability, its role in cutting remains poorly understood. The present work, focused on the Ti5553 titanium alloy, aims to better understand the influence of the microstructure on chip formation in orthogonal cutting. Different microstructures, with different fractions, morphologies and sizes of alpha-phase particles, are considered. For each microstructure, orthogonal cutting tests are performed at several cutting speeds using an instrumented planing bench. Using two high-speed cameras, this setup makes it possible to observe chip formation on both sides of the specimen and to record the forces at the tool tip. An original image-analysis method, based on semantic segmentation by deep learning, is developed; it ultimately yields quantities representative of chip morphology. Analysis of the results shows that chip thickness decreases as hardness increases. This explains the absence of an obvious relationship between hardness and cutting force, which is high both for high hardness (high flow stress) and for low hardness (large chip thickness). Also, for a given microstructure, fluctuations in the cutting force are linked to fluctuations in the morphological characteristics of the chip. These fluctuations vary strongly with the microstructure considered, the alpha-phase fraction being the dominant microstructural parameter.
... Numerical simulation results are mostly compared with experimental ones at the macroscopic level [10][11][12][13] through: (i) force component measurements, (ii) chip morphology and microstructure analysis and (iii) temperature measurement at the tool tip [14][15][16][17]. These remain global quantities and limit the understanding of local phenomena such as strain localization [18][19][20]. The strong interest in understanding local phenomena during chip formation has led researchers to develop dedicated experimental protocols allowing in-process visualization of the material flow in the cutting zone. ...
Preprint
Full-text available
In situ visualization of the material flow during orthogonal cutting is achieved using a new high-speed optical system. Difficulties arise from the submillimetric size of the cutting zone. Therefore, a dedicated optical system was designed, allowing local scale analysis of chip formation. The Digital Image Correlation (DIC) technique is applied to images recorded from the cutting zone to measure the kinematic fields. Then, the effect of the cutting conditions on chip formation is presented with local scale analysis.
... Today, DIC has been successfully utilized in a very wide variety of applications ranging from mechanical, aerospace, structural, civil, electronics, materials, and manufacturing engineering, to non-destructive testing and evaluation, to biomedical and life sciences [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. Also, DIC can be performed using images ranging in scale from microscopic (even scanning electron microscopy) images all the way up to images of full-scale structures, and ranging in capture speed from a few frames per second (fps) all the way to more than one million fps [1][2][3][4][5][6][17][18][19][20]. Furthermore, DIC has also found use in high temperature applications using images captured in the ultraviolet spectrum [21]. ...
Article
Full-text available
The two-dimensional digital image correlation (2D-DIC) technique is used for making full-field in-plane deformation/strain measurements on planar surfaces. One of the basic requirements for making measurements using 2D-DIC is that the camera observe the target surface perpendicularly. Ensuring camera perpendicularity before making measurements using 2D-DIC is important because errors will be induced in the measured displacements/strains if the camera is not oriented properly. During the initial arrangement of an experimental setup, small camera misalignment angles of one or two degrees can easily go undetected. This paper reports a simple and reliable approach for verifying camera perpendicularity in 2D-DIC experiments, and for measuring the tilt angle(s) if the camera is not perpendicular to the surface. The approach uses in-plane rigid-body translation, where the strain error(s) obtained from DIC measurements are used to calculate the tilt angle(s). The translation can be either parallel to the target plane (done by moving the target) or parallel to the camera plane (done by moving the camera), with a different set of equations used for calculating the tilt angles in each scenario. A translation of known magnitude in any in-plane direction (parallel to the x or y axes of the image, or at any angle in between) is all that is required to calculate the tilt angle(s). The approach is also capable of determining the tilt angles if the target is tilted about either of the two in-plane axes (x or y) or about both axes simultaneously. Several rigid-body translation experiments are performed under different conditions to evaluate the validity and accuracy of this approach at tilt angles between 1° and 4°. The results show that tilt angles as small as 1° can be calculated accurately, and that rigid-body translation as small as 2% of the field-of-view width can be used for making measurements with good accuracy.
... The temporal and spatial resolution of image-based methods is determined by the pixel count and data transmission rate of the image sensor, as well as the performance of the algorithms. The design of high-speed cameras makes it possible to capture high-resolution images at high frame rates [9]. However, this typically results in huge data throughput for storage and transmission, along with an exponential increase in image processing computation [10,11]. ...
Article
Full-text available
The detection and positioning of point targets has critical applications in many fields. However, the spatial and temporal resolution of image-based systems is limited by the large amount of data involved. In this work, an image-free system with less data and a high update rate is proposed for the detection and positioning of point targets. The system uses a digital micromirror device (DMD) for light modulation and a pixel array as the light intensity detector; the DMD is divided into multiple blocks to selectively acquire the intensity information in the region of interest. The centroid position of a point target is calculated from the intensity on the adjacent rows or columns of the micromirror array. Simulation indicates that the performance of the proposed method is close to or better than that of traditional methods. In static experiments, the centroiding accuracy of the proposed system is about 0.013 pixel. In dynamic experiments, the centroiding accuracy is better than 0.07 pixel for signal-to-noise ratios (SNR) greater than 35.2 dB. Meanwhile, the system has an update rate of 1 kHz over a range of 1024×768 pixels, and the method acquires only 8 bytes of data for one-time positioning of a point target, making it applicable to real-time detection and positioning of point targets.
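The row/column centroiding idea can be sketched with the generic intensity-weighted centroid formula; this is a standard formulation, not necessarily the authors' exact algorithm:

```python
import numpy as np

# Centroid of a point target from 1-D intensity profiles: each axis position
# is weighted by the total intensity summed along the other axis.

def centroid_from_profiles(row_sums: np.ndarray, col_sums: np.ndarray):
    """Return (x, y) intensity-weighted centroid from row/column sums."""
    rows = np.arange(len(row_sums))
    cols = np.arange(len(col_sums))
    y = np.sum(rows * row_sums) / np.sum(row_sums)
    x = np.sum(cols * col_sums) / np.sum(col_sums)
    return x, y

# A symmetric spot centered at index 2 on both axes:
x, y = centroid_from_profiles(np.array([0, 1, 4, 1, 0.]),
                              np.array([0, 1, 4, 1, 0.]))
print(x, y)  # 2.0 2.0
```

Because only the two 1-D profiles are needed, the data per position fix stays tiny, which is the premise of the image-free approach described above.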
... Since the seminal work of Sutton [19], Digital Image Correlation (DIC) has been the subject of many developments [20][21][22][23][24][25][26], but it is still rarely used in a machine tool context. In this context, most works concern the analysis of orthogonal cutting [27][28][29][30][31][32][33][34][35]. In these studies, full-field measurements are performed in order to observe chip formation during a planing operation or during the cutting of a disc or tube. ...
Article
In the context of the aeronautics industry, aluminum alloy structural parts are manufactured in several stages, from forming processes and heat treatments to final machining. Some process steps may generate residual stresses. Material removal during machining releases these residual stresses, which induces part deformation. Such deformations can lead to geometric nonconformity of the machined part, so it is essential to control this phenomenon. Due to the variability in residual stress distribution in each raw part, the modeling approaches must be coupled with experimental measurements. This article thus aims to define a reliable experimental technique for measuring the in-plane deformation of large aeronautical parts during machining. The technique relies on Digital Image Correlation (DIC), which enables contactless measurement of part deformation during machining. Moreover, DIC provides a full-field measurement and a direct evaluation of part deformations. This work discusses more specifically the problems related to the use of DIC during machining, which is a particularly harsh environment: optical systems undergo undesirable movement, and metal chips hide areas of the observed part. These unwanted events corrupt the results. In order to control these problems and consistently apply DIC part deformation measurement during machining, specific methods are proposed in this paper. DIC measurements are then performed during the same machining sequence on two parts; the excellent agreement of the two measurements confirms the reliability of the technique. Finally, the measurements are discussed, emphasizing the contribution they provide to the machining community.
... Indeed, ultrahigh-speed cameras are currently reaching hundreds of megahertz frame rates. 12 The images might be sequentially post-processed by the neural network, achieving high-rate metrology that allows the study of transient dynamic processes in nanostructures. ...
Article
Full-text available
Microscopes and various forms of interferometers have been used for decades in optical metrology of objects that are typically larger than the wavelength of light λ. Metrology of sub-wavelength objects, however, was deemed impossible due to the diffraction limit. We report the measurement of the physical size of sub-wavelength objects with deeply sub-wavelength accuracy by analyzing the diffraction pattern of coherent light scattered by the objects with deep learning enabled analysis. With a 633 nm laser, we show that the width of sub-wavelength slits in an opaque screen can be measured with an accuracy of ∼λ/130 for a single-shot measurement or ∼λ/260 (i.e., 2.4 nm) when combining measurements of diffraction patterns at different distances from the object, thus challenging the accuracy of scanning electron microscopy and ion beam lithography. In numerical experiments, we show that the technique could reach an accuracy beyond λ/1000. It is suitable for high-rate non-contact measurements of nanometric sizes of randomly positioned objects in smart manufacturing applications with integrated metrology and processing tools.
... They indicated that this directionality is likely to be attributed to the difference in the fill factor in the horizontal and vertical directions of the image sensor. Also, it is generally believed that the use of cameras with low fill factors can lead to lower accuracy in photogrammetric measurements [17,18]. There are some studies addressing the effect of fill factor on resolution for infrared (IR) imaging sensors; however, there are no studies addressing the effect of the imaging sensor fill factor on photogrammetric measurements in general and DIC measurement accuracy in particular. ...
... The higher the fill factor, the more sensitive the imaging sensor is to light, increasing its quantum efficiency. The three most commonly known types of CCD sensors are the "full frame" CCD, the "frame transfer" CCD, and the "interline" CCD [17,21]. Figure 1 shows the layout of the full frame CCD and the interline CCD. ...
... The same is true for CMOS imaging sensors, where the fill factor will be even lower than that of interline CCDs since each pixel has charge-to-voltage conversion, amplification, and digitization circuits onboard. The fill factor for some types of high speed imaging sensors can reach as low as 13% [17]. For the types of imaging sensors typically used in most digital cameras used in DIC measurements, such as the interline CCD and CMOS imaging sensors, the fill factor is roughly in the range of 40% to 70%. ...
Preprint
Full-text available
The camera's focal plane array (FPA) fill factor is one of the parameters of digital cameras, though it is not widely known and usually not reported in spec sheets. The fill factor of an imaging sensor is defined as the ratio of a pixel's light-sensitive area to its total theoretical area. It is generally believed that a lower fill factor may reduce the accuracy of photogrammetric measurements. Nevertheless, there are no studies addressing the effect of the imaging sensor's fill factor on digital image correlation (DIC) measurement accuracy. We report on research aiming to quantify the effect of fill factor on DIC measurement accuracy in terms of displacement error and strain error. We use rigid-body translation experiments and then numerically modify the recorded images to synthesize three different types of images with 1/4 of the original resolution. Each type of synthesized image has a different fill factor: 100%, 50% and 25%. By performing DIC analysis with the same parameters on the three types of synthesized images, the effect of fill factor on measurement accuracy can be assessed. Our results show that the FPA's fill factor can have a significant effect on the accuracy of DIC measurements. This effect clearly depends on the type and characteristics of the speckle pattern. The fill factor has a clear effect on measurement error for low contrast speckle patterns and for high contrast speckle patterns (black dots on white background) with small dot size (3 pixel dot diameter). However, when the dot size is large enough (about 7 pixels in diameter), the fill factor has a very minor effect on measurement error.
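The image-synthesis idea in this abstract can be sketched as follows; the particular sub-pixel subsets chosen for the 50% and 25% cases are my assumption of one plausible mapping, not the authors' exact procedure:

```python
import numpy as np

# Synthesize a quarter-resolution image with a chosen effective fill factor
# by averaging a subset of each 2x2 block: averaging all four sub-pixels
# mimics 100% fill, two sub-pixels ~50%, one sub-pixel ~25%.
# NOTE: the subset choices below are illustrative assumptions.

def downsample_fill_factor(img: np.ndarray, fill: float) -> np.ndarray:
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    if fill == 1.00:                  # average all 4 sub-pixels
        return blocks.mean(axis=(1, 3))
    if fill == 0.50:                  # average one 2x1 half of each block
        return blocks[:, :, :, 0].mean(axis=1)
    if fill == 0.25:                  # keep a single sub-pixel per block
        return blocks[:, 0, :, 0]
    raise ValueError("fill must be 1.0, 0.5 or 0.25")

img = np.arange(16, dtype=float).reshape(4, 4)
print(downsample_fill_factor(img, 1.0))   # block means
print(downsample_fill_factor(img, 0.25))  # top-left sub-pixel of each block
```

Running DIC with identical parameters on the three synthesized variants then isolates the fill-factor effect, as the abstract describes.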