Fig. 1
Original gray-scale image, where each pixel's intensity is displayed as a number in the range [0, 255]. If we consider the gray-scale image of Fig. 1 as the resultant of non-overlapping image slices, each having specific intensities at its pixel positions, then we can decompose the original image into several slices, which can reconstruct it by applying fundamental mathematical operations. In this sense, the image of Fig. 1 can be decomposed into the following slices.
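The decomposition can be sketched in a few lines of numpy. This is a minimal illustration of the slicing idea, not the paper's implementation, and the 4×4 intensity values are made up for the example.

```python
import numpy as np

def intensity_slices(image):
    """Split a gray-scale image into non-overlapping intensity slices.

    Each slice keeps one distinct intensity at the pixel positions where
    it occurs and is zero elsewhere, so adding the slices back together
    reconstructs the original image exactly.
    """
    return [np.where(image == v, v, 0)
            for v in np.unique(image) if v != 0]

# A 4x4 example in the spirit of Fig. 1 (illustrative values)
img = np.array([[120, 120,  50,  50],
                [120, 200,  50,  50],
                [200, 200, 200,  50],
                [120, 120, 200,  50]])

slices = intensity_slices(img)
assert (sum(slices) == img).all()   # the slices reconstruct the image
```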

Source publication
Chapter
Full-text available
A novel methodology that ensures the computation of the 2-D DCT coefficients of gray-scale images as well as of binary ones, at high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation), the 2-D DCT coefficients can be computed in significantly reduced time, with...

Contexts in source publication

Context 1
... example, let us consider the following 4×4 gray-scale image, where the pixels' intensities have been highlighted so that the decomposition result is clear. If we consider the gray-scale image of Fig. 1 as the resultant of non-overlapping image slices, each having specific intensities at its pixel positions, then we can decompose the original image into several slices, which can reconstruct it by applying fundamental mathematical operations. ...
Context 2
... this sense, the image of Fig. 1 can be decomposed into the following slices. As a result of Definition 1, the following Lemmas 1 and 2 hold: ...

Similar publications

Preprint
Full-text available
Multimodal Deep Learning has garnered much interest, and transformers have triggered novel approaches thanks to the cross-attention mechanism. Here we propose an approach to deal with two key existing challenges: the high computational resources demanded and the issue of missing modalities. We introduce for the first time the concept of knowledge d...
Conference Paper
Full-text available
This paper describes the methodology of The Inception team participation at ImageCLEF Medical 2020 tasks: Visual Question Answering (VQA) and Visual Question Generation (VQG). Based on the data type and structure of the dataset, both tasks are treated as image classification tasks and are handled by using the VGG16 pre-trained model along with a da...
Article
Full-text available
Morphometric measurements of 46 individuals belonging to the Frisa Valtellinese and Saanen breeds were evaluated, carried out with both traditional tools and an opto-informatic system. Differences in the values accounted for 1%, and were never determined to be over 4%. Correlations of the measures obtained by the two different systems gave a value of 0.97 (P<0....
Article
Full-text available
This paper studies image registration of medical images using the Ant Colony Optimization technique. The Ant Colony Optimization algorithm has global optimization ability and facilitates a quick search for the optimal parameters of image registration. In this paper, a modified Ant Colony Optimization algorithm on preprocessed images is proposed to imp...
Article
Full-text available
We present a new deep learning training framework for forecasting significant wave height in the Southwestern Atlantic Ocean. We use the long short-term memory algorithm (LSTM), trained with the ERA5 dataset and also with buoy data. The forecasts are made for seven different locations on the Brazilian coast, where buoy data are available. We consid...

Citations

... The discrete cosine transform of an image f(x, y) of size M × N is as follows [22]: ...
... The computational complexity of the 2-D DCT by the direct method consists of 10N² + 2 multiplications, 3N² additions, 2N² + 1 divisions and 2N² kernel computations [22]. ...
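For reference, computing a single coefficient directly from the standard orthonormal 2-D DCT-II definition looks roughly as follows. The exact operation counts quoted above depend on the formulation used in [22]; this sketch only illustrates the N² structure of the direct method.

```python
import numpy as np

def dct2_coefficient(f, p, q):
    """(p, q) coefficient of the orthonormal 2-D DCT-II of an N x N image,
    computed directly from the definition (O(N^2) kernel evaluations)."""
    N = f.shape[0]
    rho = lambda n: np.sqrt(1.0 / N) if n == 0 else np.sqrt(2.0 / N)
    total = 0.0
    for x in range(N):
        for y in range(N):
            total += (f[x, y]
                      * np.cos((2 * x + 1) * p * np.pi / (2 * N))
                      * np.cos((2 * y + 1) * q * np.pi / (2 * N)))
    return rho(p) * rho(q) * total
```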
Article
Full-text available
This paper proposes a blind watermarking scheme that provides confidentiality, source authentication, ownership identification, integrity, tampering detection and restoration of medical images. It embeds the hospital logo, the electronic patient record and a perceptual hash value of the region of interest in the mid-frequency coefficients of the discrete cosine transform of the region of non-interest of the medical image. The region of interest of the image is restored against attacks including desynchronization attacks and the impulse noise attack, and the severity of tampering due to any attack is measured by comparing the original perceptual hash value of the region of interest with the perceptual hash value extracted from the region of interest of the watermarked image. Restoration is performed when an attack tampers with the region of interest of the image. Very few medical image watermarking schemes are resilient to many different types of singular and hybrid attacks. The proposed method has inbuilt restoration schemes against attacks such as rotation, scaling, translation, shearing, horizontal reflection, vertical reflection and impulse noise. Comparison with the latest state-of-the-art medical image watermarking schemes shows that the performance of the proposed method is superior to the other methods.
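The full pipeline (ROI hashing, record embedding, restoration) is beyond a snippet, but the core step this abstract describes, embedding bits in mid-frequency DCT coefficients of a region, can be sketched as below. The block size, coefficient position and quantization step are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, step=24.0):
    """Embed one watermark bit into a mid-frequency DCT coefficient of an
    8x8 block via quantization index modulation (illustrative only)."""
    C = dctn(block.astype(np.float64), norm='ortho')
    q = int(np.round(C[3, 4] / step))   # (3, 4): an arbitrary mid band
    if q % 2 != bit:                    # force the parity to carry the bit
        q += 1
    C[3, 4] = q * step
    return idctn(C, norm='ortho')

def extract_bit(block, step=24.0):
    """Recover the embedded bit from the coefficient's parity."""
    C = dctn(block.astype(np.float64), norm='ortho')
    return int(np.round(C[3, 4] / step)) % 2
```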
... As the MFRs of median filtered images (ϵ = 3, 5) are very smooth compared to the MFR of the original image (Kang et al. 2013), this motivates us to analyze the frequency information of the original image's MFR and of the median filtered images' MFRs using the 2-D global discrete cosine transform (2-D GDCT). The 2-D GDCT is a popular and computationally cost-effective method used in various image processing applications such as pattern recognition and global watermarking (Cox et al. 1997; Er et al. 2005; Papakostas et al. 2009). The 2-D GDCT of an image MFR is defined as ...
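A minimal sketch of the quantities this snippet works with, assuming the usual definition of the median filtered residual (filtered image minus original) and using scipy for both steps; the window size ϵ = 3 is one of the values quoted above.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import median_filter

def mfr_spectrum(image, eps=3):
    """Median filtered residual (MFR) of an image and its 2-D global DCT."""
    img = image.astype(np.float64)
    mfr = median_filter(img, size=eps) - img   # residual left by the filter
    return dctn(mfr, norm='ortho')             # frequency content of the MFR
```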
Article
Sophisticated image forgeries have made digital image forensics an active area of research. In this area, many researchers have addressed the problem of median filtering forensics. Existing median filtering detectors are adequate for classifying median filtered images in uncompressed mode and in compressed mode at high quality factors. Despite that, the field lacks a robust method to detect median filtering in low-resolution images compressed with low quality factors. In this article, a novel feature set (four feature dimensions), based on first-order statistics of the frequency contents of the median filtered residuals (MFRs) of original and median filtered images, is proposed. The proposed feature set outperforms handcrafted-feature-based state-of-the-art detectors in terms of feature set dimensions and detection results obtained for low-resolution images at all quality factors. The results also reveal the efficacy of the proposed method over a deep-learning-based median filtering detector. Comprehensive results expose the efficacy of the proposed detector in detecting median filtering against other similar manipulations. Additionally, a generalization-ability test on cross-database images supports the cross-validation results on four different databases. Thus, our proposed detector meets the current challenges in the field to a great extent.
... Regarding the DCT computation over the 2-D domain, different image representation schemes can be employed in order to reduce the overall execution time. The works in [40-43] consider a different image representation based on the slice intensity representation (ISR). Although this representation can be useful for some scenarios such as pattern recognition [27,29,46], we adhere to the usual image representation adopted by the signal processing community [2,7,13]. ...
Article
Full-text available
This paper introduces a new fast algorithm for the 8-point discrete cosine transform (DCT) based on the summation-by-parts formula. The proposed method converts the DCT matrix into an alternative transformation matrix that can be decomposed into sparse matrices of low multiplicative complexity. The method is capable of scaled and exact DCT computation, and its associated fast algorithm achieves the theoretical minimal multiplicative complexity for the 8-point DCT. Depending on the nature of the input signal, simplifications can be introduced and the overall complexity of the proposed algorithm can be further reduced. Several types of input signal are analyzed: arbitrary, null-mean, accumulated, and null-mean/accumulated signals. The proposed tool has potential application in harmonic detection, image enhancement, and feature extraction, where the input signal's DC level is discarded and/or the signal is required to be integrated.
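The summation-by-parts factorization is that paper's contribution and is not reproduced here; as a reference point, the exact orthonormal 8-point DCT-II matrix that such fast algorithms decompose can be built and sanity-checked as follows.

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II matrix: entry (k, n) for row k, sample n
C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / N)
               * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
               for n in range(N)]
              for k in range(N)])

assert np.allclose(C @ C.T, np.eye(N))   # C is orthonormal: C C^T = I
X = C @ np.arange(8.0)                   # exact 8-point DCT of a test signal
```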
... The DCT type-II coefficients for an image of resolution N×N with intensity function f(x,y) are described as follows [Papakostas, 2009]: ...
... After the discrete cosine transform (DCT), the (p,q)-th order DCT coefficient of an N×N image with intensity f(x,y) is denoted by C_pq and is built from the cosine kernel functions of the basis, D_n(t), and a normalization factor ρ(n) [11], where 0 ≤ p,q,x,y ≤ N−1. In some experimental results, the distribution of the DCT coefficients resembles a Laplacian after testing with the Kolmogorov-Smirnov method [12]. ...
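Written out, the standard type-II definition these snippets refer to (a textbook form, assumed here to match the kernel D_n(t) and factor ρ(n) cited from [11]) is:

```latex
C_{pq} = \rho(p)\,\rho(q) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, D_p(x)\, D_q(y),
\qquad
D_n(t) = \cos\!\left(\frac{(2t+1)\,n\pi}{2N}\right),
\qquad
\rho(n) =
\begin{cases}
\sqrt{1/N}, & n = 0,\\[2pt]
\sqrt{2/N}, & 1 \le n \le N-1.
\end{cases}
```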
Article
Full-text available
We study compressive sampling (CS) and its application in a video encoding framework. The video input is first transformed into a suitable domain in order to achieve a sparser configuration of coefficients. Then, we apply coefficient thresholding to classify which frames are to be sampled compressively or conventionally. For frames chosen to undergo compressive sampling, the coefficient vectors are projected into smaller vectors using a random measurement matrix. As CS requires two main conditions, i.e. sparsity and matrix incoherence, this research focuses on enhancing the sparsity of the input signal. It was empirically shown that the sparsity enhancement could be reached by applying motion compensation and thresholding the non-significant coefficients. At the decoder side, the reconstruction algorithm can employ basis pursuit or an L1-minimization algorithm.
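The sampling step this abstract describes reduces to a thresholding pass followed by a random projection; a rough numpy sketch, with signal length, measurement count and threshold chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                          # coefficient length, measurements

coeffs = rng.standard_normal(n)           # stand-in for transform coefficients
coeffs[np.abs(coeffs) < 2.0] = 0.0        # threshold non-significant entries

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ coeffs                          # compressive measurements, m << n
```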
... If one has prior knowledge of the noise, then the loop that runs the noise estimation can be avoided, and computational time can be spared that way; however, it is difficult to optimize the algorithms for fast Fourier transforms, discrete cosine transforms and their respective inverses. This is an area in which advances have been made, some marginally more efficient than the previous generation of algorithms [20]. However, when the subroutine is evaluated many times every second, minute increases in efficiency manifest themselves as real time savings. ...
... where the function $Q_L(x, y)$ is a quadratic approximation of the cost function $F(x) = f(x) + g(x)$:

$Q_L(x, y) = f(y) + \langle x - y, \nabla f(y) \rangle + \frac{L}{2}\|x - y\|^2 + g(x)$ (20) ...
Preprint
Full-text available
Inverse problems frequently arise in areas of seismic tomography, medical imaging, underwater acoustics and non-destructive testing, as well as a multitude of other areas. Compressive Sensing methods, a subfield of numerical and harmonic analysis initially proposed in 2005, provide excellent reconstructions while keeping computational and memory costs to a minimum. In this paper, images are sampled and compressed simultaneously using a low-coherence Bernoulli matrix and given a sparse representation in an alternate basis of Fourier, Cosine or Daubechies wavelets. The Fast Iterative Soft Thresholding Algorithm (FISTA), developed by Amir Beck and Marc Teboulle in 2009, arrives at an ℓ1-minimal solution by adaptively setting coefficients in an alternate image basis to zero. This algorithm is compared to the modern Block-Matching 3-D (BM3D) algorithm in de-noising; however, it should be noted that the algorithms have different focuses. FISTA can function as a de-noising algorithm, though it is primarily used to compress the image and to minimize the sampling required for a proper reconstruction. These two algorithms are compared in a test of de-noising capabilities. Additionally, FISTA is compared with a more modern iteration of itself, namely the Fast Iterative Soft Thresholding Algorithm with Fast Gradient Projection (FISTA FGP) proposed by Beck and Teboulle.
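For concreteness, FISTA applied to the usual ℓ1-regularized least-squares objective min_x ½‖Ax − b‖² + λ‖x‖₁ can be sketched as follows. This follows the Beck-Teboulle scheme (including the quadratic model Q_L above), but the iteration count and λ are illustrative choices.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iters=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (Beck & Teboulle 2009)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)   # gradient + prox
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # momentum update
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```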
... Recently, the authors have proposed a new methodology for the acceleration of the computation of moments [25,26] and of the 2-D DCT transform [27], which also provides a mechanism to form novel moment invariants [28] that improve the pattern recognition rate. This methodology is based on the ISR (Intensity Slice Representation) [25-28], according to which a gray-scale image can be considered as the resultant of non-overlapping image slices whose pixels have specific intensities. Based on this representation, we can decompose the original image into several slices, from which we can then reconstruct it by applying fundamental mathematical operations. ...
... are the coordinates of the block b_j with respect to the horizontal and vertical axes, respectively. The reader can refer to [25-28] for more detailed information regarding the computation of the slice moments. In this context, the Sliced Momentgram (SMgram) is determined according to the following definition. Definition 2: A Sliced Momentgram is an image that shows the contribution of each slice's moment in constructing the total moment of the original image. ...
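The property behind the SMgram is simple: the geometric moment is linear in the image, so the total moment is the sum of the per-slice moments. A small sketch under that reading (function names and test values are illustrative, not from the cited works):

```python
import numpy as np

def moment(img, p, q):
    """Geometric moment m_pq = sum_x sum_y x^p * y^q * f(x, y)."""
    x = np.arange(img.shape[0]).reshape(-1, 1)
    y = np.arange(img.shape[1]).reshape(1, -1)
    return float(np.sum((x ** p) * (y ** q) * img))

def slice_moments(img, p, q):
    """Moment of each intensity slice; by linearity they sum to m_pq."""
    return {int(v): moment(np.where(img == v, v, 0), p, q)
            for v in np.unique(img) if v != 0}

img = np.array([[120, 120, 50], [200, 50, 50], [200, 200, 120]])
assert np.isclose(sum(slice_moments(img, 1, 1).values()), moment(img, 1, 1))
```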
Article
Full-text available
A novel descriptor able to improve the classification capabilities of a typical pattern recognition system is proposed in this paper. The introduced descriptor is derived by incorporating two efficient region descriptors, namely image moments and local binary patterns (LBP), commonly used in pattern recognition applications over the last decades. The main idea behind this novel feature extraction methodology is the need for improved recognition capabilities, a goal achieved by the combinative use of these descriptors. This collaboration aims to exploit the major advantages each one presents while they complement each other, in order to compensate for their weak points. In this way, the useful properties of the moments and moment invariants, namely their robustness to the presence of noise, their global information coding mechanism and their invariant behaviour under scaling, translation and rotation, along with the local nature of the LBP, are combined in a single concrete methodology. As a result, a novel descriptor invariant to common geometric transformations of the described object and capable of encoding its local characteristics is formed, and its classification capabilities are investigated through extensive experimental scenarios. The experiments have shown the superiority of the introduced descriptor over the moment invariants, the LBP operator and other well-known descriptors from the literature such as HOG, HOG-LBP and LBP-HF.
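One plausible reading of that combination is simply the concatenation of a global moment-invariant vector with a local LBP histogram. The sketch below uses Hu moment invariants as the moment part; the specific invariants, libraries and parameters are assumptions for illustration, not the paper's exact construction.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def moments_lbp_descriptor(gray):
    """Concatenate global Hu moment invariants with a local LBP histogram."""
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()      # 7 global features
    lbp = local_binary_pattern(gray, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hu, hist])                    # 17-D descriptor

desc = moments_lbp_descriptor(np.random.randint(0, 256, (64, 64), np.uint8))
```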
... This method can be applied to both gray-scale and binary images, and its performance depends strongly on the distribution of the image's intensities. Moreover, its applicability is not restricted to the computation of image moments; it can be extended to the acceleration of the DCT computation [55,56] and to the construction of new moment invariants [57]. ...
Article
This paper discusses possible computation schemes that have been introduced in the past to cope with the efficient computation of the orthogonal image moments. An exhaustive comparative study of these alternatives is performed in order to investigate the conditions under which each scheme ensures high computation rates for several test images. The present study aims to discover the properties and the behaviour of the different methodologies, and it serves as a reference point in the field of moment computation. Some useful conclusions are drawn regarding the applicability and the usefulness of the computation strategies compared, and efficient hybrid methods are proposed to better utilize their advantages.