Elevation of a 5-hectare section of the dataset, estimated by In-SAR, for noisy and denoised data.


Source publication
Preprint
Full-text available
Synthetic Aperture Radar (SAR) is a type of airborne radar that increases the effective diameter of the radar aperture by traveling over the target area, which consequently produces images with higher resolution. Tomo-SAR is the process of creating three-dimensional SAR models of the landscape using multi-pass two-dimensional SAR images. The outpu...

Contexts in source publication

Context 1
... elevation calculated based on the reflectivity profile estimated by In-SAR is shown in Fig. 2. It can be seen how the process of denoising can affect the elevation. SVD is one of the most reliable methods for tomography, which means the resulting elevation is very close to the true value of the elevation. However, it is important to remember that the dimensions of this three-dimensional model are azimuth, range and ...
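The SVD-based tomographic estimation mentioned above can be sketched in a few lines. Everything in this sketch is illustrative, not taken from the paper: the baseline geometry, wavelength, range, and scatterer position are hypothetical values, and the steering-matrix model is the standard TomoSAR formulation. The reflectivity profile along elevation is recovered with a truncated-SVD pseudo-inverse, and the elevation estimate is read off at the profile's peak.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry (illustrative values, not from the paper):
# M passes with different perpendicular baselines, N candidate elevation bins.
M, N = 8, 64
wavelength, slant_range = 0.031, 5000.0      # X-band wavelength (m), slant range (m)
baselines = np.linspace(-2.0, 2.0, M)        # perpendicular baselines (m)
s = np.linspace(-50.0, 50.0, N)              # candidate elevations (m)

# Steering matrix: interferometric phase of each elevation bin seen from each pass.
A = np.exp(1j * 4 * np.pi / (wavelength * slant_range) * np.outer(baselines, s))

# Simulate a single scatterer at 20 m elevation plus a little complex noise.
gamma_true = np.zeros(N)
gamma_true[np.argmin(np.abs(s - 20.0))] = 1.0
y = A @ gamma_true + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Truncated-SVD inversion: regularized pseudo-inverse of the steering matrix.
U, sv, Vh = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(sv > 0.1 * sv[0]))            # keep only dominant singular values
gamma_hat = Vh[:k].conj().T @ ((U[:, :k].conj().T @ y) / sv[:k])

# Elevation estimate: peak of the recovered reflectivity profile.
elevation_est = s[int(np.argmax(np.abs(gamma_hat)))]
print(round(float(elevation_est), 1))
```

Truncating the small singular values is what makes the inversion robust to noise; with all singular values kept, the pseudo-inverse amplifies noise in the poorly observed directions of the elevation profile.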

Similar publications

Article
Full-text available
Synthetic aperture radar (SAR) ship detection based on deep learning has been widely applied in recent years. However, there are two main obstacles hindering SAR ship detection. First, the identification of ships in a port is seriously disrupted by the presence of onshore buildings. It is difficult for the existing detection algorithms to effective...
Article
Full-text available
A passive bistatic ground-based synthetic aperture radar (PB-GB-SAR) system without a dedicated transmitter has been developed by using commercial-off-the-shelf (COTS) hardware for local-area high-resolution imaging and displacement measurement purposes. Different from the frequency-modulated or frequency-stepped continuous wave signal commonly use...

Citations

... One of the most robust algorithms of CS is the Compressive Sampling Matching Pursuit (CoSaMP) method, developed by Needell [25] in 2009. In [20], Khoshnevis discusses the concept of multi-dimensional sparsity and uses a multi-dimensional version of CoSaMP to improve the recovery rates of three-dimensional radar models. ...
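As a concrete reference for the algorithm cited above, here is a minimal sketch of the standard one-dimensional CoSaMP iteration (identify, merge, least squares, prune, update residual). The sensing matrix, problem sizes, and seed are illustrative; this is not the multi-dimensional variant discussed in [20].

```python
import numpy as np

def cosamp(Phi, y, K, iters=20, tol=1e-9):
    """Minimal CoSaMP: recover a K-sparse x from y = Phi @ x."""
    n = Phi.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = Phi.T @ r                               # correlate residual with columns
        omega = np.argsort(np.abs(proxy))[-2 * K:]      # identify 2K candidate atoms
        T = np.union1d(omega, np.flatnonzero(x))        # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]  # least squares on support
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-K:]               # prune to the K largest entries
        x[keep] = b[keep]
        r = y - Phi @ x                                 # update the residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x

# Illustrative demo: a 5-sparse signal sensed with a random Gaussian matrix.
rng = np.random.default_rng(1)
m, n, K = 80, 256, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_hat = cosamp(Phi, Phi @ x_true, K)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With far more measurements than the sparsity level (here m = 80 versus K = 5), CoSaMP typically identifies the exact support in a few iterations, after which the least-squares step drives the residual to zero.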
Preprint
Full-text available
The advancement of cellular and internet technologies demands more efficient ways of speech compression. The characteristics of speech allow it to be compressed at a higher rate than an unknown audio signal. The speech signal starts with the deflation of the lungs, which vibrates the vocal cords. The vibrating air then passes through the pharyngeal cavity into the nasal and oral cavities. At each step, the signal is distorted and shaped into the final signal that is recognized as speech. The semi-periodic nature of speech guarantees a sparse representation of the signal. CS depends on the sparsity of the signal to perform compression. The compression phase of this method is relatively fast, making it applicable in many fields. Enhancing this method with the Kronecker technique improves the accuracy even further. The goal of this study is to apply one of the novel CS methods to the compression of clean speech signals and compare the results with the Kronecker-enhanced version.
... Baraniuk [14] and Ujan et al. [15] have used the CS method for image compression. Khoshnevis et al. have used this method on radar and biomedical signals [16,17]. ...
... One of the most robust algorithms of CS is the Compressive Sampling Matching Pursuit (CoSaMP) method, developed by Needell [21] in 2009. In [16], Khoshnevis discusses the concept of multi-dimensional sparsity and uses a multi-dimensional version of CoSaMP to improve the recovery rates of three-dimensional radar models. ...
... CS has been extensively used in different areas of signal processing such as biomedical, image, video, and even radar to reduce power consumption, transmission bandwidth, and noise. Khoshnevis et al. have used this method on radar signals and biomedical signals [5,6]. In general, signals must have a sparse representation under a predefined dictionary. ...
Preprint
Full-text available
In structural health monitoring (SHM), sensors intermittently monitor the structure and send the data to a remote server for further processing. Because of the huge volumes of data produced by the monitoring sensors, data compression can be used to reduce the required storage and to make efficient use of communication bandwidth. Recently, compressive sampling (CS) has been introduced as an efficient, fast, and linear method of sampling data. The length of the signal chosen for compression directly affects the complexity of the compression system and the quality of the recovered signal. In traditional CS approaches, the signal length was chosen experimentally. If we compress the signal in smaller segments, the compression system is more efficient in terms of the computational complexity and time required for compression. On the other hand, if we decrease the length of the signal too much, the quality of the reconstructed signal is degraded. Very recently, the Kronecker technique in CS recovery has been introduced to compensate for this loss of accuracy. In this work, we investigate the applicability of Kronecker-based CS recovery for seismic signals. The simulation results show that this technique can greatly improve recovery quality while sensors compress the signal in very small segments. Applying the Kronecker technique in recovery enabled us to recover the original seismic signal with high accuracy, with improvements of up to 7 dB.
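The Kronecker recovery idea summarized above (compress short segments independently, then recover them jointly as one long signal) can be sketched as follows. Everything here is illustrative and not from the paper: a hand-made signal that is sparse in a global DCT basis, a random Gaussian per-segment sensing matrix, and a tiny orthogonal matching pursuit (OMP) solver standing in for the paper's recovery method.

```python
import numpy as np

def dct_synthesis(n):
    """Orthonormal DCT-II synthesis basis: columns are cosine atoms, x = Psi @ alpha."""
    k = np.arange(n)[:, None]    # frequency index
    t = np.arange(n)[None, :]    # time index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    C[0] *= np.sqrt(0.5)
    return C.T

def omp(A, y, K):
    """Tiny orthogonal matching pursuit: greedy K-sparse recovery."""
    S, r = [], y.copy()
    for _ in range(K):
        S.append(int(np.argmax(np.abs(A.T @ r))))        # pick best-matching atom
        coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = y - A[:, S] @ coef                           # residual orthogonal to picks
    x = np.zeros(A.shape[1])
    x[S] = coef
    return x

rng = np.random.default_rng(2)
n_seg, seg_len, m = 4, 64, 32        # 4 segments of 64 samples, 32 measurements each
N = n_seg * seg_len

# A signal that is 5-sparse in the *global* DCT basis (sparsity that the short
# per-segment basis would not capture as well).
Psi = dct_synthesis(N)
alpha = np.zeros(N)
alpha[[3, 10, 25, 40, 70]] = [1.0, -0.9, 0.8, 0.7, 0.6]
x = Psi @ alpha

# Each segment is sensed independently with the same small matrix.
Phi = rng.standard_normal((m, seg_len)) / np.sqrt(m)
y = np.concatenate([Phi @ x[i * seg_len:(i + 1) * seg_len] for i in range(n_seg)])

# Kronecker recovery: one joint problem y = (I ⊗ Phi) @ Psi @ alpha.
A = np.kron(np.eye(n_seg), Phi) @ Psi
alpha_hat = omp(A, y, K=5)
x_hat = Psi @ alpha_hat
print(round(float(np.linalg.norm(x_hat - x) / np.linalg.norm(x)), 6))
```

The key step is the block-diagonal matrix `np.kron(np.eye(n_seg), Phi)`: it stitches the independent per-segment measurements into a single linear system, so the sparsifying basis can span the full signal length rather than one short segment.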
... CS is a technique which simultaneously compresses and senses a sparse or compressible signal. CS has many applications in biomedical sensing, enhancement and compression [12], [13]. As ECG signals are compressible, CS has been used as an effective technique for signal compression [14]- [16]. ...
Article
Continuous measurement of the electrocardiogram (ECG) signal is required for detecting various cardiac abnormalities, such as arrhythmia. Wearable devices have become ubiquitous as continuous monitoring devices. Due to power and memory restrictions in wearable devices, signal acquisition may need to be done in smaller segments using compressive sensing (CS) techniques. However, such acquisitions may lead to poor recovery of the compressed measurements. A novel Kronecker-based recovery technique has recently been proposed to improve the recovery of compressed signals, where the recovery is achieved through a single recovery of concatenated compressed signal segments. In this paper, a mathematical reasoning for the improvement of Kronecker-based recovery over the standard CS recovery procedure is presented. A detailed investigation of the Kronecker-based recovery technique for compressed ECG signals is presented using ECG signals from the MIT-BIH Arrhythmia database. As part of this investigation, the quality of recovery with random and deterministic sensing of ECG signals and the impact of the choice of sparsifying dictionaries under various compression ratios (CR) are considered. Deterministic sensing with the deterministic binary block diagonal (DBBD) matrix and the Discrete Cosine Transform (DCT) as the sparsifying basis is seen to provide the best recovery for ECG signals. Kronecker-based recovery of noisy ECG signals is possible with the DBBD measurement matrix.
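A DBBD matrix as described above can be built in a couple of lines: each row carries a run of ones, so sensing reduces to summing consecutive blocks of samples. The sizes and the synthetic test signal below are illustrative stand-ins, not the paper's MIT-BIH setup.

```python
import numpy as np

def dbbd(m, n):
    """Deterministic binary block-diagonal (DBBD) sensing matrix.
    Each of the m rows holds a run of n // m ones, so y = Phi @ x
    simply sums consecutive blocks of the signal."""
    assert n % m == 0, "signal length must be a multiple of the measurement count"
    b = n // m
    Phi = np.zeros((m, n))
    for i in range(m):
        Phi[i, i * b:(i + 1) * b] = 1.0
    return Phi

# Illustrative check on a synthetic quasi-periodic segment (compression ratio n/m = 4).
m, n = 64, 256
Phi = dbbd(m, n)
t = np.linspace(0.0, 1.0, n)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)   # stand-in signal
y = Phi @ x

# Each measurement is exactly the sum of one length-4 block of samples.
print(np.allclose(y, x.reshape(m, -1).sum(axis=1)))   # → True
```

Because the matrix is deterministic, binary, and block-structured, the sensing step needs no stored random matrix and no multiplications, which is what makes it attractive for power-constrained wearables.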