Signals: From Analog to Digital, and Back
(CDT-39)
Luciano da Fontoura Costa
luciano@ifsc.usp.br
São Carlos Institute of Physics – DFCM/USP
1st Oct. 2020
Abstract
Except for phenomena taking place at the quantum level, real-world signals are almost invariably continuous in time and intensity. Yet, their analysis and processing by digital computers require three important transformations: truncating the observation of the original signal to a finite time interval, sampling along time, and discretizing the respective intensities. The inverse operations need to be performed when digitally modified or generated signals are to be applied back to the real world. The present work introduces and discusses the often unavoidable operations of time truncation, time sampling, and intensity quantization from the perspective of signal processing and analysis, especially with the help of the frequency domain analysis allowed by the Fourier transform, which is also briefly discussed in both its continuous and discrete versions. Some concepts about the electronic devices involved in analog to digital conversion, and vice versa, are also included.
“The silence between the notes is just as important as the
notes themselves.”
W. A. Mozart
1 Introduction
We live in a dynamic universe, where myriad events have taken place continuously along time and space since time immemorial. Signals are typically associated with variations of some type of energy, such as the sound produced by wind or the light arriving from remote stars. The history of science as humanity has experienced it is inexorably related to our ability to build scientific models (e.g. [1]), which involves acquiring and analyzing signals from nature. Though most natural signals are intrinsically continuous, the advent of digital computers required those signals to be transformed into sequences of finite-length numbers that can be properly stored in finite digital memories and processed by digital circuitry. It was also thanks to these operations that so many recent scientific and technological advances have been obtained, including the internet.
Three main transformations are typically required for converting an analog signal into a respective digital counterpart: (i) time truncation, in the sense that signals can only be observed along a limited interval of time; (ii) time sampling, meaning that the values of the signal are taken only at regularly spaced time instants; and (iii) intensity quantization, required for mapping the intensity of the signal at each sampling instant into a finite-length numeric representation.
Figure 1: The complex exponential, which gives rise to the Fourier basis functions, needs to be time sampled in order to derive the discrete Fourier transform. The figure shows the distribution of ω_i, i = 0, 1, . . . , N−1, along the complex exponential for N = 8 time samples. Observe that, for simplicity's sake, positive arguments are assumed for the complex exponential.
The proper understanding of these operations is often decisive for achieving successful modeling and results. The present work provides an introduction to them, with two main highlights: extensive use of illustrations, and characterization of the transformations in the frequency domain associated with the Fourier transform, which is briefly covered in both its continuous and discrete (see Figure 1) versions. In addition, some consideration of the electronic devices typically used to implement these operations is also provided.
We start by presenting the Dirac delta and some of its properties, and follow by briefly introducing the Fourier transform, jointly with some of its important properties. Then, the concept of convolution, as well as the associated theorem, are discussed. Having therefore covered some of the principal concepts and methods for approaching time truncation, sampling, and intensity quantization, these three important transformations are then subsequently covered and illustrated. We conclude this work by briefly discussing the basic electronic means that are typically used to interface the analog and digital worlds.
It should be observed that the present work complements another CDT, focused on the concept of convolution [2]. Other related texts that can be of interest include those addressing phase and periodicity [3], Fourier transform based edge detection [4], and curvature estimation [5]. Additional references on related topics can be found in, e.g., [6, 7, 8].
2 The Dirac Delta Function
The Dirac delta function was conceived mainly as a means
for representing point discontinuities in physics, such as
those implied by infinitesimal particles with mass, charge,
etc. This function, traditionally represented as δ(t), has the following properties:

δ(t) = 0 for t ≠ 0;   δ(t) undefined at t = 0   (1)
Thus, the Dirac delta function assumes null values for
every value of its domain variable t, except at t= 0,
where it is not defined. As a consequence of this unde-
fined value, the Dirac delta function is, strictly speaking,
not a function. Indeed, this mathematical structure is
more formally covered in the mathematical area known
as distribution theory (e.g. [9]).
The Dirac delta can nevertheless be approached in a
simple and intuitive manner, more specifically as the limit
of certain types of functions. For instance, take the par-
ticular type of rectangular function defined as:
r(t) = 1/a  for −a/2 ≤ t < a/2;   r(t) = 0 otherwise   (2)
where a is a positive real value. It follows immediately that the area of this function is:

A = ∫_{−∞}^{∞} r(t) dt = ∫_{−a/2}^{a/2} (1/a) dt = (1/a) · a = 1   (3)
The Dirac delta can now be approximated as:

δ(t) = lim_{a→0} r(t)   (4)
Observe that, as a → 0, the rectangular function becomes narrower and narrower, while its height grows higher and higher. However, given the way r(t) is constructed, we necessarily have that:

∫_{−∞}^{∞} δ(t) dt = 1   (5)
The Dirac delta can be similarly understood as the limit of several other functions, including the normal distribution with zero mean:

n_σ(t) = (1/(√(2π) σ)) exp(−(1/2)(t/σ)²)   (6)

More specifically, we can write:

δ(t) = lim_{σ→0} n_σ(t)   (7)
The Dirac delta can have its ‘height’ (more properly speaking, its area) generalized as:

a δ(t)   (8)

where a is any real value.
It is also possible to shift the Dirac delta along time to any specific position t0, i.e.:

δ(t − t0)   (9)
Generally speaking, we have that:

∫_{−∞}^{∞} a δ(t − t0) dt = a   (10)

for any real values a and t0.
The Dirac delta provides an effective mathematical means for representing the time sampling of signals. This involves the so-called sampling property of the Dirac delta, expressed as:

δ(t − t0) g(t) = δ(t − t0) g(t0)   (11)
Figure 2 illustrates the sampling property of the Dirac
delta with respect to a generic function g(t).
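As a rough numerical illustration (a sketch in R; the width a, the position t0 and the test function are arbitrary choices, not part of the original development), the rectangular approximation of Equation 2 can be used to check both the unit area of Equation 5 and the sampling property in its integrated form:

a  <- 1e-3                                   # rectangle width (the smaller, the closer to a delta)
dt <- 1e-5                                   # integration step
t  <- seq(-1, 1, by=dt)                      # time axis
r  <- ifelse(t >= -a/2 & t < a/2, 1/a, 0)    # rectangular approximation of delta(t)
sum(r) * dt                                  # area: approximately 1 (Equation 5)
t0 <- 0.3
g  <- function(t) cos(2 * pi * t)            # an arbitrary smooth test function
r0 <- ifelse(t-t0 >= -a/2 & t-t0 < a/2, 1/a, 0)   # approximation of delta(t - t0)
sum(r0 * g(t)) * dt                          # approximately g(t0)
g(t0)                                        # for comparison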
3 The Fourier Transform
The Fourier transform of a complex function g(t) can be defined as:

G(f) = ∫_{−∞}^{∞} g(t) exp(−j2πft) dt   (12)
Figure 2: The sampling property of the Dirac delta. The result of multiplying a function g(t) by a Dirac delta δ(t − t0) corresponds to a Dirac delta at this same position, but with intensity g(t0).

where j = √(−1).
The respective inverse Fourier transform can be ex-
pressed as:
g(t) = ∫_{−∞}^{∞} G(f) exp(+j2πft) df   (13)
When both g(t) and G(f) exist and obey Equations 12
and 13, we can write these two functions as a Fourier
transform pair :
g(t) ↔ G(f)   (14)
Observe that there are alternative manners to define
the Fourier transform and its inverse, e.g. by inverting the
argument of the complex exponential in both the direct
and inverse transforms.
The Fourier transform of a Dirac delta can be calculated
by using its sampling property as:
G(f) = ∫_{−∞}^{∞} δ(t) exp(−j2πft) dt =
     = ∫_{−∞}^{∞} δ(t) exp(−j2πf·0) dt =
     = ∫_{−∞}^{∞} δ(t) exp(0) dt =
     = ∫_{−∞}^{∞} δ(t) dt = 1
It can be verified that:
δ(t) ↔ 1   (15)
The symmetry property of the Fourier transform states
that:
G(t) ↔ g(−f)   (16)
Applying this property to Equation 15 and observing that the Dirac delta is an even function, we derive:

1 ↔ δ(f)   (17)
Taken together, the results in Equations 15 and 17 suggest that the localization of a function in one of the domains (e.g. time or frequency) implies the delocalization of that function in the other domain.
It can also be verified that the Fourier transform G(f) of a real signal g(t), which is often the case in signal processing, will have an even real part, i.e. Re(G(f)) = Re(G(−f)), and an odd imaginary part, i.e. Im(G(f)) = −Im(G(−f)). This can be written more compactly as:

G(f) = G*(−f)   (18)
This property ultimately derives from the symmetry of
the complex exponentials constituting the basis function
of the Fourier transform. Figure 3 illustrates a graphical
means to interpret this fundamental property.
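The discrete counterpart of this property can be checked numerically with R's fft (which adopts the negative-exponent convention): for a real signal of length N, the transform satisfies G[k] = G*[N−k]. The sketch below uses an arbitrary test signal:

N <- 64
i <- 0:(N-1)
x <- cos(2 * pi * 3 * i / N) + 0.5 * sin(2 * pi * 7 * i / N)   # arbitrary real signal
G <- fft(x)
k <- 1:(N-1)
max(Mod(G[k+1] - Conj(G[N-k+1])))   # ~ 0: real part even, imaginary part odd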
Figure 3: Hermitian symmetry: a complex exponential function w(t) = exp(j2πf0t) with frequency f0 (a), and the function obtained by its conjugation (b), i.e. w*(t) = exp(−j2πf0t). Observe that the conjugation of a complex number in the Argand plane can be geometrically interpreted as mirroring that number with respect to the real axis. It immediately follows that inverting the sense of t recovers the original function, i.e. w*(−t) = exp(−j2πf0(−t)) = w(t). In a sense, the time sign reversal acts in a way that is similar to conjugation in the case of the complex exponential.
Another of the Fourier transform properties that is par-
ticularly useful when discussing signal sampling is the
time shifting property, expressed as:

g(t − t0) ↔ exp(−j2πf t0) G(f)   (19)

which can be verified by substituting g(t − t0) into Equation 12 and applying the variable transformation u = t − t0.
Let’s now consider the interesting function consisting of an infinite sum of Dirac deltas placed at . . . , −2∆t, −∆t, 0, ∆t, 2∆t, . . ., sometimes called the Dirac comb or shah function:

c_∆t(t) = Σ_{i=−∞}^{∞} δ(t − i∆t)   (20)
By taking into account the linearity of the Fourier
transform, as well as its time shifting property, we can
now write:
C_∆t(f) = Σ_{i=−∞}^{∞} exp(−j2πf i∆t)   (21)
It can be shown, by using the Fourier series of c_∆t(t) (which is periodic with period ∆t), that:

Σ_{i=−∞}^{∞} δ(t − i∆t)  ↔  (1/∆t) Σ_{i=−∞}^{∞} δ(f − i/∆t)   (22)
Figure 5 (middle line) illustrates the Dirac comb and
its respective Fourier transform.
4 Convolution
The convolution between two complex functions g(t) and
h(t) can be written as:
g(t) ∗ h(t) [τ] = ∫_{−∞}^{∞} g(t) h(τ − t) dt   (23)
This operation can be conceptually understood as a
blending or matching between the two involved func-
tions [2]. Observe that the convolution is a commutative
operation.
It is interesting to consider the convolution between a
function g(t) and the Dirac delta δ(t − t0):

g(t) ∗ δ(t − t0) [τ] = ∫_{−∞}^{∞} δ(t − t0) g(τ − t) dt =
     = ∫_{−∞}^{∞} δ(t − t0) g(τ − t0) dt =
     = g(τ − t0) ∫_{−∞}^{∞} δ(t − t0) dt =
     = g(τ − t0)
Taking into account the linearity of the convolution, we can also conclude that convolving a function g(t) with a Dirac comb can be understood as adding copies of g(t) centered at each of the positions specified by the Dirac deltas in the respective comb.
Another result that will be especially useful for us is
the convolution theorem, which states that:
g(t) ∗ h(t) ↔ G(f) H(f)
g(t) h(t) ↔ G(f) ∗ H(f)   (24)
where g(t) and h(t) are two complex functions with
respective Fourier pairs G(f) and H(f).
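The theorem can be illustrated numerically for sampled (periodic) signals, for which the convolution becomes circular. The sketch below, in R and with arbitrary test signals, compares the direct circular convolution sum with the inverse transform of the product of the DFTs:

N <- 32
i <- 0:(N-1)
g <- cos(2 * pi * 2 * i / N)                   # arbitrary test signal
h <- exp(-0.5 * ((i - N/2)^2) / 4)             # arbitrary test signal
conv_direct <- sapply(i, function(tau)
  sum(g * h[((tau - i) %% N) + 1]))            # circular convolution sum
conv_fft <- Re(fft(fft(g) * fft(h), inverse=TRUE)) / N
max(abs(conv_direct - conv_fft))               # ~ 0 up to round-off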
5 Signal Truncation
The observation of a real-world signal is necessarily constrained to an interval of time, both because of practical constraints (no experiment can last forever) and because of limited computational resources (the recording of a signal demands memory). Therefore, any real-world signal needs to be truncated (or windowed) along time. This operation is illustrated in Figure 4, where a cosine function x(t) has its duration truncated by multiplying it by a rectangular window w(t). The net effect of this product is the convolution of the Fourier transforms of x(t) and w(t), which implies that the Fourier transform of the truncated x(t) incorporates unwanted oscillations caused by the blending of the transform of the original signal with the Fourier transform of the windowing function.
Observe that this effect can be ameliorated by considering wider window functions, at the cost of longer signal observation periods and larger computer memory for the respective storage. Another interesting possibility is to consider other types of windows (e.g. [6, 7]).
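The leakage effect can also be observed numerically. In the sketch below (R, arbitrary parameters), a cosine completing an integer number of periods within the window yields only the two expected spectral peaks, while a cosine truncated at a non-integer number of periods spreads into many neighbouring frequency bins:

N <- 128
i <- 0:(N-1)
x_fit   <- cos(2 * pi * 8   * i / N)    # exactly 8 periods within the window
x_trunc <- cos(2 * pi * 8.5 * i / N)    # 8.5 periods: truncated mid-period
sum(Mod(fft(x_fit))   > 1e-6)           # 2: only the two expected peaks
sum(Mod(fft(x_trunc)) > 1e-6)           # many bins: leakage oscillations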
6 Signal Sampling
We are now in a position to approach the interesting and important issue of sampling continuous signals at specific time instants. The first point to be kept in mind is that this sampling can take place at equally spaced intervals ∆t or at varying intervals. For simplicity's sake, we shall limit ourselves to the former possibility.
Figure 5 depicts a signal g(t) being sampled at intervals ∆t. This can be mathematically modeled by using the Dirac delta and its sampling property. More specifically, we multiply the signal g(t) (a) by the Dirac comb c_∆t(t) with spacing ∆t (c), yielding the time-sampled signal g(t) c_∆t(t) (e). By using the convolution theorem, we obtain that this operation can be understood as the convolution between the Fourier transform of g(t) (b) and the Fourier transform of c_∆t(t) (d), yielding the result shown in (f). Observe that this result is obtained by adding copies of the Fourier transform of g(t), namely G(f), centered at each of the positions occupied by the Dirac deltas in the Fourier transform of the Dirac comb.
It is important to observe the superimposition that can occur as a consequence of the interference between adjacent replicas of the Fourier transform G(f). This superimposition is often called aliasing, effectively meaning that the maximum frequency representable in the specific discrete Fourier domain has been exceeded. Observe that aliasing can be reduced or even eliminated by adopting a smaller ∆t. For such reasons, it is important to consider the sampling theorem, presented in the
Figure 4: A cosine signal x(t) fitting perfectly within the time range (a), therefore yielding a pair of sharp Dirac deltas in the frequency domain (b). The limitation of the observation of x(t), which can be mathematically modeled by the point-by-point multiplication with a windowing function w(t) of finite duration (c), whose Fourier transform corresponds to a modulated sinc function (d), yields a truncated version of x(t) (e). The Fourier transform of this truncated function can be understood as the convolution of the functions in (b) and (d), yielding the blended result in (f). In summary: the truncation of a function along time typically implies oscillations being added to the respective Fourier transform. Observe that only the real parts of the functions and transforms have been shown, for simplicity's sake.
Figure 5: Time sampling a signal g(t) can be mathematically modeled in terms of multiplying that function with a Dirac comb with spacing ∆t. The resulting Fourier transform can be obtained by convolving the Fourier transform of g(t) with that of the Dirac comb, yielding a periodic function with period ∆f = 1/∆t as a result. The superimposition of the lateral portions of G(f) implied by the time sampling is often known as aliasing, which therefore limits the maximum representable frequency.
following section.
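Aliasing can be observed directly in a simple numerical experiment (R, arbitrary values): a 60 Hz cosine sampled with ∆t = 0.01 s produces exactly the same samples as a 40 Hz cosine sampled at the same rate, so the two become indistinguishable after sampling:

dt <- 0.01                       # sampling interval (s)
t  <- (0:99) * dt                # 100 samples covering 1 s
x60 <- cos(2 * pi * 60 * t)      # 60 Hz cosine, sampled too coarsely
x40 <- cos(2 * pi * 40 * t)      # 40 Hz cosine
max(abs(x60 - x40))              # ~ 0: the sampled sequences coincide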
7 The Sampling Theorem
The results developed in the previous sections allow us to approach the important problem of identifying the maximum frequency fmax that can be represented by a sampling procedure. This follows immediately from the observation that, as seen in the previous section, sampling the signal with time resolution ∆t implies obtaining a periodic Fourier transform with period ∆f = 1/∆t. Because each of these periods includes both the negative and positive frequency components, the maximum frequency corresponds to half of the period ∆f. Therefore, we can state the important sampling theorem as:

fmax = (1/2)(1/∆t)   (25)
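For instance (illustrative figures), audio sampled with ∆t = 1/44100 s can represent frequencies up to fmax = 44100/2 = 22050 Hz; conversely, if the highest frequency of interest is 500 Hz, the sampling interval must satisfy ∆t ≤ 1/(2 × 500) = 1 ms.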
8 The Discrete Fourier Transform
Considering that we almost invariably do not have an analytical description of the real-world signals to be analysed, we cannot calculate their Fourier transform analytically by using continuous expressions such as that in Equation 12, which would require us to know a formula for g(t). In any case, the signals to be analysed have been time sampled along a window, which therefore yields a respective vector representation ~g = [g_i], with g_i = g(i∆t) for i = 0, 1, . . . , N−1. Thus, provided we also time sample the complex exponential in Equation 12 at the same time instants as those adopted to obtain ~g, we can transform the integral of the Fourier transform into a sum, i.e.:

~G = [G_k],   G_k = Σ_{i=0}^{N−1} g_i exp(j i k 2π/N)   (26)
Let’s define the Fourier matrix as:

W = [W_{i,k}] = exp(j i k 2π/N)   (27)

Observe that 2π/N effectively acts as an angular resolution ∆θ, so that we can write:

W = [W_{i,k}] = exp(j i k ∆θ)   (28)
For instance, in the case of N = 8, we have:

W8 =
| ω0 ω0 ω0 ω0 ω0 ω0 ω0 ω0 |
| ω0 ω1 ω2 ω3 ω4 ω5 ω6 ω7 |
| ω0 ω2 ω4 ω6 ω0 ω2 ω4 ω6 |
| ω0 ω3 ω6 ω1 ω4 ω7 ω2 ω5 |
| ω0 ω4 ω0 ω4 ω0 ω4 ω0 ω4 |
| ω0 ω5 ω2 ω7 ω4 ω1 ω6 ω3 |
| ω0 ω6 ω4 ω2 ω0 ω6 ω4 ω2 |
| ω0 ω7 ω6 ω5 ω4 ω3 ω2 ω1 |

where ω_p = exp(j p ∆θ), with the exponents taken modulo N.
Figure 1 illustrates the distribution of the complex values ω_i along the complex exponential for N = 8 (we have considered a positive argument of the exponential, for simplicity's sake). Observe that the initial value ω_0 is not repeated as the sampling completes the period.
The discrete Fourier transform of ~g can then be written as:

~G = W ~g   (29)
This can be recognized as the general form of a linear transformation, which is indeed what the Fourier transform is, as are many other transforms, including the statistical Karhunen-Loève transform (e.g. [10]), to which the Fourier transform can be contrasted.
We also have that:

~g = W⁻¹ ~G   (30)
It can be verified that:

W (W*)ᵀ = N I   (31)

where I is the identity matrix and the asterisk denotes complex conjugation. Observe that (A*)ᵀ = (Aᵀ)* for any complex matrix A. Thus, by multiplying both sides of this equation by W⁻¹, it follows that:

W⁻¹ = (1/N) (W*)ᵀ   (32)
This result indicates that the complex matrix W is quasi-unitary. This also means that the matrix defining the discrete inverse Fourier transform can be obtained from the direct Fourier transform matrix at low computational cost. Observe that a complex matrix A is said to be unitary iff its inverse is identical to its transposed conjugate. Also, the fact of a complex matrix being unitary is analogous to a real matrix being orthogonal.
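These relationships can be checked numerically. The sketch below (R, with an arbitrary test signal) builds W for N = 8 using the negative-exponent convention adopted by R's fft — the text assumes the positive sign for simplicity, the two differing only by conjugation — and verifies Equations 29, 31 and 32:

N <- 8
k <- 0:(N-1)
W <- exp(-1i * 2 * pi * outer(k, k) / N)      # W[i+1,k+1] = exp(-j i k 2 pi / N)
x <- cos(2 * pi * k / N) + 0.3 * k            # arbitrary test signal
max(Mod(W %*% x - fft(x)))                    # ~ 0: matrix form agrees with fft (Eq. 29)
max(Mod(W %*% Conj(t(W)) - N * diag(N)))      # ~ 0: W is quasi-unitary (Eq. 31)
x_back <- (1/N) * Conj(t(W)) %*% (W %*% x)    # inverse via Equation 32
max(Mod(x_back - x))                          # ~ 0: the signal is recovered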
It is interesting to observe that, whenever possible, it is often advantageous to consider calculating the discrete Fourier transform (DFT) by using one of its efficient computational implementations known as fast Fourier transforms — FFTs (e.g. [6, 7]). While the computational cost of the DFT with N samples is of O(N²), even the simplest FFT implementation will allow a reduction to O(N log₂ N) = O(N log N). Except for numerical round-off noise, the results obtained by the DFT and FFT are nearly identical.
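For instance (illustrative figures), with N = 2²⁰ ≈ 10⁶ samples the direct DFT requires on the order of N² ≈ 1.1 × 10¹² operations, while an FFT requires on the order of N log₂ N ≈ 2.1 × 10⁷, a reduction by a factor of roughly N/log₂ N ≈ 5 × 10⁴.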
9 Discrete Relationships
Given the extreme sensitivity of the Fourier transform, in the sense that small perturbations in one domain generally affect the whole of the other domain, particular care and attention are required when indexing and representing time-sampled signals and their transforms.
Figure 6 illustrates the quantization of the complex ex-
ponential (a) for N= 4 angular or time samples, as well
as the respectively implied angular (b) and time (c) repre-
sentations of the real part of the sampled complex expo-
nential, which corresponds to a cosine with angular period
2πand time period T0.
Figure 6: Relationships between the angular and time sampled com-
plex exponential, assuming N= 4 samples, and respectively implied
indexing and relationships. See text for explanation.
Observe that the sampling of the signal into N values needs to be such that:

2π = N ∆θ  ⟹  ∆θ = 2π/N   (33)

T0 = N ∆t  ⟹  ∆t = T0/N   (34)
Let’s consider the real portion of the complex exponential with period T0, namely cos(2πf0t), with f0 = 1/T0. This signal provides a good reference for better understanding the discrete indexing in the DFT, as well as the relationships between the involved variables, because it fits exactly within the reference period T0.
One important point to be kept in mind is that this reference cosine with time period T0 (and respective angular period 2π) extends from the first sample t = 0 up to one point before the period repetition, i.e. (N−1)∆t, which is indicated by the fact that the last sample points in Figures 6(b) and (c) are represented as unfilled. In case the signal were allowed to go up to the next point, major unwanted oscillations would appear in the DFT.
The first important characteristic to be noticed is that the interrelationship between the angle θ and time t variables can be simply expressed as:

2π/T0 = 2πf0 = ∆θ/∆t   (35)
Combining the above expression with Equation 34, we get:

f0 = 1/T0 = 1/(N∆t)   (36)

which can be understood as the reference frequency associated with the reference period T0, with the interesting property that integer multiples of f0 (up to the maximum representable frequency fmax) will be exactly sampled into the N points, therefore avoiding the oscillations in the respective Fourier domain implied by the signal truncation. Strictly speaking, it is not that these oscillations do not exist, since the time truncation is unavoidable in the DFT, but rather that the null values of the oscillations in the Fourier transform of the window function coincide with the sampled values for integer multiples of the reference frequency f0.
We can now express the reference cosine signal, with T0 = 1/f0, as:

cos(2πf0t) → cos(i∆θ) = cos(2πf0 i∆t)   (37)

where i = 0, 1, . . . , N−1.
Now, the following code (in R) can be applied for setting up this reference cosine in a vector with N samples, assuming N = 100 and T0 = 1:

N  <- 100
T0 <- 1
t  <- seq(0, T0, length.out=N+1)   # N+1 points spanning one full period
t  <- t[1:N]                       # discard the repetition of the first sample
dt <- t[2]                         # sampling interval
f0 <- 1/(N*dt)                     # reference frequency, f0 = 1/T0
x  <- cos(2 * pi * f0 * t)
Observe the especially important instruction
t <- t[1:N], necessary in order to discard the rep-
etition of the first sampled value.
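As a quick check (a sketch reusing the vectors defined above), the discrete Fourier transform of this reference cosine concentrates all of its energy in the two expected bins, with no truncation oscillations:

X <- fft(x)
which(Mod(X) > 1e-6) - 1    # returns 1 and N-1 (here 1 and 99): no leakage
Mod(X[2])                   # N/2 = 50, the amplitude of each component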
10 Signal Quantization
By signal quantization we mean the round-off of the real values of a signal, as observed at each sampling step, required for obtaining a finite-length numeric representation with M bits that can be stored in a computer memory. This operation is typically performed by the electronic device known as an analog to digital converter (see Section 11).
There are several ways to quantize the intensities of a signal. Figure 7 illustrates the transfer function of one of the possible schemes, with respect to 33 possible discrete levels uniformly distributed from −1 to 1. This function is applied to each sampled value of the signal x(t), yielding the sampled and quantized signal y(t).
Figure 7: The transfer function of the floor quantization scheme considering 33 discrete values from −1 to 1.
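A possible realization of such a scheme is sketched below in R; the floor-based mapping, the level count and the reuse of the vectors t and f0 from the previous section are illustrative assumptions rather than the exact transfer function of Figure 7:

quantize <- function(x, n_levels=33) {
  step <- 2 / (n_levels - 1)          # spacing between adjacent levels in [-1, 1]
  q <- floor(x / step) * step         # floor round-off to the level below
  pmin(pmax(q, -1), 1)                # keep the result within [-1, 1]
}
y <- quantize(cos(2 * pi * f0 * t))   # sampled and quantized signal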
Unlike the already discussed time truncation and sampling operations, we have no simple means to mathematically model the effect of signal quantization on the properties, especially in the frequency domain, of the thus obtained signals.
However, the quantization of a signal can be understood as adding high-frequency content (related to smaller-scale details) to the original signal. As such, it is often interesting to consider low-pass filtering of quantized signals, as illustrated in Figure 8.
Figure 8: A cosine signal quantized using only 10 discrete levels (a), and its respective discrete Fourier transform (b), which is characterized by intense baseline noise implied by the quantization. By smoothing the quantized signal through convolution with a Gaussian function, yielding (c), it is possible to reduce the high-frequency quantization noise, thus achieving an improved respective Fourier transform (d). This smoothing operation can be understood in the context of regularization theory, reflecting a respective hypothesis that the signal was originally smooth. The input signal is assumed to be normalized between −1 and 1.
The Gaussian, or rather its normalized version as a normal distribution (unit area), is often adopted for smoothing (low-pass filtering) a signal. Interestingly, the Fourier transform of a Gaussian with standard deviation σ is also a Gaussian, but with a standard deviation inversely proportional to σ. As such, by convolving the signal x(t) with a normal function as in Equation 6, we effectively perform a low-pass filtering in the frequency domain, therefore reducing the quantization effects that take place at a relatively small scale. Observe that the normal function in this operation acts as a filter.
Because of the periodicity of the discrete Fourier trans-
form, special care and attention are required while set-
ting the filter function up in the frequency domain. More
specifically, the negative frequency portion of the filter
function needs to be shifted to the right-hand side of the
vector representing the filter function in the frequency
domain, so that this function be kept coherent with the
sampling of the complex exponential as traditionally con-
sidered in the discrete Fourier transform (see also [5]).
Considering a discrete Fourier transform with N sampled points, the following procedure may be considered for mounting the filter function in the frequency domain. First, we determine the two quantities NR and NL from the number of samples N as follows:

NR = floor(N/2)   (38)

NL = −(N − NR − 1)   (39)
Now, the following code can be applied for setting up the filter function in the respective vector with N components (it assumes that N, dt and sig — the filter standard deviation σ — have been previously defined, and that vectors are indexed starting at 1, as usual in R):

g <- matrix(0, 1, N)                      # filter vector, initially null
for (i in seq(0, NR))                     # non-negative portion: positions 1 to NR+1
  { g[i+1] <- exp(-0.5 * (i*dt/sig)^2) }
for (i in seq(NL, -1))                    # negative portion, wrapped to the right-hand side
  { g[i+N+1] <- exp(-0.5 * (i*dt/sig)^2) }
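Once mounted, the filter can be applied through the convolution theorem. The sketch below assumes the quantized signal y and the filter vector g built above; the division by sum(gv), added here for illustration, normalizes the filter to unit gain at zero frequency:

gv <- as.vector(g)                                     # filter as a plain vector
Y  <- fft(y)
y_smooth <- Re(fft(Y * fft(gv), inverse=TRUE)) / N     # inverse DFT of the product
y_smooth <- y_smooth / sum(gv)                         # unit-gain normalization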
Figure 9 illustrates a normal function with σ= 0.02
and N= 300 as mounted in the respective filter vector
by using the above described procedure.
Interestingly, the above described smoothing procedure
acts not only on the quantization levels, but also on the
sampling of the signal (especially in the case of non-
uniform sampling schemes). Indeed, both these opera-
tions can be understood as implying loss of information
about the original signal, and we aim at trying to recover
some of the lost information. In this sense, filtering of a
Figure 9: A normal function with σ= 0.02 and N= 300 as mounted
into the respective filter vector by using the suggested procedure.
This set-up is required so that the filter function be kept coherent,
along the frequency axis, with the way in which the basis functions
of the discrete Fourier transform, namely complex exponentials with
successive frequencies, are traditionally sampled in implementations
of that transform. Observe that the value g(0) is not repeated at
the end of the vector, which represents exactly one complete period
in the frequency domain.
quantized signal can be understood from the perspective
of the interesting research area known as regularization
theory (e.g. [11]). In this area, hypotheses about the orig-
inal signal (e.g. smoothness) are assumed in order to im-
pose respective constraints on the incomplete signal. In
our case, the main hypothesis is that the original signals
are relatively smooth, so that we try to recover the respec-
tive smoothness through the low-pass filtering operation.
11 Basic Electronics of Signal
Conversion
We have thus far discussed the three important transformations required for representing continuous signals predominantly from the mathematical perspective. In this section, we provide some clues about how those operations can be performed through electronic means (e.g. [12]).
Figure 10 depicts a digital to analog – DA – converter. In this type of device, the output is an analog signal x(t), though limited to a finite set of possible values, proportional to the binary input [b_M b_{M−1} . . . b_2 b_1 b_0] at that time instant. As implied by their name, DA converters are intended to convert a binary value into an analog value. Observe that the number of levels provided by M bits is 2^M. For instance, a DA converter with M = 12 bits will be capable of producing 4096 distinct values.
The counterpart of the DA converter is the analog to digital converter, AD, which is illustrated in Figure 11. Here, an analog signal x(t) is input, and the obtained output is a binary number proportional to the input value at each instant. Similarly to a DA converter, the resolution of an AD device is defined by its number of bits M.
Figure 12 illustrates a digital signal processing (DSP)
system, including its interface with the analog environ-
Figure 10: A typical DA converter. The output x(t) is proportional to the binary value [b_M b_{M−1} . . . b_2 b_1 b_0] at a given time instant.
Figure 11: A typical AD converter. The input x(t) is converted to a proportional binary value [b_M b_{M−1} . . . b_2 b_1 b_0] at a given time instant.
ment through respective AD and DA converters. Also shown in this figure is a sample-and-hold (s/h) device, which acts as a kind of analog memory, preserving the input signal value present at each clock pulse. This device, which is sometimes incorporated into the respective AD converter, is necessary in order to keep the analog value as constant as possible during the respective conversion.
Figure 12: An overall representation of a digital signal processing (DSP) system applied to analyze the input signal x(t), yielding a respective reply y(t). The time sampling is performed under control of the clock (clk) signal and the sample-and-hold. The signal intensity quantization is obtained through the AD converter, while the opposite operation is performed by a DA converter after the signal is processed in the DSP unit. The analog low-pass filter incorporated at the DA output is often adopted in order to reduce the effect of the quantized levels of the signals generated by the DA.
In order to reduce the high frequency noise implied by
the quantized, sampled nature of the signals generated
by the DA, a suitable analog low-pass filter (lpf ) can be
incorporated, yielding the smoother signal y(t) as output
(e.g. [8]).
It should be observed that systems as in Figure 12 involve the integration of the two main areas of electronics, namely analog and digital circuit design.
12 Concluding Remarks
Science and technology have progressed a long way since the mechanical computers of the 19th century, redefining to a great extent the human context and experience, especially through the internet. A substantial part of these advances and results has depended, and continues to depend, on translating signals from the analog to the digital domain, processing them in a digital manner, and then deriving digital results that need to be transformed back into respective analog counterparts.
The present work aimed at providing, in an introductory way, the main mathematical concepts and methods for better understanding the three main transformations involved, namely: time truncation, sampling, and intensity quantization. Having introduced important concepts such as the Dirac delta sampling property, the continuous and discrete Fourier transforms, as well as the convolution operation and its respective theorem, we were put in a position that allowed us not only to better understand the effects of signal conversion, but also to consider possible means for reducing the respective unwanted effects, such as aliasing.
There is much more to be learned regarding the interface between the analog and digital worlds (e.g. [6, 7, 8, 12]), especially regarding the involved analog and digital concepts and approaches. It is hoped that the present work has motivated the reader to probe further into this interesting area.
Acknowledgments.
Luciano da F. Costa thanks CNPq (grant
no. 307085/2018-0) and FAPESP (grant 15/22308-
2).
References
[1] L. da F. Costa. Modeling: The human approach to science. Researchgate, 2019. https://www.researchgate.net/publication/333389500_Modeling_The_Human_Approach_to_Science_CDT-8. [Online; accessed 1-Oct-2020.]

[2] L. da F. Costa. Convolution! Researchgate, 2019. https://www.researchgate.net/publication/336601899_Convolution_CDT-14. [Online; accessed 09-March-2020.]

[3] L. da F. Costa. Sine, cosine, periodicity, phase, sine, ... Researchgate, 2020. https://www.researchgate.net/publication/341722757_Sine_Cosine_Periodicity_Phase_Sine_CDT-33. [Online; accessed 1-Oct-2020.]

[4] L. da F. Costa. When less is more: Detecting edges in images. Researchgate, 2020. https://www.researchgate.net/publication/343862629_When_Less_is_More_Detecting_Edges_in_Images_CDT-37. [Online; accessed 1-Oct-2020.]

[5] L. da F. Costa. What can curvature tell us about shape? Researchgate, 2020. https://www.researchgate.net/publication/343651830_What_Can_Curvature_Tell_us_About_Shape_CDT-35. [Online; accessed 1-Oct-2020.]

[6] E. O. Brigham. Fast Fourier Transform and its Applications. Pearson, 1988.

[7] A. V. Oppenheim and R. Schafer. Discrete-Time Signal Processing. Pearson, 2009.

[8] P. Horowitz and W. Hill. The Art of Electronics. Cambridge University Press, 2015.

[9] G. van Dijk. Distribution Theory. De Gruyter Graduate Lectures, 2013.

[10] F. Gewers, G. R. Ferreira, H. F. Arruda, F. N. Silva, C. H. Comin, D. R. Amancio, and L. da F. Costa. Principal component analysis: A natural approach to data exploration. Researchgate, 2019. https://www.researchgate.net/publication/324454887_Principal_Component_Analysis_A_Natural_Approach_to_Data_Exploration. [Online; accessed 1-Oct-2020.]

[11] S. Lu and S. V. Pereverzev. Regularization Theory for Ill-posed Problems: Selected Topics. De Gruyter, 2013.

[12] M. Pelgrom. Analog to Digital Conversion. Springer, 2016.
Costa’s Didactic Texts – CDTs
CDTs intend to be a halfway point between a
formal scientific article and a dissemination text
in the sense that they: (i) explain and illustrate
concepts in a more informal, graphical and acces-
sible way than the typical scientific article; and
(ii) provide more in-depth mathematical develop-
ments than a more traditional dissemination work.
It is hoped that CDTs can also incorporate new
insights and analogies concerning the reported
concepts and methods. We hope these character-
istics will contribute to making CDTs interesting
both to beginners as well as to more senior
researchers.
Each CDT focuses on a limited set of interrelated
concepts. Though attempting to be relatively
self-contained, CDTs also aim at being relatively
short. Links to related material are provided in
order to provide some complementation of the
covered subjects.
Observe that CDTs, which come with absolutely
no warranty, are non distributable and for non-
commercial use only.
Please check for new versions of CDTs, as
they can be revised. Also, CDTs can be
cited, e.g. by including the respective DOI.
The complete set of CDTs can be found at:
https://www.researchgate.net/project/Costas-Didactic-Texts-CDTs.