Sampling and filtering are two common operations. In this chapter, we introduce the basics of both. For engineering details, please refer to the signal processing section of this note.
Sampling
Sampling is the process of converting an analog signal (a continuous signal) into a digital signal (a discrete signal).

$$x[n] = x(nT)$$

where $T$ is called the sampling interval.
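In code, sampling amounts to evaluating the continuous signal at multiples of $T$. Below is a minimal sketch, with an arbitrary 5 Hz cosine standing in for the analog signal:

```python
import numpy as np

T = 0.001                      # sampling interval (arbitrary choice: 1 kHz rate)
n = np.arange(0, 1000)         # sample indices

def x_continuous(t):
    """Stand-in for the analog signal x(t): a 5 Hz cosine."""
    return np.cos(2 * np.pi * 5 * t)

# x[n] = x(nT): evaluate the continuous signal at the sampling instants.
x_n = x_continuous(n * T)
```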
Sometimes we need to take the Fourier transform of a periodic signal. However, the result is not easy to obtain by the usual means, because delta functions appear in the result.
We suppose,
$$f(t) \xrightarrow{\mathcal{F}} F(\omega)$$
We know that a periodic signal can be expanded as a Fourier series,
$$f(t) = \sum_{k=-\infty}^{+\infty} a_k e^{j k \Omega t}$$

where $\Omega = \frac{2\pi}{T}$.
Since $\delta(t-a)f(t) = \delta(t-a)f(a)$, the sum over integer $k$ can be written as an integral over a continuous variable $k$ against a train of delta functions,

$$f(t) = \int_{-\infty}^{+\infty} \left(\sum_{n=-\infty}^{+\infty} a_n\delta(k - n)\right) e^{j k \Omega t} \,\mathrm{d}k$$
We know that the inverse Fourier transform is,

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} F(\omega) e^{j \omega t} \,\mathrm{d}\omega$$
We now try to fit the integral above into this form:
$$\begin{aligned}
f(t) &= \int_{-\infty}^{+\infty} \left(\sum_{n=-\infty}^{+\infty} a_n\delta(k - n)\right) e^{j k \Omega t} \,\mathrm{d}k \\
&= \int_{-\infty}^{+\infty} \left(\sum_{n=-\infty}^{+\infty} a_n\delta(\Omega k - \Omega n)\right) e^{j k \Omega t} \,\mathrm{d}(\Omega k) \\
&= \frac{1}{2\pi} \int_{-\infty}^{+\infty} \left(\sum_{n=-\infty}^{+\infty} 2\pi a_n \delta(\omega - \Omega n)\right) e^{j \omega t} \,\mathrm{d}\omega
\end{aligned}$$

where the last step substitutes $\omega = \Omega k$.
Since,
$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} F(\omega) e^{j \omega t} \,\mathrm{d}\omega$$
We can conclude that,
$$F(\omega) = \sum_{n=-\infty}^{+\infty} 2\pi a_n \delta(\omega - \Omega n)$$

For a periodic signal, we therefore first compute its Fourier series coefficients, and from them we obtain the Fourier transform.
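As a quick check, take $f(t) = \cos(\Omega t)$. Its only nonzero Fourier series coefficients are $a_{\pm 1} = \frac{1}{2}$, so the formula gives

$$F(\omega) = \pi\,\delta(\omega - \Omega) + \pi\,\delta(\omega + \Omega)$$

which is the familiar Fourier transform of a cosine.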
Sampling Process
Consider the impulse train,

$$S(t) = \sum_{n=-\infty}^{+\infty}\delta(t - nT)$$
Instead of $x[n]$, we analyze the signal

$$x_s(t) = x(t) S(t)$$
Because,
$$x[n] = \int_{nT-0}^{(n+1)T-0} x_s(t) \,\mathrm{d}t$$

$x_s(t)$ can be regarded as the distribution (generalized-function) representation of the discrete signal $x[n]$.
Suppose,
$$x(t) \xrightarrow{\mathcal{F}} X(\omega)$$
By the convolution property of the Fourier transform (multiplication in the time domain corresponds to convolution in the frequency domain, scaled by $\frac{1}{2\pi}$),

$$x_s(t) \xrightarrow{\mathcal{F}} \frac{1}{2\pi} X(\omega) \ast \mathcal{F}\!\left(\sum_{n=-\infty}^{\infty}\delta(t - nT)\right)$$
The impulse train $\sum_{n=-\infty}^{\infty}\delta(t - nT)$ is periodic with period $T$, so to obtain its Fourier transform we first compute its Fourier series coefficients.

With $\Omega = \frac{2\pi}{T}$ as before,
$$\begin{aligned}
a_k &= \frac{1}{T} \int_{-0}^{T-0} \left(\sum_{n=-\infty}^{\infty}\delta(t - nT)\right) e^{-j k \Omega t} \,\mathrm{d}t \\
&= \frac{1}{T} \int_{-0}^{T-0} \delta(t) e^{-j k \Omega t} \,\mathrm{d}t = \frac{1}{T}
\end{aligned}$$
Previously, we had,
$$F(\omega) = \sum_{n=-\infty}^{+\infty} 2\pi a_n \delta(\omega - \Omega n)$$
Thus,
$$\begin{aligned}
\mathcal{F}\!\left(\sum_{n=-\infty}^{\infty}\delta(t - nT)\right) &= \sum_{n=-\infty}^{+\infty} \frac{2\pi}{T} \delta(\omega - \Omega n) \\
&= \Omega \sum_{n=-\infty}^{+\infty} \delta(\omega - \Omega n)
\end{aligned}$$
Recall that we wanted

$$x_s(t) \xrightarrow{\mathcal{F}} \frac{1}{2\pi} X(\omega) \ast \mathcal{F}\!\left(\sum_{n=-\infty}^{\infty}\delta(t - nT)\right)$$

Therefore,
$$\begin{aligned}
\mathcal{F}(x_s(t)) &= \frac{1}{2\pi} X(\omega) \ast \mathcal{F}\!\left(\sum_{n=-\infty}^{\infty}\delta(t - nT)\right) \\
&= \frac{1}{2\pi} X(\omega) \ast \left(\Omega \sum_{n=-\infty}^{+\infty} \delta(\omega - \Omega n)\right) \\
&= \frac{1}{T} \sum_{n=-\infty}^{+\infty} X(\omega - \Omega n)
\end{aligned}$$

That is, sampling replicates the spectrum $X(\omega)$ around every multiple of $\Omega$, scaled by $\frac{1}{T}$.
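We can see this replication numerically. The sketch below approximates $x(t)$ by a finely sampled 5 Hz cosine, multiplies it by an impulse train with a 50 Hz rate, and inspects the FFT; the parameters are arbitrary.

```python
import numpy as np

# A quasi-continuous stand-in for x(t): a 5 Hz cosine on a fine grid.
fs_fine = 1000.0
t = np.arange(0, 1.0, 1 / fs_fine)
x = np.cos(2 * np.pi * 5 * t)

# Impulse-train sampling: keep every 20th point, zero the rest (x_s = x * S).
# The sampling period is T = 20 / fs_fine = 0.02 s, i.e. a 50 Hz sampling rate.
s = np.zeros_like(x)
s[::20] = 1.0
xs = x * s

# X(w) has peaks only at +-5 Hz; the spectrum of x_s repeats them
# around every multiple of the 50 Hz sampling frequency.
freqs = np.fft.rfftfreq(len(t), 1 / fs_fine)
Xs = np.abs(np.fft.rfft(xs))
print(freqs[Xs > 0.5 * Xs.max()])   # ~ [5, 45, 55, 95, 105, ...] Hz
```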
Nyquist Theorem
From the previous section, we know that the spectrum of the sampled signal is

$$\frac{1}{T} \sum_{n=-\infty}^{+\infty} X(\omega - \Omega n)$$
Suppose the signal $x(t)$ is band limited (in practice, real signals can usually be treated this way), i.e. there exists a maximum frequency $\omega_m$ such that

$$X(\omega) = 0 \quad \forall\, |\omega| > \omega_m$$
Consider,
$$X(\omega - \Omega n)$$

This copy occupies the band from $\Omega n - \omega_m$ to $\Omega n + \omega_m$.

If we want no overlap at all, so that we can isolate $X(\omega)$ and hence recover the original signal, then the copies $X(\omega - \Omega n)$ must not overlap for any $n$. That is,
$$\omega_m + \Omega n < -\omega_m + \Omega (n + 1)$$

$$\Omega > 2 \omega_m$$
The condition $\Omega > 2\omega_m$ is the Nyquist theorem. It tells us that to sample an analog signal without losing information, the sampling frequency must be greater than twice the signal's maximum frequency $\omega_m$.
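A small numerical sketch of what happens when this condition is violated (the frequencies are arbitrary): a 30 Hz sine sampled at 50 Hz produces exactly the same samples as a 20 Hz sine, so the original frequency cannot be recovered.

```python
import numpy as np

fs = 50.0                                 # sampling frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)             # sampling instants t = n*T, T = 1/fs

# A 30 Hz tone violates the Nyquist condition: 2 * 30 Hz > 50 Hz.
x = np.sin(2 * np.pi * 30.0 * t)

# Its samples coincide exactly with those of a (30 - 50) = -20 Hz tone,
# i.e. the 30 Hz signal aliases onto 20 Hz.
alias = -np.sin(2 * np.pi * 20.0 * t)
print(np.allclose(x, alias))              # True
```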
Filter
Filtering is the process of modifying or enhancing specific frequency components of a signal. Filters can be used to remove unwanted frequencies, emphasize certain frequency bands, or modify the phase characteristics of a signal.
Basic Filter Types
There are four fundamental types of filters (a design sketch follows this list):
Low-pass Filter: Allows frequencies below a cutoff frequency to pass while attenuating higher frequencies
High-pass Filter: Allows frequencies above a cutoff frequency to pass while attenuating lower frequencies
Band-pass Filter: Allows frequencies within a specific range to pass while attenuating others
Band-stop Filter: Blocks frequencies within a specific range while allowing others to pass
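The sketch below shows one way these four responses could be realized, using SciPy's Butterworth designs; the order and cutoff frequencies are arbitrary choices for illustration.

```python
import numpy as np
from scipy import signal

fs = 1000.0   # sampling frequency (Hz), arbitrary for this sketch

# Fourth-order Butterworth designs, one per filter type.
b_lp, a_lp = signal.butter(4, 100, btype='lowpass', fs=fs)          # pass f < 100 Hz
b_hp, a_hp = signal.butter(4, 100, btype='highpass', fs=fs)         # pass f > 100 Hz
b_bp, a_bp = signal.butter(4, [50, 200], btype='bandpass', fs=fs)   # pass 50-200 Hz
b_bs, a_bs = signal.butter(4, [50, 200], btype='bandstop', fs=fs)   # block 50-200 Hz

# Example: low-pass filter a noisy 20 Hz tone.
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(len(t))
y = signal.lfilter(b_lp, a_lp, x)   # high-frequency noise is attenuated
```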
Transfer Function
The transfer function $H(\omega)$ of a filter describes how the filter modifies the amplitude and phase of input frequencies. For a linear time-invariant system, we know that,
$$Y(\omega) = H(\omega)X(\omega)$$
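A minimal numerical illustration of this relation (the signal frequencies and the 100 Hz cutoff are arbitrary): define $H(\omega)$ directly on the FFT frequency grid, multiply it with $X(\omega)$, and transform back.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 300 * t)   # 20 Hz + 300 Hz

X = np.fft.rfft(x)                         # X(w): spectrum of the input
freqs = np.fft.rfftfreq(len(x), 1 / fs)

H = (freqs <= 100).astype(float)           # H(w): simple low-pass response, 100 Hz cutoff

y = np.fft.irfft(H * X, n=len(x))          # Y(w) = H(w) X(w), back to the time domain
# y now contains essentially only the 20 Hz component.
```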
Ideal Filters
An ideal filter has a perfect rectangular frequency response. For example, an ideal low-pass filter has the transfer function:
$$H(\omega) = \begin{cases} 1, & |\omega| \leq \omega_c \\ 0, & |\omega| > \omega_c \end{cases}$$

where $\omega_c$ is the cutoff frequency.
However, ideal filters are not physically realizable (the impulse response worked out after this list makes this concrete) because:
They require an infinitely long impulse response
They are not causal (require future inputs)
They have perfect cutoff characteristics which violate the uncertainty principle
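To make the first two points concrete, the impulse response of the ideal low-pass filter follows from the inverse Fourier transform:

$$h(t) = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} e^{j\omega t}\,\mathrm{d}\omega = \frac{\sin(\omega_c t)}{\pi t}$$

This sinc-shaped response extends over all $t$, including $t < 0$, so the filter would have to start responding before the input arrives.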
Practical Filters
Real filters approximate ideal characteristics with the following features (a concrete design sketch follows the list):
Passband: Frequency range where signals pass through with minimal attenuation
Stopband: Frequency range where signals are heavily attenuated
Transition band: Region between passband and stopband
Ripple: Small variations in gain within passband or stopband
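As a sketch of how such specifications translate into a concrete design (using SciPy, with arbitrary numbers): we ask for a Butterworth low-pass filter whose passband ends at 100 Hz with at most 1 dB ripple and whose stopband starts at 150 Hz with at least 40 dB attenuation; the 100 to 150 Hz gap is the transition band.

```python
from scipy import signal

fs = 1000.0   # sampling frequency (Hz)

# Choose the minimal Butterworth order that meets the spec, then design it.
order, wn = signal.buttord(wp=100, ws=150, gpass=1, gstop=40, fs=fs)
b, a = signal.butter(order, wn, btype='lowpass', fs=fs)

# Inspect the resulting frequency response H(w).
w, h = signal.freqz(b, a, fs=fs)
print(order)   # filter order needed to meet the specification
```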
We will introduce digital filters in detail in the signal processing section.