Prediction-based Filter Design

Basics

In this section we discuss how linear prediction can be used to design filters with an arbitrary frequency response. Since linear predictors are used in a variety of applications (e.g. speech coding), various implementations exist that solve the so-called normal equations in a robust and efficient manner. These schemes can be reused to implement real-time filter design applications that work even on very simple hardware.

 

Application Examples

Remark:

Before we start with the derivation of the filter design itself, the following applications should motivate the design process.

Prediction in general means to forecast signal samples that are not yet available (forward prediction) or to re-estimate samples that are no longer available (backward prediction). With this capability, predictors play an important role in signal processing wherever it is desirable, for instance, to reduce the amount of data to be transmitted or stored. Examples of the use of predictors include encoders for speech or video signals.

However, linear prediction can also be used for several other applications:

 

  • Loudspeaker Equalization

    To improve the playback quality of loudspeakers, equalization filters might be placed before the DA converters of playback devices (see Fig. 1). These filters are designed such that the frequency response of the overall system, consisting of the equalization filter and the loudspeaker itself, is close to a predefined curve.

     
    Fig. 1: Loudspeaker equalization (equalization filter with input \(v(n)\), followed by the loudspeaker with output \(y(n)\)).
     

    If more than one loudspeaker is to be equalized, additional restrictions such as linear-phase behaviour (constant group delay) are often desired. Fig. 2 shows an example of such a desired frequency response together with a non-equalized loudspeaker and its equalized counterpart.

     
    Fig. 2: Magnitude responses of the non-equalized and the equalized loudspeaker.
     

     

  • Low-delay Noise Suppression

    Whenever a desired signal is superimposed by noise, signal enhancement techniques can be applied (see Fig. 3). Usually, statistically optimized, time-variant filters such as so-called Wiener filters [HS 2004] are utilized here.

     
    Fig. 3: Low-delay noise suppression (time-varying, minimum-phase filter with input \(v(n)\) and output \(y(n)\)).
     

    Those approaches are usually realized in the short-term Fourier domain. However, if the delay introduced by the Fourier transforms is too large, time-domain approaches with low-order minimum-phase filters might be an alternative solution. The design of these filters can be prediction-based [LV 2005, LV 2006]. Fig. 4 shows an example of a noisy speech signal (filter input) and the corresponding noise-reduced signal (filter output).

     
    Fig. 4: Signal before and after noise suppression.
     

     

  • Signal Generation

    Remark:

    At the end of this section you can find a Matlab example for designing a signal generator with an arbitrary output signal spectrum.

    As a last application, so-called general purpose noise or signal generators can be mentioned. They are usually built from a white noise generator (either with Gaussian or uniform amplitude distribution) and a succeeding shaping filter for adjusting the power spectral density (PSD) of the output signal (see Fig. 5).

     
    Fig. 5: Signal generation (noise generator with output \(v(n)\), followed by the shaping filter with output \(y(n)\)).
     

    Since the input PSD is constant (white noise), the shaping filter must be designed such that its squared magnitude response matches the desired PSD (see the relation below). Fig. 6 shows an example of input and output PSDs (in blue) together with the desired PSD (in grey).
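
    More precisely, for a white input signal with power \(\sigma_v^2\) the output PSD is the input power multiplied by the squared magnitude response of the shaping filter,

    \begin{eqnarray} S_{yy}\big(e^{j\Omega}\big) \,&=&\, \sigma_v^2\, \big|H\big(e^{j\Omega}\big)\big|^2, \nonumber \end{eqnarray}

    i.e. the squared magnitude response has to match the desired PSD up to the scaling by \(\sigma_v^2\).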

     
    Fig. 6: Power spectral density before and after the shaping filter.
     

 

Basics of Linear Prediction

As mentioned above, a linear prediction filter is utilized to predict future samples of a signal. For our purposes we will now focus on predicting one sample into the future. This can be realized using an FIR filter structure:

\begin{eqnarray}\hat v(n) \,&=&\, \sum\limits_{i\,=\,0}^{N-1}\,\, p_i\,v(n-1-i).\end{eqnarray}

The coefficients \(p_i\) of this filter are obtained by minimizing the following (mean squared error) cost function:

\begin{eqnarray} \textrm{E}\Big\{e^2(n)\Big\} \,&=&\, \textrm{E}\Big\{ \big[ v(n) - \hat v(n) \big]^2 \Big\} \,\, \longrightarrow \,\,\textrm{min}.\end{eqnarray}

The definition of the prediction error signal

\begin{eqnarray} e(n) \,&=&\,v(n) - \hat v(n) \end{eqnarray}

and the prediction itself can be combined in a so-called prediction error filter structure (see Fig. 7).

 
Fig. 7: Structure of a prediction error filter (input \(v(n)\), delayed samples \(v(n\!-\!1)\), predictor coefficients \(p_i\), prediction \(\hat v(n)\), and error signal \(e(n)\); the overall structure is an FIR filter with coefficients \(h_i\)).
 

Please note that the prediction error filter still has an FIR structure. The coefficients can be grouped into vectors; for the prediction filter we obtain:

\begin{eqnarray} \bf{p} \,&=&\,\Big[ p_0,\,p_1,\, ...,\,p_{N-1}\Big]^{T}.\end{eqnarray}

In a similar way we obtain for the prediction error filter:

\begin{eqnarray}\bf{h} \,&=&\,\Big[ h_0,\,h_1,\, ...,\,h_{N-1},\,h_{N}\Big]^{T} \\[2mm] \,&=&\, \Big[1,\,-p_0,\,...,\,-p_{N-2},\,-p_{N-1} \Big]^{T}.\end{eqnarray}
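
To illustrate the relation between the two coefficient vectors, the following Matlab sketch (the signal and the predictor coefficients are arbitrary example values chosen only for demonstration) computes the prediction error once via its definition and once with the prediction error filter:

% Example: predictor vs. prediction error filter (arbitrary example values)
N = 3;                        % predictor order
p = [0.8; -0.3; 0.1];         % example predictor coefficients p_0, ..., p_{N-1}
v = randn(1000,1);            % example input signal

v_hat = filter([0; p],1,v);   % \hat v(n) = sum_i p_i v(n-1-i) (leading zero realizes the delay)
e1    = v - v_hat;            % error signal via its definition

h  = [1; -p];                 % prediction error filter coefficients
e2 = filter(h,1,v);           % error signal via the prediction error filter

max(abs(e1 - e2))             % both versions coincide (up to numerical precision)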

To obtain the (optimal) predictor coefficients \(\tilde p_i\) (the optimum is indicated by the tilde symbol), we differentiate the cost function with respect to \(p_k\) and set the result to zero:

\begin{eqnarray} 0\,&=&\,\displaystyle{\frac{\partial}{\partial \, p_k}}\,\textrm{E}\big\{e^2(n)\big\} \bigg|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... exchanging the order of the differentiation and the expectation operator ...

\begin{eqnarray} \quad\! &=&\,\textrm{E}\bigg\{ \displaystyle{\frac{\partial \, e^2(n)}{\partial \, p_k}} \bigg\} \Bigg|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... separating the inner and the outer differentiation ...

\begin{eqnarray} \quad\! &=&\,\textrm{E}\bigg\{ 2\,e(n)\,\displaystyle{\frac{\partial \, e(n)}{\partial \, p_k}} \bigg\} \Bigg|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... inserting the definition of the error signal ...

\begin{eqnarray} \quad\! &=&\,\textrm{E}\Bigg\{ 2\,e(n)\,\displaystyle{\frac{\partial}{\partial \, p_k} \bigg[ v(n) - \sum\limits_{i=0}^{N-1} p_i\,v(n-1-i)\bigg]} \Bigg\} \Bigg|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... solving the inner differentiation ...

\begin{eqnarray} \quad\! &=&\,\textrm{E}\Big\{ 2\,e(n)\,\big(-v(n-1-k)\big) \Big\} \bigg|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... dividing both sides by -2 ...

\begin{eqnarray} \quad\! &=&\, \textrm{E}\big\{ e(n)\,v(n-1-k) \big\} \Big|_{p_k\,=\,\tilde p_{k}} \nonumber \end{eqnarray}

... inserting once again the definition of the error signal ...

\begin{eqnarray} \quad\! &=&\, \textrm{E}\Bigg\{ \bigg[ v(n) - \sum\limits_{i\,=\,0}^{N-1} \tilde p_{i}\,v(n-1-i)\bigg]\,v(n-1-k) \Bigg\} \nonumber \end{eqnarray}

... exchanging the order of the expectation and the summation operator ...

\begin{eqnarray} \quad\! &=&\, \textrm{E}\big\{ v(n)\,v(n-1-k)\big\} - \sum\limits_{i=0}^{N-1} \, \tilde p_{i}\,\textrm{E}\big\{ v(n-1-i)\,v(n-1-k) \big\} \nonumber \end{eqnarray}

... assuming stationarity and inserting the abbreviation \(\textrm{E}\{ v(n)v(n+k)\} = s_{vv}(k)\) ...

\begin{eqnarray} \quad\! &=&\,s_{vv}(-1-k) - \sum\limits_{i=0}^{N-1}\, \tilde p_{i}\,s_{vv}(i-k). \end{eqnarray}

Now we have \(N\) equations (\(k\in\{0,\,...,N-1\}\)) for the \(N\) unknown filter coefficients. In order to get a more compact solution (in matrix-vector notation), we can first write all equations together and get

\begin{eqnarray} \begin{array}{ccccccccc} s_{vv}(-1) & \!\!=\!\! & s_{vv}(0)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(1)\,\tilde p_{1} & \!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(N-1)\,\tilde p_{N-1}, \\ s_{vv}(-2) & \!\!=\!\! & s_{vv}(-1)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(0)\,\tilde p_{1} & \!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(N-2)\,\tilde p_{N-1}, \\ \vdots & \!\!=\!\! & \vdots & & \vdots & & \ddots & & \vdots \\ s_{vv}(-N) & \!\!=\!\! & s_{vv}(-N+1)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(-N+2)\,\tilde p_{1} &\!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(0)\,\tilde p_{N-1}. \end{array} \end{eqnarray}

Assuming that we have a real and stationary input process \(v(n)\), we can exploit the symmetry property of the autocorrelation, \(s_{vv}(k) = s_{vv}(-k)\), and obtain:

\begin{eqnarray} \begin{array}{ccccccccc} s_{vv}(1) & \!\!=\!\! & s_{vv}(0)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(1)\,\tilde p_{1} & \!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(N-1)\,\tilde p_{N-1}, \\ s_{vv}(2) & \!\!=\!\! & s_{vv}(1)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(0)\,\tilde p_{1} & \!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(N-2)\,\tilde p_{N-1}, \\ \vdots & \!\!=\!\! & \vdots & & \vdots & & \ddots & & \vdots \\ s_{vv}(N) & \!\!=\!\! & s_{vv}(N-1)\,\tilde p_{0} & \!\!+\!\! & s_{vv}(N-2)\,\tilde p_{1} & \!\!+\!\! & \cdots & \!\!+\!\! & s_{vv}(0)\,\tilde p_{N-1}. \end{array} \end{eqnarray}

Introducing now a definition for an autocorrelation vector with starting lag \(l\)

\begin{eqnarray} {\bf s}_{vv}(l)\, &=&\,\Big[s_{vv}(l),\, s_{vv}(l+1),\,...,\,s_{vv}(l+N-1) \Big]^{T}\end{eqnarray}

and for an autocorrelation matrix

\begin{eqnarray} {\bf S}_{vv}\, &=&\, \left[ \begin{array}{cccc} s_{vv}(0) & s_{vv}(1) & \dots & s_{vv}(N-1) \\ s_{vv}(1) & s_{vv}(0) & \dots & s_{vv}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ s_{vv}(N-1) & s_{vv}(N-2) & \dots & s_{vv}(0) \end{array} \right], \end{eqnarray}

allows us to write the system of \(N\) equations in a compact form:

\begin{eqnarray} {\bf s}_{vv}(1)\, &=&\,{\bf S}_{vv}\,{\bf \tilde p}. \end{eqnarray}

Multiplying both sides with the inverse of the autocorrelation matrix gives us the solution

\begin{eqnarray} {\bf \tilde p}\, &=&\,{\bf S}_{vv}^{-1}\,{\bf s}_{vv}(1). \end{eqnarray}
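
As a small numerical check of this solution, the following Matlab sketch (the AR(1) example process and all parameter values are assumptions chosen purely for illustration) estimates the autocorrelation, builds the Toeplitz matrix \({\bf S}_{vv}\), and solves the normal equations directly:

% Example: direct solution of the normal equations (AR(1) example process)
a = 0.9;                              % example AR(1) coefficient
N = 8;                                % example predictor order
v = filter(1,[1 -a],randn(100000,1)); % AR(1) process: v(n) = a*v(n-1) + white noise

s_vv = xcorr(v,N,'biased');           % autocorrelation estimates for lags -N, ..., N
s_vv = s_vv(N+1:end);                 % keep lags 0, ..., N

S_vv  = toeplitz(s_vv(1:N));          % autocorrelation matrix (lags 0, ..., N-1)
p_opt = S_vv \ s_vv(2:N+1);           % solve S_vv * p = s_vv(1)

disp(p_opt.')                         % close to [a, 0, ..., 0] for an AR(1) process

For an AR(1) process only the most recent sample carries information about the next one, so all but the first predictor coefficient (approximately) vanish.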

This equation system can be solved very efficiently and robustly (concerning numerical limitations) using the so-called Levinson-Durbin recursion. This recursion and its derivation can be found here. In addition to the optimal filter coefficients, the remaining error power is also computed within the recursion. This error power is also necessary for the filter design that we will use soon.
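
In Matlab the recursion is available as the function levinson. Note that it returns the prediction error filter \({\bf h}\) (with the leading one) rather than \({\bf \tilde p}\), together with the remaining error power. A minimal sketch, reusing s_vv, N, and p_opt from the example above:

% Example: the same solution via the Levinson-Durbin recursion
[h_lev, pow_err] = levinson(s_vv(1:N+1),N); % h_lev = [1, -p_0, ..., -p_{N-1}], pow_err = E{e^2(n)}
p_opt_lev        = -h_lev(2:end).';         % recover the predictor coefficients
max(abs(p_opt_lev - p_opt))                 % agrees with the direct solution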

Without the recursion, the error power can also be computed directly. At the optimum the error signal is orthogonal to the data used for the prediction, i.e. \(\textrm{E}\{e(n)\,v(n-1-k)\} = 0\) for \(k\in\{0,\,...,N-1\}\) (this is exactly the condition derived above), so only the term \(\textrm{E}\{e(n)\,v(n)\}\) remains:

\begin{eqnarray} \textrm{E}\big\{e^2(n)\big\}\Big|_{{\bf p}\,=\,{\bf \tilde p}} \,&=&\, s_{vv}(0) - {\bf s}_{vv}^{T}(1)\,{\bf \tilde p} \,\,=\,\, s_{vv}(0) - {\bf s}_{vv}^{T}(1)\,{\bf S}_{vv}^{-1}\,{\bf s}_{vv}(1). \end{eqnarray}

 

Linear Prediction and Filter Design

To do ...

 

Different Filter Structures

To do ...

 

Matlab Design Examples

Matlab code example for an arbitrary signal generator

%**************************************************************************
% Main parameters
%**************************************************************************
N_pred = 500;

%**************************************************************************
% Auxillary parameters
%**************************************************************************
N_FFT     =   512*4;
f_s       = 44100;
LineWidth =     1;

%**************************************************************************
% Desired frequency response
%**************************************************************************
H_des                   = zeros(N_FFT/2+1,1) - 30;          % base level of -30 dB
H_des(1)                = 30;                               % boost at DC
H_des(500:600)          = 40;                               % boosted frequency band
beta                    = 0.95;
H_des                   = filtfilt(1-beta,[1 -beta],H_des); % smooth the desired curve
H_des_mag               = 10.^(H_des/20);                   % dB -> linear magnitude

%**************************************************************************
% Minimum-phase FIR filter
%**************************************************************************
% Since a prediction error filter whitens its input process, the filter is
% designed on the inverted desired PSD: the resulting minimum-phase FIR
% filter then approximates the desired magnitude response.
S_hh                     = 1./[H_des_mag; conj(H_des_mag(end-1:-1:2))].^2;
s_hh                     = ifft(S_hh);                      % corresponding autocorrelation
[b_fir_minphas, pow_err] = levinson(s_hh,N_pred-1);         % prediction error filter
b_fir_minphas            = b_fir_minphas / sqrt(pow_err);   % gain normalization
a_fir_minphas            = 1;                               % pure FIR filter

x = randn(4000,1)*2;
y = filter(b_fir_minphas,a_fir_minphas,x);

%**************************************************************************
% Result analysis
%**************************************************************************
S_xx = pwelch(x,hanning(N_FFT),N_FFT*(1-1/16),N_FFT);
S_yy = pwelch(y,hanning(N_FFT),N_FFT*(1-1/16),N_FFT);

f = 0:1/N_FFT:0.5;
f = f * 2;

fig = figure(2);
set(fig,'Units','Normalized');
set(fig,'Position',[0.1 0.1 0.8 0.5]);

subplot('Position',[0.1 0.12 0.8 0.4]);
plot(f,H_des,'r', ...
     f,10*log10(S_yy),'b','LineWidth',LineWidth);
axis([0 1 -45 45])
xlabel('Normalized frequency \Omega/\pi')
grid on;

ylabel('dB')
legend('Desired magnitude response', ...
       'Estimated power spectral density of the filter output');
   
subplot('Position',[0.1 0.57 0.8 0.4]);
plot(f,H_des,'r', ...
     f,10*log10(S_xx),'b','LineWidth',LineWidth);
axis([0 1 -45 45])
grid on;
set(gca,'XTickLabel','');
ylabel('dB')
legend('Desired power spectral density', ...
       'Estimated power spectral density of the filter input');

  

 

 

References

  [HS 2004]   E. Hänsler, G. Schmidt: Acoustic Echo and Noise Control, Wiley, 2004.
  [LV 2005]   H. Löllmann, P. Vary: Low Delay Filter for Adaptive Noise Reduction, Proc. IWAENC '05, Eindhoven, The Netherlands, pp. 205 - 208, 2005.
  [LV 2006]   H. Löllmann, P. Vary: A Filter Structure for Low Delay Noise Suppression, Proc. ITG-Fachtagung '06, Kiel, Germany, 2006.

