ISI arises in transmission through band-limited, non-ideal channels. The optimum receiver employs maximum-likelihood detection; a suboptimum receiver employs a linear equalizer. In designing the equalizer we assume that the impulse response of the channel, or equivalently its frequency response, is known at the receiver. In most communication channels this response varies as a function of time, so an equalizer is needed that adapts to the change. In this paper we automatically adjust the equalizer coefficients to adapt to channel changes, and we analyze the performance characteristics, including convergence rate and computational complexity.

# I. Adaptive Linear Equalizer

In the case of a linear equalizer we consider two methods for adjusting the coefficient values c_k. One method minimizes the peak distortion at the output; the second minimizes the mean-square error (MSE) at the output. We discuss each method in turn.

# II. The Zero-Forcing Algorithm

The peak distortion D(c) is minimized by selecting the coefficients c_k appropriately. In general this is not an easy computation, except for the case D0 < 1, in which D(c) is minimized by forcing q_0 = 1 and q_n = 0 for n ≠ 0. The zero-forcing algorithm achieves this condition by equating the cross-correlation between the error and the desired sequence to zero:

E[e_k I_{k-j}] = 0. (1)

We assume that the information symbols are mutually uncorrelated (2) and uncorrelated with the estimate (3). This implies that q_0 = 1 and q_n = 0 for n ≠ 0. When the channel response is not known, the cross-correlation in equation (1) cannot be computed directly. To circumvent this problem we send a known training sequence I_k to the receiver, so that the cross-correlation can be estimated by time averages. After the training sequence has been sent for a number of samples equal to or greater than the equalizer length, we can find the equalizer coefficients that satisfy equation (3).
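When the channel impulse response is known at the receiver, the condition q_0 = 1, q_n = 0 within the equalizer span can be imposed directly by solving a small linear system, which is a useful reference point for the recursive algorithm that follows. The sketch below is illustrative only: the two-tap channel, the equalizer length, and the reference delay are my assumptions, not values from the text.

```python
import numpy as np

def zero_forcing_taps(x, K, d=None):
    """Solve for the 2K+1 equalizer taps c_j that force the combined
    channel-equalizer response q = conv(c, x) to equal 1 at a reference
    delay d and 0 at the 2K sampling instants around it (the zero-forcing
    condition q_0 = 1, q_n = 0 for n != 0, measured relative to d)."""
    N = 2 * K + 1
    L = len(x)
    if d is None:
        d = (N + L - 2) // 2          # centre of the combined response
    A = np.zeros((N, N))
    for m in range(N):                # row m constrains q[d - K + m]
        for j in range(N):            # column j multiplies tap c_j
            n = (d - K + m) - j       # since q[i] = sum_j c_j x[i - j]
            if 0 <= n < L:
                A[m, j] = x[n]
    e = np.zeros(N)
    e[K] = 1.0                        # desired combined response: unit sample
    return np.linalg.solve(A, e)

# Example with an assumed two-tap channel:
c = zero_forcing_taps([1.0, 0.5], K=3)
q = np.convolve(c, [1.0, 0.5])        # ~ unit sample within the 7-tap span
```

The residual ISI outside the 2K+1-sample span (here, the last sample of q) is what a finite-length zero-forcing equalizer cannot remove.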
A simple recursive algorithm for adjusting the coefficients is

c_j(k+1) = c_j(k) + Δ e_k I_{k-j}, (4)

where c_j(k) is the jth coefficient at time kT, e_k is the error at time kT, and Δ is a scaling factor that controls the rate of adjustment. This is the zero-forcing algorithm. The term e_k I_{k-j} is an estimate of the cross-correlation E[e_k I_{k-j}]; the averaging is accomplished by the recursive first-order difference equation, a simple discrete-time integrator, which accumulates the increments in the manner of the fundamental theorem of calculus. While the training sequence is being transmitted we are in the training mode, and we remain in it until the equalizer coefficients converge close to their optimum values, at which point the decisions at the output are sufficiently reliable. We then switch to the adaptive mode, in which the detected output symbols are used to continue adjusting the coefficients. In this case the cross-correlation recursion for updating c becomes

c_j(k+1) = c_j(k) + Δ e_k Ĩ_{k-j}, (5)

where Ĩ_{k-j} denotes the detected symbols. This is similar to the least-mean-square (LMS) algorithm, which minimizes the MSE and which we discuss in detail next.

# III. The LMS Algorithm

First we form the error. We find that, to minimize the MSE, the coefficient vector C has to satisfy the set of linear equations

Γ C = ξ, (6)

where Γ is the covariance matrix of the input vector V_k, C is the vector of equalizer coefficients, and ξ is the cross-correlation between the input and the desired response d. An iterative algorithm that avoids matrix inversion can be used to calculate C_opt. The simplest is the method of steepest descent, in which we start from an arbitrary C that converges over time to C_opt. The starting C corresponds to some point on the MSE surface; the gradient G is computed at that point, and we move to the next estimate, closer to the minimum:

C(k+1) = C(k) - Δ G(k). (7)

The gradient is

G(k) = -E[e_k V_k], (8)

where C(k) is the coefficient vector at the kth iteration, e_k is the error at the kth iteration, V_k is the input vector, and Δ is a small positive number that ensures convergence of the iterative process. If the minimum MSE is reached at some iteration k, then G(k) = 0 and C(k) stays constant. This method is slow, but it is instructive. Its basic difficulty is the lack of knowledge of the gradient vector G.
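A minimal simulation of the zero-forcing recursion (4), assuming BPSK training symbols and a two-tap channel with D0 = 0.3 < 1 (both choices mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: BPSK training symbols through a mild channel whose
# peak distortion is 0.3 < 1, so the zero-forcing recursion applies.
x = np.array([1.0, 0.3])                    # channel impulse response (assumed)
I = rng.choice([-1.0, 1.0], size=4000)      # known training sequence I_k
v = np.convolve(I, x)                       # received samples (noise-free)

K = 5
c = np.zeros(2 * K + 1)                     # taps c_j, j = 0 .. 2K
c[K] = 1.0                                  # start from a pass-through equalizer
delta = 0.01                                # scaling factor for rate adjustment

for k in range(2 * K, len(I)):
    z = c @ v[k - 2 * K : k + 1][::-1]      # output z_k = sum_j c_j v_{k-j}
    e = I[k - K] - z                        # error against the delayed symbol
    # Zero-forcing recursion (4): c_j <- c_j + delta * e_k * I_{k-j}
    c = c + delta * e * I[k - 2 * K : k + 1][::-1]

q = np.convolve(c, x)                       # combined channel-equalizer response
```

After training, q is close to a unit sample within the equalizer span, i.e., q_0 ≈ 1 and the other in-span samples ≈ 0.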
G depends on the covariance matrix Γ and the cross-correlation vector ξ, and these in turn depend on the coefficients of the equivalent discrete-time channel model, which the receiver does not know. To overcome this difficulty, an estimate of G is used, and the coefficients are found from this estimate as

Ĉ(k+1) = Ĉ(k) - Δ Ĝ(k), (9)

where Ĝ(k) is an estimate of the gradient and Ĉ(k) is an estimate of the coefficient vector. For the gradient estimate we find

Ĝ(k) = -e_k V_k, (10)

so that the update equation is

Ĉ(k+1) = Ĉ(k) + Δ e_k V_k. (11)

This is the basic LMS algorithm for recursively adjusting the weight coefficients, described by Widrow (1966) and illustrated in figure 2. The training sequence is usually chosen to be periodic, with period N = 2K + 1. In this case the gradient is averaged over the period length, as in equation (17). The weights can also be adjusted by making decisions on the received data and using those decisions to calculate the error; as long as the receiver operates at a low error rate, the algorithm converges. If the channel response changes, the error changes, and the weights change according to equation (11). Changes in the information sequence or in the noise statistics likewise produce adjustments. Thus, the equalizer is adaptive.

# a) Convergence Properties of the LMS Algorithm

The convergence properties of equation (11) are governed by the step-size parameter. We choose the step-size parameter to ensure convergence of the steepest-descent algorithm of equation (7), which uses the exact value of the gradient. We have (20), which can be modeled by the closed-loop control system in figure 3. The autocorrelation matrix in (20) is coupled; in order to solve the equations in (20), and to determine the convergence properties, this matrix must be decoupled by the appropriate linear transformation, given in (21). Convergence is fast if (1 - Δλ) is small for every eigenvalue λ, which means that the corresponding pole is far from the unit circle. As the equations show, the ratio λ_max/λ_min determines the convergence rate.
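Equation (11) can be sketched as follows, again with assumed values (BPSK training symbols, a two-tap channel, Δ = 0.01, and noise added at the equalizer input, all my choices). Unlike the zero-forcing recursion, the update multiplies the error by the received vector V_k:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed setup for illustration: BPSK symbols through a two-tap
# channel, with additive noise at the equalizer input.
x = np.array([1.0, 0.4])
I = rng.choice([-1.0, 1.0], size=5000)      # training symbols
v = np.convolve(I, x) + 0.05 * rng.standard_normal(len(I) + len(x) - 1)

K = 5
c = np.zeros(2 * K + 1)
c[K] = 1.0                                  # initial guess: pass-through
delta = 0.01                                # small positive step size
sq_err = []

for k in range(2 * K, len(I)):
    V = v[k - 2 * K : k + 1][::-1]          # input vector V_k (entries v_{k-j})
    e = I[k - K] - c @ V                    # error e_k against the training symbol
    c = c + delta * e * V                   # LMS update, equation (11)
    sq_err.append(e * e)
```

The squared error averaged over the last few hundred iterations is far below its initial level, illustrating convergence toward C_opt.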
# IV. Excess MSE Due to Noisy Gradient Estimates

In adjusting the coefficients, the receiver uses a noisy estimate of the gradient vector, and the noise results in random fluctuations of the coefficients about their optimum values. We therefore have J = J_min + J_Δ, where J_Δ, the excess MSE, is the variance due to this noise, and the total MSE at the output is the sum of the two terms. The degradation in the output SNR of the equalizer due to the excess MSE is less than 1 dB. The analysis of the excess error assumes that C has converged to the optimum; the convergence itself has been studied by many researchers. As an example, consider an equalizer with 11 coefficients, all initialized to zero. Using the LMS algorithm, all of the coefficients converge to the correct values for the channel. The MATLAB code is given. As the figures show, the error converges toward zero as the number of iterations increases; noise is also added at the input to the equalizer. The choice of Δ is critical in the LMS algorithm. A practical design assigns the most significant bits to the coefficient estimate and the least significant bits to Δ, so that the LMS algorithm can settle precisely on C_opt. It is important to mention that the LMS algorithm is able to track a slowly time-varying system. In that case J_min is a function of time, and the MSE surface moves with the time index n. The LMS method tries to track J_min but always lags behind it, which introduces another error, the lag error. The total error is

J = J_min + J_Δ + J_l, (34)

where J_l is the lag error. Plotting J_Δ and J_l as functions of Δ gives curves as in the figure below: J_Δ increases and J_l decreases as Δ grows, so the total has a minimum, which is the optimum choice for Δ. If the system varies rapidly, J_l dominates the error; in this case the LMS algorithm is not appropriate, and recursive least-squares algorithms must be used. Figure 5

# V. Accelerating the Initial Convergence Rate in the LMS Algorithm

As we have seen, the convergence rate is controlled by Δ.
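The MATLAB listing referred to above does not survive in this copy. The following Python sketch reproduces the experiment as described: 11 coefficients starting from zero, adapted by the LMS algorithm until they converge to the channel values, with noise added to the input. The particular 11-tap channel, the step size, and the use of independent random probe vectors (rather than a shift register) are my assumptions, made for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 11-tap "unknown" channel; these are the values the adaptive
# coefficients should converge to (chosen for illustration only).
h = np.array([0.05, -0.1, 0.2, -0.4, 0.7, 1.0, 0.7, -0.4, 0.2, -0.1, 0.05])

c = np.zeros(11)                              # all coefficients start at zero
delta = 0.01
for k in range(5000):
    V = rng.choice([-1.0, 1.0], size=11)      # random probe vector V_k
    d = h @ V + 0.05 * rng.standard_normal()  # noisy channel output
    e = d - c @ V                             # error e_k
    c = c + delta * e * V                     # LMS update, equation (11)
```

After the loop, every coefficient is close to the corresponding channel value, with a small residual fluctuation (the excess MSE discussed above) set by Δ and the noise level.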
The convergence rate is also strongly related to the spectral characteristics of the channel, by which we mean the ratio λ_max/λ_min. If this ratio is close to one, convergence is fast; if not, convergence is slow, and we say that the channel has poor spectral characteristics. Researchers have investigated this topic from all angles. One simple approach is to start with a large Δ and reduce it toward the optimum value as time passes. Methods for accelerating the initial convergence were studied in 1971 and 1977. One method replaces the scalar Δ by a weighting matrix W, giving the form

C(k+1) = C(k) + W e_k V_k, (35)

where W is ideally the inverse of the autocorrelation matrix of the input data; the autocorrelation can be estimated and its inverse computed. If the training sequence is periodic, W can be realized as a single row, and the block diagram of the system is as in figure 6. Figure 6 We can estimate the weights from the received signal. The basic steps are as follows. First, collect one period of N samples in the equalizer delay line. Then compute the N-point DFT (R_n). Then compute the power spectrum (|R_n|^2) and add to it N times the estimate of the noise variance. Finally, compute the inverse DFT; this yields the w_n in figure 6. To adjust the taps we have (36).

# VI. Conclusion

As in the case of the linear adaptive equalizer, the coefficients of the decision-feedback equalizer can be adjusted recursively, based on minimization of the MSE:

C(k+1) = C(k) + Δ E[ε_k V_k]. (37)

This recursion converges, and for the expected value we can use the instantaneous product (ε_k V_k):

Ĉ(k+1) = Ĉ(k) + Δ ε_k V_k. (38)

This is the LMS algorithm: a training sequence is used first, and the equalizer then switches to the decision-directed mode.

![Figure 1](image-2.png "")
![Figure 2: Equation 11 has been used in commercial adaptive equalizers in high-speed modems.
Three versions that use only the sign information are also employed.](image-3.png "Figure 2")
![The second algorithm compares the received data (18) with the transmitted data, assuming the latter is initially provided by a probe for training.](image-4.png "")
![Figure 3: This form has a matrix whose diagonal equals the eigenvalues of the system. Substituting (21) into (20) gives the decoupled equations.](image-5.png "Figure 3")
![Figure 4](image-6.png "")

Adaptive Filters. Global Journal of Researches in Engineering (F), Volume XIV, Issue VII, Version I, Year 2014. © 2014 Global Journals Inc. (US)
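Returning to the periodic-training procedure of Section V: the text lists the steps (N-point DFT, power spectrum plus N times the noise-variance estimate, inverse DFT) but not the formula connecting them. One plausible reading, assuming W approximates the inverse of the circulant autocorrelation matrix of the periodic input so that the spectral quantity enters through its reciprocal, is sketched below; the reciprocal step and the example sequence are my assumptions.

```python
import numpy as np

def precondition_weights(r, noise_var):
    """One reading of the steps in Section V: from one period r of the
    received training signal, compute the N-point DFT R_n, form the
    power spectrum |R_n|^2 plus N times the noise-variance estimate,
    and take the inverse DFT of its reciprocal. The reciprocal is an
    assumption: it makes w correspond to the inverse of the circulant
    autocorrelation matrix of r, which is the ideal W."""
    N = len(r)
    R = np.fft.fft(r)
    P = np.abs(R) ** 2 + N * noise_var
    return np.real(np.fft.ifft(1.0 / P))

# Sanity check under the stated assumption, with no noise: circular
# convolution of w with the circular autocorrelation of r should give
# a unit sample.
r = np.array([1.0, 0.5, -0.3, 0.2, 0.1])
w = precondition_weights(r, 0.0)
a = np.real(np.fft.ifft(np.abs(np.fft.fft(r)) ** 2))   # circular autocorrelation
check = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(a)))
```

With a nonzero noise-variance estimate, P is bounded away from zero, which regularizes the reciprocal at frequencies where the training spectrum is weak.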