# I. Introduction

# a) Signal processing
Signals commonly need to be processed in a variety of ways. For example, the output signal from a transducer may well be contaminated with unwanted electrical "noise". The electrodes attached to a patient's chest when an ECG is taken measure tiny electrical voltage changes due to the activity of the heart and other muscles, and the signal is often strongly affected by "mains pickup" due to electrical interference from the mains supply. Processing the signal using a filter circuit can remove, or at least reduce, the unwanted part of the signal. Increasingly nowadays, the filtering of signals to improve signal quality or to extract important information is done by DSP techniques rather than by analog electronics. DSP and analog signal processing are subfields of signal processing.

# b) Analog signal processing
Analog signal processing refers to the form of signal processing that is carried out on analog signals and by analog means. The concepts established in analog electronics are used to implement the mathematical algorithms that process the analog signals. Mathematical values are represented as a continuous physical quantity such as a voltage level, an electric current or an electric charge. Small errors or noise that interfere with such physical quantities result in corresponding errors in the representation of the signals related to these physical quantities.

# c) Digital Signal Processing
DSP, or Digital Signal Processing, as the term suggests, is the processing of signals by digital means. A signal in this context can mean a number of different things. Historically the origins of signal processing are in electrical engineering, and a signal here means an electrical signal carried by a wire or telephone line, or perhaps by a radio wave. More generally, however, a signal is a stream of information representing anything from stock prices to data from a remote-sensing satellite. The term "digital" comes from "digit", meaning a number (you count with your fingers, your digits), so "digital" literally means numerical; the French word for digital is numérique. A digital signal consists of a stream of numbers, usually (but not necessarily) in binary form, and the processing of a digital signal is done by performing numerical calculations.

Digital signal processing (DSP) is the study of signals in a digital representation and the processing methods of these signals. DSP includes subfields such as audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, image processing, signal processing for communications, and biomedical signal processing. Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from analog to digital form using an analog-to-digital converter. Often, the required output is another analog signal, which requires a digital-to-analog converter. The algorithms required for DSP are sometimes performed using specialized computers that make use of specialized microprocessors called Digital Signal Processors (also abbreviated DSP). These process signals in real time and are generally purpose-designed application-specific integrated circuits (ASICs).
When flexibility and rapid development are more important than unit costs at high volume, DSP algorithms may also be implemented using field-programmable gate arrays (FPGAs).

# d) Analog and digital signals
In many cases, the signal of interest is initially in the form of an analog electrical voltage or current, produced for example by a microphone or some other type of transducer. In some situations, such as the output from the readout system of a CD (compact disc) player, the data is already in digital form. An analog signal must be converted into digital form before DSP techniques can be applied. An analog electrical voltage signal, for example, can be digitised using an electronic circuit called an analog-to-digital converter or ADC. This generates a digital output as a stream of binary numbers whose values represent the electrical voltage input to the device at each sampling instant. DSPs can also be embedded within complex "system-on-chip" devices, often containing both analog and digital circuitry. The great advantage of these systems is that their function is easily specified by software. There is an enormous literature on DSP algorithms, many of which are of great importance in their own right (e.g. the Fast Fourier Transform). Performance of these systems is usually limited by the performance (i.e. speed, resolution and linearity) of the analog-to-digital converter.

# a) Field-programmable gate array
A field-programmable gate array is a semiconductor device containing programmable logic components called "logic blocks" and programmable interconnects. Logic blocks can be programmed to perform the function of basic logic gates such as AND and XOR, or more complex combinational functions such as decoders or simple mathematical functions. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. A hierarchy of programmable interconnects allows logic blocks to be interconnected as needed by the system designer, somewhat like a one-chip programmable breadboard. Logic blocks and interconnects can be programmed by the customer or designer, after the FPGA is manufactured, to implement any logical function, hence the name "field-programmable".

# b) Signal sampling
With the increasing use of computers, the usage of and need for digital signal processing has increased. In order to use an analog signal on a computer it must be digitized with an analog-to-digital converter (ADC). Sampling is usually carried out in two stages, discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes, and discretization is carried out by replacing the signal with a representative signal of the corresponding equivalence class. In the quantization stage, the representative signal values are approximated by values from a finite set. In order for a sampled analog signal to be exactly reconstructed, the Nyquist-Shannon sampling theorem must be satisfied: the sampling frequency must be greater than twice the bandwidth of the signal. In practice, the sampling frequency is often significantly more than twice the required bandwidth. The most common bandwidth scenarios are DC to BW ("baseband") and Fc ± BW, a frequency band centered on a carrier frequency ("direct demodulation"). A digital-to-analog converter (DAC) is used to convert the digital signal back to analog. The use of a digital computer is a key ingredient in digital control systems.
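As a concrete illustration of these two stages, the short sketch below (a minimal example, assuming Python with NumPy; the 1 kHz tone, 8 kHz sampling rate and 4-bit resolution are invented values for illustration, not figures from the text) samples a sine wave at discrete instants and then maps each sample onto a finite set of levels, which is essentially what an ADC does.

```python
import numpy as np

# Illustrative values only: a 1 kHz tone observed for 10 ms.
f_signal = 1_000.0          # signal frequency in Hz
fs = 8_000.0                # sampling rate in Hz; must exceed twice the bandwidth (Nyquist)
bits = 4                    # quantizer resolution in bits

assert fs > 2 * f_signal, "sampling rate violates the Nyquist criterion"

# Discretization: evaluate the continuous signal only at the sampling instants.
t = np.arange(0, 0.01, 1 / fs)
samples = np.sin(2 * np.pi * f_signal * t)          # amplitude in [-1, 1]

# Quantization: map each sample to the nearest of 2**bits levels.
levels = 2 ** bits
codes = np.round((samples + 1) / 2 * (levels - 1))  # integer codes 0 .. levels-1
reconstructed = codes / (levels - 1) * 2 - 1        # back to [-1, 1], as a DAC would

print("max quantization error:", np.max(np.abs(samples - reconstructed)))
```

Raising `bits` to 16 and `fs` to 44100 would reproduce the compact-disc figures quoted in the quantization discussion that follows.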
# Discretization
(Figure: a solution to a discretized partial differential equation, obtained with the finite element method.)

In mathematics, discretization concerns the process of transferring continuous models and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. In order for the result to be processed on a digital computer, another process named quantization is also essential. Common discretization schemes include Euler discretization and the zero-order hold. Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of the granularity of variables or categories, as when multiple discrete variables are aggregated or multiple discrete categories fused.

# c) Quantization
Quantization is the procedure of constraining something to a discrete set of values, such as an integer, rather than a continuous set of values, such as a real number. Quantization of signals is discussed below.

# Quantized signal
In digital signal processing, quantization is the process of approximating a continuous range of values (or a very large set of possible discrete values) by a relatively small set of discrete symbols or integer values. More specifically, a signal can be multi-dimensional, and quantization need not be applied to all dimensions. Discrete signals (a common mathematical model) need not be quantized, which can be a point of confusion. A common use of quantization is in the conversion of a discrete-time signal (a sampled continuous signal) into a digital signal by quantizing. Both of these steps (sampling and quantizing) are performed in analog-to-digital converters, with the quantization level specified in bits. A specific example is compact disc (CD) audio, which is sampled at 44,100 Hz and quantized with 16 bits (2 bytes), so that each sample can take one of 65,536 (i.e. 2^16) possible values.

# III. DSP Domains
In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, autocorrelation domain, and wavelet domains. They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time- or spatial-domain representation, whereas a discrete Fourier transform produces the frequency-domain information, that is, the frequency spectrum. Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals of time or space.

# a) Time domain
Time domain is a term used to describe the analysis of mathematical functions, or physical signals, with respect to time. In the time domain, the signal or function's value is known for all real numbers in the case of continuous time, or at various separate instants in the case of discrete time. An oscilloscope is a tool commonly used to visualize real-world signals in the time domain.

# b) Frequency domain
Signals are converted from the time or space domain to the frequency domain, usually through the Fourier transform. The Fourier transform converts the signal information into a magnitude and a phase component for each frequency. Often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared. The most common purpose for analysis of signals in the frequency domain is analysis of signal properties: the engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. There are some commonly used frequency-domain transformations. For example, the cepstrum converts a signal to the frequency domain through the Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of the frequency components. The frequency domain is a term used to describe the analysis of mathematical functions or signals with respect to frequency. Speaking non-technically, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies. A frequency-domain representation can also include information on the phase shift that must be applied to each sinusoid in order to recombine the frequency components and recover the original time signal.

# Autocorrelation
(Figure: a plot showing 100 random numbers with a "hidden" sine function, and an autocorrelation of the series on the bottom.)

Autocorrelation is a mathematical tool used frequently in signal processing for analysing functions or series of values, such as time-domain signals. Informally, it is a measure of how well a signal matches a time-shifted version of itself, as a function of the amount of time shift. More precisely, it is the cross-correlation of a signal with itself. Autocorrelation is useful for finding repeating patterns in a signal, such as determining the presence of a periodic signal which has been buried under noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies.
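The "hidden sine" figure described above is easy to reproduce numerically. The sketch below (a rough illustration, assuming Python with NumPy; the series length, sine frequency and noise level are arbitrary choices) computes both the power spectrum and the autocorrelation of a noisy sinusoid. The periodic component stands out in both views even though it is hard to see in the raw samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.arange(n)

# A "hidden" sine (period 20 samples) buried in random noise, as in the figure above.
x = np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(n)
x = x - x.mean()

# Frequency domain: power spectrum = squared magnitude of the Fourier transform.
spectrum = np.abs(np.fft.rfft(x)) ** 2
peak_bin = int(np.argmax(spectrum[1:])) + 1
print("strongest frequency:", peak_bin / n, "cycles per sample")   # should be near 0.05

# Autocorrelation: cross-correlation of the signal with itself at varying lags.
ac = np.correlate(x, x, mode="full")[n - 1:]
ac = ac / ac[0]                       # normalise so lag 0 equals 1
print("autocorrelation at lag 20:", round(ac[20], 3))   # positive at one full period
```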
# IV. Analog Signal Processing vs DSP
There are capabilities present in analog signal processing that are absent in digital signal processing, although the latter is often considered to be far more powerful and cheaper. If the original data to be processed is in analog form, it is expensive and complicated to use devices such as an ADC (analog-to-digital converter) and a DAC (digital-to-analog converter). These devices are required in order to convert signals from their analog form to digital form for processing by a digital signal processor (DSP) and back to analog form for the user to interpret. Instead of deploying such means, it can be preferable to use an analog signal processor. In addition to the added overall complexity of the system, the digital propagation delay that may accompany the processing may reach levels unacceptable for a high-speed system. Greater efficiency can be achieved by processing analog signals, even if the output data required has to be in digital form. This benefit is evident in the measurement of alternating-current power. In the case of a complex, reactive load, it may become necessary to over-sample the voltage and current signals in order to measure power. An analog multiplier, driven by the voltage and current in the load, produces an output that is proportional to the instantaneous power; this output can then be integrated and sampled (a small numerical counterpart of this scheme is sketched at the end of this section).

# a) Applications of analog signal processing
Analog processing is used in a number of engineering applications. Analog signal processors are used when the signal manipulation has to be carried out in a simple manner, as opposed to more complicated methods. Analog multipliers and dividers provide easy gain control and are useful in applications such as continuous power measurement. They are used in ratiometric functions and offer accuracy of the order of 1%.

# b) Applications of DSP
DSP technology is nowadays commonplace in such devices as mobile phones, multimedia computers, video recorders, CD players, hard disc drive controllers and modems, and will soon replace analog circuitry in TV sets and telephones. An important application of DSP is in signal compression and decompression. Signal compression is used in digital cellular phones to allow a greater number of calls to be handled simultaneously within each local "cell". DSP signal compression technology allows people not only to talk to one another but also to see one another on their computer screens, using small video cameras mounted on the computer monitors, with only a conventional telephone line linking them together. In audio CD systems, DSP technology is used to perform complex error detection and correction on the raw data as it is read from the CD.
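Returning to the power-measurement comparison at the start of this section, the sketch below (illustrative only, assuming Python with NumPy; the 50 Hz supply, 230 V / 5 A levels, 30-degree load angle and 10 kHz sampling rate are invented values) shows the digital counterpart of the analog multiplier-plus-integrator: over-sample voltage and current, multiply them sample by sample, and average to obtain real power.

```python
import numpy as np

f_mains = 50.0                    # supply frequency in Hz (assumed)
fs = 10_000.0                     # sampling rate, well above twice the bandwidth of interest
t = np.arange(0, 0.2, 1 / fs)     # exactly ten mains cycles

v_rms, i_rms, phi = 230.0, 5.0, np.pi / 6          # reactive load: current lags by 30 degrees
v = np.sqrt(2) * v_rms * np.sin(2 * np.pi * f_mains * t)
i = np.sqrt(2) * i_rms * np.sin(2 * np.pi * f_mains * t - phi)

p_inst = v * i                    # what the analog multiplier produces continuously
p_real = p_inst.mean()            # averaging over whole cycles plays the role of the integrator
print("measured real power:", round(p_real, 1), "W")
print("expected V*I*cos(phi):", round(v_rms * i_rms * np.cos(phi), 1), "W")
```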
Although some of the mathematical theory underlying DSP techniques, such as Fourier and Hilbert transforms, digital filter design and signal compression, can be fairly complex, the numerical operations required to actually implement these techniques are very simple, consisting mainly of operations that could be done on a cheap four-function calculator. The architecture of a DSP chip is designed to carry out such operations incredibly fast, processing hundreds of millions of samples every second, to provide real-time performance: that is, the ability to process a signal "live" as it is sampled and then output the processed signal, for example to a loudspeaker or video display. All of the practical examples of DSP applications mentioned earlier, such as hard disc drives and mobile phones, demand real-time operation. The major electronics manufacturers have invested heavily in DSP technology. Because they now find application in mass-market products, DSP chips account for a substantial proportion of the world market for electronic devices. Sales amount to billions of dollars annually, and seem likely to continue to increase rapidly.

# V. Applications
The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communication, RADAR, SONAR, seismology, and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room-matching equalisation of sound in hi-fi and sound-reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, computer-generated animations in movies, medical imaging such as CAT scans and MRI, image manipulation, high-fidelity loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

# a) Speech processing
Speech processing is the study of speech signals and the processing methods of these signals. The signals are usually processed in a digital representation, so speech processing can be seen as the intersection of digital signal processing and natural language processing. Speech processing can be divided into the following categories:
- Speech recognition, which deals with analysis of the linguistic content of a speech signal.
- Speaker recognition, where the aim is to recognize the identity of the speaker.
- Speech synthesis: the artificial synthesis of speech, which usually means computer-generated speech.

# b) Video compression
Video compression refers to reducing the quantity of data used to represent video images, and this is almost always coupled with the goal of retaining as much of the original's quality as possible. Compressed video can effectively reduce the bandwidth required to transmit digital video via terrestrial broadcast, via cable, or via satellite services. Most video compression is lossy, i.e. it operates on the premise that much of the data present before compression is not necessary for achieving good perceptual quality. For example, DVDs use a video coding standard called MPEG-2 that can compress roughly 2 hours of video data by 15 to 30 times while still producing a picture quality that is generally considered high for standard-definition video. Video compression, like data compression, is a trade-off between disk space, video quality and the cost of hardware required to decompress the video in a reasonable time. However, if the video is overcompressed in a lossy manner, visible (and sometimes distracting) artifacts can appear.

# c) Audio compression
Audio compression can mean two things:
- Audio data compression, in which the amount of data in a recorded waveform is reduced for transmission. This is used in CD and MP3 encoding, internet radio, and the like.
- Audio level compression, in which the dynamic range (difference between loud and quiet) of an audio waveform is reduced. This is used in guitar effects racks, recording studios, etc. (a short sketch of this follows below).
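Audio level compression, the second sense listed above, amounts to turning the gain down when the signal gets loud. The sketch below (a minimal static example, assuming Python with NumPy; the threshold and ratio are arbitrary choices, and a real compressor would add attack and release smoothing) applies such a gain curve to a short block of samples.

```python
import numpy as np

def compress(x: np.ndarray, threshold_db: float = -20.0, ratio: float = 4.0) -> np.ndarray:
    """Static dynamic-range compression: levels above the threshold are scaled down."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)          # instantaneous level in dB full scale
    over = np.maximum(level_db - threshold_db, 0.0)    # how far the level exceeds the threshold
    gain_db = -over * (1 - 1 / ratio)                  # a 4:1 ratio keeps 1 dB of every 4 dB over
    return x * 10 ** (gain_db / 20)

# A quiet passage followed by a loud burst; only the burst is reduced appreciably.
signal = np.concatenate([0.05 * np.ones(4), 0.8 * np.ones(4)])
print(np.round(compress(signal), 3))
```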
# d) Audio signal processing
Audio signal processing, sometimes referred to as audio processing, is the processing of a representation of auditory signals, or sound. The representation can be digital or analog. The focus in audio signal processing is most typically a mathematical analysis of which parts of the signal are audible. For example, a signal can be modified for different purposes such that the modification is controlled in the auditory domain. Which parts of the signal are heard and which are not is decided not merely by the physiology of the human hearing system, but very much by psychological properties. These properties are analysed within the field of psychoacoustics.

# e) Application areas
Processing methods and application areas include storage, level compression, data compression, transmission, and enhancement (e.g., equalization, filtering, noise cancellation, echo or reverb removal or addition).

# f) Audio Broadcasting
Audio broadcasting (be it for television or radio) is perhaps the biggest market segment, and user area, for audio processing products globally. Traditionally the most important audio processing (in audio broadcasting) takes place just before the transmitter. Studio audio processing is limited in the modern era due to digital audio systems (mixers, routers) being pervasive in the studio. In audio broadcasting, the audio processor must:
- prevent overmodulation, and minimize it when it occurs;
- maximize overall loudness;
- compensate for non-linear transmitters, which is more common with medium wave and shortwave broadcasting.

# VI. Development of DSP
The development of digital signal processing dates from the 1960's, with the use of mainframe digital computers for number-crunching applications such as the Fast Fourier Transform (FFT), which allows the frequency spectrum of a signal to be computed rapidly. These techniques were not widely used at that time, because suitable computing equipment was generally available only in universities and other scientific research institutions. The introduction of the microprocessor in the late 1970's and early 1980's made it possible for DSP techniques to be used in a much wider range of applications. However, general-purpose microprocessors such as the Intel x86 family are not ideally suited to the numerically intensive requirements of DSP, and during the 1980's the increasing importance of DSP led several major electronics manufacturers (such as Texas Instruments, Analog Devices and Motorola) to develop Digital Signal Processor chips: specialised microprocessors with architectures designed specifically for the types of operations required in digital signal processing. (Note that the acronym DSP can variously mean Digital Signal Processing, the term used for a wide range of techniques for processing signals digitally, or Digital Signal Processor, a specialised type of microprocessor chip.) Like a general-purpose microprocessor, a DSP is a programmable device with its own native instruction code. DSP chips are capable of carrying out millions of floating-point operations per second, and like their better-known general-purpose cousins, faster and more powerful versions are continually being introduced.

# a) The DSP Advances UPS Design
When electrical utility power fails or drops to an unacceptable level, uninterruptible power systems (UPS) are key in saving and protecting valuable computer data. Uninterruptible power systems equipment provides power conditioning, power regulation and, in case of a power outage, the crucial backup power needed for an orderly shutdown of computer processes and files. Crowded data centers and racks filled from top to bottom with storage devices, monitors, servers, communications devices and other equipment are driving the need for UPS technology with increased power efficiency, within a compact and sleek form factor. For end users and facilities managers, "thin is in", and so UPS designers continually strive for smaller products, fewer parts, lower cost and less weight. Digital signal processor (DSP) controllers are an enabling technology for meeting the challenge of such design requirements. Originally designed for mathematically and computationally intensive motor drive control processes, DSPs now have expanded capabilities such as faster machine-cycle speeds and enhanced programming instruction sets. Digital signal processors now also offer peripheral functionality such as onboard counters and timers, analog-to-digital converters, pulse-width-modulation outputs, flash memory, and controller-area network communications. Digital signal processors are now propelling many of the advances in UPS design. As illustrated in the block diagram, the DSP controller manages many UPS functions, including:
- sensing and controlling input and output voltage and current levels,
- setting and controlling the rectifier (a boost converter) for input power-factor correction and for regulating the dc voltage into the inverter,
- setting and controlling the inverter (a buck converter) for output voltage and frequency regulation,
- controlling the battery charger,
- interfacing with power management software through communication port cards, and
- switching to electronic bypass.

The similarities between motor drive controls and UPS controls, combined with the enhanced functionality of DSPs, contribute to making the UPS a "natural" application for DSPs. Lower-cost, high-performance DSP controllers provide an improved and cost-effective solution for UPS design. Digital signal processors allow UPS designers to replace bulky transformers, relays and mechanical bypass switches with smaller, more intelligent functional equivalents. Digital signal processor implementations also facilitate other design benefits, including increased power efficiency and increased power density (a smaller product footprint with less weight), a necessity in space-constrained data centers. In UPS applications, the DSP has integrated functions selected for sophisticated embedded controls. These functions, previously available only through more expensive microcontrollers and off-board peripheral circuitry, include protection circuitry, clocks and serial communications, in addition to the peripheral DSP functionality previously mentioned. Except for the signal conditioning and actuators that provide the interface between the DSP and the power circuitry, all of the control implementations become digital. Multiple control algorithms can execute almost simultaneously and at high machine-cycle speeds for unprecedented dynamic performance. The DSP implementation also has fewer parts, increased reliability and greater immunity to noise than predecessor microcontroller implementations. Since the DSP feedback and control loops are implemented digitally, compensation for component tolerances and temperature variations of feedback elements is no longer necessary. Digital signal processor technology provides a cost-effective alternative for controlling multiple power converters, either individually or in combination, to meet the demands of advanced power topologies.
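At its core, the inverter-regulation task in the list above is a sampled feedback loop that the DSP executes once per switching period. The fragment below is a deliberately simplified, hypothetical sketch (Python; the first-order plant standing in for the inverter and its output filter, the gains, and the 20 kHz control rate are all invented for illustration) of a discrete PI controller steering an output voltage toward its set-point. A real UPS design would involve properly tuned gains, inner current loops and PWM hardware.

```python
# Hypothetical discrete PI voltage loop; the plant model and gains are illustrative only.
setpoint = 230.0          # desired output voltage (V)
kp, ki = 0.08, 40.0       # proportional and integral gains (arbitrary)
dt = 1 / 20_000           # control period: one 20 kHz switching cycle
v_out, integral = 0.0, 0.0

for step in range(2000):                       # about 100 ms of simulated control steps
    error = setpoint - v_out                   # would come from the ADC in a real design
    integral += error * dt
    command = kp * error + ki * integral       # would set the PWM duty cycle
    # crude first-order stand-in for the inverter plus output filter dynamics
    v_out += (command * 10.0 - v_out) * dt / 0.002

print("output voltage after 100 ms:", round(v_out, 1), "V")
```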
# b) Advances in Hearing aid Technology
i. How do hearing aids work?
Hearing aids use electronic circuits to help compensate for your hearing loss. They selectively make sounds louder to improve your ability to hear and understand speech. They work by increasing the pitch range of sounds. Pitch is the quality of the sound that enables you to classify it as high or low.

ii. What is digital signal processing?
Most hearing aids dispensed at Cleveland Clinic are digital hearing aids. Simply put, a digital hearing aid converts the sound coming into the microphone into a signal that can be processed or changed by a digital computer chip. The signal is then converted back into sound and is delivered to the ear. Like a CD player, digital hearing aids deliver a crisp, clean sound. These hearing aids automatically adjust the amount of gain given to a sound based on how loud it is. In addition to producing a crisp, clean sound, this can also make the digital hearing aid more comfortable to listen to. The main advantage of digital hearing aids is that they allow for flexibility in processing sound that is not possible with analogue technology. Some of these advantages are described below:
- Adaptive directional microphones: Directional microphones have been available in hearing aids; however, only digital hearing aids allow for adaptive directional microphone capabilities. Directional microphones can help to reduce the sounds coming from behind a person by turning on a second microphone in the hearing aid. Adaptive directional microphones do the same thing, but can move around to find the loudest noise source or, in some cases, reduce multiple noise sources at the same time.
- Digital feedback suppression (DFS): Feedback, or whistling, is monitored by the DFS system while you wear your hearing aid and is selectively reduced or eliminated without reducing the gain of the hearing aid. This system is especially helpful for hearing aid users who experience feedback while chewing or talking.
- Digital noise reduction: Digital signal processing allows the hearing aids to monitor the environment for steady noise sources and reduce the level of these noises. This system can help to reduce the annoyance caused by noise sources and possibly improve speech understanding.

iii. What can I expect from my hearing aids?
Unlike eyeglasses, hearing aids cannot provide complete correction for the impairment. No hearing aid will restore your hearing to normal or provide a perfect substitute for normal hearing. The benefits derived from wearing hearing aids, even the most technologically advanced, will vary from person to person. Digital signal processing makes the most of your hearing capacity, improves sound quality, and can be "fine-tuned" to help meet your individual listening needs.

# DSPs in 2007
Today's signal processors yield much greater performance. This is due in part to both technological and architectural advancements, such as smaller design rules, fast-access two-level caches, (E)DMA circuitry and wider bus systems. Of course, not all DSPs provide the same speed, and many kinds of signal processors exist, each of them better suited to a specific task and ranging in price from about US$1.50 to US$300. A Texas Instruments C6000 series DSP clocks at 1 GHz and implements separate instruction and data caches as well as an 8 MiB second-level cache, and its I/O speed is rapid thanks to its 64 EDMA channels. The top models are capable of as much as 8000 MIPS (million instructions per second), use VLIW encoding, perform eight operations per clock cycle and are compatible with a broad range of external peripherals and various buses (PCI, serial, etc.). Another big signal processor manufacturer today is Analog Devices. The company provides a broad range of DSPs, but its main portfolio is multimedia processors, such as codecs, filters and digital-to-analog converters. Its SHARC-based processors range in performance from 66 MHz/198 MFLOPS (million floating-point operations per second) to 400 MHz/2400 MFLOPS. Some models even support multiple multipliers and ALUs, SIMD instructions and audio-processing-specific components and peripherals. Another product of the company is the Blackfin family of embedded digital signal processors, with models such as the ADSP-BF531 to ADSP-BF536. These processors combine the features of a DSP with those of a general-purpose processor; as a result, they can run simple operating systems like µClinux, velOSity and Nucleus RTOS while operating relatively efficiently on real-time data. Most DSPs use fixed-point arithmetic, because in real-world signal processing the additional range provided by floating point is not needed, and there is a large speed and cost benefit due to reduced hardware complexity.
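To make the fixed-point idea concrete, the sketch below (plain Python, illustrative only; Q15 is a common 16-bit fractional format, though the exact rounding behaviour of any particular DSP may differ) multiplies two fractional values as 16-bit integers, the way a fixed-point multiply-accumulate unit would, and compares the result with ordinary floating point. The floating-point trade-off is picked up again below.

```python
# Q15 fixed point: a value x in [-1, 1) is stored as the 16-bit integer round(x * 2**15).
SCALE = 1 << 15

def to_q15(x: float) -> int:
    """Convert a fractional value to a saturated Q15 integer."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a: int, b: int) -> int:
    """16x16 -> 32-bit product, shifted back down to Q15 (this is where precision is lost)."""
    return (a * b) >> 15

a, b = 0.7071, -0.25
prod = q15_mul(to_q15(a), to_q15(b))
print("fixed point:", prod / SCALE)    # close to -0.1768, with limited precision
print("float      :", a * b)           # -0.176775
```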
Floating-point DSPs may be invaluable in applications where a wide dynamic range is required. Product developers might also use floating-point DSPs to reduce the cost and complexity of software development in exchange for more expensive hardware, since it is generally easier to implement algorithms in floating point. General-purpose CPUs have borrowed concepts from digital signal processors, exemplified by the many new instructions present in the MMX and SSE extensions to the Intel IA-32 architecture instruction set (ISA). Generally, DSPs are dedicated integrated circuits; however, DSP functionality can also be realized using field-programmable gate array chips. Embedded general-purpose RISC processors are also becoming increasingly DSP-like in functionality. For example, the ARM Cortex-A8 has a 128-bit-wide SIMD unit that can achieve impressive 16- and 8-bit performance on industry-standard benchmarks.

# VII. Conclusion
Realtime signal processing is taking the digital revolution to the next step, making equipment that is more personal, more powerful, and more interconnected than most people ever imagined possible. Over the years, different technologies have powered the most innovative creations, from the mainframe and minicomputer eras to the PC and today's Internet era. Consumers are driving realtime functionality, demanding equipment that is extremely fast, portable, and flexible. To meet those needs, designers are facing more pressures than ever, but they also have more options than ever to address them. Careful evaluation of each option clearly shows several viable alternatives for embedded applications. For implementing today's realtime signal processing applications, however, DSP is very often the best choice. No digital technology has more strengths than DSP nor better meets the stringent criteria of today's developer. Certainly, other digital options can address any one of these relevant problems well, but only with clear trade-offs. DSP gives designers the best combination of power, performance, price, and flexibility and allows them to deliver their realtime applications quickly to the market.