






2.0       Introduction


2.1       DSP Concept


     2.1.1 Definition

     2.1.2 Features


2.2       Neural Network Concept


     2.2.1 Structure of neuron

     2.2.2 Neural Net


2.3       Summary



2.0       Introduction


Several DSP topics are used in this project. One is spectral analysis, which rests on the idea of transformations: mathematical tools that allow moving between two descriptions of a signal. This is how we determine which frequencies are present in a signal.
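As a brief illustration of the idea (a direct textbook DFT, not necessarily the algorithm used in the project), the following sketch computes the magnitude spectrum of a sampled signal; a peak at bin k indicates a frequency component near k * fs / N Hz:

```cpp
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Direct discrete Fourier transform magnitudes:
// |X[k]| = |sum over n of x[n] * e^{-j*2*pi*k*n/N}|.
// A large value at bin k means the signal contains a component
// with k cycles across the N samples.
std::vector<double> dftMagnitude(const std::vector<double>& x) {
    const int N = static_cast<int>(x.size());
    std::vector<double> mag(N);
    for (int k = 0; k < N; ++k) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; ++n) {
            double angle = -2.0 * PI * k * n / N;
            re += x[n] * std::cos(angle);
            im += x[n] * std::sin(angle);
        }
        mag[k] = std::sqrt(re * re + im * im);
    }
    return mag;
}
```

For example, a pure cosine with three cycles across the sample window produces a large magnitude at bin 3 and nearly zero elsewhere. In practice an FFT would be used for speed, but the result is the same.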


Other topics include filtering (modifying the frequency content of a signal), synthesis (generating tones or speech) and correlation (identifying periodicity in a signal); however, these do not apply to the project. The DSP algorithms essential to the project are discussed in the next phase. [Dale Grover and John R. Deller, 1999]


DSP requires an electrical engineering background; it uses the language of electrical engineering and many concepts from electronics and signals. The author therefore sometimes met difficulties in understanding the terms. There are no fixed formulas for coefficients, so an experimental approach is required. This project also required a mathematical basis to deal with the DSP application. Although one does not need to be an expert in every aspect of DSP, each area makes its contribution, and the author found that the horizons widened on getting deeper into DSP.


2.1       DSP Concept

Digital Signal Processing (DSP) is used in a wide variety of applications, and it is hard to find a good definition that is general, so the author starts with the dictionary definitions of the words. Digital means operating by the use of discrete signals to represent data in the form of numbers. A signal is a variable parameter by which information is conveyed through an electronic circuit. Finally, processing is a set of operations performed on data according to programmed instructions. In short, then, DSP is changing or analyzing information that is measured as discrete sequences of numbers.

DSP is used in a very wide variety of applications but most share some common features:


a) They require a lot of mathematics (multiplying and adding signals).

b) They deal with signals that come from the real world.

c) They require a response within a predetermined time period.




2.1.1 Definition


According to whatis.techtarget.com, digital signal processing (DSP) refers to various techniques for improving the accuracy and reliability of digital communications. The theory behind DSP is quite complex. Basically, DSP works by clarifying, or standardizing, the levels or states of a digital signal. A DSP circuit is able to differentiate between human-made signals, which are orderly, and noise, which is inherently chaotic.


All communications circuits contain some noise. This is true whether the signals are analog or digital, and regardless of the type of information conveyed. Traditional methods of optimizing S/N ratio include increasing the transmitted signal power and increasing the receiver sensitivity. Digital signal processing dramatically improves the sensitivity of a receiving unit. The effect is most noticeable when noise competes with a desired signal. A good DSP circuit can sometimes seem like an electronic miracle worker. But there are limits to what it can do. If the noise is so strong that all traces of the signal are obliterated, a DSP circuit cannot find any order in the chaos, and no signal will be received.


If a received signal is digital, for example computer data, then the ADC and DAC are not necessary. The DSP acts directly on the incoming signal, eliminating irregularities caused by noise, and thereby minimizing the number of errors per unit time.

(Quoted from http://whatis.techtarget.com/definition/0,,sid9_gci213898,00.html)



2.1.2 Features


The following features of DSP are common to many digital systems:


a) Digital systems can be reprogrammed for other applications (at least where programmable DSP chips are used).

b) Digital systems can be ported to different hardware (for example a different DSP chip or board-level product).

c) Digital system responses do not drift with temperature.

d) Digital systems can be easily duplicated.

e) Digital systems do not depend on strict component tolerances.


In addition, some things can be done more easily digitally than with analog systems.



2.2       Neural Network Concept


Neural networks are simply a class of mathematical algorithms, since a network can be regarded essentially as a graphical notation for a large class of algorithms. Such algorithms produce solutions to a number of specific problems. Artificial neural networks have undoubtedly been biologically inspired, but the correspondence between them and real neural systems is still rather weak. The architectures and capabilities of the two are very different, and no model has been successful in duplicating the performance of the human brain. The brain therefore has been, and still is, only a metaphor for the wide variety of neural network configurations that have been developed. [Durbin, 1989]


The area of neural networks probably belongs on the borderline between artificial intelligence and approximation algorithms. Think of them as algorithms for "smart approximation". NNs are used as universal approximators (mapping input to output), as tools capable of learning from their environment, as tools for finding non-evident dependencies between data, and so on.

Some neural networking algorithms are modeled after the way the brain processes information. The brain is a multilayer structure (think 6-7 layers of neurons) of interconnected neurons that works as a parallel computer, capable of learning from the "feedback" it receives from the world and changing its design by growing new links between neurons or altering the activities of existing ones. [Jarek M., 1992]



2.2.1 Structure of neuron

Figure 2.1 Structure of neuron


Our "artificial" neuron will have inputs (all N of them) and one output. A set of nodes connects it to inputs, outputs, or other neurons; these connections are also called synapses.

A linear combiner is a function that takes all the inputs and produces a single value. A simple way of doing this is to add together each dInput (the "d" prefix means "double"; we use it so that the name indicates a floating-point number) multiplied by the corresponding synaptic weight dWeight:

double dSum = 0.0;
for (int i = 0; i < nNumOfInputs; i++)
    dSum += dInput[i] * dWeight[i];

We do not know in advance how large the input will be. The human ear can function near a working jet engine, yet if it were only ten times more sensitive, we would be able to hear a single molecule hitting the membrane in our ears! This means the response to the input should not be linear: going from 0.01 to 0.02 should be comparable with going from 100 to 200.

By applying an activation function, we can take ANY input from minus infinity to plus infinity and squeeze it into the -1 to 1 or the 0 to 1 interval. This gives us the non-linear response we need.

Finally, we have a threshold. What should the internal activity of a neuron be when there is no input? Should some threshold input be required before there is any activity? Or should some level of activity be present when the input is zero (in which case it is called a bias rather than a threshold)?

For simplicity, we will replace the threshold with an EXTRA input, whose weight can change during the learning process and whose input is fixed, always equal to (-1). The effect, in terms of the mathematical equations, is exactly the same, but the programmer has a little more breathing room. [Jarek M., 1992]
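Putting the pieces together, a single neuron can be sketched as follows. The sigmoid is just one common choice of activation function (squeezing any input into the 0 to 1 interval), and the function names here are illustrative, not taken from the project code:

```cpp
#include <cmath>
#include <vector>

// Activation function: squeezes any input from (-inf, +inf)
// into the (0, 1) interval.
double sigmoid(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}

// One artificial neuron: a linear combiner followed by an activation
// function. The threshold is implemented as an EXTRA input fixed at -1,
// so dWeight has one more entry than dInput.
double neuronOutput(const std::vector<double>& dInput,
                    const std::vector<double>& dWeight) {
    double dSum = 0.0;
    for (size_t i = 0; i < dInput.size(); ++i)
        dSum += dInput[i] * dWeight[i];
    dSum += -1.0 * dWeight[dInput.size()];  // threshold as an extra input
    return sigmoid(dSum);
}
```

With all-zero inputs and a zero threshold weight, the weighted sum is 0 and the output is sigmoid(0) = 0.5, which shows the bias-free resting activity of the unit.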

2.2.2 Neural Net


Figure 2.2 Neural Net


A single neuron by itself is not a very useful pattern recognition tool. The real power of neural networks comes when we combine neurons into multilayer structures, called neural networks.

There are three layers in our network. There are N neurons in the first layer, where N equals the number of inputs, and M neurons in the output layer, where M equals the number of outputs. For example, to build a network capable of predicting a stock price, we might want yesterday's high, low, close and volume as inputs and the close as the output.

We may have any number of neurons in the inner ("hidden") layers. The quality of a prediction will drop if the net does not have enough "brains". But if we give it too many, it will have a tendency to "remember" the right answers rather than predict them. The neural net will then work very well on familiar data, but fail on data it has never been presented with before. Finding the compromise is more of an art than a science.

The NN receives inputs, which can be a pattern of some kind. In the case of image recognition software, these would be pixels from a photosensitive matrix of some kind; in the case of a stock price, they would be the "high" (input 1), the "low" (input 2) and so on. [Jarek M., 1992]

The basic unit of the brain is the neuron. Similarly, a neural net has a neuron, or processing element, at its core. The first of these was developed by McCulloch & Pitts. The unit has a number of inputs and a number of outputs, and each input and output connection has a weight. The unit sums the products of the input values and their weights, then uses a function to determine whether or not to output a value and, if so, what that value should be. A neural net consists of many of these neurons arranged in different ways, depending on the architecture of the net.

The weights on the interconnections are of crucial importance: they are what gives the net memory. There are three different ways the weights can be obtained. In fixed-weight networks, no learning is required; the weights are assigned and never changed. Supervised learning uses many pairs of input and output training patterns; the computed error, which is the difference between the desired response to an input pattern and the actual response, is used to determine the appropriate changes to the weights. Unsupervised learning involves training without any teacher; there are no output patterns to compare against, and the network learns to adapt based on experience collected through the previous training patterns.

The type of network used in this project is a multilayer network with three layers: an input layer with six inputs for the six harmonics used to identify the instrument type, a hidden layer with four elements, and an output layer with two outputs used to determine whether or not the harmonics belong to a guitar.
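The forward pass of such a 6-4-2 network can be sketched as two fully connected layers, each applying a sigmoid to a weighted sum. The function names are illustrative and the weight values in any real run would come from training, so the all-zero placeholders here carry no meaning beyond showing the shapes:

```cpp
#include <cmath>
#include <vector>

// One fully connected layer: each output neuron takes a weighted sum of
// all inputs plus a bias, then applies the sigmoid activation.
std::vector<double> layerForward(const std::vector<double>& in,
                                 const std::vector<std::vector<double>>& w,
                                 const std::vector<double>& bias) {
    std::vector<double> out(w.size());
    for (size_t j = 0; j < w.size(); ++j) {
        double sum = bias[j];
        for (size_t i = 0; i < in.size(); ++i)
            sum += w[j][i] * in[i];
        out[j] = 1.0 / (1.0 + std::exp(-sum));  // sigmoid
    }
    return out;
}

// Six harmonic amplitudes in, a hidden layer of four, two outputs
// ("guitar" vs "not guitar"). wHidden is 4x6 and wOut is 2x4.
std::vector<double> classify(const std::vector<double>& harmonics,
                             const std::vector<std::vector<double>>& wHidden,
                             const std::vector<double>& bHidden,
                             const std::vector<std::vector<double>>& wOut,
                             const std::vector<double>& bOut) {
    std::vector<double> hidden = layerForward(harmonics, wHidden, bHidden);
    return layerForward(hidden, wOut, bOut);
}
```

With untrained (all-zero) weights every sum is zero, so both outputs sit at sigmoid(0) = 0.5; training would push one output toward 1 for guitar harmonics and the other toward 1 for everything else.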

2.3       Summary


This project focuses on an implementation of DSP. After studying the material, the author discovered that DSP has several drawbacks. One obvious disadvantage is the increased system complexity involved in the digital processing of analog signals. Another disadvantage is the limited range of frequencies available for processing: an analog continuous-time signal must be sampled at a frequency that is at least twice the highest frequency component present in the signal. A third disadvantage is that digital systems are built from active devices that consume electrical power, whereas many analog algorithms can be implemented with passive circuits employing inductors, capacitors, and resistors, which need no power. Moreover, active devices are less reliable than passive components.


Through the literature review the author has gained the basic ideas and background knowledge needed to do the project. In the next chapter, the author is going to study the DSP algorithms and the neural network to be implemented in the project.


