Signal Processing for Machine Learning
Signals are ubiquitous across many research and development domains. Engineers and scientists need to process, analyze, and extract information from time-domain data as part of their day-to-day responsibilities. In a range of predictive analytics applications, signals are the raw data that machine learning systems must leverage to create understanding and inform decision-making.
In this video, we present an example of a classification system able to identify the physical activity that a human subject is engaged in, solely based on the accelerometer signals generated by his or her smartphone. We use established signal processing methods to extract a fairly small number of highly descriptive features, and we finally train a small neural network to map the feature vectors into the six different activity classes of a prerecorded dataset. We show how the joint use of MATLAB® and library functions helps deliver high-performance results with few design iterations and concise, clear code.
The topics discussed include:
- Signal manipulation and visualization
- Design and application of digital filters
- Frequency-domain analysis
- Automatic peak detection
- Feature extraction from signals
- Training and testing of simple neural networks
Hello, everybody, and welcome to this webinar on signal processing techniques for machine learning using MATLAB. My name is Gabriele Bunkheila, and I am a senior application engineer at MathWorks. A big part of my job is about helping MATLAB users in the area of signal processing, which is where my background is.
So when people talk about signals, they usually refer to some specific type of data that represent values varying over time. In this webinar, I'll discuss some standard techniques available in MATLAB to take quantitative measurements of signals and to use them within wider data analysis workflows, including, for example, machine learning algorithms, like clustering or classification.
By the end of this webinar, I hope you will have gained some familiarity with a few standard techniques: starting from simple things like basic signal manipulation and visualization, including plotting, inspecting, and selecting portions of signals; moving on to simple statistical estimation; then on to more advanced or more specific signal processing topics, like using digital filters to separate individual components of a signal, or computing frequency-domain transformations to get further insight into signal variations over time; and finally, automating signal measurements to extract group-sensitive features from time-domain signals.
The idea here is to distill information from raw waveforms and feed further algorithms that produce understanding of the data, typically in the area of machine learning. The techniques in this list are important because they are common to many data analysis and algorithm design workflows. And despite them being relevant to the work of many engineers, many often feel they are challenging. In actual fact, they are within quite easy reach if one uses the right tools.
Now, while I do have some more slides to show you, for most of this webinar I'm going to discuss one single practical example hands-on in MATLAB. Let me switch to MATLAB for a minute, then, and describe quickly what this example is about. In this continuously updating plot, we're looking at three accelerometer output signals captured using a phone—one like the smartphone you may happen to have in your pocket right now.
The signals that we are seeing correspond to different physical activities carried out by a person or a subject that's wearing the smartphone. In this case, we happen to know the ground truth. But we are also trying to automatically understand what activity this is using computational methods. So that's purely based on measurements of the signals.
As you can see, most of the time we are successfully guessing what the activity actually was. Now, just a quick note to say that the data that I'm using in this example is recorded, so we don't have to wait for the new data to be ready. And we can proceed to the next buffer as soon as we're done with the previous one.
Because I'll be carrying out all the required computations very efficiently, on my laptop this is making it run just under 100 times faster than real time. That is pretty fast when you think about it. Something else that I want to mention is that you could do what I've just shown perfectly well using live data.
MATLAB has long been able to connect to a variety of professional external hardware for the purpose of acquiring real-world signals. And these days it has become increasingly capable of also connecting to mobile and low-cost devices. For example, File Exchange on MATLAB Central™ offers free downloads to stream sensor signals into MATLAB from both iPhones and Android™ based smartphones. So please take a look in there if this is something you're interested in.
Now, looking again at my example here, while this happens to be accelerometer data, the techniques that I'll discuss are broadly relevant to most types of sampled signals or time series, and it is easy to think of applications that share a similar classification twist.
The examples that I collected in this short list here already span a number of different industries—for example, electronics, aerospace, automotive, finance, and defense. These are applications that I personally came across during my career at MathWorks. But again, a comprehensive list would be much longer.
Now, once again, the reason why I put this content together is that even if signal analysis is common to many applications and industries, many people still find it hard to do well. One reason is that if they didn't study signal processing at university, then the individual techniques relevant to signal analysis often sound daunting. And studying them often implies having to learn a lot of domain-specific jargon.
Other times, it's not clear ahead of time exactly what type of analysis will give you the answer that you're looking for, making the problem open-ended. Finally, issues like poor computational efficiency, the lack of extensive algorithmic libraries, or rigid frameworks make some common tools that many people use out there inadequate for completing tasks of even medium complexity.
I'm hopeful that by the end of the session you'll get a taste of why MATLAB can address these challenges and be a perfect fit for this type of work. Now, before I go back to MATLAB, let me review once again what our example is about. We're using a three-component acceleration signal coming from a smartphone accelerometer.
And solely based on an automated analysis of those three scalar signals, the objective is to understand what activity the person that's wearing the phone is actually doing as a choice between six different options or classes—walking, walking upstairs, walking downstairs, sitting, standing, and laying.
To do that, we use a classification algorithm. This is a class of algorithms that can tell which class a new data sample belongs to, based on previous knowledge of a reasonable set of similar data samples. The way it works is that at first the algorithm is exposed to a large set of known cases and trained, or optimized, so that it can recognize those known cases as accurately as possible.
Then it can be run on new, unknown data samples—for example, in this case, a new signal buffer. For each new buffer, it can formulate a guess on the right class based on its previous experience. Now, it turns out that if the data used—and here I am talking about both the training and test steps—were the actual raw waveforms, the job of the classification algorithm would be very hard or often impossible.
In practice, one very important step that has to happen ahead of the actual classification is extracting a finite set of characterizing measurements from the waveforms. In this case, for example, these measurements should capture quantitative descriptions able to differentiate the signals produced by a given activity from those produced by a different one.
In the language of machine learning, this step is called feature extraction, and the features are the set of values measured from the signals. The main aim of this webinar will be to identify good characterizing features based mainly on signal processing techniques, and also to automate the measurements using the MATLAB language.
Now, a final note—in order to select the right set of features, it is common to use a known data set—for example, in this case, one coming from a controlled experiment where the activity is known for every buffer of signal samples available, as we've seen. The knowledge of the data is key to the initial exploratory phase of feature selection.
For this example, I'm using a good data set made available by two research groups respectively from Catalonia in Spain and Genoa in Italy. In case you're interested, you can get hold of the data set yourself at this address below.
So I hope the general problem is clear enough by now. Let's see in a bit more detail how we can use MATLAB to develop a system that addresses a similar challenge. To explore this example, I'm going to use a MATLAB script. I'm going to assume that you're familiar with the scripts and functions in MATLAB. If you're not, though, you don't need to worry too much. These are easy concepts to grasp.
And I'm hopeful you'll be able to get the core ideas of this presentation anyway. All that you see here colored in green are comments explaining what the code is doing. You can see that to turn a line into a comment, you use a percent sign at the beginning of it. Two consecutive percent signs, as I have, for example, here, create a cell of code that can be executed in isolation and is highlighted in the editor when the cursor is placed inside it.
In this script, I have many cells, which I'm going to execute and discuss consecutively one at a time. The first cell of the script was the one that launched my completed application. So I'm not going to execute it again. The following cell loads a portion of the data and plots it.
Here I have a function, which I previously wrote, that reads some data from our data set and returns a specific set of variables. As a result, we now have a vector x containing the samples of the vertical acceleration of subject number one over a period of time. It is worth noting that the data set itself comes with recordings from 30 different subjects.
We know we have 50 samples per second of our acceleration signal, because a sampling frequency of 50 Hertz was used, as indicated here by this variable fs. We also have the time vector t corresponding to the acceleration vector. The two, t and x, have exactly the same length, which allows us to plot one against the other.
If I look at this plot and how it was achieved, firstly, this is a very easy plot to realize in MATLAB. And secondly, it takes only a single short line of code to produce it—plot, x variable, y variable. By the way, if you're not confident with using the MATLAB language right away, this could have also been achieved through pointing and clicking—for example, by first selecting t, holding down Ctrl, then selecting x, right-clicking and selecting Plot, or going into the Plots tab up here and clicking on Plot.
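As a minimal sketch of that single line (plus some optional labeling of my own), assuming the helper function has already loaded t, x, and fs into the workspace:

```matlab
% Plot vertical acceleration against time.
% Assumes t (time in s) and x (acceleration in m/s^2) are in the workspace.
plot(t, x)
xlabel('Time (s)')
ylabel('Vertical acceleration (m/s^2)')
grid on
```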
The plot shows us that the acceleration for this subject was recorded for almost eight minutes, which would be 480 seconds. It is also worth noting that, in a way, this is a simple case: the time samples are regularly spaced over time, and all of them are available. In a number of real-world applications, some samples may be missing. So I should at least mention that there are other techniques in MATLAB to regularize and preprocess those types of signals.
Now going back to the plot—if you're familiar with plotting in MATLAB, you'll know that plots can be customized extensively both interactively and programmatically. I won't go through that process myself here. And in the following code section, I'll just use a function that I've previously written to produce a slightly more insightful plot.
Besides now having proper axis labels, a title, and a legend, this plot is actually also using additional information on the data that was available in my workspace—specifically the variable actid, which is shorthand for activity ID. That's telling us what activity the subject was engaged in for each single data sample, as an integer number between one and six. We can interpret the meaning of those integers by looking at the remaining variable, actlabels.
If we look back at the plot that we just produced, this looks very similar to our final objective, which is about guessing the activity of every new portion of signal. But remember that in this case this is known data, and here we're not yet guessing but only visualizing some knowledge that is already available.
Now the real question is what about if we didn't know what each activity was? How would we possibly work it out based on numerical analysis of the signal? I think this plot is already quite useful as it is definitely showing that this acceleration signal does somewhat look different when it comes from different activities.
Just by inspecting this plot visually, I think we can already identify some patterns. For example, all activities where the upper part of the body is held vertically seem to have an offset, or average, value of around 10 meters per second squared. That's pretty close to g, whose theoretical value lies around 9.81 meters per second squared.
So I think I can confidently say that that's due to the vertical component of the gravitational field. A slight exception to that seems to happen when the subject is sitting. But then, not everybody always sits upright—it probably depends on how comfortably you're sitting. You may be leaning forward or backward, which explains the lower average values.
Another pattern is that the signal coming from walking activities—either plain walking, walking upstairs, or walking downstairs—oscillates much more around its average value than for more static activities in a vertical position, like standing or sitting. Based on those considerations, if we wanted to work things out for ourselves, in some cases that may be straightforward.
For example, we could easily distinguish between laying down and walking by computing the average of the samples in our buffer and comparing it to a threshold. If it's lower than, say, five, it's laying. Otherwise, it's one of the others.
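A minimal sketch of that rule might look like the following, where xBuffer is assumed to hold one buffer of vertical acceleration samples and the threshold of 5 m/s² is just the illustrative value mentioned above:

```matlab
% Hedged sketch: separate laying from upright activities by mean value.
% xBuffer is assumed to be one buffer of vertical acceleration (m/s^2).
if mean(xBuffer) < 5      % well below g (about 9.81 m/s^2)
    guess = 'Laying';
else
    guess = 'Some other activity';
end
```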
We can also quantify this remark a bit more rigorously by looking at the statistics of the signal. In this case, a simple histogram will prove the point—the histogram shows the number of value occurrences within a finite set of intervals. In passing, let me make a point about that: making a histogram plot like this from scratch would require quite a bit of effort. For example, one would have to go through the data, find for each data sample which interval the value falls in, and increment a counter for that interval.
Instead, underneath this function of mine, the built-in function histogram is doing all the hard work. The rest of the code is simply arranging the two plots on top of each other and customizing the appearance. In case you didn't know, the function histogram was introduced in release R2014b of MATLAB, and it provides a new, more efficient way of plotting histograms.
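A sketch of how such a comparison might be arranged, assuming xWalk and xLay are sample vectors already extracted for the two activities:

```matlab
% Compare value distributions for two activities with histograms.
% xWalk and xLay are assumed to be vectors of acceleration samples.
subplot(2,1,1)
histogram(xWalk), title('Walking'), xlabel('Acceleration (m/s^2)')
subplot(2,1,2)
histogram(xLay), title('Laying'), xlabel('Acceleration (m/s^2)')
```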
And by the way, you might also assume that computing the mean value, the root mean squared value, or the standard deviation will be easy enough. In MATLAB, it is. If you're using a different environment or a general-purpose programming language with no mathematical support, you may find that even easy operations like these require quite a bit of clicking around, or at least fresh memories of basic maths to code them back up from scratch.
Now, coming back to similar ways of easily distinguishing different activities: we could easily separate things like standing and walking based on other measurements—like, for example, standard deviation, or RMS as in root mean squared values—as this other histogram shows.
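For reference, each of those measurements is a single built-in call (rms ships with Signal Processing Toolbox; mean and std with base MATLAB):

```matlab
% One-line statistical measurements on a buffer of samples.
m = mean(xBuffer);   % average value
s = std(xBuffer);    % standard deviation
r = rms(xBuffer);    % root mean squared value
```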
But what about if we had to work out the difference between plain walking and walking upstairs? This is what we see in this case: the two show a similar average value and a similar standard deviation. If you know about statistics, you may be thinking that things like higher-order moments might give us more information. I would invite you to contain your enthusiasm, because from a statistical point of view, if you look across the various subjects here, you'll soon realize that these two signals are almost the same.
So for me, the main takeaway here is that to discriminate between signals like these two, statistical analysis is not sufficient. What we really need to do is look at how things vary over time, because that would allow us, for example, to measure things like the rate of oscillation of the acceleration signal.
This could be useful on the assumption that people move faster, for example, when they walk downstairs compared to when they walk upstairs or even the shape of the oscillation itself. This would be relevant if we thought that the type of movement done while engaging in plain walking is different from that captured, for example, when walking downstairs.
To prepare ourselves to analyze variations over time, there is one important point that needs to be made. I believe we already established that we have two main types of causes that contribute to the acceleration signals in the data set. One is the alignment of the subject with respect to the gravitational field and the other is the energy generated by their body movements.
One big difference between the two is that the gravity contribution is almost constant. If we wanted to be less assertive, we might say that it varies more slowly. The body movements, instead, are the faster contributions. When we focus on how signals vary over time, naturally we'd like to restrict our attention to the contributions generated by body movements, because that's what we're trying to classify.
Then the relevant question is: is there any way to separate out the two contributions, which are now blended together into a single signal, so that we could analyze each one of them separately? While there are cases where this task is pretty difficult, for a wide range of practical situations a standard method is to design and apply linear digital filters—or simply digital filters, for short—to the data.
Digital filters work particularly well when the signal components that we want to isolate or remove are well defined in terms of their rate of variation over time—or, to use some more specific jargon, by their so-called spectral or frequency-domain components. In this case, for example, we want to keep only the contributions due to the body movements; let's say those varying more quickly than about one oscillation per second. That's about the number of steps per second of an average walker.
And we want to discard contributions with slower variations. In signal processing jargon, this translates into designing and applying to the data an appropriate high-pass filter. I'll repeat these ideas as we go through the process.
Now, if I had to use a general-purpose language without specific signal processing libraries, the task of designing and applying a digital filter would be pretty daunting. The design phase in particular requires quite a bit of maths and a lot of domain-specific knowledge. In MATLAB, there are many different ways in which one can design a digital filter.
For example, you may choose to do it entirely programmatically, which means using MATLAB commands, or interactively, using built-in apps. Let's first take a look at what the latter would look like. Using an app is generally a great idea when you approach a problem for the first time. To do that, I go to the Apps tab of the MATLAB toolstrip, and I scroll down to the Signal Processing and Communications area.
In here, we'll pick the Filter Design and Analysis tool. For more advanced filter designs, you may also want to try the Filter Builder app. The Filter Design and Analysis tool is composed of several sections. For example, this filter specification pane will help us specify the right requirements for our filter. Down here to the left is where we start to define what we're looking to achieve. In this case, I'll select high-pass filter, but you can see that a lot of other choices are also possible, from the standard low-pass, band-pass, and band-stop filters to a range of more advanced designs.
Now, further down here, we're asked to choose between FIR and IIR. These are the two main families of digital filters. If you know about digital filters, you probably have a good idea about which of the two to select, and the design methods listed here will resonate quite a bit.
Here I'll skip the details, and I'll just use this option. Then I move over to the right, and I keep capturing requirements with the help of the specification pane above. The things I have to say include: we're using a sampling frequency of 50 Hertz, and we want to keep unaltered—that is, attenuate by a factor of 1, or 0 dB—all signal components oscillating more quickly than one time per second, or 1 Hertz.
Let's actually be slightly more generous and set this f pass value to 0.8 Hertz. We also want to make sure that everything to the left of this frequency smaller than f pass, called f stop, is attenuated by at least a given number of dBs. I'll set this f stop to, say, 0.4, and correspondingly this a stop to 60 dB. This ensures that all oscillations slower than 0.4 Hertz, or times per second, will be made 1,000 times smaller by the filter.
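For reference, a programmatic equivalent of these specifications might look like the following sketch using the designfilt function (the ripple value and the defaulted design method are my assumptions, not settings from the webinar):

```matlab
% Hedged sketch: high-pass IIR filter matching the specs discussed above:
% fstop = 0.4 Hz, fpass = 0.8 Hz, 60 dB stopband attenuation, fs = 50 Hz.
hpf = designfilt('highpassiir', ...
    'StopbandFrequency',   0.4, ...
    'PassbandFrequency',   0.8, ...
    'StopbandAttenuation', 60, ...
    'PassbandRipple',      1, ...   % assumed passband ripple, in dB
    'SampleRate',          50);
fvtool(hpf)   % visualize the magnitude response against the specs
```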
Finally, by pressing Design, we realize a filter that satisfies our requirements, and we have a set of analysis tools available right within this app to verify that the filter is behaving as expected. For example, now we're looking at what's called the magnitude response over frequency. If I need to confirm that this is honoring the specification, I can overlay the specification mask and zoom in to check that the filter honors the requirements. You can see 0.4, 0.8.
Or, if I want to understand the transient behavior, at the press of a button I can have access to things like the impulse response or the step response. Once my filter is designed, what I really want to do is apply it to my signal. Remember that in this case the objective was to get rid of the slowly varying contributions due to the alignment of my accelerometer with the gravitational field.
To use this filter with my MATLAB code, I can choose between two types of approaches. I can go to File and export the filter into my MATLAB workspace as one or more variables. Or I can generate some MATLAB code that realizes all that I've just done interactively through a programmatic approach.
The code that you see here has just been generated automatically. However, something to notice is that I could have just as well decided to use similar commands independently on my own. And having this generated automatically for me can also help me gain some insight, so that the next time around I could more quickly design my filter programmatically.
But more importantly, it now gives me a quick way of realizing the filter from my code just by calling this function. I'm now going to discard this generated function, though, because I have a previously saved version already available in my working folder, called HPFilter.
You can see that this looks exactly like the one that we have just generated. Going back to my script, you can see that I'm creating the filter via my presaved function using one line of code. And in the next line, I'm applying the filter to my vertical acceleration. That creates a new signal, where we hope to find only the contributions due to the body movements. If I execute the section, I'm also plotting the new filtered signal against the original one. In the plot, we can see that the new signal is now centered around 0, with no offset due to gravity. You can also see some transient behavior every time a new activity kicks in. This is totally normal, and it can be quantified in detail while designing the filter, as we saw a few minutes ago.
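A sketch of those two lines plus the comparison plot, assuming HPFilter is the presaved design function and that it returns a digitalFilter object (which the built-in filter function accepts directly):

```matlab
% Create the presaved high-pass filter and apply it to the signal.
hpf = HPFilter();        % presaved function wrapping the filter design
xf  = filter(hpf, x);    % filtered signal: body movements only, we hope
plot(t, x, t, xf)
legend('Original', 'High-pass filtered')
xlabel('Time (s)'), ylabel('Acceleration (m/s^2)')
```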
So now we're at a point where we can restart to analyze the behavior of the signal over time. Let's recap once again what we're trying to do. We're trying to select a suitable set of measurements that can capture the differences between signals generated by different activities.
In order to investigate what techniques are more likely to be effective, a helpful thing to do is to look at individual activities separately. I would like to show you a very effective method to select portions of signals in MATLAB based on what we call logical indexing. When I look at this plot, for example, I want to isolate this portion of signal relative to walking.
The information on which activity those samples belong to is stored in my vector actid, here in my workspace. Because here we have more than one instance of every activity, we may say that we only want those samples with time, say, less than 250 seconds. If we formalize what I've just said in plain English, that translates into this line of code.
The result here is a vector of the same length as my signal with ones in the region of interest and zeros elsewhere. When we use this vector to index into our signal or the actual time vector, the result is a single portion of signal that we are interested in.
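A minimal sketch of that logical indexing step, assuming walking is encoded as activity ID 1 in actid (the actual encoding comes with the data set):

```matlab
% Logical indexing: select the walking samples occurring before t = 250 s.
sel = (actid == 1) & (t < 250);   % logical vector, true in region of interest
tWalk = t(sel);                   % time stamps of the selected segment
xWalk = x(sel);                   % signal samples of the selected segment
plot(tWalk, xWalk)
```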
Now we can more easily look at this walking segment in more detail. We can zoom in and confirm that the signal oscillates fairly regularly. Roughly speaking, one can say the signal is almost periodic.
Now, a good question would be: how can I measure how quickly this is oscillating, or even capture some quantitative description of the shape of this oscillation? And a good answer would be: by transforming the signal into the frequency domain—or rather, by looking at its spectral representation.
For example, a lot of people that I know would at this point throw in the idea of computing an FFT, which is shorthand for fast Fourier transform. In actual fact, the results delivered by a bare FFT algorithm taken in isolation would still be a few steps away from being actually useful.
More generally, what you would be looking for here is called the spectral density, or power spectral density. Now, do you know how to compute that from scratch? You may do, perhaps based on the availability of an fft function, which comes with the basic MATLAB installation. But more generally, if you know the name of an operation or an algorithm that you need to use, then you can search the MATLAB documentation or this function browser over here. Typing 'spectral density' here brings up a lot of names of functions that do that.
For example, here I recognize the name of a method for spectral density estimation that I remember from university, called the Welch method. For some quick guidance on how to use this function, I can hover over its name and browse this context help on the left here.
Or I can follow the link to the full documentation. Here, along with the list of syntaxes available, I can also find explanations about the algorithms used and links to related pages discussing technical topics more in general—as, for example, here, a page on spectral analysis, which includes an introduction to the topic, the list of methods available, and a discussion on when each particular method is more appropriate.
Now, going back to my script, just running this pwelch function on my signal and specifying the sampling frequency gives me some insightful results pretty quickly. What is produced is something like this plot. On the x-axis, I have frequency, from 0 to half of my sampling frequency, which was 50 Hertz. And on the y-axis, I have the power density in units of dB over Hertz.
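A sketch of that call on the walking segment extracted earlier (default window, overlap, and FFT-length settings are assumed):

```matlab
% Welch power spectral density estimate of the walking segment.
[pxx, f] = pwelch(xWalk, [], [], [], fs);   % defaults for window/overlap/nfft
plot(f, 10*log10(pxx))                      % power density in dB/Hz
xlabel('Frequency (Hz)'), ylabel('Power/frequency (dB/Hz)')
```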
And the region where the values of this plot are higher is likely to carry the information that I'm after. In this case, this pattern of peaks between 0 and 10 Hertz is holding a lot of measurable information on the rate and shape of our time-domain oscillations.
For those of you who are at least a bit familiar with signal theory, it may be useful to draw a parallel with signals produced by musical instruments, even if in this case this is not even a sound signal. Here you would talk about a fundamental frequency, roughly around 1 Hertz, and a number of harmonics at positions that are multiples of that frequency.
The distance in frequency between these peaks tells us about the rate of oscillation of our signals, and the relative amplitudes of the peaks are closely related to the shape of the oscillations—a bit like what is referred to as the timbre of musical signals. To validate these statements, let me plot the spectrum for walking on top of the one for walking upstairs and restrict the view to the range between 0 and 10 Hertz.
What I notice here is that the peaks for walking upstairs are closer together and pushed to the left, telling me that the rate of oscillation is lower in this case. Also, the amplitude of the peaks to the right of the fundamental decreases very quickly, telling me that the shape of the oscillation for walking upstairs is less abrupt, almost smooth, being more similar to a simple sinusoidal signal, which ideally is formed of just one single peak.
So the positions and amplitudes of those peaks carry descriptive quantitative information which, if measured, would constitute good descriptive features. To further convince myself, I could also compare how the spectrum looks for all walking signals across the 30 subjects in the data set, which I'm doing here. Despite the different scale used here for the vertical axis in the resulting plot, you can see that, in fact, the location of the first few peaks and their relative scaling are fairly similar across the 30 recordings available—that's the 30 different subjects of my data set.
Now going back to our spectrum for the walking signal, our aim here is not simply to inspect this plot visually, but to put in place a programmatic mechanism so that we can automate the process of taking measurements for every new portion of signal that the system is presented with.
You may think that extracting this kind of information on the positions and amplitudes of these peaks from this plot is easy, but if you've tried at least once in your life, you probably realized that it's actually not as easy as it may initially appear. For example, one could quickly get the position and amplitude of the highest peak by using the MATLAB function max, for maximum value, but moving on from there is less trivial.
Luckily, Signal Processing Toolbox for MATLAB has a function called findpeaks that is built to do just that. Now, if we use this function findpeaks without providing any other information but our raw spectral density, then this is what it returns. This is the complete set of local peaks found in my plot. All the rest of the code was just taking care of plotting.
While this is not yet exactly what we were looking for, if we put some more effort into defining that—for example, telling the function how many peaks it should return, what peak prominence we require, or what minimum distance between nearby peaks we expect—then the results are a lot more encouraging.
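A sketch of such a constrained call (the parameter values here are illustrative assumptions, not the webinar's exact settings):

```matlab
% Constrain findpeaks to recover only the first few prominent peaks.
[pks, locs] = findpeaks(10*log10(pxx), f, ...
    'NPeaks', 6, ...               % return at most six peaks
    'MinPeakDistance', 0.5, ...    % peaks at least 0.5 Hz apart
    'SortStr', 'descend');         % largest peaks first
```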
And in just a few lines of code, we now have a programmatic measurement approach that can be automated and is highly descriptive of our signal characteristics. The same measurement approach that I used here for the spectral density can also be used for other types of analysis.
An example that I have in mind is the autocorrelation function, which is particularly useful to estimate fundamental frequencies that are very low compared to the signal sampling frequency. I'm just going to show a quick example of that here.
This is how the autocorrelation of our walking signal looks. The autocorrelation is always symmetric, with a high peak in the middle representing the energy of the signal. And for periodic signals, the location of the largest peak to the right of the central one defines the fundamental frequency of the signal.
To make my point a bit better, let me overlay the autocorrelation curves for walking upstairs and walking downstairs and zoom in around the first peak to the right. These two signals have very similar fundamental frequencies, and yet their respective first peaks can be separated relatively well, being at least around four sampling periods apart from each other.
Once again to compute the autocorrelation, I didn't have to remember any formula, and it took me just one line of code. Now I could go on discussing relevant measurements and strategies to automate them, but for the purpose of this demo, I'll stop here.
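Before moving on, here is a sketch of that one-liner, the built-in xcorr function (the plotting lines are my addition for illustration):

```matlab
% Autocorrelation of the walking segment in one line.
[c, lags] = xcorr(xWalk);    % autocorrelation and corresponding lags
plot(lags/fs, c)             % lag axis converted to seconds
xlabel('Lag (s)'), ylabel('Autocorrelation')
```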
Once I have selected a number of measurements that I think describe well the differences between the different classes of my problem, I need to group them together, so that for each new signal buffer—each new set of samples—I'm able to produce the collection of all the measurements, or features, for that particular instance.
The way that I did it here is that I collected all the steps that we went through together into this individual function, called featuresFromBuffer.m. Here, for every new buffer of acceleration samples in the three directions, I am applying the filter, computing mean and RMS values, and then computing correlation and spectral features using helper functions down here.
If I go down here, you can recognize xcorr, for autocorrelation, together with the findpeaks function, and then the pwelch function, again followed by findpeaks. This is very similar to what we've done together a few minutes ago, plus something that I haven't discussed in detail in the interest of time, which is a simple measure of how the energy of the oscillation is distributed across the power spectrum of my signal.
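As a rough, hedged sketch of what such a function might look like (the name, feature choices, and padding logic below are my assumptions, not the webinar's actual code):

```matlab
function feat = featuresFromBufferSketch(accBuffer, fs, hpf)
% Sketch of per-buffer feature extraction. accBuffer is an N-by-3 matrix
% of acceleration samples; hpf is a presaved high-pass digitalFilter.
feat = [];
for k = 1:3
    xk = filter(hpf, accBuffer(:,k));          % remove the gravity trend
    feat = [feat, mean(accBuffer(:,k)), rms(xk)];
    % Spectral peak features via a Welch estimate plus findpeaks
    [pxx, f] = pwelch(xk, [], [], [], fs);
    [pks, locs] = findpeaks(10*log10(pxx), f, 'NPeaks', 3, ...
        'MinPeakDistance', 0.5, 'SortStr', 'descend');
    pk = nan(1,3); lc = nan(1,3);              % pad if fewer peaks found
    pk(1:numel(pks)) = pks; lc(1:numel(locs)) = locs;
    feat = [feat, pk, lc];
end
end
```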
What I really like about this function is that if I measure the net number of code lines, excluding comments and empty lines, that sums up to only 65 lines of code. Here I'm using the sloc function, available for free from MATLAB Central File Exchange.
So to recap at this point, now for every new signal buffer we have a way to extract a feature vector containing 66 measurements that characterize the signal. At this point, if our problem is simple enough, we may consider putting in place some custom logic that looks at the feature vector and according to the values in there, implements a strategy to guess what the right class of the signal is. That's theoretically possible.
That logic might look like: if the mean value is greater than X, and the RMS is greater than Y, and the position of the first peak is more than Z, and the amplitude of the first peak is smaller than W, then the signal was produced by walking upstairs, say. The reality is that as the complexity of the problem goes up, coming up with such a manual logic mechanism becomes highly impractical.
Plus it does not guarantee that we take advantage of all the information in the feature vectors. The way these features really are commonly used is by means of a classification algorithm as we said earlier. There are many types of classification algorithms available out there and more specifically within MATLAB. In this case, I'm using a neural network.
In one line of code here, I'm creating a network. If you're familiar with neural networks, you might like to know that this simple syntax in this case creates a feedforward network with a single internal layer of 18 neurons. There are techniques to choose these magic numbers, but let's ignore that for this simple example.
In this other line, I'm training the network using a portion of my data set. The training process adapts the internal parameters of the model—in this case, the network—so that it can optimally identify the right activity for the supplied signal segments. Remember that my data set is composed of both the recordings and the known activity ID values for each portion of the recordings. To train the network here, we're supplying both.
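A sketch of what those two lines might look like (patternnet is one common choice for classification networks; the variable names and the webinar's exact call may differ):

```matlab
% Create and train a small classification network in two lines.
% features is assumed 66-by-N (one column per buffer); targets is the
% 6-by-N one-hot matrix of known activity classes for the same buffers.
net = patternnet(18);                  % one hidden layer of 18 neurons
net = train(net, features, targets);   % supervised training

% Classify a new buffer: pick the class with the highest network score.
scores = net(newFeatureVector);
[~, activityGuess] = max(scores);
```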
As I trigger the training of the network, you notice that a user interface appeared. This was updating us on the progress of the training process, and it may be very useful for monitoring the optimization on complex networks for which training may take longer. In this case, everything happened pretty quickly. My network is now trained to classify new signal segments that it has never seen before.
I can now run the network on new data and take a qualitative look at how well it's doing. We have now finally come back to the place where we originally started. For every new buffer, here we are plotting the three components of the acceleration, computing our 66 features live, online, and finally using our trained network to predict what the subject is doing.
Because these new signals are still coming from the data set, even if they are new to our classification algorithm, we still know what activity they belong to, so we can compare our automated guess to the ground truth.
Finally, to really assess the performance of the classification algorithm, instead of observing it in action online as we've just done, one would typically let it classify a new portion of the data set in batch mode—so, all at once—and then compute some statistics. For example, one common way of summarizing visually the performance of a classification algorithm is the confusion matrix, which I'm creating in this code section. If we wanted to take away a single number, we'd probably look down here to the right and say that overall our system was close to 92% accurate on the test set.
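A sketch of that batch evaluation step, assuming held-out testFeatures and testTargets arranged like the training data (plotconfusion is the Neural Network Toolbox plotting function):

```matlab
% Batch evaluation on a held-out test set, plus a confusion matrix plot.
testScores = net(testFeatures);          % network outputs for all buffers
plotconfusion(testTargets, testScores)   % targets vs. predicted classes
```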
More in general, this matrix displays the counts for all the pairings between a given target class and the guesses of the algorithm, with the right guesses lying on the matrix diagonal and the wrong ones off the diagonal. For example, in this case this two-by-two region over here is showing a significant number of wrong guesses between sitting and standing. If we wanted to improve the performance of our algorithm, something we could probably focus on would be identifying more features that help better differentiate these two particular cases.
OK. At this point, let me go back to my slides. If I had to summarize this example, our main objective has been to identify ways of extracting highly descriptive features from the time domain data or signals. My three main takeaways would be that we were able to do that entirely by reusing existing signal processing functions available out of the box. In total, we automated the measurement of 66 features using only 65 lines of code.
And we also took good advantage of the language and the built-in visualization features to establish what worked and what didn't. By doing that, we were able to get to the bottom of this task pretty quickly. Regarding the large use of built-in functions that we made, something that I am particularly keen to remind you of is the number of things that we did not have to reinvent.
If I look at the sample functions that we used, these would be some of the underlying formulas that we would have needed to dig out from the web, or maybe from a textbook or a paper, understand in detail, and then code up in MATLAB. Having Signal Processing Toolbox available allowed us to simply save all that time.
We also took great advantage of Neural Network Toolbox, which allowed us to build and train a conventional type of neural network in two lines of code. If you ever studied neural networks, you may remember that even training simple networks with a basic optimization algorithm is quite complex and error-prone if done from scratch.
I've also already mentioned that neural networks only represent one particular choice of classification algorithm. For all the other general-purpose classification algorithms, from Bayesian classifiers to support vector machines, a good place to look is Statistics and Machine Learning Toolbox™.
Statistics and Machine Learning Toolbox also covers other machine learning techniques like clustering or regression.
This brings me to the conclusion of this webinar. I hope you enjoyed it, and that, if nothing else, at least I managed to give you an idea of the extensive set of functions for signal processing and data analysis available with MATLAB and its toolboxes. I also hope I conveyed the idea of how easy-to-use visualization functions and built-in MATLAB apps can make complex discovery cycles pretty quick after all. That was also thanks to the concise MATLAB language, which allowed us to carry out advanced processing and analysis tasks in just a few lines of code.
Now if you have questions at this point, please post them into the Q&A panel, which is indicated by a question mark in your WebEx panel at the top of the screen. We will take a few minutes to review them and then come back online to answer your questions.
Featured Product
Signal Processing Toolbox