Avi Nehemiah, MathWorks
Face recognition is the process of identifying people in images or videos by comparing the appearance of faces in captured imagery to a database. Face recognition has many applications ranging from security and surveillance to biometric identification to access secure devices.
In this webinar, discover how to use computer vision and machine learning techniques to recognize faces in images and video. Topics covered in this session include pattern classification, regression, feature extraction, face detection, and face recognition.
Existing MATLAB users will learn about new MATLAB features in each of these areas.
About the Presenter
Avi Nehemiah works on computer vision applications in technical marketing at MathWorks. Prior to joining MathWorks he spent 7 years as an algorithm developer and researcher designing computer vision algorithms for hospital safety and video surveillance. He holds an MSEE degree from Carnegie Mellon University.
Recorded: 13 Nov 2014
Welcome to this webinar on face recognition with MATLAB. My name is Avi Nehemiah, and I'm a product marketing manager for computer vision here at MathWorks. I have three goals for this webinar today. First, I'd like to give you an overview of the steps in the face recognition workflow. Second, I'd like to show you some new capabilities in MATLAB that enable face recognition. And third, I'd like to teach you how to solve some common challenges, like dealing with large data sets and performing face recognition on video streams, or stream processing.
Let's start by defining face recognition just to make sure we're all on the same page. Given a gallery or a data set of facial images of people you want to recognize, when an input image is presented, like this image of my corporate head shot, a face recognition algorithm matches the face in the input image to a person from the gallery.
I'd like to point out that face recognition and facial recognition are the same thing. And you might notice me using both these terms through the course of this webinar. Face recognition has many applications in our everyday life. It is most commonly used in video surveillance to match the identity of people in surveillance footage to an existing database. It is used as a biometric password in your laptops and smartphones in very much the same way fingerprint recognition is used. And most recently, it's being used by social networking sites to automatically tag pictures of our friends by performing facial recognition.
I'd like to point out that face recognition is just an example of the larger area called object recognition, where a system can recognize and discriminate between different objects it has been trained to recognize. Object recognition leverages techniques from computer vision and machine learning and has applications in robotics, where it's used to help robots navigate to known checkpoints, like charging stations. In self-driving cars or driver assistance systems, object recognition is used to detect pedestrians and traffic signs, like I have on my slide here.
In this webinar, I will be using face recognition as the example, but the techniques I show you are useful in solving other object recognition problems, such as the ones on the slide. MATLAB users have been solving face recognition problems for many, many years. As a result, we get a lot of questions on this topic.
I've listed the four most commonly asked questions on the slide, and those are: what are the steps required to perform face recognition? How do I do this in MATLAB? How do I process big data, like large image collections? And how do I perform this with live video? Through the course of this webinar, I will answer all of these questions for you.
Here's a quick look at our agenda for today. I'm going to spend most of my time going through the face recognition workflow. I'll then talk about how to deal with large data sets and how to perform real-time face recognition in video streams. I'll then spend a little time talking about face verification, which is a subset of face recognition that determines if two images presented belong to the same person. That should give us plenty of time for me to answer your questions.
So let's jump into the face recognition workflow. The first thing you need to create a face recognition system is a database of facial images of people you want to recognize, also known as a face gallery. You then perform a processing step, known as feature extraction, to store the discriminative information about each face in a compact feature vector.
Following this, you have a learning or modeling step when a machine learning algorithm is used to fit a model of the appearance of the faces in the gallery, so that you can discriminate between faces of different people in the database. The output of this stage is a classifier, a model that will be used to recognize input images.
When you have an input query image, a face detection algorithm is used to find where the faces are located in that image. You then crop, resize, and normalize the face to match the size and pose of the images used in the training face gallery. You then perform the same feature extraction step that you did with the face gallery, and you run that through your classifier or model. The output is a label, an indicator that signifies which person from the gallery the query image belongs to.
I'm going to start with the feature extraction step, because this is new to a lot of people who are interested in face recognition but haven't used computer vision or machine learning techniques in the past. One of the questions I get asked a lot, and one I asked myself when I first worked on face recognition, is whether the feature extraction step is even necessary.
To illustrate this, I have the recognition results when I try to recognize the person on the left using a simple image difference metric and no feature extraction. As you can see, he has been mismatched to the person on the right. And this is because the raw pixel information does not have enough discriminative information to distinguish between these two faces. The image pair below shows the correct recognition results. Now, I'd like to acknowledge that these images are from the AT&T database, which is a standard database used to test face recognition algorithms.
So what is feature extraction? Feature extraction is a method of dimensionality reduction that represents the discriminative or interesting parts of an image in a compact feature vector. Now, MATLAB has a wide spectrum of feature extraction methods that you can use for many recognition tasks, as I have illustrated on this slide. The densest features are, obviously, the image pixels themselves.
To its left, you have a feature type called a histogram of oriented gradients or HOG features. Now, this represents the structure of an object. If you take a closer look at that image, you can see that you can distinguish the structure of the bicycle, but now you have a representation that is invariant to the lighting and the image pixels of that image. To its left, you have SURF features, which are a form of local feature extraction that finds many interesting points in the image and encodes information about the area around these interesting points.
And to the far left, you have the sparsest representation, also known as a bag of visual words, where the bicycle has been represented as a collection of visual parts. So one feature is the handlebar. Another feature is the wheel, and another the seat. The great advantage of this method is that even if part of the bicycle is hidden, you can still find or recognize the bicycle by seeing one or two of the other visual parts that are visible.
So with that said, let's jump into MATLAB and start solving some face recognition problems. I'm going to start by bringing a database of facial images of the people I want to recognize into MATLAB. For this example, I'm going to use the AT&T face database that I referenced in my previous slide. I have the database stored here in this folder, face database ATT. And I have separate subdirectories for each subject in the database, named s1 through s40.
Each subdirectory has 10 images of each person. Now, the construct that I'm going to use to bring this data into MATLAB is called imageSet, which is used to manage large collections of images. It not only brings the data into MATLAB, it also maintains a hierarchical relationship between the images, so I have 40 image sets, one corresponding to each person in the database.
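In code, the loading step just described might look something like this; the folder name here is an assumption based on the description, and the montage call mirrors the visualization shown in the webinar:

```matlab
% Sketch: load a gallery of faces organized as one subfolder per person.
% 'FaceDatabaseATT' is an assumed folder name for the AT&T database.
faceGallery = imageSet('FaceDatabaseATT', 'recursive');

% faceGallery is a 1-by-40 array of imageSet objects, one per subject.
% Access an individual image with the read method:
firstFace = read(faceGallery(1), 1);   % first image of the first subject
imshow(firstFace);

% Display all images of one subject at once, as in the demo:
montage(faceGallery(1).ImageLocation);
```

With the 'recursive' option, imageSet uses each subfolder name (s1, s2, ...) as the Description of the corresponding set, which is handy later as a class label.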
Now that I have this in MATLAB, I can access individual images from the database, and I can also access groups of images at a time, like this montage of all the images of the same person in the database. Now, one thing I'd like to point out is the AT&T face database is a fairly clean database. The faces have been detected. The illumination is constant. The only variation are slight changes in pose and expression.
In face recognition circles, this is generally considered a pretty easy database to deal with. Now for those of you who haven't used MATLAB in a while, or have never used MATLAB, I'd like to point out one of the advantages of MATLAB, which is here where several lines of code, I've been able to bring in a full database to MATLAB, and I've been able to visualize parts of my database.
So let's set up the problem we're going to solve in this example. In this example, we're going to try to recognize the person on the left and match him to somebody in the database of the 40 people that I have in the montage on the right. Now, before extracting features and learning how to tell these people apart, I'm going to split my database into a training set, to learn how to discriminate between the faces, and a test set, to test my algorithm against.
So I'm going to split the database using an 80-20 split. Now, that brings us to feature extraction. To learn what feature extraction methods are available in MATLAB, what I'm going to do is navigate to the product documentation and search for feature extraction.
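The 80-20 split can be sketched with imageSet's partition method; since each person has 10 images, this leaves 8 per person for training and 2 for testing:

```matlab
% Sketch: split each person's images 80/20 into training and test sets.
% faceGallery is the 1-by-40 imageSet array loaded earlier.
[trainingSet, testSet] = partition(faceGallery, 0.8);
```

partition keeps the per-person structure, so trainingSet and testSet are each still arrays of 40 image sets.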
This brings up a list of all the feature extraction methods and anything related to feature extraction in MATLAB. If I click this first result here, it shows me all the feature extraction methods provided by the Computer Vision System Toolbox that I can use to extract features from images, like the ones we have here.
And as I scroll down, let me open up this example here. Now, this example shows me all the steps, with a lot of text explaining what's going on, for how to recognize digits in images. So this is a very similar problem to us trying to recognize faces, and the features used are something called HOG features.
Now, HOG stands for histogram of oriented gradients, and this function here, extractHOGFeatures, extracts HOG features from the training images. So let's visualize that. Now, HOG features, if you look at the visualization at the bottom, encode the edge information and the directionality of the edge information, so they are really good for extracting information about the structure of an object, or of the faces, in this case.
Now, the next thing I'm going to do is extract the HOG features for all the images in my training set. Now that that's done, let me show you the two variables that have been created. The first is this matrix, trainingFeatures, which, if I look at it in my workspace, is a huge 320-by-4,680 matrix. Now, 320 corresponds to the number of images in my training set. So I have eight images of each of the 40 people. And 4,680 is the size of the feature vector extracted by extractHOGFeatures for each image.
Now, 320 is normally known as the number of observations in the problem, and 4,680 is known as the dimensionality of the problem. I also have this list called trainingLabels, which basically just tells me which feature vector corresponds to which person in the database.
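The construction of trainingFeatures and trainingLabels described above might be sketched like this; the feature-vector width depends on the image and HOG cell sizes, so 4,680 is specific to this data set:

```matlab
% Sketch: build one row of HOG features per training image, with a
% matching label taken from each imageSet's Description (the folder name).
trainingFeatures = [];
trainingLabels   = {};
for person = 1:numel(trainingSet)
    for j = 1:trainingSet(person).Count
        img = read(trainingSet(person), j);
        trainingFeatures(end+1, :) = extractHOGFeatures(img); %#ok<AGROW>
        trainingLabels{end+1}      = trainingSet(person).Description; %#ok<AGROW>
    end
end
% trainingFeatures: 320-by-4680 (observations by dimensionality)
% trainingLabels:   1-by-320 cell array of person identifiers
```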
Now that I've extracted the discriminative edge information from my face database, to really test how well the feature extraction works, I need to use a machine learning technique to learn what each person's face looks like and how to discriminate between them. Now, this touches upon the next step in our workflow, and that is training, or modeling.
Now, the training or modeling step for object recognition and classification problems solves a data-fitting problem. So assume we have this simple data set of red points and blue points, and these points are known as training data, or training features, like the ones we just extracted. If I fit a model using a machine learning algorithm, I end up with a decision boundary here that discriminates between the red points and the blue points.
Now, this decision boundary is known as a classifier, and I can classify or recognize subsequent points by figuring out which side of the decision boundary they lie on, whether it's a red point or a blue point. Now, MATLAB has a wide variety of machine learning techniques, but one thing we've done to make it easy to discover new techniques, and also to swap between different machine learning techniques, is we've created a consistent interface to use all these different techniques.
Now, the interface is something like this. To create a model of a set of data, you're going to use the prefix fit, followed by the letter c or r, depending on whether you're using a classification model or a regression model. You'll then follow that with the name of the model itself. It could be kNN or SVM, which are different machine learning techniques.
And it has two inputs: x, which are the training features, and y, which are the labels that map the features to a class in the database. So to create a face classifier, I would use this line of code, which is faceClassifier equals fit, followed by c for classification, and ecoc, which is the method that I'm using, and I'm going to feed it my training features and the training labels that I just generated from the HOG features.
Now, to use the model, you use the predict method. So you call predict, you pass in the model that you trained from the training data and your input features, and the output is a label. So in this case, I'm going to pass the face classifier and the query features in, and the output is a label that says Dima, which is the name of a person in my database.
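The fit/predict interface described above, applied to the face data, can be sketched like this (queryFace here is an assumed variable holding a cropped face image):

```matlab
% Train a multiclass ECOC model (SVM binary learners by default) on the
% HOG features and labels built from the training set.
faceClassifier = fitcecoc(trainingFeatures, trainingLabels);

% To recognize a new face, extract the same features and call predict.
queryFeatures = extractHOGFeatures(queryFace);
personLabel   = predict(faceClassifier, queryFeatures);
```

The key point of the consistent interface is that swapping in another technique, say fitcknn, only changes the fit call; the predict call stays the same.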
So let me jump back into MATLAB. Now, to learn how to discriminate between the different faces, I'm going to use the ECOC classifier. To learn more about this, I can just right click on it and ask for help on that selection. And MATLAB gives me the documentation on how to use this function and tells me more about it. It also has examples on how I can use this function, which makes it really easy to use new methods in MATLAB, because it's very, very well documented.
So I'm going to pass in my training features and my training labels, and I'm going to learn how to discriminate between the 40 people in my database. So let me run that. And when the training's done, I'm going to test my classifier to see if it works. So I have a query image of this person on the left, and I have the matched class, which, in this case, correctly matches him to the right person on the right.
Now, just to get a better sense of whether our algorithm is working, let's test the images from the test set for the first five people in the database to see how well this has worked. So let me just run that. And you can see the output here shows me the query face from the test set on the left and the matched class on the right. So it matches the first person accurately.
Oh, and for the second person, you see the first image has been matched accurately, but for the second test image, there's a slight mismatch, where this lady has been matched to this gentleman here. And let's check out the other three people. So this gentleman has been matched correctly, this one as well, and so has this one.
So of the 10 images that we've passed in, nine have been recognized correctly and one has been recognized incorrectly. So we're running at about 90% accuracy. And if you test the whole database with this learning technique, it gives you about 91% accuracy, which is about what we saw with just this small test set of 10 images.
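Scoring the classifier over the full test set, as described above, can be sketched as a simple loop; it follows the same read/extract/predict pattern as before:

```matlab
% Sketch: count correct predictions over every test image.
numCorrect = 0;
numTotal   = 0;
for person = 1:numel(testSet)
    for j = 1:testSet(person).Count
        img   = read(testSet(person), j);
        label = predict(faceClassifier, extractHOGFeatures(img));
        % Compare the predicted label to the true identity (folder name).
        numCorrect = numCorrect + strcmp(label, testSet(person).Description);
        numTotal   = numTotal + 1;
    end
end
fprintf('Accuracy: %.1f%%\n', 100 * numCorrect / numTotal);
```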
So now that we've created a simple recognition system here, using the HOG features, that has about 90% accuracy on a simple database, let's try it on a slightly more challenging database. So let me open up the script that I have here. Now, our face gallery here is a set of images of people at MathWorks that I've collected. So this is our computer vision development team, and myself in the second row here.
And one thing I want to point out as you're looking at these images is there's a fair amount of variation in the poses of the faces. Some of these images have actually been taken years apart. Also the lighting is very, very different in each of these images. Now, let me extract my HOG features from my training data set.
Let me create a fitcecoc classifier again. And now, let me test it against an independent test set that I gathered of the same people. And let's expand that. Now, as you can see, for this more complicated database, of the five input images in the column on the left, only two of them, the second and third one, have been matched to the correct person, Dima, there, and the other three have been misclassified, again, as Dima. So the same simple technique that gave me 90% accuracy on a fairly easy database, like the AT&T database, once I've moved to a more complicated database, the accuracy has dropped from 90% to 40%.
Now, there are several things we can do to improve the recognition performance of our system. But before I get into that, let's take a step back and recap what we've learned so far. So getting back to our face recognition workflow, we brought our data set of faces that we wanted to recognize into MATLAB, using the imageSet construct that's used to manage large collections of images. We then performed feature extraction using histogram of oriented gradients, or HOG, features. We learned the discriminative information and modeled it using the fitcecoc classifier, and then we tested that classifier against our test set to see how well it performed.
So we learned about feature extraction to find representations that are invariant to changes in appearance and illumination. I'd like to point out that I created a webinar called Computer Vision Made Easy earlier this year that really does a deep dive into the different feature extraction techniques. I'd recommend you check it out. We also used machine learning to create a model to discriminate between the different faces. There's another excellent webinar, created by a fellow MathWorker, called Machine Learning with MATLAB, that does a much deeper dive into these different machine learning techniques. And I'd like to acknowledge that the images I used were from the AT&T face database that I found at the link below.
So this brings us to our next problem, and in this problem we want to improve the accuracy of this new database that we captured at the MathWorks of myself and the computer vision development team. Now, in this database, the variation of illumination is pretty substantial, and a lot of these images were captured years apart. So the faces look pretty different.
So let's get back into MATLAB and try to do that. So now, let's try to improve the accuracy of our recognition system from the measly 40% that we just got. So I'm going to open up the different script that I've written here. Let me clear all old variables and close all open images. Let me read in and display my gallery again, and read in a query image.
You'll see that the pre-processing steps of detecting the face and cropping it hasn't been done in this image. The other thing I'd like to point out is the illumination in this image is fairly poor, and also I look very different from the images of myself in the face gallery here. So we have a substantial variation in appearance and also in illumination.
To detect faces, I'm going to use something called a cascade object detector, which ships with several pre-trained detectors, including one to detect faces. You can also train this detector to detect any object of interest, using an app that ships with the Computer Vision System Toolbox. Once I run this and locate the face, I can crop out and normalize the face using the imcrop and imresize functions. So now I have a facial image of myself that is the same size as all the images in my face gallery.
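The detect, crop, and resize steps just described might look something like this; the 112-by-92 target size is an assumption matching the AT&T gallery images:

```matlab
% Sketch: detect a face, then crop and normalize it to the gallery size.
faceDetector = vision.CascadeObjectDetector();   % pre-trained frontal-face model
bboxes = step(faceDetector, queryImage);         % one row [x y w h] per detection

faceImage = imcrop(queryImage, bboxes(1, :));    % take the first detected face
faceImage = imresize(faceImage, [112 92]);       % match the gallery image size
```

After this, faceImage can be passed to the same feature extraction used on the gallery.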
Now, let me extract features for this face image of myself. I'm using a combination of different feature types to create what is known as a facial vector, and I'll provide a reference on how to do this in a little bit. Now, one thing that I'd like to point out is the dimensionality of this query facial vector (let's look at the workspace) is substantially higher than that of the HOG features I used before, which had a dimensionality of around 4,700. The dimensionality of this feature vector is 66,000.
Now, I have a pre-trained classifier for the face gallery. So let me run that to see how well it does recognizing my face. And as you can see, it predicts the right identity for myself. And this is in spite of the huge variation in my appearance and also the poor illumination of the test image.
Now, let's run this method with a higher dimensionality against the same test images that we ran the simpler system against. Once that's done, you'll see we have a substantially improved accuracy. So now, the system's been able to recognize both query images of myself correctly and the images of Dima on the second and third row have been recognized correctly as well. So we've gone from our 40% accuracy, using the low dimensional features of HOG to 80% accuracy, where four of the five test images have been recognized correctly.
Now, this one image of [? Zheng ?] hasn't been recognized correctly. And there are several things we can do to improve the recognition results further from the 80% that we have here. Aside from trying different feature extraction methods, you can try different machine learning algorithms. But one of the most common fixes for issues like this is to use more training data. The three to five images that I have for each person are just not sufficient to recognize people with a large amount of variation, especially over time, when people's appearance changes. So let me show you a neat way to generate lots of training data in MATLAB.
Now, what I'm doing here is I've modified an example that ships with the Computer Vision System Toolbox that detects and tracks a face by tracking the marker points on [? Zheng's ?] face there. And the modification I've made stores these images in a folder, so that I can then use them as training images.
So if I open up this folder here where I save the training data, you'll see for [? Zheng ?] himself, I've generated about almost 1,000 training images. And I've done the same for myself and Dima. So I've generated about 1,000 images for each of us. So we have about 3,000 training images for the three people, which is up from-- I think we had three of myself, three of [? Zheng ?] and five of Dima, which is a substantial increase in the number of training images.
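The harvesting step described above could be sketched roughly as follows; the video file name, the output folder, and the trackFace helper are all placeholders standing in for the shipping detection-and-tracking example's logic:

```matlab
% Sketch: save a cropped, normalized face image for every frame in which
% a face is being tracked, to build up a large training set.
videoReader = vision.VideoFileReader('myFace.avi');   % assumed file name
frameCount  = 0;
while ~isDone(videoReader)
    frame = step(videoReader);
    bbox  = trackFace(frame);   % placeholder: bounding box from the tracker
    if ~isempty(bbox)
        frameCount = frameCount + 1;
        faceCrop = imresize(imcrop(frame, bbox), [112 92]);
        imwrite(faceCrop, fullfile('trainingData', 'avi', ...
                sprintf('face_%04d.png', frameCount)));
    end
end
release(videoReader);
```

A few minutes of video at 30 frames per second yields the roughly 1,000 images per person mentioned above.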
Now, this brings us to a nice spot to switch to our next topic, which is dealing with large data sets. For commercial face recognition systems, you often have thousands of training images of each person and thousands of people in the database. But before I get into how to deal with large data sets, let's recap what we've done so far.
So to improve the recognition accuracy, what we did was use higher-dimensionality feature extraction. The method that I used was inspired by the paper referenced below, although we did make some slight changes, and we used SURF features instead of SIFT features. We also postulated that using more training data would improve the recognition.
So far, we've gone over the entire face recognition workflow. We've talked about the pre-processing step, which is taking in the input frame, detecting the face, and registering, or normalizing, it. We've talked about the training phase, which is reading in the face gallery, performing the feature extraction, and modeling the faces in a way that lets you discriminate between them, in the form of a classifier. And finally, the recognition phase, which takes a feature vector from the input image, passes it through the classifier, and returns the label of the person.
Now, on to our next topic, which is dealing with large data sets. Now, the challenge here is even if I wanted to train a classifier using thousands of images, if my feature extraction method took five seconds to process per face, and I had 3,000 images, it would take me well over four hours to process this data.
MATLAB has several ways to deal with problems like this one. Parallel Computing Toolbox provides explicit multithreading to help you maximize utilization of your multicore processor. It also provides GPU acceleration for many functions in areas such as image processing, through the Image Processing Toolbox.
MATLAB Distributed Computing Server extends the same capability to even more cores on a cluster, without needing to make any changes to your algorithmic code. So now, let's jump into MATLAB and see how we can solve this problem. Let me open up the script that I have written here to help talk about how to process large data sets in MATLAB. What the script does is it just runs through 240 training images and performs some dense feature extraction that takes about five to seven seconds per image.
So all in all, these 240 images took about 20 to 25 minutes to process on my laptop. Now, if I want to speed that up, I can simply change this for to a parfor. And that immediately lets me use an additional core on my laptop to process this. This gives me about a 1.5x to 2x speedup, but that still isn't sufficient.
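The for-to-parfor change described above can be sketched like this; imageFiles and extractDenseFeatures are stand-ins for the script's file list and slow feature-extraction step:

```matlab
% Sketch: parallelize per-image feature extraction. Each iteration is
% independent, so changing for to parfor distributes iterations across
% the workers in the local parallel pool.
features = cell(1, numel(imageFiles));   % imageFiles: cell array of paths
parfor i = 1:numel(imageFiles)
    img = imread(imageFiles{i});
    features{i} = extractDenseFeatures(img);   % stand-in for the slow step
end
```

The only requirement is that iterations do not depend on each other, which per-image feature extraction satisfies.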
To really take this to the next level, I want to run this processing on a cluster or on a server that I have set up on my network. Now, let me bring your attention to this line of code here, so you can see I have all my data stored on a shared location on my network. So I've changed all my paths to the shared location on my network. The next step for me to do is to offload this processing to a cluster.
So to do that, I'm going to go to the Home tab in my toolstrip, go to the Parallel option, and select one of the clusters I have set up. Now, the next thing to do is use the batch command to batch-process this script, the large data set feature vectors script that you're looking at. So when I hit Enter, this is going to offload the job to the cluster that I just pointed to.
And once that job has been offloaded, one thing you'll notice is my MATLAB session has been freed up, so I can do other stuff while this job is processing. And I can check the status of the job, simply by typing out the job number. And you can see that the job is still running.
Let's check that again to see if it's done. And it looks like it's done. Let's see how long that took to process. So there you go. So it took about a minute and 41 seconds to process the job that took about 20 to 25 minutes on my local machine. So that's a huge speed up to process this, when all I had to do was to point at a cluster on my network, put my data in a shared location, and run it on the cluster, and I was able to get this huge speed up.
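The batch workflow shown in this demo can be sketched as follows; the script name here is an assumption, and the pool size would depend on the cluster:

```matlab
% Sketch: offload a script to the currently selected cluster. The 'Pool'
% argument requests workers so that parfor loops inside the script run
% in parallel on the cluster.
job = batch('largeDataSetFeatureVectors', 'Pool', 8);

% The local MATLAB session stays free while the job runs. Check on it:
job           % displays the job's State (running / finished)
wait(job);    % optionally block until the job completes
load(job);    % bring the job's workspace variables back locally
```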
And this is a tool that we have in MATLAB, where you can offload processing to a cluster, by really making almost no changes to your algorithmic code, which is what makes it so easy to use. And now, let's jump back into PowerPoint to recap what we've seen in this demo. So what have we learned with this demo?
One, you can use parallel computing to accelerate your workflows on your own machine. You can also use our distributed computing tools to offload processing to a cluster for further acceleration, with very few changes to your algorithmic code. I'd also like to point out that in release R2014b, MATLAB has support for other big data functionality, including the Hadoop file system and MapReduce frameworks.
Please follow this link to learn more about that. Now, this brings us to our next topic, and that is how do you perform real-time face recognition on video streams, or stream processing. Now, the challenge here is that the recognition portion of these algorithms, the feature extraction and classification steps, usually takes longer to execute than the inter-frame period, which is normally about 30 milliseconds. And this can create a large delay in processing the next frame, or a loss of data if you start throwing away frames in between.
And as you can see from this series of images, in three seconds, which is a normal execution time for some of these recognition algorithms, a lot can happen. The face can change size as it appears in the image. A second subject might show up. A lot can happen in three seconds while a recognition algorithm is executing. So now, let's jump into MATLAB and see how we can solve this problem.
So I'm going to run the script that I have here, which is, again, a modification of the face detection and tracking example that I used to generate training vectors. So I have my face gallery here, and I have my input video here. So I've detected a face and passed it to a recognition algorithm that's processing in parallel. So you can see it's processing in the top left-hand corner of the screen, but we're continuing to get the next frame of video and track the face by tracking those marker points.
When the recognition algorithm is done, if you look at the top left-hand corner of the video, it displays the output label from the classifier, which is my name there, Avi, in the top left-hand corner of the screen. So you can see here, I'm able to process the video stream without any delay, in spite of the fact that my recognition algorithm takes a few seconds, by running the recognition process in parallel.
Now, let me show you the constructs that I used in MATLAB to make this happen. Now, the first thing that I did is I used the parfeval construct to run this function, recognizeFace, in parallel. So let's open up the help on parfeval. Let's open it in the help browser and see what it does.
So basically, what parfeval does is it lets us execute a function asynchronously on a parallel worker, or a parallel stream of execution. So all I have to do is use parfeval to run this function, recognizeFace, on a parallel stream of execution. Then I check to see whether it's done, and when it's done, I just grab the label of the face, the output of the classification algorithm, and print it in the top left-hand corner of the screen.
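The asynchronous pattern described here can be sketched like this; recognizeFace is the function name used in the demo, while the frame-handling helpers are placeholders for the video acquisition and tracking logic:

```matlab
% Sketch: kick off the slow recognition step on a worker, keep processing
% frames at full rate, and collect the label once the worker finishes.
recognitionFuture = parfeval(@recognizeFace, 1, faceImage);  % 1 output requested

while haveMoreFrames
    frame = getNextFrame();      % placeholder: grab the next video frame
    trackAndDisplay(frame);      % placeholder: tracking runs every frame

    % Non-blocking check on the asynchronous recognition call.
    if strcmp(recognitionFuture.State, 'finished')
        label = fetchOutputs(recognitionFuture);
        insertLabel(label);      % placeholder: draw the label on the frame
    end
end
```

Because parfeval returns a future immediately, the main loop never waits on recognition and the stream keeps running at roughly 30 frames per second.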
And let's run the script again to just see the whole process. So I have my face gallery here. And here's my video stream and the image passed into the face recognition algorithm. And as you can see, the stream is continuing to process at about 30 frames per second, and the recognition is running in parallel. And when that's done, it outputs the label of the classification on the top left hand corner of the screen.
So let's close out of that and summarize back in PowerPoint. So here, we use parfeval to execute the time consuming recognition task asynchronously in parallel. We also used object tracking just to maintain the location of the faces in the current frame. Now, this object tracking example ships with the Computer Vision System Toolbox, and you can find it in the MATLAB help.
Now, this brings us to our last topic, and that is face verification. Face verification is just slightly different from standard face recognition. Face verification tries to determine if two query images belong to the same person. And it's often used in security applications, like as biometric passwords. And the workflow is very, very similar to a lot of the things we've learned so far, with slight modifications.
So given two images, you still perform the feature extraction step. After performing feature extraction, you take the difference between the two feature vectors, or measure how the two feature vectors differ from each other. Then you run that through a classifier, or model. But this classifier is a two-class classifier that has been trained to look at the difference between a pair of feature vectors and decide whether the two images show the same person or different people.
So the output of this classifier is a plus 1 if the images are of the same person, and a minus 1 if they are not. Let's jump into MATLAB and see how we can implement face verification using a lot of the tools we've already learned. Let's quickly walk through the steps in the face verification workflow.
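In code, the verification step described above might look like this minimal sketch. Here `extractFaceFeatures` and `verifierModel` are hypothetical stand-ins for the demo's own feature extractor and its pre-trained two-class SVM; `predict` is the standard Statistics and Machine Learning Toolbox call.

```matlab
% Hypothetical face verification step for a pair of images.
v1 = extractFaceFeatures(img1);   % feature vector for image 1
v2 = extractFaceFeatures(img2);   % feature vector for image 2

d = (v1 - v2).^2;                 % element-wise squared difference

% Pre-trained two-class model: +1 -> same person, -1 -> different people.
isSamePerson = predict(verifierModel, d);
```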
So let me open the script that I've written here. Let me load in my data using imageSet, and let's display it to talk about the problem. I have three images here, and I want to compare all of these images against each other to determine which images belong to the same person, which in this case would be Vitek in the two images on the right.
So I want a plus 1 result when I compare those two images, and a minus 1 result when I compare the rest. The next thing I'm going to do is the same facial feature vector extraction that we did in a lot of our prior examples. So let me run that section of code.
And when that's done, let's look at the facial feature vectors that were created. Let's go to our workspace and open that up. You'll see I have three facial feature vectors, each 66,000-dimensional, one for each of those images. Now, the next thing I'm going to do, in this section here, is take the squared difference between the facial feature vector encodings for all those pairs of images.
For those three pairs, I'm just going to subtract the facial feature vectors from each other and take the square of that. I'm then going to pass that through a pre-trained model. Here I've used a different machine learning algorithm: an SVM, or support vector machine. I actually used the fitcsvm function to create this classifier.
So when I pass in the difference between the facial feature vectors for each pair, it gives me a label of plus 1 if the two images are from the same person and minus 1 if they're from different people. So let's run that. As you can see, the algorithm has done a good job. The two images in each of the first two rows do not match, because they're from different people. And for the pair in the last row, they do match, so I have a plus 1 result saying those images match.
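The verifier itself might have been trained along these lines. The variable names and data below are illustrative assumptions, not the demo's actual training code; `fitcsvm` and `predict` are the real Statistics and Machine Learning Toolbox functions mentioned above.

```matlab
% Hypothetical training sketch for the two-class verifier.
% diffs:  one squared-difference feature vector per row (one row per pair)
% labels: +1 for same-person pairs, -1 for different-person pairs
verifierModel = fitcsvm(diffs, labels);

% Verify a new pair of facial feature vectors v1 and v2.
predictedLabel = predict(verifierModel, (v1 - v2).^2);
```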
And this is just a quick teaser on face verification. You follow a lot of the same steps as the face recognition workflow we spent so much time on, except the classifier you train learns the difference between these facial feature vectors and decides whether they're from the same person or different people. So instead of having a 40-class classifier, as we did for the AT&T database, we have a two-class classifier for face verification.
Now, that brings us to the end of our demonstrations. Before concluding this webinar, let me show you one quick thing. What you're looking at is the MATLAB File Exchange. This is a place where lots of MATLAB users share their code and their ideas. Let me search for face recognition.
And you'll see there are about 100 submissions of different face recognition algorithms and source code that you can try, submitted by our users. A great way to learn about a topic is to go to MATLAB Central and see what the MATLAB community has already done in the field. As you can see, in face recognition they've done quite a bit, so there's a lot of source code you can leverage.
So, in conclusion, why use MATLAB for face recognition? We make it really easy to access and visualize your data. MATLAB has many methods and algorithms for both feature extraction and machine learning that you can leverage without having to write them yourself. MATLAB has great constructs for handling large data by offloading your computation to clusters. We also have easy-to-use constructs for parallel execution, like parfor and parfeval, which we saw in a couple of different demos.
And of course, through the MATLAB community, there's access to lots of examples of face recognition workflows on MATLAB Central that you can check out, and you can try some of the code that MATLAB users have submitted. Now, here are a few calls to action. Let us at MathWorks know what object recognition problems you're working on and trying to solve, so we can help you with them.
Try some of our new computer vision and machine learning capabilities in MATLAB. And if you have any additional questions, please send them to me at email@example.com. Please reference this webinar so I know where the questions came from. Thank you for watching this webinar.