Classify Land Cover Using Hyperspectral and Lidar Data

This example shows how to classify land cover using a fusion of hyperspectral and lidar data.

This example requires the Image Processing Toolbox™ Hyperspectral Imaging Library. You can install the Image Processing Toolbox Hyperspectral Imaging Library from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons. The Image Processing Toolbox Hyperspectral Imaging Library requires desktop MATLAB®, as MATLAB® Online™ and MATLAB® Mobile™ do not support the library.
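To confirm that the library is installed, you can list your installed add-ons, as in this optional sketch. The name-matching string is an assumption about how the add-on appears in the list.

% Optional check (sketch): warn if the Hyperspectral Imaging Library does
% not appear among the installed add-ons. The name match is an assumption.
addons = matlab.addons.installedAddons;
if ~any(contains(addons.Name,"Hyperspectral"))
    warning("Install the Image Processing Toolbox Hyperspectral Imaging Library from Add-On Explorer.")
end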

This example uses data from the MUUFL Gulfport Data Set [1][2]. The MUUFL Gulfport Data Set comprises data acquired simultaneously by airborne hyperspectral and lidar sensors over the University of Southern Mississippi – Gulfport campus in November 2010. Because the hyperspectral and lidar data were acquired simultaneously, they are already spatially registered. The hyperspectral data provides information about the spectral properties of the land cover, whereas the lidar data provides information about the elevation of the land cover above the ground. By fusing the two sources, you can classify land cover more accurately than with the hyperspectral data alone.

In this example, you perform these steps:

  1. Load, visualize, and preprocess hyperspectral and lidar data.

  2. Fuse the hyperspectral and lidar data into a multichannel image.

  3. Spatially split the fused multichannel image into training and test images.

  4. Train a support vector machine (SVM) classifier on the training image.

  5. Predict the class of land cover in the test image, and evaluate the classification accuracy.

Download Data Set

Download and unzip the data set.

zipFile = matlab.internal.examples.downloadSupportFile("image","data/gulfport_dataset.zip");
filepath = fileparts(zipFile);
unzip(zipFile,filepath)

Load and Visualize Hyperspectral and Lidar Data

Load the hyperspectral data into the workspace as a hypercube object. Visualize an RGB image of the hyperspectral data.

filename = fullfile(filepath,"gulfport_dataset","hyperspectralData.dat");
hcube = hypercube(filename);
rgbImg = colorize(hcube,Method="rgb",ContrastStretching=true);
figure
imagesc(rgbImg)
axis equal
title("RGB Image of Hyperspectral Data")

Load the lidar data into the workspace as a lasFileReader (Lidar Toolbox) object. Read the point cloud from the lidar data and visualize the locations of the points in the point cloud.

filename = fullfile(filepath,"gulfport_dataset","lidarData.las");
lasReader = lasFileReader(filename);
ptCloud = readPointCloud(lasReader);
figure
pcshow(ptCloud.Location)
title("Point Cloud of Lidar Data")

Preprocess Hyperspectral and Lidar Data

Extract the data cube from the hyperspectral data, and rescale its values to the range [0, 1].

hyperspectralData = hcube.DataCube;
[rows,cols,bands] = size(hyperspectralData);
normHyperspectralData = rescale(hyperspectralData);

The lidar point cloud is unorganized: the data is a list of 3-D coordinates, in which each row represents a single point. To fuse the hyperspectral data with the lidar data, however, you must convert the lidar data to an organized point cloud. An organized point cloud stores the x-, y-, and z-coordinates of the points on separate pages of a 3-D array, arranging the points according to their spatial relationships. For more information on organized and unorganized point clouds, see What are Organized and Unorganized Point Clouds? (Lidar Toolbox).

Organizing a point cloud requires information about the spatial relationships between the points. Because the hyperspectral and lidar data are already registered, each point in the point cloud corresponds to a pixel location in the hyperspectral image. Manually organize the point cloud using this correspondence.

lidarData = ptCloud.Location;
lidarData = reshape(lidarData,rows,cols,[]);
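The reshape assumes that the point cloud contains exactly one return per hyperspectral pixel, stored in an order consistent with the image pixels. A minimal sanity check under that assumption:

% Sanity check (sketch): the reshape requires one lidar point per
% hyperspectral pixel.
assert(ptCloud.Count == rows*cols, ...
    "Number of lidar points does not match the hyperspectral image size.")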

The three pages of the 3-D array lidarData contain the x-, y-, and z-coordinates of the points in the point cloud. The x- and y-coordinates specify the spatial location of each point on the ground, while the z-coordinate specifies the elevation of the point above the ground. Rescale the z-coordinate of the lidar data.

normLidarData = rescale(lidarData(:,:,3));

Create Fused Hyperspectral and Lidar Multichannel Image

The hyperspectral data provides information about the spectral properties of the land cover, whereas the z-coordinate of the lidar data provides information about the elevation of the land cover above the ground. Fusing the two sources makes both types of information available to the classifier. Set fuseData to true to fuse the hyperspectral and lidar data into a multichannel image, or to false to use the hyperspectral data alone.

fuseData = true 
fuseData = logical
   1

if fuseData
    fusedData = cat(3,normHyperspectralData,normLidarData);
else
    fusedData = normHyperspectralData;
end
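When fuseData is true, the fused image contains one additional channel, the rescaled lidar elevation, appended to the hyperspectral bands. You can confirm the channel count.

% The fused image has bands+1 channels when fuseData is true, and bands
% channels otherwise.
numChannels = size(fusedData,3)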

Load Ground Truth Label Image

Load the ground truth labels into the workspace.

filename = fullfile(filepath,"gulfport_dataset","labelData.mat");
load(filename,"labelData")
figure
imagesc(labelData)
axis equal
colorbar
title("Labels")

Create Training and Test Data Sets

Split the fused multichannel image into independent training and test images. Select the left 70% of the fused multichannel image as the training image and the right 30% as the test image.

trainPercent = 0.7;
trainCol = floor(cols*trainPercent);
testCol = trainCol + 1;
trainImage = fusedData(:,1:trainCol,:);
trainColsNum = size(trainImage,2); 
testImage = fusedData(:,testCol:end,:);
testColsNum = size(testImage,2); 
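A quick check confirms that the training and test splits together cover every column of the fused image exactly once.

% Sanity check: the two splits partition the image columns.
assert(trainColsNum + testColsNum == cols)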

Split the label image similarly to get a training label image and a test label image. Visualize the training and test label images separately.

trainLabels = labelData(:,1:trainCol);
testLabels = labelData(:,testCol:end);
figure
tiledlayout(1,2)
nexttile
imagesc(trainLabels)
axis equal
colorbar
title("Training Labels")
nexttile
imagesc(testLabels)
axis equal
colorbar
title("Test Labels")

Prepare Data for Classification

To use the training and test images with a classifier model, reshape the images such that each row represents a pixel and each column represents a channel of the fused multichannel image.

trainData = reshape(trainImage,rows*trainColsNum,[]);
testData = reshape(testImage,rows*testColsNum,[]);

Reshape the training label image and test label image into single columns of data.

trainLabels = reshape(trainLabels,rows*trainColsNum,[]);
testLabels = reshape(testLabels,rows*testColsNum,[]);

Remove the unlabeled pixels, which have the label value -1, from the training data.

unlabelTrain = trainLabels==-1;
trainData = trainData(~unlabelTrain,:);
trainLabels = trainLabels(~unlabelTrain);
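Optionally, examine the class balance of the training labels, for example with the tabulate (Statistics and Machine Learning Toolbox) function.

% Optional (sketch): display the count and percentage of each class in the
% training labels.
tabulate(trainLabels)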

Train SVM Classifier

Train a multiclass SVM classifier on the training data and labels by using the fitcecoc (Statistics and Machine Learning Toolbox) function.

rng("default")
model = fitcecoc(trainData,trainLabels,Learners="svm")
model = 
  ClassificationECOC
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: [1 2 3 4 5 6 7 8 9 10 11]
           ScoreTransform: 'none'
           BinaryLearners: {55×1 cell}
               CodingName: 'onevsone'


  Properties, Methods
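By default, fitcecoc trains linear SVM binary learners with one-versus-one coding. To customize the binary learners, you can pass a templateSVM (Statistics and Machine Learning Toolbox) object instead, as in this optional sketch. The Gaussian kernel shown here is an illustration, not part of this example, and can be considerably slower to train.

% Optional variant (sketch): customize the binary SVM learners, for
% example with a Gaussian kernel and standardized predictors.
t = templateSVM(KernelFunction="gaussian",Standardize=true);
% modelCustom = fitcecoc(trainData,trainLabels,Learners=t);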

Predict Labels of Test Data

Predict the labels of the test data using the trained multiclass SVM model.

predictedLabels = predict(model,testData);

Evaluate Classification Accuracy

Compute the accuracy of the predicted labels over the pixels for which ground truth labels are available. If you fuse the hyperspectral and lidar data, the accuracy is 80.67%. If you instead use the hyperspectral data alone, by setting fuseData to false, the accuracy is 70.95%.

% Exclude pixels without ground truth labels from the evaluation.
unlabelTest = testLabels==-1;
predictedLabels(unlabelTest) = -1;
accuracy = nnz(predictedLabels(~unlabelTest)==testLabels(~unlabelTest))/numel(testLabels(~unlabelTest))
accuracy = 0.8067
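To look beyond overall accuracy, you can optionally compute a confusion matrix over the labeled test pixels by using the confusionmat (Statistics and Machine Learning Toolbox) function.

% Optional (sketch): per-class performance on the labeled test pixels.
% Rows of C are true classes; columns are predicted classes.
C = confusionmat(testLabels(~unlabelTest),predictedLabels(~unlabelTest));
perClassAccuracy = diag(C)./sum(C,2);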

Reshape the test labels and the predicted labels into images and visualize them.

testLabels = reshape(testLabels,rows,[]);
predictedLabels = reshape(predictedLabels,rows,[]);
figure
tiledlayout(1,2)
nexttile
imagesc(testLabels)
axis equal
colorbar
title("Ground Truth Labels")
nexttile
imagesc(predictedLabels)
axis equal
colorbar
title("Predicted Labels")

References

[1] P. Gader, A. Zare, R. Close, J. Aitken, and G. Tuell. “MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.” Technical Report REP-2013-570. University of Florida, Gainesville, 2013. https://github.com/GatorSense/MUUFLGulfport/tree/master.

[2] X. Du and A. Zare. “Technical Report: Scene Label Ground Truth Map for MUUFL Gulfport Data Set.” University of Florida, Gainesville, 2017. http://ufdc.ufl.edu/IR00009711/00001.

See Also

hypercube | lasFileReader (Lidar Toolbox) | fitcecoc (Statistics and Machine Learning Toolbox)