
Create and Smooth Orthomosaic from Orthophotos

This example shows how to create an orthomosaic using feature-based image registration techniques for a given set of orthophotos. In the previous Semantic Segmentation of Orthophotos example, you obtained ortholabels for orthophotos captured from a UAV flight over a city. Now you can stitch these orthophotos along with their corresponding ortholabels to create a map for the urban environment.

Background

Feature detection and matching are powerful techniques used in many computer vision applications, such as image registration, tracking, and object detection. In this example, you use feature-based techniques to automatically stitch together a set of orthophotos. Orthophotos are geometrically corrected images that have a uniform scale, which enables them to be stitched easily with minimal changes.

Image registration is the process of aligning two images into a single integrated image. For more information about image registration, see Image Registration. The procedure for image stitching is an extension of the feature-based image registration process. Instead of registering a single pair of images, you register multiple image pairs, so that the mapped features align with each other to form an orthomosaic.

The process of stitching two images is:

1. Take two images and find the matching features between the two images.

2. Compute the affine transform for image 2, such that its matching features overlap those in image 1.

3. Combine both of the images into a single image with their features overlapping. The resulting image is slightly larger, as it now contains the unique data from both images.

Each time you repeat this process, the stitched output becomes larger, until the final output incorporates the features from all of the images.
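
For reference, here is a minimal sketch of stitching a single pair, assuming two overlapping RGB orthophotos, img1 and img2, are already in the workspace. It uses the same Computer Vision Toolbox functions that this example later applies across the full image set.

% Minimal sketch: stitch one overlapping image pair (assumes img1 and img2 in workspace)
gray1 = im2gray(img1);
gray2 = im2gray(img2);

% Detect and extract KAZE features in both images
[features1,points1] = extractFeatures(gray1,detectKAZEFeatures(gray1));
[features2,points2] = extractFeatures(gray2,detectKAZEFeatures(gray2));

% Match features and estimate the affine transformation that maps image 2 onto image 1
indexPairs = matchFeatures(features2,features1,Unique=true);
tform = estimateGeometricTransform2D(points2(indexPairs(:,1),:), ...
    points1(indexPairs(:,2),:),"affine");

% Compute an output view large enough to hold both images
[xlim2,ylim2] = outputLimits(tform,[1 size(gray2,2)],[1 size(gray2,1)]);
xLimits = [min(1,xlim2(1)) max(size(gray1,2),xlim2(2))];
yLimits = [min(1,ylim2(1)) max(size(gray1,1),ylim2(2))];
outView = imref2d([round(diff(yLimits)) round(diff(xLimits))],xLimits,yLimits);

% Warp both images into the common view and overlay image 2 onto image 1
warped1 = imwarp(img1,affine2d(eye(3)),OutputView=outView);
warped2 = imwarp(img2,tform,OutputView=outView);
stitchedPair = warped1;
hasColor2 = repmat(any(warped2,3),[1 1 3]);
stitchedPair(hasColor2) = warped2(hasColor2);
imshow(stitchedPair)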

Setup

This example uses a pretrained DeepLab v3+ network with a ResNet-18 backbone. For more information on the pretrained network, see the previous example Semantic Segmentation of Orthophotos.

The network has been trained for binary semantic segmentation to identify roofs, classified as Roof, and nonroofs, classified as NonRoof.

If you have not run the Semantic Segmentation of Orthophotos example, you can download that data for this example by setting ranPrevEx to false. Otherwise, set ranPrevEx to true.

ranPrevEx = "false";

If ranPrevEx is false, download the required data.

if strcmp(ranPrevEx,"false")
    orthoDataURL = "https://ssd.mathworks.com/supportfiles/uav/data/orthoData.zip";
    orthoDataFolder = fullfile(tempdir,"preprocessedData");
    orthoDataZip = fullfile(orthoDataFolder,"orthoData.zip"); 
    waitMessage = "Downloading previously processed orthophotos and ortholabels data (120 MB)...";
    exampleHelperDownloadData(orthoDataURL,orthoDataFolder,orthoDataZip,waitMessage);
end

Download the pretrained network.

pretrainedURL = 'https://ssd.mathworks.com/supportfiles/uav/data/deeplabv3plusResnet18Unreal.zip';
pretrainedFolder = fullfile(tempdir,'pretrainedNetwork');
pretrainedNetworkZip = fullfile(pretrainedFolder,'deeplabv3plusResnet18Unreal.zip'); 
waitMessage = "Downloading pretrained network (60 MB) ...";
exampleHelperDownloadData(pretrainedURL,pretrainedFolder,pretrainedNetworkZip,waitMessage);

Note that a CUDA®-enabled NVIDIA® GPU is highly recommended for running this example. Use of a GPU requires Parallel Computing Toolbox™.
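
To confirm whether a supported GPU is available before running the segmentation steps, you can use a quick check such as this sketch. It assumes a MATLAB release that provides the canUseGPU function.

% Check for a supported GPU; the example otherwise falls back to the CPU
if canUseGPU
    disp("Supported GPU detected. Computations can use GPU acceleration.")
else
    disp("No supported GPU detected. Running on the CPU may be significantly slower.")
end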

Load Orthophotos and Ortholabels

Load the orthophotos and ortholabels to stitch together. Each orthophoto with its respective ortholabel represents a smaller section of the map to stitch together to obtain a final large map.

If you ran the previous example, two dialog boxes appear. Select the orthophotos folder and then the ortholabels folder that you generated in the previous example. If you set ranPrevEx to false, the example instead creates the datastores from the downloaded orthophoto and ortholabel data, and no dialog boxes appear.

if strcmp(ranPrevEx,"true")
    orthophotoFolderPath = uigetdir(".","Select orthophotos folder to be read");
    imdsImg = imageDatastore(orthophotoFolderPath);
    ortholabelFolderPath = uigetdir(".","Select ortholabels folder to be read");
    imdsLbl = imageDatastore(ortholabelFolderPath);
else
    orthophotoFolderPath = fullfile(orthoDataFolder,"orthophotos");
    imdsImg = imageDatastore(orthophotoFolderPath);
    ortholabelFolderPath = fullfile(orthoDataFolder,"ortholabels");
    imdsLbl = imageDatastore(ortholabelFolderPath);
end

Sort the files by index, which is the order of their acquisition by the UAV. This ensures that successive images have overlapping regions, which is essential for feature matching.

imdsImg.Files = exampleHelperSortFilepathsByIndex(imdsImg.Files);
imdsLbl.Files = exampleHelperSortFilepathsByIndex(imdsLbl.Files);
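
exampleHelperSortFilepathsByIndex is a helper function shipped with this example. As a hypothetical sketch only, an index-based sort could extract the trailing number from each file name, assuming names such as orthophoto_12.png; the shipped helper may differ.

% Hypothetical sketch: sort file paths by the trailing number in each file name
filepaths = string(imdsImg.Files);
[~,names] = fileparts(filepaths);
frameIndices = str2double(regexp(names,"\d+$","match","once"));
[~,order] = sort(frameIndices);
sortedFilepaths = cellstr(filepaths(order));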

Initialize the colormap, class names, and class indices.

cmap = [60,40,222;...
        128,0,0]./255;
classNames = ["NonRoof","Roof"];
classIndexes = [0 1];

Display a montage of the orthophotos to be stitched.

orthophotoMontage = double(montage(imdsImg).CData);
title("Montage of Input Orthophotos");

Figure contains an axes object. The axes object with title Montage of Input Orthophotos contains an object of type image.

Obtain a montage of the ortholabels to be stitched.

figure("Visible","off");
ortholabelMontage = montage(imdsLbl,cmap).CData;

Show a montage of the ortholabels overlaid on the orthophotos.

Set the transparency for the overlay.

transparency = 0.4;

Compute the overlaid montage.

orthosOverlaid = transparency*orthophotoMontage/255+(1-transparency)*ortholabelMontage;

Show the overlaid montage.

figure;
imshow(orthosOverlaid);
title("Montage of Input Ortholabels Overlaying Orthophotos");

Figure contains an axes object. The axes object with title Montage of Input Ortholabels Overlaying Orthophotos contains an object of type image.

Register Orthophoto Pairs

To create the orthomosaic, start by registering successive image pairs using these steps:

  1. Detect and match features between I(n) and I(n-1).

  2. Estimate the geometric transformation T(n) that maps I(n) to I(n-1).

  3. Compute the transformation that maps I(n) into the orthomosaic image as T(n)×T(n-1)×...×T(1).

Here, I(n) is the nth orthophoto from the set of orthophotos in the montage. The Background section provides some visual references for this procedure.

Use the detectKAZEFeatures (Computer Vision Toolbox) function to find the matching features between each orthophoto pair. For more information about the different types of feature detectors and their qualities, see Local Feature Detection and Extraction (Computer Vision Toolbox).

For this detector, set the confidence value to 99.9. Higher confidence values result in slower processing speed, but produce more accurate transforms. You can use a high value for the maximum number of trials to further improve the accuracy of the transforms. Set the maximum number of trials to 2000.

confidenceValue = 99.9;
maxNumTrials = 2000;

Detect Matching Features and Estimate Transforms

Set the random number generator seed to ensure consistency with the example outputs.

rng(250);

Read the first image from the orthophoto set and convert it to grayscale in preparation for feature detection.

I = readimage(imdsImg,1);
grayImage = im2gray(I);

Detect and extract the features from the orthophoto.

points = detectKAZEFeatures(grayImage);
[features,points] = extractFeatures(grayImage,points);

Initialize the transformations for all the images as 2-D affine identity matrices.

clear tforms;
numImages = numel(imdsImg.Files);
tforms(numImages) = affine2d(eye(3));

Initialize a variable to hold image sizes.

imageSize = zeros(numImages,2);
imageSize(1,:) = size(grayImage);

For each image, read and convert the image to grayscale and detect its features. Match the points of the features between consecutive orthophotos and use those points to determine the geometric transformation matrices for each pair of orthophotos.

% Show progress bar
f = waitbar(0,"Please wait while transformation matrices are being computed ...");

% Iterate over remaining image pairs
for n = 2:numImages
    
    % Store points and features for I(n-1).
    pointsPrevious = points;
    featuresPrevious = features;
        
    % Read I(n).
    I = readimage(imdsImg, n);
    
    % Convert image to grayscale.
    grayImage = im2gray(I);    
    
    % Save image size.
    imageSize(n,:) = size(grayImage);
    
    % Detect and extract KAZE features for I(n).
    points = detectKAZEFeatures(grayImage);    
    [features,points] = extractFeatures(grayImage,points);
  
    % Find correspondences between I(n) and I(n-1).
    indexPairs = matchFeatures(features,featuresPrevious,Unique=true);
    
    % Get matching points
    matchedPoints = points(indexPairs(:,1), :);
    matchedPointsPrev = pointsPrevious(indexPairs(:,2),:);        
    
    % Estimate the transformation between I(n) and I(n-1).
    tforms(n) = estimateGeometricTransform2D(matchedPoints,matchedPointsPrev, ...
        "affine",Confidence=confidenceValue,MaxNumTrials=maxNumTrials);
    
    % Compute T(n) * T(n-1) * ... * T(1)
    tforms(n).T = tforms(n).T * tforms(n-1).T;

    % Update the progress bar
    progress = n/numImages;
    waitbar(progress,f,sprintf('Finding affine transform matrix for frame number [%d/%d] - %.2f%% Complete\n',n,numImages,progress*100));
end

% Close progress bar
close(f);

Find Central Image

All of the transformations in tforms are relative to the first image. This is a convenient way to perform the image registration procedure because it enables sequential processing of all the images. However, using the first image as the start of the orthomosaic can produce a less accurate orthomosaic because it can distort most of the images that form the orthomosaic. To improve the accuracy of the orthomosaic, modify the transformations so that the center of the scene is the least distorted orthophoto. To accomplish this, invert the transform for the center image and apply that transform to all the other transforms.

First, find the output limits for each transform. Then, use the output limits to automatically find the image that is approximately in the center of the scene.

Compute the output limits for each transform.

xlim = zeros(numel(tforms),2); 
ylim = zeros(numel(tforms),2);
for i = 1:numel(tforms)           
    [currXLim, currYLim] = outputLimits(tforms(i),[1 imageSize(i,2)],[1 imageSize(i,1)]);
    xlim(i,:) = currXLim;
    ylim(i,:) = currYLim;
end

Next, compute the average x-limits and y-limits for each transform, and find the image that is in the center.

avgXLim = mean(xlim,2);
avgYLim = mean(ylim,2);
avgLims = [avgXLim avgYLim];
[~,idx1] = sort(avgLims(:,1));
[~,idx2] = sort(avgLims(idx1,2));
idx = idx1(idx2);
centerIdx = floor((numel(tforms)+1)/2);
centerImageIdx = idx(centerIdx);

Apply Inverse Transform of Central Image

Finally, apply the inverse transform of the center image to all the other transforms.

Tinv = invert(tforms(centerImageIdx));
for i = 1:numel(tforms)    
    tforms(i).T = tforms(i).T * Tinv.T;
end

Initialize the Orthomosaic

To initialize the orthomosaic, you must determine the spatial limits of the orthomosaic. To do so, you must compute the minimum and maximum output limits over all transformations. These values help you compute the size of the final orthomosaic.

Compute the output limits of each transformed image.

for i = 1:numel(tforms)           
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms(i),[1 imageSize(i,2)],[1 imageSize(i,1)]);
end

Find the minimum and maximum output limits values across all transformed orthophotos.

xMin = min(xlim(:));
xMax = max(xlim(:));
yMin = min(ylim(:));
yMax = max(ylim(:));

Calculate and round the xy-ranges to determine the width and height of the final orthomosaic.

width  = round(xMax-xMin);
height = round(yMax-yMin);

Create empty arrays with the computed width and height, one for the orthophoto images and the other for the ortholabels.

orthomosaicImages = zeros([height width 3],like=I);
orthomosaicLabels = zeros([height width 1],like=I);

Compute the Transformed Orthophotos

You must transform the orthophotos so that you can place them into the orthomosaic.

Use the imwarp function to map images into the orthomosaic, and a vision.AlphaBlender object to overlay the images together. Output the transformed orthoimages, transformed ortholabels, and stitched maps into these folders:

  • warpedOutputImages: Contains the transformed orthophotos.

  • warpedOutputLabels: Contains the transformed ortholabels.

  • stitchedOutput: Contains the stitched orthophoto output and stitched ortholabel output as a map of the city. One pixel in each of these stitched maps corresponds to reductionFactor/meterToPixel meters in the real world. For more information, see the Survey Urban Environment Using UAV example.

Create a 2-D spatial reference object defining the size of the orthomosaic.

xLimits = [xMin xMax];
yLimits = [yMin yMax];
orthomosaicView = imref2d([height width],xLimits,yLimits);

Initialize variables to hold the images, the labels, and their file paths. Use these variables to create montages.

warpedImages = cell(1,numImages);
warpedImagePaths = cell(1,numImages);
warpedLabels = cell(1,numImages);
warpedLabelPaths = cell(1,numImages);

Create an output folder, warpedOutputImages, in which to save the transformed images.

warpedOutputImageFolderPath = "warpedOutputImages";
exampleHelperMakeFolder(warpedOutputImageFolderPath);

Create an output folder, warpedOutputLabels, in which to save the transformed labels.

warpedOutputLabelFolderPath = "warpedOutputLabels";
exampleHelperMakeFolder(warpedOutputLabelFolderPath);

Compute the transformed image and transformed label for each image-label pair using your estimated transformation matrices.

% Show progress bar
f = waitbar(0,"Please wait while the transformed orthophotos are being computed ...");

% Compute the warped images and warped labels using the transformation matrices
% obtained earlier
for i = 1:numImages
    
    % Read current image
    I = readimage(imdsImg, i);

    % Obtain its segmap
    C = readimage(imdsLbl, i);
   
    % Transform I into the orthomosaic view.
    warpedImage = imwarp(I, tforms(i), OutputView=orthomosaicView);
    warpedImages{i} = warpedImage;
        
    % Obtain warped label
    warpedLabel = imwarp(C, tforms(i), OutputView=orthomosaicView);
    warpedLabels{i} = warpedLabel;

    % Get the name of the current image
    [~,name,~] = fileparts(imdsImg.Files{i});

    % Save warped image
    warpedImagePath = fullfile(warpedOutputImageFolderPath,sprintf("%s_warped.png",name));
    warpedImagePaths{i} = warpedImagePath;
    imwrite(warpedImage,warpedImagePath);

    % Save warped labels
    warpedLabelPath = fullfile(warpedOutputLabelFolderPath,sprintf("%s_warped.png",name));
    warpedLabelPaths{i} = warpedLabelPath;
    imwrite(warpedLabel,warpedLabelPath);

    % Update the progress bar
    progress = i/numImages;
    waitbar(progress,f,sprintf('Compute the transformed orthophoto for frame number [%d/%d] - %.2f%% Complete\n',i,numImages,progress*100));
end

% Close progress bar
close(f);

Show a montage of the saved transformed images.

warpedOrthophotoMontage = double(montage(warpedImagePaths).CData);
title("Montage of Transformed Orthophotos");

Figure contains an axes object. The axes object with title Montage of Transformed Orthophotos contains an object of type image.

Obtain a montage of the saved transformed labels.

figure("Visible","off");
warpedOrtholabelMontage = montage(warpedLabelPaths,cmap).CData;

Show a montage of the transformed ortholabels overlaid on the transformed orthophotos. Compute the montage ortholabel overlay.

warpedOrthosOverlaid = transparency*warpedOrthophotoMontage/255 + (1-transparency)*warpedOrtholabelMontage;

Show the overlaid montage.

figure;
imshow(warpedOrthosOverlaid);
title("Montage of Transformed Ortholabels Overlaying Transformed Orthophotos");

Figure contains an axes object. The axes object with title Montage of Transformed Ortholabels Overlaying Transformed Orthophotos contains an object of type image.

Create and Save Orthomosaic

This example uses the same algorithm to create the orthomosaic as the Feature Based Panoramic Image Stitching (Computer Vision Toolbox) example. However, that example stitches ground-level perspective images rather than orthographic images, so its algorithm assumes that every pixel in each transformed image has content. This is not necessarily true for transformed orthographic images, which can contain empty regions. As a result, using that algorithm for this application without modification leads to missing pixels in the stitched image. The results would be correct only if the transformation matrices were perfect, which is not achievable when computing matching points and transformation matrices automatically.

To prevent missing areas, simplify the algorithm. While merging images 1 and 2, for each pixel take the color from whichever image has content at that location. If both images have content at that location, such as in the overlap region, blend the two colors. Set this blend value using the blendFactor variable. The default value of 1 ensures that image 2 completely overwrites image 1 in areas of overlap.

Create and save the outputs of the stitching algorithm for the transformed orthophotos.

First, create two blenders, one each for images and labels.

blenderImg = vision.AlphaBlender(Operation='Blend', ...
    OpacitySource='Input port');
blenderLabels = vision.AlphaBlender(Operation='Binary mask', ...
    MaskSource='Input port');

Then, specify the blend factor for overlapping pixels.

This value applies to the second image: a value of 1 takes the color entirely from the second image, a value of 0 takes it entirely from the first image, and intermediate values produce a weighted blend of the two.

blendFactor = 1;

Now, stitch the warped images and their labels together sequentially.

% Obtain the 1st warped image and its warped labels
I1 = warpedImages{1};
L1 = warpedLabels{1};

% Show progress bar
f = waitbar(0,"Please wait while the transformed orthophotos are being stitched ...");

% Stitch images (and their labels) one by one
for idx = 2:numImages
    
    % Get next image (and its label) in the dataset
    I2 = warpedImages{idx};
    L2 = warpedLabels{idx};

    % Find where I1 has no color
    maskNoColorI1 = I1(:,:,1)==0 & I1(:,:,2)==0 & I1(:,:,3)==0;

    % Find where I2 has no color
    maskNoColorI2 = I2(:,:,1)==0 & I2(:,:,2)==0 & I2(:,:,3)==0;

    % Where I1 has no color, take I2
    mask = maskNoColorI1;

    % Where both images have color, blend the colors for the image and take
    % I2 for the labels. Blending applies only to the image, not the labels,
    % because labels must remain integer class indices while image colors
    % can be floating-point values.
    maskImg = mask + (~maskNoColorI1 & ~maskNoColorI2)*blendFactor;
    maskLabels = mask | (~maskNoColorI1 & ~maskNoColorI2);

    % Stitch image
    stitchedImg = step(blenderImg, double(I1), double(I2), maskImg);
    
    % Stitch labels
    stitchedLabels = step(blenderLabels, L1, L2, maskLabels);

    % Make I1 the stitched image
    I1 = stitchedImg;

    % Make L1 the stitched labels
    L1 = stitchedLabels;

    % Update the progress bar
    progress = idx/numImages;
    waitbar(progress,f,sprintf('Stitched transformed orthophoto number [%d/%d] - %.2f%% Complete\n',idx,numImages,progress*100));
end

% Close progress bar
close(f);

Show the stitched orthomosaic.

figure;
imshow(stitchedImg/255);
title("Stitched Orthomosaic");

Figure contains an axes object. The axes object with title Stitched Orthomosaic contains an object of type image.

Show the stitched labels overlaid on the stitched orthomosaic using a colormap.

stitchedOverlaid = labeloverlay(stitchedImg/255,categorical(stitchedLabels),Colormap=cmap,Transparency=0.4);
figure;
imshow(stitchedOverlaid);
title("Stitched Orthomosaic - Labels Overlaid");
exampleHelperPixelLabelColorbar(cmap, classNames);

Figure contains an axes object. The axes object with title Stitched Orthomosaic - Labels Overlaid contains an object of type image.

If an output folder for stitched images does not already exist, create one.

stitchedOutputFolderPath = "stitchedOutput";
exampleHelperMakeFolder(stitchedOutputFolderPath);

Save the stitched orthomosaic, stitched labels, and labels overlaid on the stitched orthomosaic as images.

imwrite(stitchedImg/255,fullfile(stitchedOutputFolderPath,"stitchedOrthophoto.png"));
imwrite(stitchedLabels,fullfile(stitchedOutputFolderPath,"stitchedOrtholabel.png"));
imwrite(stitchedOverlaid,fullfile(stitchedOutputFolderPath,"stitchedOrthosOverlaid.png"));

Selective Smoothing of Orthomosaic

In the orthomosaic generated from the UAV flight over the US City Block scene, you might notice stitch artifacts because the automated stitching is not as accurate as manual stitching. You can use smoothing techniques, such as selective smoothing [1], to reduce these line artifacts.

Background

In traditional smoothing, the same smoothing filter is applied over every pixel in the original image. In selective smoothing, you apply the smoothing filter to only those pixels that meet certain conditions. This approach retains more of the original content. Selective smoothing consists of these stages:

  1. Choose a filter.

  2. Determine a pixel selection logic.

  3. Apply the filter over pixels that meet the selection logic.

Various filter types are capable of selective smoothing. For example, consider a simple 5-by-5 averaging filter, in which every element is 1/25 (0.04).

[Figure averagingFilter.PNG: the 5-by-5 averaging filter]

This figure shows what an image looks like before and after applying this filter using traditional smoothing.

[Figure smoothedImage.png: an image before and after traditional smoothing]

This figure shows the same filter, used to selectively smooth only those pixels with a color value greater than or equal to the mean color of the photo.

[Figure selectivelySmoothedImage.png: the selectively smoothed image]

Note the difference between the two smoothed images. You can apply the selective approach to the stitched orthomosaic.
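
As a minimal sketch of that mean-color variant, assuming img is an RGB image already in the workspace, the selective smoothing could look like this.

% Minimal sketch: smooth only pixels at or above the mean color of the image
F = ones(5)/25;                              % 5-by-5 averaging filter
smoothedAll = imfilter(img,F,"symmetric");   % traditional smoothing of every pixel
meanColor = mean(img,[1 2]);                 % mean value of each color channel
selected = all(img >= meanColor,3);          % pixels meeting the selection logic
selected3 = repmat(selected,[1 1 3]);
imgSelSmoothed = img;
imgSelSmoothed(selected3) = smoothedAll(selected3);
imshow(imgSelSmoothed)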

Read and Visualize Input Stitched Orthomosaic with Artifacts

Read the stitched orthomosaic into the workspace.

orthomosaic = imread(fullfile(stitchedOutputFolderPath,"stitchedOrthophoto.png"));

Visualize the stitched orthomosaic.

figure
imshow(orthomosaic)
title("Original Stitched Orthomosaic with Line Artifacts")

Figure contains an axes object. The axes object with title Original Stitched Orthomosaic with Line Artifacts contains an object of type image.

Choose Filter

Create a 5-by-5 averaging filter.

F = ones(5)/25
F = 5×5

    0.0400    0.0400    0.0400    0.0400    0.0400
    0.0400    0.0400    0.0400    0.0400    0.0400
    0.0400    0.0400    0.0400    0.0400    0.0400
    0.0400    0.0400    0.0400    0.0400    0.0400
    0.0400    0.0400    0.0400    0.0400    0.0400

Determine Pixel Selection Logic

To eliminate the pixels causing the line artifacts, select only pixels that meet both of these requirements:

  1. In the raw image, the pixel has a color value less than or equal to your specified upper bound in every color channel.

  2. After perturbing the image, the color value at the original pixel location increases by more than your specified lower bound in at least one color channel. For this step, you perturb the original image in eight ways, which replicates looking at the eight neighboring pixels. If the color increase for any neighbor exceeds the specified lower bound, the original pixel meets this requirement.

Select the upper bound color for the first requirement.

upperBoundRed = 100;
upperBoundGreen = 100;
upperBoundBlue = 100;

Get the locations of the pixels that meet the first requirement.

redUnderUpperBound = orthomosaic(:,:,1) <= upperBoundRed;
greenUnderUpperBound = orthomosaic(:,:,2) <= upperBoundGreen;
blueUnderUpperBound = orthomosaic(:,:,3) <= upperBoundBlue;
requirement1 = redUnderUpperBound & greenUnderUpperBound & blueUnderUpperBound;

Select the lower bound color for the second requirement.

lowerBoundRed = 50;
lowerBoundGreen = 50;
lowerBoundBlue = 50;

Determine the perturbations along the x- and y-axes of the image required for the second requirement.

perX = -1:1;
perY = perX;

Pad the array by one pixel on all sides so that boundary pixels have zero-value neighbors.

orthomosaicPadded = padarray(orthomosaic,[1 1]);
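
As a small illustration of how this padding, combined with the offset indexing in the loop below, produces the eight one-pixel neighbor shifts, consider a toy matrix. The indexing pattern matches the loop that follows.

% Toy illustration: padding by 1 and offsetting the indices shifts the array
% by one pixel, with zeros filling in from the border
A = magic(4);
Apadded = padarray(A,[1 1]);
dx = 1; dy = 0;                                   % shift by one row
Ashifted = Apadded(2-dx:end-1-dx,2-dy:end-1-dy);  % same size as A, shifted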

Then, for each perturbation, find the pixel locations that meet the second requirement.

requirement2 = zeros(size(requirement1));
for i=1:numel(perX)
    for j=1:numel(perY)

        % Continue if no perturbation
        if perX(i)==0 && perY(j)==0
            continue;
        end

        % Transform the image to get perturbed image
        orthomosaicPerturbed = orthomosaicPadded(2-perX(i):end-1-perX(i),2-perY(j):end-1-perY(j),:);
        
        % Find where the pixel color increased more than the lower bound
        % per color channel
        redOverLowerBound = orthomosaicPerturbed(:,:,1)-orthomosaic(:,:,1) > lowerBoundRed;
        greenOverLowerBound = orthomosaicPerturbed(:,:,2)-orthomosaic(:,:,2) > lowerBoundGreen;
        blueOverLowerBound = orthomosaicPerturbed(:,:,3)-orthomosaic(:,:,3) > lowerBoundBlue;
        orthomosaicPerturbedPass = redOverLowerBound | greenOverLowerBound | blueOverLowerBound;

        % Add on to the existing passed pixel locations
        requirement2 = requirement2 | orthomosaicPerturbedPass;
    end
end

Finally, obtain the pixel locations that meet both requirements.

bothReqMet = requirement1 & requirement2;

Apply Filter Over Pixels Meeting Selection Logic

First, apply smoothing to all pixels in the original image to obtain a traditionally smoothed image.

orthomosaicSmoothed = imfilter(orthomosaic,F,"symmetric");

Then, for each pixel in the original image that meets both requirements, replace its color with the corresponding averaged color from the traditionally smoothed image.

blender = vision.AlphaBlender(Operation="Binary mask",MaskSource="Input port");
orthomosaicSelSmoothed = step(blender, orthomosaic,uint8(orthomosaicSmoothed),bothReqMet);

Finally, show the pixel locations that meet both requirements, and the selectively smoothed image with those pixels affected.

Show pixel locations that meet both requirements.

figure
imshow(bothReqMet)
title("Pixels Affected")

Figure contains an axes object. The axes object with title Pixels Affected contains an object of type image.

Show the orthomosaic with the selected pixels smoothed.

figure
imshow(orthomosaicSelSmoothed)
title("Selectively Smoothed Orthomosaic")

Figure contains an axes object. The axes object with title Selectively Smoothed Orthomosaic contains an object of type image.

Save the selectively smoothed image.

imwrite(orthomosaicSelSmoothed,fullfile(stitchedOutputFolderPath, "stitchedOrthophotoSmoothed.png"))

Semantic Segmentation of Smoothed Orthomosaic

Now that you have removed the line artifacts, reobtain the semantic segmentation labels for the smoothed orthomosaic. This is the final city map.

Load Pretrained Network

Load the deep learning network, downloaded in the Setup section, into the workspace.

pretrainedNetwork = fullfile(pretrainedFolder,"pretrainedNetwork.mat");  
load(pretrainedNetwork);

Obtain Labels for Smoothed Orthomosaic

Obtain labels for the smoothed orthomosaic by using the pretrained network.

smoothedLabelsCat = semanticseg(orthomosaicSelSmoothed, net);

Convert the smoothed ortholabel from a categorical matrix to a numeric matrix.

smoothedLabelsNum = exampleHelperCategoricalToClassIndex(smoothedLabelsCat,classNames,classIndexes);
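
exampleHelperCategoricalToClassIndex is a helper shipped with this example. As a hypothetical sketch only, a conversion that assumes a one-to-one mapping between classNames and classIndexes could look like this; the shipped helper may differ.

% Hypothetical sketch: map each categorical class name to its numeric class index
labelsNum = zeros(size(smoothedLabelsCat),"uint8");
for k = 1:numel(classNames)
    labelsNum(smoothedLabelsCat == classNames(k)) = classIndexes(k);
end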

Show the final stitched and smoothed orthomosaic representing the map of the city.

figure
stitchedOverlaid = labeloverlay(orthomosaicSelSmoothed,smoothedLabelsCat,Colormap=cmap,Transparency=0.4);
imshow(stitchedOverlaid)
title("Smoothed Orthomosaic - Labels Overlaid")
exampleHelperPixelLabelColorbar(cmap, classNames);

Figure contains an axes object. The axes object with title Smoothed Orthomosaic - Labels Overlaid contains an object of type image.

Save the final stitched and smoothed ortholabel map and the overlaid map.

imwrite(smoothedLabelsNum,fullfile(stitchedOutputFolderPath,"stitchedOrtholabelSmoothed.png"))
imwrite(stitchedOverlaid,fullfile(stitchedOutputFolderPath,"stitchedOrthosOverlaidSmoothed.png"))

Conclusion

This example showed you how to automatically create an orthomosaic using feature-based image registration techniques, and how to use selective image smoothing [1] to reduce stitch artifacts. You can incorporate additional techniques into this process to improve the blending and alignment of the orthophoto images [2]. You can also use sensor fusion techniques, incorporating other sensors such as GNSS, to reduce the accumulation of errors while transforming and stitching and make the map more accurate.

From a use-case standpoint, you can now use the generated urban city map for numerous applications.

  • Navigation — Now that you have obtained a high-resolution map of previously unmapped terrain, you can navigate the terrain using this map.

  • Infrastructure Cover — You can directly compute the area marked as roofs to determine how much land has been used for infrastructure projects.

  • City Planning — Mapping terrain can help you create a blueprint for new city planning. Mapping an existing city and segmenting roads can help you simulate traffic flow by obtaining the road configuration. It can also help you identify where traffic lights are missing and can be placed. Further, a pixel-accurate map ensures you can accurately measure the area covered by specific buildings as well as the lengths of roads.

  • Forest Cover — In this example, you segmented roofs, but you can use a similar approach to segment trees to calculate the forest cover area in the mapped terrain. You can use this information to address deforestation.

References

[1] Alvarez, L., P.-L. Lions, and J.-M. Morel. "Image Selective Smoothing and Edge Detection by Nonlinear Diffusion. II." SIAM Journal on Numerical Analysis 29, no. 3 (1992): 845–866.

[2] Brown, Matthew, and David G. Lowe. "Automatic Panoramic Image Stitching Using Invariant Features." International Journal of Computer Vision 74, no. 1 (August 2007): 59–73.