Adding images to reach a final composite.

Addison Collins on 11 May 2021
Commented: DGM on 13 May 2021
Hello all, I am attempting to sum several photos so that I can see the distribution of particles in a laser sheet as they exit a converging nozzle. I am having trouble arriving at a final composite photo that shows the particles without over-saturating. Below I will post a few of the base photos (there are really several hundred). I will also show two methods I tried, but to no avail, with details next to their code.
Background photo (2/3 of the photos in the dataset are identical to this because the camera was not in sync with the laser pulses):
Particles photo 1:
Particles photo 2:
Method 1: In method 1 I converted the grayscale .bmp images to double matrices before handling them and converted them back with mat2gray() before using imshow(). I noticed that the particles were bright when the first few photos were added, but they became dark as the loop progressed. I also attempted a run involving subtracting the background image, but this led to worse results, as everything approached black (commented out in the bottom for-loop).
clear; clc; close all;
image_folder = 'E:\PIV Shakedown 1\run3\Photos'; % Path of photos
filenames = dir(fullfile(image_folder,'*.bmp'));
nfiles = length(filenames);
images = cell(1,nfiles);
background = imread('background.bmp'); % Background photo with laser off
background = double(background);
for n = 1:nfiles
    f = fullfile(image_folder,filenames(n).name);
    current_image = imread(f);
    current_image = double(current_image);
    images{n} = current_image; % Several hundred photos, some with particles and some without
end
addition = images{1}; % Preparing the first photo for adding
for j = 1:88 % nfiles-1
    addition = addition + images{j+1}; % -background; % Adding every photo in double format to prevent oversaturation (grayscale caps at 255)
    figure() % For debugging purposes
    imshow(mat2gray(addition)) % For debugging purposes
end
background_avg = addition/(j+1); % Creates a background based upon every photo summed in addition
% ISSUE: The change below to addition is pointless because background_avg*(j+1) = addition, so the subtraction gives 0
% addition = addition - background_avg*(j+1); % Can use background or background_avg
background_avg = mat2gray(background_avg);
figure()
imshow(mat2gray(addition))
figure()
imshow(background_avg)
figure()
imshow(mat2gray(background))
Resulting addition photos (at different loop iterations)
RUN 1: First addition involves a photo with no particles and one with sparse particles
j=1, first addition: Note that the particles are dimmer. This addition included a bright photo like the ones above.
j=5, fifth addition: Notice that the particles are even dimmer (even though more particles are added on this 5th addition). Ideally the final photo will look like this, but with the particles VERY obvious. Instead, after about 50 pictures you cannot distinguish any particles at all, because everything continually approaches black in these photos.
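The dimming can be reproduced with a toy example (synthetic data I made up, not the real photos): the bright parts of the background grow with every added frame, while a particle that appears in only one frame does not, so after mat2gray() rescales the sum to [0 1], the particle's contrast over its local background shrinks roughly like 1/nframes:

```matlab
% Toy illustration of why mat2gray()-rescaled sums dim the particles
nframes = 50;
bg = repmat(linspace(0,255,10),10,1);      % background with a bright region (max 255)
pframe = bg;
pframe(5,1) = bg(5,1) + 100;               % one particle, +100 counts, in one frame only

addition = pframe;                          % frame 1 contains the particle
for j = 2:nframes
    addition = addition + bg;               % the remaining frames are background only
end

scaled = mat2gray(addition);                % rescales [min max] of the sum to [0 1]
contrast = scaled(5,1) - scaled(4,1);       % ~ 100/(255*nframes): tiny for large nframes
```

With nframes = 1 the particle stands out at about 0.39 above its neighbors; with nframes = 50 it is down to about 0.008, which matches the "everything approaches black" behavior above.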
Method 2: In method 2 I did not convert the photos to doubles, so I ran into the limit of the uint8 grayscale range (0-255), i.e., 233+234 = 255 because the addition saturates at the cap.
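The saturation is easy to demonstrate in isolation:

```matlab
% uint8 arithmetic saturates at the class limits
a = uint8(233);
b = uint8(234);
a + b                   % gives 255, not 467: the sum is clipped at the uint8 cap
double(a) + double(b)   % gives 467, the true sum
```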
%% Non-Normalizing
clear; clc; close all;
image_folder = 'E:\PIV Shakedown 1\run3\Photos'; % Path of photos
filenames = dir(fullfile(image_folder,'*.bmp'));
nfiles = length(filenames);
background = imread('background.bmp'); % Background photo
images = cell(1,nfiles);
for n = 1:nfiles
    f = fullfile(image_folder,filenames(n).name);
    current_image = imread(f);
    images{n} = current_image;
end
addition = images{1};
for j = 1:250 % nfiles-1 when using all images
    addition = imadd(addition,images{j+1});
    addition = imsubtract(addition,background);
    % figure()
    % imshow(addition)
end
figure()
imshow(addition)
After 8 iterations of Method 2:
After 250 iterations of Method 2: Note that it isn't white purely due to particles; the entire image creeps towards white.
I appreciate any help!

Accepted Answer

DGM on 12 May 2021
Edited: DGM on 12 May 2021
By averaging, you're essentially extracting the BG, but not really in a way that would produce a BG estimate that's good for BG removal, since the estimate error is concentrated in the ROI. There are probably canonical ways to approach this, but with the tools I'm used to using, this is my first approximation:
n1 = im2double(imread('nozzle1.bmp'));
n2 = im2double(imread('nozzle2.bmp'));
n3 = im2double(imread('nozzle3.bmp')); % added fake specks
n4 = im2double(imread('nozzle4.bmp')); % added fake specks
ns = cat(4,n1,n2,n3);
nbg = extractbg(ns); % extract the bg
% normalize the abs(sum()) of frame differences from BG
nds = simnorm(abs(n1 + n2 + n3 - 3*nbg));
I added a few synthetic frames to see how it was going to work.
There aren't many frames in the sum, so it's not very populated. I can tweak the contrast for viewing purposes:
nds = imlnc(nds,'independent','k',1.2);
The behavior of extractbg() is nonlinear, which tends to be better than simple averaging at suppressing frame differences. It's basically an iterative interframe color-distance thresholding operation used to selectively ignore the parts of an image which change with respect to the current BG estimate. Its default initial estimate is the average, and it refines from there.
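For readers without MIMT installed, the core idea can be sketched in plain MATLAB. This is my simplified illustration of the concept described above, not MIMT's actual extractbg() implementation; the function name and parameters are made up for the example:

```matlab
function bg = estimatebg(frames, niter, thresh)
% Simplified sketch of iterative background estimation (not MIMT's extractbg()).
% frames: HxWxN stack of grayscale frames in [0 1]
% niter:  number of refinement passes
% thresh: interframe distance below which a pixel counts as "background"
bg = mean(frames,3);                      % initial estimate: the plain average
for k = 1:niter
    acc = zeros(size(bg));
    cnt = zeros(size(bg));
    for n = 1:size(frames,3)
        fr = frames(:,:,n);
        keep = abs(fr - bg) < thresh;     % ignore pixels that changed a lot
        acc(keep) = acc(keep) + fr(keep);
        cnt(keep) = cnt(keep) + 1;
    end
    bg(cnt>0) = acc(cnt>0)./cnt(cnt>0);   % re-average over "unchanged" pixels only
end
end
```

Because pixels far from the current estimate are excluded from each re-average, transient bright particles stop polluting the BG estimate after a pass or two, which a plain mean cannot do.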
The BG extraction would likely improve with more frames, though using extractbg() with 4D arrays like this example is costly in terms of memory. It does support incremental operation on files, but only for video files. It could probably be adapted to work incrementally on a folder full of images. MATLAB may have background-extraction tools in the Computer Vision Toolbox, but I don't have that.
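As a rough workaround for the memory cost, at least the initial (average-based) estimate can be built incrementally over a folder without ever holding the full stack. A sketch, assuming the same folder layout as in the question:

```matlab
% Incremental running-mean BG estimate over a folder of images
% (only one frame is in memory at a time)
image_folder = 'E:\PIV Shakedown 1\run3\Photos';
filenames = dir(fullfile(image_folder,'*.bmp'));
bg = [];
for n = 1:numel(filenames)
    fr = im2double(imread(fullfile(image_folder,filenames(n).name)));
    if isempty(bg)
        bg = fr;
    else
        bg = bg + (fr - bg)/n;   % running mean update
    end
end
```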
Overall, this might work as an example, but would need a lot of work in order to be practical for a full set of images this large.
The tools used in this example are from the MIMT, which is on the file exchange at the link above.
DGM
DGM on 13 May 2021
Yeah, double() just changes the class, so it's still in a [0 255] range. On the other hand, im2double() also normalizes the data to [0 1] range. You don't necessarily lose the data when you do it the first way, it's just that all the image handling tools expect float images to be normalized, so things like imshow() or imwrite() will truncate the data at the expected range.
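A quick demonstration of that difference:

```matlab
img = uint8([0 128 255]);   % a tiny uint8 "image"
d  = double(img);           % [0 128 255]: same values, just class double
d2 = im2double(img);        % [0 0.5020 1]: rescaled to the [0 1] float range
imshow(mat2gray(d))         % fine: mat2gray() normalizes to [0 1] first
% imshow(d) would display nearly all white, since float values > 1 are clipped
```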


More Answers (0)