File Exchange

Tutorial and Toolbox on real-time optical flow

version 1.20 (4.07 MB) by Stefan Karlsson
Code with visualization and exercises. Camera supported.

Updated 29 Nov 2020

From GitHub

1) runMe.m
(use arrow keys to interact with pattern)
2) OpticalFlowTutorialPart1.pdf
NOTE 1: This is a beta version. I would like to know about bugs so that I can improve this major update. We are working on a publication, so please check back for a proper reference if you plan on using this for your work.
Note 2: There are NO toolboxes required to run this. If it says differently below, ignore it.
Some of the material that was intended for use in parts 2 and 3 has ended up in proprietary software, so I hope those who were waiting for it will accept my apologies. To compensate, I am working hard to make part 1 (the only part) especially easy to use.
http://islab.hh.se/mediawiki/index.php/Stefan_Karlsson/PersonalPage
A Quick view on applications:
Get Started:
http://youtu.be/u1jSwcVoFcM

Cite As

Stefan Karlsson and Josef Bigun (2019). Tutorial and Toolbox on real-time optical flow (https://www.mathworks.com/matlabcentral/fileexchange/44400), MATLAB Central File Exchange. Retrieved December 19, 2019.

Stefan Karlsson

@Shubhi, if you lower the range of the image color (such as by dividing it), then you change the logic underlying the "edge normalization". See section 3.1.1 in the pdf. In the code, flow1.m, there is a constant called "m"; if you change it by an amount corresponding to how you scale the image, it may be invariant to your operation... sort of. You may still get rounding errors, and there may be other places in the code (can't recall now) that assume the range of the color is unaltered. cheers

Shubhi Sharma

Great work!!!
I am working on a project related to optical flow (the HS and LK methods). I wanted to understand the vectors it creates: are they velocity vectors or displacement vectors? When I divide the values of the image array by some number, these vectors shrink; but if they are velocity vectors, as stated in the MATLAB tutorials, they should not.

Stefan Karlsson

@ Mohammad Kamil Shams, and the world :)

I looked around a little, and if I understand the (badly documented) Image Acquisition Toolbox, the function "webcam" is not part of it. Is it so that stand-alone MATLAB now allows webcam input?

If this is the case, my toolbox could be upgraded to use the webcam function, but I will not be able to do any such work myself; I have no MATLAB licenses. Anyone out there who can do this for my toolbox?

Anyone? Don't all stand up at once, form a line :P

Stefan Karlsson

Try this to get the Image Acquisition Toolbox to become backwards compatible:

Also, it is possible that if you read the error message carefully (from when in.movieType='camera' fails) there is a description on how to solve it.

I have the Image Acquisition Toolbox and have the webcam support set up as well (when I run cam = webcam, it creates a webcam object). But when I run the runMe file, it says the Image Acquisition Toolbox is not available. I'm not sure what I'm doing wrong. I do not have multiple cameras connected, so in.movieType = 'camera' should work?

ILLIA MUDRYI

Hi! I have a question about your MATLAB work on optical flow, using the Flow1 method. There is a diagram with optical flow vectors. Do you know how to compute the total number of these vectors and their lengths? I need to know the total number of vectors, how many vectors have length A, how many have length B, and so on. Do you know how to compute this data? If so, can you tell me how, or could you do it yourself? I really need this data and would appreciate any help. Thanks.

Stefan Karlsson

@Liu,

If I understand you correctly, you have the frames of your video saved as images in some format, for example:

c:\test\frame1.jpg
c:\test\frame2.jpg
...
Unfortunately, the interface you want is not supported in the toolbox. When you set the video input to a folder, the toolbox will look for data previously saved by the toolbox. See the pdf, section 1.7.2, for details.

If your video sequence originated in this frame-by-frame format, the easiest way to plug it into the toolbox is to use "ffmpeg" to first encode it as a video file. See for example:
http://hamelot.io/visualization/using-ffmpeg-to-convert-a-set-of-images-into-a-video/

If instead you are interested in helping out with developing the toolbox, the feature you want would be rather simple to implement. I can't do that myself, because I have no access to MATLAB licenses at the moment. Send me an email if you want to help out in this fashion.

PS. Excuse any inconsistencies; I am currently sick with the flu.

Liu

@Stefan

Great toolbox! I have one question. I turned my input video into a folder of frame images and set in.movieType = 'C:\test\', the location of the folder. Then, when it reaches [~, ~, ~,vid] = getSavedFlow(startingTime-1, movieType); in myVidSetup, it tries to load flow.mat, which I don't have. So the program stops at ~exist(fullfile(session,'flow.mat'),'file') || ~exist(fullfile(session,'flow.bin'),'file').

Any suggestions for this?

eric-erki müür

Dominika Szt

@Stefan
You are right, I misunderstood the concept of this method. I even read the pdf document, but didn't pay attention to the page with the color coding scheme. Nevertheless, you helped me a lot in my analyses, thank you!

Stefan Karlsson

@Dominika

It may be that you have misunderstood the color coding (or, equally likely, I have misunderstood you). The hue of the color (whether it is blue or green) is given not by the amount of motion, but by the direction of motion.

Look at the front page of the pdf document in the submission; it has a description of the color coding.

It may be easier to get an idea of how the color coding works by first saving some video, and then viewing it with the FancyFlowPlayer.

Dominika Szt

@Stefan
Using this command I obtained a matrix of movement values. I just thought there might be a way to obtain the threshold values, for example 5, meaning that when the value of movement reaches 5 the blue color turns to green, when it reaches 10 green turns to yellow, etc.
However, thank you very much for response :)

Stefan Karlsson

@Dominika

just put the line:
in.sc = 2;

somewhere in runMe.m, before the call to vidProcessing

like so for example:
[start of file]
% Copyright: Stefan M. Karlsson, Josef Bigun 2015
clear in;
% This script sets up the call to the function 'vidProcessing'.

in.sc = 2;

%% argument 'movieType'
[----]

Dominika Szt

@Stefan
Thanks for the advice; unfortunately it doesn't work. It only shows basic info, like the movie type or the method chosen.

Stefan Karlsson

@Dominika

try to set "in.sc" in runMe.m to different values.

like:

in.sc = 10;

for example. Please write back here if it works, because I no longer have a MATLAB license, so I cannot test it myself.

Stefan Karlsson

@Hamish

My first reply to you got deleted by mistake. Here goes again

The interface to the toolbox has changed. The old interface, where you would write the following, no longer works:

in.movieType = 'camera 1'; % DOES NOT WORK

the new interface equivalent:

in.movieType = 'camera';
in.camID = 1;

Dominika Szt

Dear Stefan,
first of all, thank you for sharing this code.
I have a question about the FlowHS method. As a result I get a video with a color map on it. I wonder what the threshold for changing the color is, as I can't find it in the code.

Stefan Karlsson

@Hamish

The reason you get the cryptic error message is that the toolbox will look for a folder on your computer called "camera 1" if you feed it this input. This is because the toolbox can read saved data from folders if the input is a string other than "camera".

Hamish Krippner

Hi Stefan,

When changing in.movieType = 'camera 1'; I get an error in the vidProcessing file. The error I get is below:

Error using ParseAndSetupScript (line 182)
provided in.movieType: camera 1 invalid. No such file or folder

Error in vidProcessing (line 104)
[g.in, g.vid] = ParseAndSetupScript(in); %script for parsing input and setup environment

Error in runMe (line 76)
pathToSave = vidProcessing(in);

I'm not familiar with calling the camera by just typing 'camera'

Stefan Karlsson

New tiny update: v 1.053

SHOULD NOW WORK ON ALL MATLAB VERSIONS FROM 2010 ONWARDS (I HOPE)

Fixed some small issues I found in MATLAB 2016a regarding the video-input lib (a Windows-only alternative to the Image Acquisition Toolbox, included).

Note that to get webcam support, you should ideally have the Image Acquisition Toolbox installed, and configured to use webcams.

Stefan Karlsson

A tiny update: v 1.052

Bug fixes

- Fixed the rendering window jittering around during arrow rendering.

- Updated the FancyFlowPlayer to the latest version, yielding better rendering, a better frame rate, and a slightly better interface

- new option: in.bAutoPlayWithPlayer controls whether recorded data is automatically played (with FancyFlowPlayer) after the session

- updated documentation: remarks in runMe.m and VidProcessing.m.

Stefan Karlsson

@CV

If you want to use HOG together with optical flow for pedestrian detection, check out the HOF (histogram of flow) approach, also suggested by Bill Triggs.

Regarding your questions, I get the feeling you are just not trying hard enough.

There is no need AT ALL to call the FancyFlowPlayer in order to access the saved data. You can iterate through the entire thing yourself:

%get the first frame:
[im, u, v] = getSavedFlow(1, pathToSave);

%get the HOGs (not a function in the toolbox):
HOGvector = getHOGs(im);

%detect your humans (not a function in the toolbox):
DetectHumans(u,v,HOGvector);

%after you have called getSavedFlow the first time, you can call it successive times for the successive frames:
while(true)
    [im, u, v] = getSavedFlow();
    HOGvector = getHOGs(im);
    DetectHumans(u,v,HOGvector);
end

Computer Vision

@Stefan

The code works properly, as I said before.

The thing is, I need to get the U and V flows of a sequence of images and directly use HOG with those flows.

I can't get the images unless I close the FancyFlowPlayer.

The second problem is the shadow that appears when I increase the resolution of the flow. The following link shows the result with flow resolution = [50 50]:

https://www.dropbox.com/s/89ehnh5u7uziv5g/50x50.JPG?dl=0

Stefan Karlsson

@cv

regarding your 2 specific questions below (restated):

"
1- I want to get the U and V images for each frame and use them in another code, but I can't do that, as I have to choose a specific frame from the FancyFlowPlayer and close it
"

you have already shown in your example code how to get specific frames, for use in other code:

[im, u, v] = getSavedFlow(40, 'GroundTruthData');

"
2- when I increase the flow resolution to [48 48], a shadow appears in the U and V images, resulting from the increased number of arrows with the moving object.
"

I don't understand this at all, and without a minimal, self-contained example code to reproduce the error, I can do nothing to help you.

Computer Vision

@Stefan

clear in;
% This script sets up the call to the function 'vidProcessing'.

%% argument 'movieType'
% The source of video to process.
% in.movieType = 'synthetic'; % generate synthetic video.
in.movieType = 'person01_walking_d1_uncomp.avi'; % assumes this file in current folder.
% Variable framerate videos not supported
% in.movieType = 'GroundTruthData'; % A folder containing video previously saved with
% the toolbox
%in.movieType = 'camera'; % assumes a camera available in the system.
% if many cameras connected, use "in.camID" to choose

%% argument 'method'
% optical flow or visualization method %%%

% optical flow methods are referenced by function handles. For your own
% optical flow algorithm, implement it as a function and set a handle to
% it like this:

% in.method = @FlowLK; %Lucas and Kanade
in.method = @Flow1; %Locally regularized and vectorized method
% in.method = @FlowHS; %Horn and Schunk, variational (global regularization)
% in.method = 'synthetic'; %Ground truth optical flow (synthetic sequence only)

%%% Options for 'method' that gives no flow:
% in.method = 'edge'; %Displays 2D edge detection by gradients
% in.method = 'nothing'; %generate only the video

%% Argument bRecordFlow
%in.bRecordFlow = 1; %record the video (and flow if available)

%% Arguments vidRes and flowRes
% resolution of video and flow field [Height Width]:
in.vidRes = [128 128]; %video resolution, does not affect file-input(avi or saved-folder)
in.flowRes = [24*3 24*3]; %flow resolution

%% argument 'tIntegration'
% the amount of temporal integration. tIntegration should be in the
% range [0,1). If tIntegration = 0, then no integration in time occurs.
% in.tIntegration = 0.2;

in.bDisplayGT = 1; %display groundtruth flow, if available

%% Argument syntSettings
%Use "in.syntSettings" to
% to specify the contents of the synthetic
% video. For example:
in.syntSettings.backWeight = 0.7; %background edge
in.syntSettings.edgeTiltSpd=-2*pi/10000; %speed of rotation of background edge
% in.syntSettings.noiseWeight = 0.2; %signal to noise weight (in range [0,1])

in.pathToSave = 'GroundTruthData'; %define directory for saving
% in.pathToSave ='TestLK';
% in.startingTime = 50;
in.endingTime = 1000;
% in.endingTime = 'eof';

in.targetFramerate = 20;
vidProcessing(in);

%%%After finishing with the session, you can view recorded data by:

FancyFlowPlayer('GroundTruthData');

%[fr,macroDat,im, u, v] = getSavedFlow(40, 'GroundTruthData');

%[im, u, v] = FancyFlowPlayer(1);

[im, u, v] = getSavedFlow(40, 'GroundTruthData');

figure;imagesc(im);
figure;imagesc(u);
figure;imagesc(v);

Stefan Karlsson

@CV

Please post the code for generating the data as well.

Computer Vision

Also, when I use the following lines, the U and V images are shown OK:

[im, u, v] = getSavedFlow(40, 'GroundTruthData');

figure;imagesc(im);
figure;imagesc(u);
figure;imagesc(v);

But I still can't save or show the U and V images automatically. I mean:

1- I want to get the U and V images for each frame and use them in another code, but I can't do that, as I have to choose a specific frame from the FancyFlowPlayer and close it

2- when I increase the flow resolution to [48 48], a shadow appears in the U and V images, resulting from the increased number of arrows with the moving object.

Regards

Computer Vision

@Stefan, Dear Stefan This is what I have used:

[im, u, v] = getSavedFlow(40, 'GroundTruthData');

subplot(1,2,1);imagesc(im); colormap gray;
axis image;title('video frame');

subplot(1,2,2);imagesc(sqrt(u.^2+v.^2));colormap gray;
axis image; title('Motion magnitude(groundtruth)')

quiver (u,v);

There is no error, but the U and V images, and also the flow image, are shown as one square of white colour. I am using R2015a.

Stefan Karlsson

@CV

Please provide a minimal code example to illustrate the issue, and I will look into it. Also mention the version of Matlab you are using.

Computer Vision

@Stefan, Thanks Again,

I have used this method before, and re-used it again just as you mentioned, but it just shows the video frame and a dark grey image (a grey square) in the magnitude figure.

Also, when I use quiver, it shows me a white image just beside the video frame.

Stefan Karlsson

@CV

from function description of FancyFlowPlayer.m (first part of the file):

/begin quote:
....
%%% ... then access the data of the frame:
% [im, u, v] = getSavedFlow(frameNr, 'GroundTruthData');
%%% im is the video frame
% subplot(1,2,1);imagesc(im); colormap gray;
% axis image;title('video frame');
%%% u and v are the components of the flow
% subplot(1,2,2);imagesc(sqrt(u.^2+v.^2));colormap gray;
% axis image; title('Motion magnitude(groundtruth)')
/end quote

If you want to display the vectors as arrows (not just the magnitude image), you can do that as well, using MATLAB's quiver function:

quiver(u,v);

The quiver function has a range of settings for putting different bodies, heads and other eye-candy on the vectors.
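For instance, a few of those settings look like this (a standalone sketch on a synthetic field, not toolbox output):

```matlab
% Minimal sketch of quiver styling options, on a made-up flow field.
[x, y] = meshgrid(1:12, 1:12);
u = cos(y/3);                         % horizontal flow component
v = sin(x/3);                         % vertical flow component
quiver(x, y, u, v, 1.2, ...          % 1.2 = arrow autoscale factor
    'Color', [0.9 0.4 0.1], ...
    'LineWidth', 1.2, ...
    'MaxHeadSize', 0.8);
axis image;
```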

Computer Vision

@Stefan,
Thanks for your response,

Yes, I am looking to get the U and V of each frame automatically, without needing to open and close the FancyFlowPlayer or any other windows.

BTW, here is the problem: when I comment out (%) the FancyFlowPlayer call, the U and V images are shown with only a part of the body (as if most of the body is missing). But it works completely when FancyFlowPlayer is enabled.
[im, u, v] = getSavedFlow(1,'GroundTruthData');
[im, u, v] = getSavedFlow(40);

im: frame 40 shown correctly
U and V: showing, I think, just the head or the hand.

Stefan Karlsson

@CV

"Getting the images automatically" can mean many things. Perhaps share more specifically what you would like to do?

if you want to access saved data without the FancyFlowPlayer, then use function:

[im, u, v] = getSavedFlow(frameNr, saveDir)

Computer Vision

Very interesting, Thanks for your efforts.

How can I get the images automatically, as you showed in the comments, without needing to close the FancyFlowPlayer window or whatever other windows open?

Thanks

Mammo Image

Thanks sooo much, everything is done

Mammo Image

It is OK, it is done now. Still, the figure shown for frame n is just a small square; is that because the size is 64x64?

Mammo Image

3 - Another thing, please: how can I get the output (flow vectors) for the whole video? I tried, but it is still 1x24.

thanks so much

Stefan Karlsson

There is a stand-alone submission of FancyFlowPlayer that has a simpler interface. I haven't gotten around to incorporating it into this tutorial. Find it here:

http://se.mathworks.com/matlabcentral/fileexchange/53600-fancyflowplayer/content/FancyFlowPlayer_v1.05/FancyFlowPlayer.m

in it, you can get the flow directly from the FancyFlowPlayer by:

[fr,macroDat,im,u,v] = FancyFlowPlayer(pathToSave)

Stefan Karlsson

@Mammo

%record video AND flow:
in.bRecordFlow = 1;
in.pathToSave = 'mySaveFolder';

%do the processing with interaction:
vidProcessing(in);

%playback, showing video AND flow interactively (stop FancyFlowPlayer when you have the frame of interest to you):
fr = FancyFlowPlayer(in.pathToSave);

%Use the function getSavedFlow like this:
[im, u, v] = getSavedFlow(fr, in.pathToSave);

this will give you the video frame (im) as well as the flow (u,v) at that frame (fr)

Mammo Image

Mammo Image

Why is there only one vector for the whole video? Shouldn't there be a flow vector for each frame, or for each pair of frames?

Mammo Image

Dear Stefan,
Thanks so much for the code,

Can I get the frame (image) of the flow1 from the FancyFlowPlayer?

Regards

Stefan Karlsson

@Anna

Color can be used to represent optical flow vectors. Read more about it in the pdf that comes with the submission (OpticalFlowTutorialPart1.pdf). The front cover should be enough to explain this. Otherwise Google will help you.
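For a rough idea of how such a coding works, here is a generic hue-from-direction sketch (a common convention, not necessarily the exact scheme used in the toolbox, which is described in the pdf):

```matlab
% Generic color coding of a flow field (U,V):
% hue encodes direction of motion, brightness encodes magnitude.
[x, y] = meshgrid(linspace(-1, 1, 128));
U = -y;  V = x;                        % example: rotational flow
ang = atan2(V, U);                     % direction in (-pi, pi]
mag = sqrt(U.^2 + V.^2);               % speed
hsvIm = cat(3, (ang + pi)/(2*pi), ...  % hue from direction
               ones(size(U)), ...      % full saturation
               mag./max(mag(:)));      % brightness from magnitude
image(hsv2rgb(hsvIm)); axis image;
```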

Anna

can optical flow be done to obtain color video??

Yasser Elhouderi

Thank you very much!! This is really a great tutorial.

Paul Gresham

Thank you, Stefan

Stefan Karlsson

@Benjamin,

The toolbox saves BOTH flow and video. You can playback the recorded session by using the FancyFlowPlayer.

Below is a minimal example
-------------
in.movieType = 'synthetic'; %video type.
in.method = @Flow1; %flow type
in.vidRes = [128 128]; %vid res
in.flowRes = [128 128]; %flow res

%record the video AND flow:
in.bRecordFlow = 1;
in.pathToSave = 'mySaveFolder';

%do the processing with interaction:
vidProcessing(in);

%playback, showing video AND flow:
FancyFlowPlayer(in.pathToSave);
--------------

If you wish to access a specific frame of your saved data, you can use the function getSavedFlow like this:

[im, u, v] = getSavedFlow(frameNr, pathToSave)

this will give you the video frame (im) as well as the flow (u,v) at the frame (frameNr)

Benjamin

Hello!
First of all thank you for sharing this code. It's easily accessible and is nicely commented!

The way it is now, when I am recording a video and use Flow1, the recorded video is saved in GroundTruthData as an .avi file. However, it is just the video without the optical flow.
Is it possible to save the video WITH the optical flow in an .avi file?

Stefan Karlsson

Here is something to get you started. Better approaches for this sort of thing can be achieved with level-set formulations and the like, but that is getting us off topic.

Common bugs can arise from MATLAB indexation (it's im(y,x), not im(x,y)) and from the fact that in interp2 you modify the coordinate system, not the image pixels. Experiment around, and you will find the way.

s = 40; %size of data

[x,y] = meshgrid(1:s,1:s);

binIm = (x-(s/2)).^2 +((y-(s/2)).^2)*10 <((s/3))^2;

subplot(2,2,1); imagesc(binIm); colormap gray;axis image;

%some rotational flow (curl)
Ur = 10*(y-(s/2))/(s/2);
Vr = -10*(x-(s/2))/(s/2);

%some zooming flow (divergent)
Uz = 10*(x-(s/2))/(s/2);
Vz = 10*(y-(s/2))/(s/2);

binImR = interp2(double(binIm),x-Ur,y-Vr);
subplot(2,2,3); imagesc(binImR); axis image;colormap gray;
hold on; quiver(Ur,Vr);
title('rotational flow');

binImZ = interp2(double(binIm),x-Uz,y-Vz);
subplot(2,2,4); imagesc(binImZ); axis image;colormap gray;
hold on; quiver(Uz,Vz);
title('zooming flow');

Brandon Roy

How can I propagate the pixel information from the previous frame to the current frame according to the optical flow?

For example, I have the 1st frame mask and I try to propagate the mask to 2nd frame by optical flow to generate 2nd frame mask. When I use warping/interpolation strategy, it will generate a mask which is closer to 1st frame instead of 2nd frame. How can I refine it to match 2nd frame?

Thanks again.

sassi nizar

Stefan Karlsson

The previous post is about gradient estimation in video, especially for motion. Below is the algorithm that the question was about.
There is a recursive version in the toolbox as well, for improved calculations in the presence of noise. You will have to dive into the tutorial to find that one :)

%%%%Authors: Stefan Karlsson and josef Bigun, 2015

gg = [0.2163, 0.5674, 0.2163];
f = imNew + imPrev;
dx = f(:,[2:end end]) - f(:,[1 1:(end-1)]);
dx = conv2(dx,gg','same');

dy = f([2:end end],:) - f([1 1:(end-1)],:);
dy = conv2(dy,gg ,'same');

dt = 2*conv2(gg,gg,imNew - imPrev,'same');

The boundary estimates for the gradient will be weighted a bit inaccurately (I skipped that for the sake of timely execution), which won't make much difference for the flow algorithms in the toolbox, but may affect other applications. Beware.

Stefan Karlsson

@Brandon Roy

There is more than one way to interpret your question, and I will take the most interesting one.

I believe you are thinking in terms of the common practice of using the following scheme for derivatives (found in grad3D.m as remarks, "Approach 1"):

%This is what the tutorial instructs you to do:
dx = conv2(gg,dg,imNew,'same');
dy = conv2(dg,gg,imNew,'same');
dt = 2*(imNew - imPrev);

The above can be seen as a "forward" scheme, the backward scheme would be:

dx = conv2(gg,dg,imPrev,'same');
dy = conv2(dg,gg,imPrev,'same');
dt = 2*(imPrev - imNew);

Many researchers take this approach. Attempts at exploiting the difference between the two estimates are not uncommon (they will differ by more than just a sign, as your question indicates you know).

You can easily change the code to use a "grad3Dforward" and a "grad3Dbackward".
I prefer not to take this approach, and I believe I have good reasons not to, from a timeliness (speed of computation) perspective as well as from a stability and accuracy perspective.

A test that you could easily do is a forward flow estimation (Uf), then a backward one (Ub), and take the average as your final estimate (with a minus sign on the backward one first):

Ufinal = (Uf - Ub)/2

This could be compared to the output you get from using grad3D.m as it is now.

Using the grad3D.m version I have provided will, I BELIEVE, give a better end result with fewer computations, but this is not something thoroughly tested.

Some of the reasoning behind this is found in the tutorial text. In short, you can consider the video as a volume; both the forward and the backward schemes then correspond to mis-aligned kernels/stencils. This has practical implications for how well the optical flow constraint equation will hold, and thus how well any optical flow can be calculated.

Brandon Roy

Thanks for sharing. But I cannot understand the code in "grad3D.m". I want to work out whether the optical flow in these calculations is forward (imPrev->imNew) or backward (imNew->imPrev). Thanks for replying.

Jacky Tu

Stefan Karlsson

@Jacky Tu,

The optical flow is always in matrices U and V.

example:

in.movieType = 'synthetic';
in.method = @Flow1;
in.bRecordFlow= 1;
%set video resolution to be same as flow res:
in.vidRes = [128 128];
in.flowRes = [128 128];
[~, ~, ~,~,~,pathToSave] = vidProcessing(in);
% get 2 consecutive frames (10 and 11):
preF = getSavedFlow(10, pathToSave);
[curF, U, V] = getSavedFlow(11, pathToSave);

Instead of trying to "stick pixel luminance", use a sane warping/interpolation strategy. Look at interp2, for example.
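As a sketch of what such a warp might look like (assuming preF, U and V from the example above, all at the same resolution):

```matlab
% Backward warping: sample the previous frame at positions displaced
% by the flow, instead of scattering pixels forward.
preF = double(preF);
[X, Y] = meshgrid(1:size(preF, 2), 1:size(preF, 1));
warped = interp2(preF, X - U, Y - V, 'linear', 0);  % 0 fills out-of-range samples
imagesc(warped); colormap gray; axis image;
```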

Jacky Tu

Thanks for sharing, Mr Stefan. I am trying to transfer pixels' luminance from the previous frame to the current frame following the optical flow.

It did not work well, because the (U, V) vectors are too small when I use curF(X2, Y2) = preF(X+U, Y+V) for the luminance transfer.

But the color optical flow seems to be accurate. How can I get the actual flow data (refined U and V) along both the x and y axes?

p.s. I use Flow2Full.m to calculate full-resolution optical flow.

Stefan Karlsson

Version 1.05, hits the FEX! Some highlights:

- THE FANCY FLOW PLAYER: play back recorded sessions, with simultaneous zoom-in of your motion field and video frame, a proper seek bar and all.

- NEW FUNCTION HANDLE INTERFACE
Super easy to plug in your own optical flow/dense motion algorithm: just provide a function handle.

- BEEFED UP SYNTHESIS
Generate your own data with ground truth, WHILE YOU ARE INTERACTING WITH IT. Saving and reusing data is also made super easy. More options for synthesis, to properly show motion boundaries, multiple motions, higher motions, noise, flicker and more.

Please provide me with feedback, as this version is BETA. There are bound to be bugs.

Stefan Karlsson

I am working on an update. A new version should be up here within a week or two, and then I will dedicate a week or two after that to fine-tuning and bug fixing.

Thanks to those who have given input, both here and by correspondence. If there are any requests for new features, now would be a good time. My time will be very limited again beginning next month.

Thanks again for all the good constructive feedback.

TM Hoogland

Baofeng Wang

akindini michael

Hello. I am trying to use this source code on a video of the carotid artery, but because it is a real video and the stenosis region we want to check is moving, I have to track it with a shape or something like that. Can anyone help me with the coding, or point me to a reference where I can find the answer?

Stefan Karlsson

Thanks for your interest. To name two people who made contributions and who I know share code:
Michael J. Black,
Bill Triggs

Many of their suggested frameworks would work with the simple algorithms in this submission. There are two major classical issues in using this tutorial code straight away on real-world problems: higher motions and multiple motions. For problems such as pedestrian detection, the higher-motion problem can be fixed with some simple pyramid tricks. The multiple-motions problem would be a bit more tricky. Phrased differently, you do not want the output flow to depend on what the background of the scene happens to be. I don't know of any local, fast algorithms that deal with this; if someone does, do drop me a line.

Thanks for sharing, Mr Stefan. I would like to ask: can optical flow be used to detect the movement patterns of objects? For example, in the detection of pedestrian behavior. Is there any example code?

Stefan Karlsson

Be sure to use a simple option to toggle the video recording on/off. The way you implemented it is especially likely to suck up a lot of processing time from MATLAB. Real-time should still not be an issue, as long as you keep your video small and your machine powerful. It would be a shame if you wanted to show someone a fast script, and it lagged up your computer.

Stefan Karlsson

@Rui

The reason that only square videos are supported is just a lack of time on my part. Some parts of the code simply use a width parameter for both height and width. Yet other parts use a height and a width inconsistently (mixed up). In other words, semi-bugs that I haven't had time to fix.

I may get around to fixing it for part 2; if someone does it for me, please send me the updated code.

Thanks, Stefan. I applied the code to a 'movieType'. In case anyone else is wondering how to output the video,

In vidProcessing2D.m, within the 'while' loop, I inserted mov(t) = getframe(gcf), and after it was complete, I used movie2avi(mov).

Thanks again!
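Note that movie2avi was removed in newer MATLAB releases; the same idea can be sketched with VideoWriter (hypothetical loop placement, untested against the toolbox):

```matlab
% Record what is drawn on screen, frame by frame, with VideoWriter.
vw = VideoWriter('flowDemo.avi');   % output file name is arbitrary
open(vw);
for t = 1:30                        % stands in for the toolbox's 'while' loop
    plot(rand(10, 1));              % stands in for the toolbox's rendering
    writeVideo(vw, getframe(gcf));  % grab the current figure as a frame
end
close(vw);
```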

Rui

Thanks for sharing. Nice work. But I actually have one question: if I use the method Flow1, it only suits a square video. What if I want to apply it to a rectangular video, like 600x800; how can I change it? Thanks.

Stefan Karlsson

If you are interested in writing your generated optical flow to a binary file, may I recommend you look into:

memmapfile

It makes for easier notation than fwrite, and is supposedly faster as well.

May I also warn against storing the matrices as complex-valued, as it seems to slow down MATLAB's execution quite a lot.
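A minimal sketch of the memmapfile idea (the file name, resolution and frame count are arbitrary assumptions, not toolbox conventions):

```matlab
% Stream flow fields (U,V) to a binary file through a memory map.
h = 128; w = 128; nFrames = 100;

% memmapfile maps an EXISTING file, so preallocate it first:
fid = fopen('flow.bin', 'w');
fwrite(fid, zeros(h*w*2*nFrames, 1), 'double');
fclose(fid);

m = memmapfile('flow.bin', ...
    'Format', {'double', [h w 2], 'uv'}, ...  % one [h w 2] block per frame
    'Repeat', nFrames, 'Writable', true);

% Writing frame k then reads like an ordinary array assignment:
k = 1;
U = rand(h, w); V = rand(h, w);     % stand-ins for real flow matrices
m.Data(k).uv = cat(3, U, V);
```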

I will hopefully incorporate something of this kind for the next part of this tutorial (in the making now).

Stefan Karlsson

To save video, check out "VideoWriter". If you want to save what you see on screen, simply use the "getframe" function. Alternatively, you can access all the subparts of the visualization data by looking into "updateGraphicsScript".

If you are interested in saving the actual flow data (matrices U1 and V1), then you must take care, because you can't save negative values into an avi. You can fiddle with this manually, for example by:

uint8((U1+1)*128)
uint8((V1+1)*128)

which will corrupt the flow that you save if it goes beyond the range (-1,1), not to mention corruption due to the quantization effects of casting to uint8.
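The scaling can be inverted on read-back; a minimal round-trip sketch of the lossy encode/decode described above:

```matlab
% Encode a flow component into uint8 and approximately recover it.
U1  = 0.5*randn(64);                 % stand-in flow component
enc = uint8((U1 + 1)*128);           % values outside (-1,1) saturate to 0/255
dec = double(enc)/128 - 1;           % approximate inverse of the encoding
err = max(abs(dec(:) - U1(:)));      % quantization + saturation error
```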

What you really want, then, is a nice way to save streaming data to a mat-file. Unfortunately, the MATLAB built-in save command does not support this. Perhaps something else exists, to do with serialization of data, that can fix this.

Of course, you could just keep the entire thing in memory, and save it when you're done with execution.

yeah... good luck with that :)

Thanks for sharing this. It will come in useful for creating demos on OF. Any hint on how to record the output to an avi?

Pablo

Great work!

da

Thank you for sharing

Stefan Karlsson

@Batyrbek, thanks. Positive comments and ratings will speed up the development of parts 2 and 3 of the planned series of tutorials. Next up will be 3D tensor versions, with increased stability and little impact on performance. Stay tuned.

Batyrbek

Great job :)

MATLAB Release Compatibility
Created with R2016a
Compatible with any release
Platform Compatibility
Windows macOS Linux
Acknowledgements

Inspired: FancyFlowPlayer
