# Deconvolution gives Inf results - MATLAB precision error?

Maximilian Rüppell on 26 Oct 2017
Answered: Bjorn Gustavsson on 8 Nov 2017
Hi everyone,
I am trying to deconvolve a signal with a template:
```matlab
Template = [2 5 6 3];
Events = [0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0];
Signal = conv(Template, Events);
[RestoredEvents, Remainder] = deconv(Signal, Template);
```
The code above works fine and RestoredEvents == Events.
If I increase the sizes of Template (1x600) and Events (1x5000), the deconvolution returns a RestoredEvents vector that starts with zeros, then increases to Inf and finally turns into NaNs, and I have no clue why. Interestingly, if the events are spaced far enough apart (so the convolved templates do not overlap), it works fine again. As soon as a single data point overlaps, the deconvolution produces nonsense.
In the MATLAB help for deconv it says:
> [q,r] = deconv(v,u) deconvolves vector u out of vector v, using long division. The quotient is returned in vector q and the remainder in vector r such that v = conv(u,q) + r.
But the last equation does not hold true in my case. Could it be a precision problem? Or a MATLAB bug?
A note on why I am doing this: in real life, "Signal" is real data from patch clamp experiments, "Template" is the shape of a unitary postsynaptic potential, and "RestoredEvents" would be the result I want, in order to know at which times postsynaptic potentials occurred.
Maximilian
David Goodmanson on 7 Nov 2017
Hi Maximilian,
After trying a few cases, I believe this is due to the limited precision of double-precision arithmetic, as you suggested.

Maitreyee Mordekar on 8 Nov 2017
The behaviour you observe is not a bug; it is caused by the limits of numerical precision. The deconv function performs a polynomial long division, and for long inputs this can run into problems because the intermediate coefficients needed to carry out the division can grow very large. Internally, deconv calls "filter" with the Template as a very long denominator coefficient vector; when that recursion is unstable, things "blow up" and the returned coefficients become Inf and NaN.
I assume that by events that do not "overlap" you mean that the non-zero entries of the Events array are spaced far enough apart that an entire copy of the Template fits between them. In that case, the effect of the denominator coefficients has time to "wind back down" during the polynomial division. Put another way, events spaced closely together make the polynomial division numerically unstable. This is an unfortunate weakness of computing a deconvolution via pure polynomial division. The same is true for a repeated sequence of events, though it depends significantly on the frequency at which the sequence is repeated.
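To see this behaviour on a small scale, one can build a long, smooth template and place two events within one template length of each other. (The template shape below is purely illustrative, not the asker's actual data.)

```matlab
% Illustrative template: a smooth rise-and-decay shape, 600 samples long.
t = 0:599;
Template = exp(-t/100) .* (1 - exp(-t/10));

Events = zeros(1, 5000);
Events([100, 150, 2000]) = 1;    % first two events overlap within the template

Signal = conv(Template, Events);
[RestoredEvents, ~] = deconv(Signal, Template);

% For long templates with overlapping events, the long-division recursion
% is typically unstable, and this maximum grows huge or becomes Inf/NaN.
max(abs(RestoredEvents))
```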
There are a few options for alternative approaches that you can use. Since the signal is calculated using the full linear convolution of Events with Template, the deconvolution can be achieved with the FFT:
```matlab
RestoredEvents = ifft(fft(Signal) ./ fft(Template, length(Signal)), 'symmetric');
RestoredEvents = RestoredEvents(1:length(Events));
```
This is, in general, a numerically robust approach unless the template signal is very spectrally concentrated (i.e., it has energy in only a narrow band of frequencies). If anything "blows up" here, it is because the transform of the Template contains coefficients that are very small in magnitude. This can be countered by regularizing the denominator coefficients.
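A minimal sketch of such a regularization, in Tikhonov/Wiener style (the constant eps2 is an assumed tuning parameter, not something prescribed by deconv or fft; it should be matched to the noise level):

```matlab
n = length(Signal);
H = fft(Template, n);                 % transfer function of the template

% Assumed regularization strength; tune to your data and noise level.
eps2 = 1e-6 * max(abs(H))^2;

% Divide by |H|^2 + eps2 instead of H, so near-zero spectral
% coefficients of the template no longer cause a blow-up.
RestoredEvents = ifft(fft(Signal) .* conj(H) ./ (abs(H).^2 + eps2), 'symmetric');
RestoredEvents = RestoredEvents(1:length(Events));
```

With eps2 = 0 this reduces to the plain FFT division above; larger values trade a little bias for much better behaviour at frequencies where the template has little energy.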
It’s important to note, however, that this only gives a good answer because, in this case, we know the Signal is exactly the linear convolution of the template with the series of events (and the template has good spectral properties). That may not always hold. The suspicion is that the larger goal here is to take an observed signal and determine the locations of sparse events that trigger the template response, possibly in the presence of noise. If the problem is more general than the given example, circular convolution may not be an appropriate way to model it.
Another alternative approach to attempt to discover the events is to set this problem up as a linear convolution system:
```matlab
T = toeplitz([Template(:); zeros(numel(Signal)-numel(Template), 1)], ...
             [Template(1), zeros(1, length(Events)-1)]);
RestoredEvents = T \ Signal(:);
```
This is a reasonable approach: it obtains the deconvolution by solving a linear system, returning the least-squares solution. However, it is inefficient because it requires the explicit construction of a (large) Toeplitz matrix. If the Template is known to be short relative to the length of Signal, there is some advantage to constructing it as a sparse matrix.
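A sketch of the sparse variant (same convolution matrix as the Toeplitz construction above, but storing only the non-zero band):

```matlab
m = numel(Signal);                    % = numel(Events) + numel(Template) - 1
n = numel(Events);
k = numel(Template);

% Column j of T holds a copy of Template shifted down by j-1 rows.
rows = repmat((1:k)', 1, n) + repmat(0:n-1, k, 1);
cols = repmat(1:n, k, 1);
T = sparse(rows(:), cols(:), repmat(Template(:), n, 1), m, n);

RestoredEvents = T \ Signal(:);       % sparse least-squares solve
```

Only k*n entries are stored instead of m*n, so this stays feasible even for a 1x600 template and 1x5000 event vector.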
Hope this helps!

Bjorn Gustavsson on 8 Nov 2017
In the case where deconv is failing and you have access to the Image Processing Toolbox, you can try the image deconvolution functions deconvreg, deconvwnr and deconvlucy. They all have (slightly different) means of regularizing the deconvolution. If you have the Gonzalez, Woods (and Eddins) book on image processing, it is rather straightforward to implement the first two yourself.
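For example, a Wiener-filter sketch with deconvwnr (the noise-to-signal ratio nsr is an assumed tuning parameter; with nsr = 0 this reduces to plain inverse filtering):

```matlab
% Assumed noise-to-signal power ratio; estimate it from a quiet
% stretch of the patch-clamp recording.
nsr = 0.01;
RestoredEvents = deconvwnr(Signal, Template, nsr);
```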
HTH