Simulate single precision arithmetic in Matlab?

Hi! I'm trying to replicate, in Matlab, compiled algorithms implemented in C using single-precision arithmetic. I have the C source code, and an executable which is reputed to implement said C source, but no convenient means to compile and link said source myself. (Yes, I know I can download the Microsoft C compiler, including a no-frills version of Visual Studio, for free, but that's hardly "convenient.")

I've implemented the algorithms and I'm 99% sure they're right, because the "residuals" (the differences between my results and those of the original C implementations) are more-or-less randomly distributed around zero, and about 85% of them are within half of the precision of the C implementations' output (which varies, but, for example, when it's 1e-4, 85% of my residuals satisfy abs(res) < 5e-5). It's the other 15% that leave me 1% uncertain, and my attempts to explain the discrepancy have brought me to the single/double precision issue: I have good cause to suspect that the C implementations use single-precision arithmetic, and I've been led to believe that Matlab always performs double-precision arithmetic, even on single-precision "data" -- yes? (I tried making all the data in my implementation 'single', and confirmed after each "gross" calculation that the output remained so, but that only reduced the number of bad residuals by 3 out of 4118 total input points.)

So, I'm wondering whether there's either a "convenient" way to simulate single-precision arithmetic in Matlab (I searched the File Exchange for relevant functions, but found none), or a way to estimate/bound the "error" introduced by using single-precision arithmetic (including power/root and exponential/log functions) instead of double. Thanks!
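
For reference, a minimal sketch of the all-'single' approach I tried looks like this (the data and constant here are made up for illustration):
x = single(rand(4118, 1));    % made-up stand-in for the real input data
c = single(0.0162);           % cast the constants as well
y = c*x + sqrt(x) + x.^1.5;   % all-single operands keep the arithmetic in single
assert(isa(y, 'single'))      % confirm nothing was silently promoted to double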
  2 Comments
Geoff on 11 Apr 2012
Look at the C code. If it's full of variables of type 'float', it's using single precision. If it's full of variables of type 'double', it _might_ be using double precision, depending on the various functions (if any) used to obtain intermediate values.
Samana on 21 Apr 2024
Round-off error can accumulate when a computation performs repeated additions and subtractions, and it accumulates differently depending on the precision with which the numbers are stored. To demonstrate this concept, consider a summation whose analytical result is exactly zero: you will find that the computed result does not equal zero in MATLAB, due to the finite precision with which numbers are stored in memory. For this problem, write a script that performs the computation two different ways: (1) use all double precision numbers (MATLAB's default) and assign the result to the variable named ZeroDouble; (2) use all single precision numbers and assign the result to the variable named ZeroSingle.
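
For illustration, a stand-in computation (any sum that analytically cancels to zero will do) might look like this:
n = 100000;
ZeroDouble = 0;                          % double precision accumulator
ZeroSingle = single(0);                  % single precision accumulator
for i = 1:n
    ZeroDouble = ZeroDouble + 0.1;       % 0.1 is not exactly representable
    ZeroSingle = ZeroSingle + single(0.1);
end
ZeroDouble = ZeroDouble - n*0.1          % analytically zero; small but nonzero in double
ZeroSingle = ZeroSingle - single(n*0.1)  % far larger error in single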


Accepted Answer

Walter Roberson on 11 Apr 2012
Look on undocumentedmatlab.com for information about the features supported by system_dependent().
  1 Comment
Jan on 11 Apr 2012
Good idea! "feature('SetPrecision', 24)" (or 53, or 64) affects the rounding when intermediate results are copied from the processor registers back to memory. While the Intel processors (and e.g. the lovely 68040) have 80-bit registers in the floating point unit, a DOUBLE has only 64 bits. A processor flag controls whether the register contents are rounded. By default, Matlab uses 53 bits of precision for the mantissa (i.e. the 64-bit DOUBLE format, even when the value sits in a register). This is useful for getting equivalent results when running Matlab on different platforms. Increasing the accuracy with "feature('SetPrecision', 64)" -- i.e. the full 80-bit register format -- does not reduce the speed, but it makes the results platform dependent.
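
For reference, usage looks roughly like this; the feature is undocumented and version/platform dependent (it presumably sets the x87 precision-control flag, so it may do nothing on builds that use SSE arithmetic):
feature('SetPrecision', 24)      % round register spills to a 24-bit mantissa
disp(1 + eps('single')/2 - 1)    % may now round the way single precision would
feature('SetPrecision', 53)      % restore MATLAB's default
disp(1 + eps('single')/2 - 1)    % default 53-bit rounding: nonzero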


More Answers (4)

Jan on 10 Apr 2012
Matlab uses single precision arithmetic for single precision values, so there is no reason to simulate it.
I'd suggest using double arithmetic for the intermediate results in the C function instead, to increase the accuracy.
Downloading and installing the Windows SDK takes only a few minutes.
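
A quick check along those lines:
x = single(1);
class(x + 1)                % 'single': mixing single and double stays single
x + eps('single')/2 == x    % true, because the sum rounds at single precision;
                            % the same expression in pure doubles gives false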
  3 Comments
Jan on 11 Apr 2012
Simply try it:
k = eps('single'); one = single(1);
disp(one + k - one)               % a) single values
disp(one + k/10 - one)            % b) single values
disp(1.0 + double(k)/10.0 - 1.0)  % c) double values
Now I expect that b) displays zero, because in single precision adding k/10 to 1 yields exactly 1. In double precision arithmetic, however, 1 + k/10 differs from 1.0.
Converting all values to SINGLE, and consequently using SINGLE precision arithmetic, can conceal programming errors, e.g. if some _small_ values are accidentally added to a _large_ value. If 15% of the results are affected substantially by changing the precision, the algorithm is unstable or contains a bug.
Setting the floating point processor flag for the storage of intermediate values with "feature('SetPrecision', 64)" (see Walter's answer) should not change the result of a computation substantially. But the result of 1e17 + 1 - 1e17 can be influenced -- which means that the naive approach to calculating a SUM is not stable.
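
The 1e17 example is easy to reproduce:
disp(eps(1e17))          % 16: the spacing between adjacent doubles near 1e17
disp(1e17 + 1 - 1e17)    % 0: the +1 falls below that spacing and is rounded away
disp(1e17 - 1e17 + 1)    % 1: cancelling the large terms first preserves the 1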
Jan on 11 Apr 2012
Further evidence that Matlab really performs SINGLE arithmetic comes from Matlab 6.5: due to the missing implementation, SINGLE(1)+SINGLE(1) failed with an error message. This means that, at least in that old release, the operation was not performed in DOUBLE precision with a later re-cast to SINGLE. But of course this is not a proof.
FILTER stores the temporary accumulator variables as SINGLE when the signal is SINGLE. This limits the accuracy of the results remarkably, without increasing the speed dramatically. The Mex functions for faster filtering that I've published on the FEX accumulate the internal state in DOUBLEs to reduce the artifacts produced by rounding.
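
A rough illustration of the FILTER point (the exact error size depends on the release and on the filter chosen here for demonstration):
x = randn(1, 1e5);
bf = 1; af = [1, -0.999];          % a slowly decaying IIR filter
yd = filter(bf, af, x);            % double-precision internal state
ys = filter(bf, af, single(x));    % single-precision internal state
disp(max(abs(yd - double(ys))))    % accumulated round-off, far above eps('single')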



James Tursa on 11 Apr 2012
It is very difficult to verify what is actually happening behind the scenes for single precision calculations. E.g., on a PC it may be the case that certain calculations on float variables get converted to 80-bit extended precision for the actual calculations and then re-converted to float to store the result. This can be the case for your C code even though all the variables are float. I am not sure which functions of MATLAB are strictly single and which are not. Bottom line is I seriously doubt you will be able to get an exact match no matter how much time you spend on the problem, unless the code is very simple. Whether the residuals you are currently seeing are "small" or not depends on the algorithms you are implementing. I think Jan's advice is a pretty good one ... do it all in double and compare that result to the single result.
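
A minimal sketch of that double-vs-single comparison; myAlgorithm below is a hypothetical stand-in for the real computation:
myAlgorithm = @(T, K) (K / 42.914) ./ (0.6766097 + 0.0200564*T);  % hypothetical stand-in
T = 10*rand(4118, 1);  K = 10*rand(4118, 1);    % made-up inputs
resD = myAlgorithm(T, K);                       % all-double run
resS = myAlgorithm(single(T), single(K));       % same code in single arithmetic
bound = max(abs(resD - double(resS)))           % empirical size of the single round-off
% Residuals against the C program much larger than this bound need another explanation.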

David Goldsmith on 11 Apr 2012
OK, points well-taken, so here's the nitty-gritty:
In what follows, P (order 1-10), T (order 1-10), and K (order 0.1-10) are the inputs; in the C implementation they come in with precisions of 1e-3, 1e-4, and 1e-6, respectively, i.e., the precision is held constant, not the significant figures -- yes, I know, bad science, but hey, it's not my software.
k = 0.0162;                           % coefficient of the temperature correction term
K35150 = 42.914;                      % reference conductivity
A = [2.070e-5 -6.370e-10 3.989e-15];  % pressure polynomial coefficients
B = [3.426e-2 4.464e-4 4.215e-1 -3.107e-3];
G = [6.766097e-1 2.00564e-2 1.104259e-4 -6.9698e-7 1.0031e-9];
a = [0.0080 -0.1692 25.3851 14.0941 -7.0261 2.7081];
b = [0.0005 -0.0056 -0.0066 -0.0375 0.0636 -0.0144];
S = zeros(size(T)); RP = S; RT = S;   % preallocate outputs
% Begin calculation
R = K / K35150;                       % conductivity ratio
val = 1 + B(1)*T + B(2)*T.*T + B(3)*R + B(4)*R.*T;
good = (val ~= 0);                    % guard against division by zero
RP(good) = 1 + (P(good) .* (A(1) + P(good) .* (A(2) + A(3)*P(good)))) ./ val(good);
val = RP .* (G(1) + (T .* (G(2) + T .* (G(3) + T .* (G(4) + G(5)*T)))));
good = (val ~= 0);
RT(good) = R(good) ./ val(good);      % pressure- and temperature-corrected ratio
RT(RT <= 0) = 1e-6;                   % clamp so the fractional powers stay real
tmp = RT';
tmp = [ones(size(tmp)); tmp.^0.5; tmp; tmp.^1.5; tmp.^2; tmp.^2.5];  % RT^0 ... RT^2.5
sum1 = a * tmp;                       % polynomial in sqrt(RT), coefficients a
sum2 = b * tmp;                       % polynomial in sqrt(RT), coefficients b
val = 1.0 + k * (T - 15.0);
good = (val ~= 0);
S(good) = sum1(good)' + sum2(good)'.*(T(good) - 15.0)./val(good);
So, as you can see, there are lots of additions, multiplications, and powers (including fractional ones) involving coefficients of widely different scales. I've posted a plot of the "residuals", i.e. the same inputs processed by the compiled C app (output precision = 1e-4) minus the output of my Matlab implementation. Note: I'm able to duplicate the C results (implemented in double precision, I'm informed) to within +/-0.5 * output precision, uniformly for T and K, and for about 99% of the cases for P. So, please opine: should I be concerned? Thanks!

David Goldsmith on 11 Apr 2012
Ouch, the C program's output format bit me; it's somewhat complicated, so suffice it to say I was comparing apples to oranges (though why the C program outputs both apples and oranges, when one would expect it to output only one or the other, remains in question). Once I figured this out and started comparing apples to apples, I found that my apples are just like theirs! Sorry for the noise.
  2 Comments
David Goldsmith on 11 Apr 2012
Though at least it caused me to learn about system_dependent/feature!
Walter Roberson on 11 Apr 2012
You know that in C you can coerce apples into being oranges, since they both have the same underlying quark bits. MATLAB is a bit more restrictive: it only allows you to coerce whole numbers of unstemmed apples into unstemmed oranges.

