I'm currently testing an NTC thermistor's characteristics from -40 to 150 degrees Celsius, so I have two sets of data. The first set is the "expected value" from my calculation and simulation, which is the V-T (voltage-temperature) curve of the NTC thermistor; its temperature values are integers from -40 to 150, and each temperature has exactly one voltage value.
The second set is the data I got from actual testing. Its temperature values contain decimal points: there can be up to 20 samples within each degree, and up to 10 samples for each decimal value.
So how can I find the "maximum deviation" / "maximum difference" between these two plots when the matrices have different dimensions, [191x1] and [9656x1]? 191 covers -40 to 150 degrees Celsius in integer steps; 11657 covers -40 to 150 degrees Celsius including the decimal values.
I would like to know at which point the estimated value and the test value have the largest deviation, but these two matrices have different dimensions, so could someone please help me with that?
I've done a lot of researching, and people have suggested some ideas, but they haven't really worked for my case:
Fill the missing entries with NaN to make the two matrices the same size, so that MATLAB can do the calculation?
Try using interp1? I'm not familiar with that command and I'm getting errors with it.
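For reference, this is roughly what I've been attempting with interp1 (the variable names here are just placeholders for my actual vectors), in case someone can point out where it goes wrong:

```matlab
% Placeholder names for my actual data:
% T_exp, V_exp   : 191x1 expected temperatures (-40:150) and voltages
% T_meas, V_meas : measured temperatures (with decimals) and voltages

% Resample the expected curve onto the measured temperature points,
% so both voltage vectors line up element by element.
V_exp_at_meas = interp1(T_exp, V_exp, T_meas, 'linear');

% Element-wise deviation, and the point where it is largest.
dev = V_meas - V_exp_at_meas;
[maxDev, idx] = max(abs(dev));
fprintf('Max deviation = %.4f V at T = %.2f degC\n', maxDev, T_meas(idx));
```

Is this the right way to use interp1, or do my temperature vectors need to be sorted and unique first?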