Calibration

Dynamic range brings with it calibration issues. A certain dynamic range implies a certain number of bits of precision. But real parts that are used to measure real-world things have real tolerances. A 10K resistor can be between 9900 and 10,100 ohms if it has a 1% tolerance, or between 9990 and 10,010 ohms if it has a 0.1% tolerance. In addition, the resistance varies with temperature. All the other parts in the system, including the sensors themselves, have similar variations. These will be addressed in more detail in Chapter 9, but for now the important thing from a system point of view is this: how will the required accuracy be achieved?

For example, say we’re still trying to measure that 0-to-100 °C temperature range. Measurement with 1 °C accuracy may be achievable without adjustments. However, you might find that the 0.1 °C figure requires some kind of calibration because you can’t get a temperature sensor in your price range with that accuracy. You may have to include an adjustment in the design to compensate for this variation.

The need for a calibration step implies other things. Will the part of the system with the temperature sensor be part of the board that contains the compensation? If not, how do you keep the two parts together once calibration is performed? And what if the field engineer has to change the sensor in the field? Will the engineer be able to do the calibration? Will it really be cheaper, in production, to add a calibration step to the assembly procedure than to purchase a more accurate sensor?

In many cases in which an adjustment is needed, the resulting calibration parameters can be calculated in software and stored. For example, you might bring the system (or just the sensor) to a known temperature and measure the output. You know that an ideal sensor should produce an output voltage X for temperature T, but the real sensor produces an output voltage Y for temperature T. By measuring the output at several temperatures, you can build up a table of information that relates the output of that specific sensor to temperature. This information can be stored in the microprocessor’s memory. When the microprocessor reads the sensor, it looks in the memory (or does a calculation) to determine the actual temperature.
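A minimal sketch of the table-lookup approach: the stored points map raw ADC readings (measured at known temperatures during calibration) to temperature, and the code interpolates linearly between adjacent points. The table values and sizes here are made up for illustration, not taken from any particular sensor:

```c
#include <stdint.h>

/* Hypothetical calibration table: the ADC reading measured at each
   known temperature during calibration. Entries are sorted by ADC
   value; temperatures are stored in 0.1 degC units to avoid
   floating point. */
typedef struct {
    uint16_t adc;       /* raw sensor reading           */
    int16_t  temp_x10;  /* temperature, 0.1 degC units  */
} cal_point_t;

static const cal_point_t cal_table[] = {
    {  100,    0 },   /*   0.0 degC */
    {  520,  250 },   /*  25.0 degC */
    {  980,  500 },   /*  50.0 degC */
    { 1460,  750 },   /*  75.0 degC */
    { 1900, 1000 },   /* 100.0 degC */
};
#define CAL_POINTS (sizeof cal_table / sizeof cal_table[0])

/* Convert a raw ADC reading to temperature (0.1 degC units) by
   linear interpolation between the two nearest table entries;
   readings outside the table are clamped to the end points. */
int16_t adc_to_temp_x10(uint16_t adc)
{
    if (adc <= cal_table[0].adc)
        return cal_table[0].temp_x10;
    for (unsigned i = 1; i < CAL_POINTS; i++) {
        if (adc <= cal_table[i].adc) {
            const cal_point_t *lo = &cal_table[i - 1];
            const cal_point_t *hi = &cal_table[i];
            return (int16_t)(lo->temp_x10 +
                   (int32_t)(adc - lo->adc) *
                   (hi->temp_x10 - lo->temp_x10) /
                   (hi->adc - lo->adc));
        }
    }
    return cal_table[CAL_POINTS - 1].temp_x10;
}
```

With more table points, the interpolation error between points shrinks, at the cost of more calibration measurements and more storage per sensor.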

You would want to look at storing this calibration with the sensor if it was not physically located with the microprocessor. That way, the sensor could be changed without recalibrating. Figure 1.1 shows three means of handling this calibration.

In diagram A, a microprocessor connects to a remote sensor via a cable. The microprocessor stores the calibration information in its EEPROM or flash memory. The tradeoffs for this method are:

- Once the system is calibrated, the sensor has to stay with that microprocessor board. If either the sensor or the microprocessor is changed, the system has to be recalibrated.
- If the sensor or microprocessor is changed and recalibration is not performed, the results will be incorrect, but there is no way to know that the results are incorrect unless the microprocessor has a means to identify specific sensors.
- Data for all the sensors can be stored in one place, requiring less memory than other methods. In addition, if the calibration is performed by calculation instead of by table lookup, all sensors that are the same can use the same software routines, each sensor just having different calibration constants.

Diagram B in Figure 1.1 shows an alternative method of handling a remote sensor, in which the EEPROM that contains the calibration data is located on the board with the sensor. This EEPROM could be a small IC that is accessed with an I2C or Microwire interface (more about those in Chapter 2). The tradeoffs here are:

- Since each sensor carries its own calibration information, sensors and microprocessor boards can be interchanged at will without affecting results. Spare sensors can be calibrated and stocked without having to be matched to a specific system.
- More memories are required, one for each sensor that needs calibration.
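One way to lay out such a per-sensor EEPROM record (an illustration, not a standard format; the names and field sizes are invented here) is a small fixed structure with a marker and a checksum, so the main board can detect an unprogrammed or corrupted record before trusting it:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout of the calibration record stored in the
   sensor board's small serial EEPROM. All fields are 16 bits so
   the record packs without padding on typical compilers. */
typedef struct {
    uint16_t magic;      /* fixed marker identifying a valid record */
    uint16_t sensor_id;  /* serial number of this specific sensor   */
    int16_t  cal[5];     /* calibration constants; their meaning is */
                         /* up to the application                   */
    uint16_t checksum;   /* sum of all preceding 16-bit words       */
} cal_record_t;

#define CAL_MAGIC 0xCA1Bu

/* Sum the 16-bit words of the record, excluding the checksum field. */
uint16_t cal_checksum(const cal_record_t *r)
{
    const uint16_t *w = (const uint16_t *)r;
    size_t n = offsetof(cal_record_t, checksum) / sizeof(uint16_t);
    uint16_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += w[i];
    return sum;
}

/* Reject records that were never programmed or were corrupted. */
int cal_record_valid(const cal_record_t *r)
{
    return r->magic == CAL_MAGIC && r->checksum == cal_checksum(r);
}
```

The `sensor_id` field also gives the main microprocessor the means, mentioned earlier, to notice that a different sensor has been attached.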

Finally, diagram C in Figure 1.1 takes this concept a step further, adding a microcontroller to the sensor board, with the microcontroller performing the calibration and storing calibration data in an internal EEPROM or flash memory. The tradeoffs here are:

- There are more processors and more firmware to maintain. In some applications with rigorous software documentation requirements (medical, military), this may be a significant development cost.
- No calibration effort is required by the main microprocessor. For a given real-world condition such as temperature, it will always get the same value, regardless of the sensor output variation.
- If a sensor becomes unavailable or otherwise has to be changed in production, the change can be made transparent to the main microprocessor code, with all the characteristics of the new sensor handled in the remote microcontroller.

Another factor to consider in calibration is the human element. If a system requires calibration of a sensor in the field, does the field technician need arms twelve feet long to hold the calibration card in place and simultaneously reach the “ENTER” key on the keyboard? Should a switch be placed near the sensor so calibration can be accomplished without walking repeatedly around a table to hit a key or view the results on the display? Can the adjustment process be automated to minimize the number of manual steps required? The more manual adjustments that are needed, the more opportunities there are for mistakes.
