Calibration Principles

Calibration is the activity of checking, by comparison with a standard, the accuracy of a measuring instrument of any type. It may also include adjusting the instrument to bring it into alignment with the standard. Even the most precise measuring instrument is of little use unless you can be sure it is reading accurately – or, more realistically, unless you know what its measurement error is. Let’s begin with a few definitions:
  • Calibration range – the region between the limits within which a quantity is measured, received, or transmitted, expressed by stating the lower and upper range values.
  • Zero value – the lower end of the calibration range
  • Span – the difference between the upper and lower range values
  • Instrument range – the capability of the instrument; it may differ from the calibration range.
For example, an electronic pressure transmitter may have an instrument range of 0–750 psig and an output of 4-to-20 milliamps (mA). However, the engineer has determined that the instrument will be calibrated for 0-to-300 psig, so the calibration range is specified as 0-to-300 psig = 4-to-20 mA. In this example, the zero input value is 0 psig and the zero output value is 4 mA; the input span is 300 psig and the output span is 16 mA.
Be careful not to confuse the range the instrument is capable of with the range for which the instrument has been calibrated.
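To make the transmitter example above concrete, here is a minimal sketch in Python of the linear input-to-output scaling and of expressing an error as a percent of span. The function names and the sample readings are illustrative only, not taken from any calibration standard.

    # Linear scaling for the example transmitter: 0-300 psig in, 4-20 mA out.
    # Helper names and sample readings are illustrative only.

    def expected_output_ma(pressure_psig, lrv=0.0, urv=300.0, out_zero=4.0, out_span=16.0):
        """Return the ideal 4-20 mA output for a given input pressure."""
        fraction = (pressure_psig - lrv) / (urv - lrv)   # fraction of input span
        return out_zero + fraction * out_span            # 4 mA at 0 psig, 20 mA at 300 psig

    def error_percent_of_span(measured_ma, ideal_ma, out_span=16.0):
        """Express an output error as a percent of output span."""
        return 100.0 * (measured_ma - ideal_ma) / out_span

    ideal = expected_output_ma(150.0)                               # mid-scale -> 12.0 mA
    print(f"{error_percent_of_span(12.08, ideal):+.2f} % of span")  # a 0.08 mA high reading = +0.50 % of span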
Ideally, an instrument would produce test results that exactly match the sample value, with no error at any point within the calibrated range; plotted against the sample values, this is a straight “Ideal Results” line. Without calibration, however, an actual instrument may produce results that differ from the sample value, with a potentially large error. Calibrating the instrument can improve this situation significantly. During calibration, the instrument is “taught,” using the known values of two calibrators, what result it should provide. The process eliminates the error at these two points, in effect moving the “Before Calibration” curve closer to the ideal line, as shown by an “After Calibration” curve. The error is reduced to zero at the calibration points, and the residual error at any other point within the operating range falls within the manufacturer’s published linearity or accuracy specification.

Every calibration should be performed to a specified tolerance. The terms tolerance and accuracy are often used interchangeably, but they mean different things. In ISA’s The Automation, Systems, and Instrumentation Dictionary, the definitions are as follows:
  • Accuracy – the ratio of the error to the full scale output or the ratio of the error to the output, expressed in percent span or percent reading, respectively.
  • Tolerance – permissible deviation from a specified value; may be expressed in measurement units, percent of span, or percent of reading.
It is recommended that calibration tolerances be specified in actual measurement units for the calibrations performed at your facility. Specifying an actual value eliminates mistakes caused by calculating percentages of span or of reading, and the tolerance should be stated in the units measured during the calibration (a short sketch of such a check follows the list below). Calibration tolerances should be determined from a combination of factors, including:
  • Requirements of the process
  • Capability of available test equipment
  • Consistency with similar instruments at your facility
  • Manufacturer’s specified tolerance
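As promised above, here is a minimal sketch of checking a calibration point against a tolerance stated directly in output units. The tolerance value and the readings are made up for the example.

    # Tolerance check in measurement units (mA), as recommended above.
    # The tolerance and readings below are illustrative only.

    TOLERANCE_MA = 0.08                      # e.g. +/-0.5 % of a 16 mA span, pre-converted to mA

    def within_tolerance(measured_ma, ideal_ma, tol_ma=TOLERANCE_MA):
        """True if the deviation from the ideal output is inside the stated tolerance."""
        return abs(measured_ma - ideal_ma) <= tol_ma

    print(within_tolerance(12.05, 12.0))     # True:  0.05 mA deviation is inside +/-0.08 mA
    print(within_tolerance(12.20, 12.0))     # False: 0.20 mA deviation is out of tolerance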
The term accuracy ratio was used in the past to describe the relationship between the accuracy of the test standard and the accuracy of the instrument under test. A good rule of thumb is to ensure an accuracy ratio of 4:1 when performing calibrations: the test equipment (such as a field standard) used to calibrate the process instrument should be four times more accurate than the instrument being checked. With today’s technology, an accuracy ratio of 4:1 is becoming more difficult to achieve.

Why is a 4:1 ratio recommended? Maintaining a 4:1 ratio minimizes the effect of the accuracy of the standard on the overall calibration accuracy. If a higher-level standard is later found to be out of tolerance by a factor of two, for example, the calibrations performed using that standard are less likely to be compromised. The out-of-tolerance standard still needs to be investigated through reverse traceability of all calibrations performed with it, but our assurance remains high that the process instrument is within tolerance. (A short sketch of this ratio check appears at the end of this article.)

Traceability

Last but not least, all calibrations should be performed traceable to a nationally or internationally recognized standard. In the United States, for example, the National Institute of Standards and Technology (NIST) maintains the nationally recognized standards. Traceability is defined by ANSI/NCSL Z540-1-1994 as “the property of a result of a measurement whereby it can be related to appropriate standards, generally national or international standards, through an unbroken chain of comparisons.” Note that this does not mean a calibration shop needs to have its standards calibrated against a primary standard. It means that the calibrations performed are traceable to NIST through all the standards used to calibrate the standards, no matter how many levels exist between the shop and NIST.

Traceability is accomplished by ensuring the test standards we use are routinely calibrated by higher-level reference standards. Typically, the standards used in the shop are sent out periodically to a standards lab that has more accurate test equipment. The standards from that calibration lab are in turn periodically checked against higher-level standards, and so on, until eventually the standards are tested against primary standards maintained by NIST or another internationally recognized body.

The calibration technician’s role in maintaining traceability is to ensure the test standard is within its calibration interval and that its unique identifier is recorded on the applicable calibration data sheet when the instrument calibration is performed. Additionally, when test standards are calibrated, the calibration documentation must be reviewed for accuracy and to confirm it was performed using NIST-traceable equipment.

M.G. Newell offers a variety of calibration services that keep your operations consistent and cost effective. Contact your local account manager for rates and plan options.
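As noted above, here is a minimal sketch of the 4:1 accuracy-ratio check. The accuracy figures are illustrative, and both are assumed to be expressed in the same units (percent of span).

    # Accuracy-ratio (4:1 rule of thumb) check. Both accuracies must be in the
    # same units; the values below are illustrative only.

    def accuracy_ratio(device_accuracy, standard_accuracy):
        """Ratio of the unit-under-test accuracy to the test-standard accuracy."""
        return device_accuracy / standard_accuracy

    uut_accuracy = 0.25    # process transmitter: +/-0.25 % of span
    std_accuracy = 0.05    # field standard:      +/-0.05 % of span

    ratio = accuracy_ratio(uut_accuracy, std_accuracy)
    print(f"{ratio:.1f}:1")                                               # 5.0:1
    print("meets 4:1 rule" if ratio >= 4 else "standard not accurate enough")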

A distillery wanted to update and automate many of the manual processes in their plant. See how our Newell Automation team saved them time and money and improved their accuracy.

Are there areas in your plant still tied to manual weighing and recording? Even a conservative 0.05% weighing error can cost a plant tens of thousands of dollars every year. For example, measuring one ton of bulk grain with a 0.05% error leaves about a pound of product unaccounted for, and across a year of receipts those pounds add up quickly.

Our Newell Automation team was contacted by a distillery that wanted to update and automate many of the manual processes in its plant. One of the most manual areas was the granary, where they received and stored the various grains (corn, wheat, barley, and rye) that are cooked, fermented, and then distilled into whiskey.

The existing granary process used manual scales, some of which dated back nearly a hundred years. Grains were stored in bulk, and augers transferred the desired grain into an intermediate weigh bin. An operator would visually weigh the grains and document all the measurements in a written notebook. Opportunities for error were numerous: old scales possibly out of calibration, manual recording of weights, manual starting and stopping of the augers, unknown levels of grain in the bulk tanks, and so on.

Newell Automation provided the distillery with a new PLC/controls platform. The system recorded weights from load cells on both the bulk tanks and the intermediate weigh bin, and the information was relayed to the operator in real time via an HMI display. We also provided automated starts and stops for all the augers via the PLC, and a motor control center with VFDs gave the operator further control of auger speed to dose the grains more accurately for each recipe.

The system automated the documentation and recording, leading to better accuracy and improved cost savings. It also freed up a significant amount of time for the operator to perform other tasks in the plant. More whiskey production through improved automation: that’s something we can raise a toast to! If you want to streamline and automate some of your processes, contact one of our associates to see how We Make It Work Better.

The ABC’s of VFDs – A user’s guide to VFD terminology

Keeping up with all the terminology surrounding variable frequency drives (VFDs) can be daunting, so we’ve prepared a guide that explains some basic terms and helps you become a VFD power user.

A VFD is a device that controls the speed of an electrical motor by varying the frequency and voltage of its power supply. The VFD also has ramp-up and ramp-down capabilities to start and stop the electrical motor smoothly.
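The familiar relationship between supply frequency and motor speed shows why varying the frequency works. The short sketch below uses the standard synchronous-speed formula for an AC induction motor (actual shaft speed is slightly lower because of slip); the 4-pole motor and the frequencies are illustrative assumptions.

    # Synchronous speed of an AC induction motor: RPM = 120 * frequency / poles.
    # The 4-pole motor and the frequencies below are illustrative.

    def synchronous_speed_rpm(frequency_hz, poles=4):
        return 120.0 * frequency_hz / poles

    for hz in (60, 45, 30):
        print(f"{hz} Hz -> {synchronous_speed_rpm(hz):.0f} RPM")
    # 60 Hz -> 1800 RPM, 45 Hz -> 1350 RPM, 30 Hz -> 900 RPM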

Why do we need to control the speed of an electrical motor? Well, there are multiple reasons, most of which you probably already know:

  • Save energy and improve system efficiency
  • Reach the desired torque or power for the process requirements
  • Lower the noise levels of pumps, blowers, fans, compressors, etc.
  • Reduce mechanical stress on the machines and improve their life cycle
  • Improve the working environment

There are also new features on today’s VFDs, such as CIP Safety, preventive-maintenance monitoring of drive data to alert maintenance before a failure, and automatic PID tuning for system changes.

A VFD consists of 3 primary sections: the rectifier/converter, the DC Bus, and the inverter.

The rectifier/converter is the first of the three sections of a VFD’s main power circuit, and the first in terms of power flow. Incoming AC line voltage is rectified, or converted, to DC voltage in this section, which consists of diodes, silicon-controlled rectifiers (SCRs), or insulated gate bipolar transistors (IGBTs) connected in a full-wave bridge configuration. One rectifying device allows power to pass only when the voltage is positive; a second allows power to pass only when the voltage is negative, so two rectifying devices are required for each phase of incoming power.

The DC bus is the second section of a VFD’s main power circuit. Its main function is to store, smooth, and deliver the DC voltage. The incoming power from the rectifier contains voltage ripple, which is smoothed by capacitors on the bus.
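For a feel of the voltages involved, here is a back-of-the-envelope sketch assuming an ideal three-phase, six-pulse diode bridge fed from a 480 V line; the line voltage and the ideal-bridge assumption are for illustration only.

    import math

    # Ideal six-pulse rectifier estimates; 480 V line-to-line is an assumed example.
    v_line = 480.0                                      # AC line-to-line voltage, RMS
    v_dc_avg = (3 * math.sqrt(2) / math.pi) * v_line    # ~1.35 x V_LL, about 648 V average
    v_dc_peak = math.sqrt(2) * v_line                   # about 679 V peak before smoothing

    print(f"average DC bus: {v_dc_avg:.0f} V, peak: {v_dc_peak:.0f} V")
    # The bus capacitors hold the voltage near the peak and smooth the ripple,
    # which for a six-pulse bridge appears at 6 x the line frequency (360 Hz on a 60 Hz line).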

The third section of a VFD’s power circuit is the inverter. The inverter section is the primary difference between an AC drive and a DC drive. It is comprised of insulated gate bipolar transistors (IGBTs), which convert the DC voltage back into AC voltage to feed the motor. IGBTs are very fast, very small semiconductor switches that are actuated electronically; switched in the right sequence, they produce a nearly sinusoidal output current.

The technique used to create the AC output voltage and vary the output frequency is pulse width modulation (PWM). PWM is a VFD control scheme in which a constant DC voltage is used to reconstruct a pseudo-AC voltage waveform using a set of six power switches, usually IGBTs. Varying the width of the fixed-amplitude pulses controls the effective voltage. This scheme works because the motor is a large inductor and does not allow the current to pulse the way the voltage does. Sequenced correctly, PWM produces motor current with a nearly perfect sinusoidal waveform.
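Here is a minimal sketch of the sine-triangle comparison commonly used to generate PWM: a sinusoidal reference is compared with a high-frequency triangular carrier, and the switch is on whenever the reference exceeds the carrier, so the pulse width tracks the sine wave. The frequencies and amplitude are illustrative assumptions.

    import math

    def carrier(t, f_carrier=2000.0):
        """Triangle wave between -1 and +1 at the carrier (switching) frequency."""
        phase = (t * f_carrier) % 1.0
        return 4.0 * abs(phase - 0.5) - 1.0

    def reference(t, f_out=60.0, amplitude=0.8):
        """Sinusoidal reference at the desired output frequency."""
        return amplitude * math.sin(2.0 * math.pi * f_out * t)

    def pwm_state(t):
        """One inverter leg: on (1) when the reference exceeds the carrier, else off (0)."""
        return 1 if reference(t) > carrier(t) else 0

    # Sample one 60 Hz output cycle at 100 kHz; averaged over the full cycle the
    # pulses are on about half the time, mirroring the symmetric sine reference.
    samples = [pwm_state(n / 100_000.0) for n in range(1667)]
    print(sum(samples) / len(samples))        # close to 0.5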

All three sections are controlled by a microprocessor unit that performs numerous functions, such as controlling the speed, monitoring alarms and faults, and interfacing the drive with other devices over a communication protocol. This means the user can control the start/stop function and motor speed, and receive feedback about current, speed, and other motor or device variables.

Soft Starter vs VFD

What’s the difference between a soft starter and a VFD? While a VFD can vary the speed of the motor, a soft starter only controls how the motor starts and stops. A soft starter is cheaper and smaller; however, a VFD should be used when high starting torque or ongoing speed control is required.

For additional information, visit our website: www.mgnewell.com or contact us at sales@mgnewell.com.

What the Heck is SQL? – Understanding data storage and visualization

A customer on the East Coast wanted to understand more about the downtime in their process. What was causing the downtime? What equipment was involved? How much time was actually downtime? Like many manufacturers, they were hoping to improve their process time and get more production out the door. They turned to Andy Baughman, M.G. Newell Control Systems Engineer. Andy upgraded their controls and programmed their system to collect all of the relevant data about their process. He then developed a solution based on SQL programming that allowed them to data-mine the collected process information. The end result:
  • The customer was able to see trends about the equipment and process that had not been detected previously
  • They were able to tailor their Preventive Maintenance using these trends
  • With the entire system being web-based, it allowed the users to integrate web features such as talk-to-text
  • Their West Coast office was able to monitor the process in “real time”
  • The SQL program gives them the versatility to add other options such as bar-coding in the future
  • With improved monitoring and data collection, the customer realized that their process was actually running over 50 units per minute instead of the 30 units per minute previously thought
Why SQL?

Structured Query Language (SQL) is a programming language for storing, manipulating, and retrieving data from databases. Database tables collect and store process and product data in a way that can be retrieved and used later. SQL allows users to describe, define, and manipulate that data, and even allows the user to embed it within other programming languages.

“Flat files” describe data that is stored in columns and rows (like an Excel® spreadsheet). It is preferable to collect data in database tables rather than flat files. Why? Let’s use the example above to explain. Downtime doesn’t fit the flat-file model. For any given production run, the number of downtime events can vary from zero to many. A flat file would need enough columns to store the maximum possible number of downtime events. This would not be efficient or practical, because any production run that concluded with no downtime would have zeros or “null” values in every column assigned for downtime events. The file itself would be huge and contain little to no information. Likewise, retrieving any information would be tedious.

A better solution is to store the data in multiple tables. A “Job” table could store all the information regarding a unique production run: a job marries a specific product to a specific production line and documents details such as start/stop times, product information, production rate, etc. A “Downtime” table could store all the information regarding each downtime occurrence. The two tables can then be linked to each other in a database, and a field can be added to the Job data to represent the number of downtime events that occurred during each production run. Additionally, database tables can be added or deleted at any time; for example, our customer plans to add bar-coding functionality in the future. (A minimal sketch of this two-table approach appears at the end of this article.)

One never knows today what data might be important five years from now. By collecting as much data as possible now, the collected information can be “data mined” in the future. Data mining is an analytic process designed to explore large amounts of data in search of consistent patterns or systematic relationships between variables; that information is then validated by applying the patterns to new subsets of data. The ultimate goal of data mining is prediction. For example, data mining is now common in the insurance industry as insurers gather more and more sophisticated information about car accidents, and it is commonly used by retailers to track consumer buying trends.

Want to know more? M.G. Newell has a team of control and automation specialists on staff to help with PLC design and programming, control panel design and fabrication, and software programming. As a UL-certified facility, our staff averages 20+ years of experience with backgrounds in electrical engineering, systems controls, and automation design. Need an impartial assessment of your control systems? Do you wish your controls were more flexible? Contact us at sales@mgnewell.com and we can schedule an audit or simply come in to answer any questions you may have.
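As mentioned above, here is a minimal sketch of the two-table Job/Downtime idea, using Python’s built-in sqlite3 module. The table names, columns, and sample rows are illustrative, not the customer’s actual schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # One row per production run (a "Job" marries a product to a production line) ...
    cur.execute("""
        CREATE TABLE job (
            job_id     INTEGER PRIMARY KEY,
            product    TEXT,
            line       TEXT,
            start_time TEXT,
            stop_time  TEXT
        )""")

    # ... and zero or more downtime events linked back to the run; this is the
    # variable-length relationship a flat file cannot store efficiently.
    cur.execute("""
        CREATE TABLE downtime (
            event_id  INTEGER PRIMARY KEY,
            job_id    INTEGER REFERENCES job(job_id),
            equipment TEXT,
            minutes   REAL,
            reason    TEXT
        )""")

    cur.execute("INSERT INTO job VALUES (1, 'Product A', 'Line 1', '06:00', '14:00')")
    cur.executemany(
        "INSERT INTO downtime (job_id, equipment, minutes, reason) VALUES (?, ?, ?, ?)",
        [(1, 'Filler', 12.5, 'jam'), (1, 'Capper', 4.0, 'changeover')])

    # Total downtime per job: the kind of trend query used to tailor maintenance.
    for row in cur.execute("""
            SELECT j.job_id, j.product, COUNT(d.event_id), COALESCE(SUM(d.minutes), 0)
            FROM job j LEFT JOIN downtime d ON d.job_id = j.job_id
            GROUP BY j.job_id, j.product"""):
        print(row)    # (1, 'Product A', 2, 16.5)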

Control Panel Fabrication – Best Practices for a Panel Builder

Depending on your process, a control panel may be small – designed to control a simple batching process – or complex – designed to control several loops and integrate various pieces of equipment in various locations around your plant. As such, these panels can house an increasing array of devices. In addition, you may want to plan for a future expansion. When selecting a vendor to build your control panels, it is wise to do some homework and check out their work. We’ve put together a few Frequently Asked Questions to help guide you through the process of selecting an automation company to build your control panel.
  1. Are they UL Certified? Do they use UL components? UL is a global, independent safety science company with over 100 years of expertise in safety solutions. UL standards encompass their extensive range of safety research and scientific expertise. Most control panels fall under the UL-508 specification, which covers industrial control panels intended for general industrial use and operating at 1000 volts or less. Within this standard, UL-508A is the UL standard for the construction of industrial control panels. It provides guidelines to panel builders on issues including proper component selection, wiring methods, and calculation of short-circuit current ratings. A panel that carries the UL-508A listed mark means that the panel, its electrical components, and its construction meet UL-508A standards. Electrical inspectors look for this mark as evidence of third-party certification. This is important to the local municipal inspection authority as well as to the panel purchaser, because it shows that the panel complies with accepted safety standards.
  2. Do they have other certifications? Beyond UL certification, most engineers and panel builders also hold other industry certifications. Two such organizations are CSIA, the Control System Integrators Association, and ISA, the International Society of Automation. Both perform independent audits on the panel builder and/or the company and provide a non-biased, objective assessment. CSIA Certified companies have demonstrated through an independent audit that they adhere to CSIA’s comprehensive Best Practices; key areas covered include not only project management, system development, and quality assurance, but also the company’s financial health, human resources, and marketing and business development. ISA certifies a technician’s skills: through its Certified Control Systems Technician (CCST) program, it provides an objective assessment and confirmation of a technician’s skills. ISA’s three levels of CCST certification require differing degrees of technical experience, education, and training. CCSTs calibrate, document, troubleshoot, and repair/replace instrumentation for systems that measure and control level, temperature, pressure, flow, and other process variables.
  3. Do they keep their panel building area clean? Keeping your panel building facility clean may sound like a no-brainer, but the benefits of a clean facility go beyond simply clean floors. The most important component of any work environment is its people. By providing a clean and hygienic workplace, businesses can make a significant positive impact on the health and safety, productivity, and satisfaction of employees. Studies have shown that employees in a clean facility are 12% more productive. What would this look like in a panel building shop? It means:
  • projects are clearly organized and kept in their own workspace
  • components are labeled and stored with the correct project
  • tools and other supplies are clearly labeled and easily accessible
  • trash and other unused supplies are removed from the work area regularly
  • dust, dirt, and debris that could cause a fault within the panel are kept to a minimum
  4. Do they standardize the layout of the panel? While neatness may be the first thing that jumps out about a well-designed panel, there are other aspects you should look for in good control panel design, including component placement, labeling, panel size and space, and wire design.
  • Components should be arranged in a logical and functional manner. High- and low-voltage components should be segregated from each other. Since most panels have their main power disconnect switch in the upper right of the panel, it makes sense for the highest-voltage components to be at the top, with lower-voltage components below. The PLC racks and other sensitive electronics are typically located away from the hotter power components.
  • Labeling of components, and especially of wiring, is key, and the labeling should be consistent within the panel. Wiring and components should also correspond to the P&ID so that troubleshooting will be easier down the road.
  • Panel size and spacing should obviously be large enough to house the needed components, but should also allow room for possible future expansion. Proper heat dissipation is critical within a control panel; a well-designed panel incorporates the means for expelling excess heat vertically within the enclosure. Additional room should also be left at the bottom for coiling spare field wiring.
  • A good wiring plan uses both the right type and the right amount of wire. Enough space should be given so that each wire can be neatly connected to its component and its label can be clearly seen. Finally, wiring should be firmly connected so that wires cannot be easily pulled out or lose their connection.
  5. Do they provide documentation with the panel? Last but not least, your automation supplier should provide all of the proper documentation along with the panel itself. At a minimum, documentation should include any layout drawings and/or P&IDs, electrical schematics, and bills of material for the components. If you requested a UL panel, the panel should also include a UL sticker and verification that all components are UL certified. Panels should also have a tag that includes the manufacturer and the project number.