Hazardous Area Classifications

In any manufacturing environment, one of the major safety concerns is the risk of a fire or explosion. OSHA and many other regulatory bodies have therefore established systems to classify products and locations that could create a hazardous situation for workers. OSHA Publication 3073 defines hazardous locations as "areas where flammable liquids, gases or vapors or combustible dusts exist in sufficient quantities to produce an explosion or fire. In hazardous locations, specially designed equipment and special installation techniques must be used to protect against the explosive and flammable potential of these substances." The National Electrical Code (NEC) defines a hazardous area as an area where a potential hazard (e.g., a fire or an explosion) may exist under normal or abnormal conditions because of the presence of flammable gases or vapors, combustible dusts, or ignitable fibers or flyings.

Once an area is identified and classified as hazardous, any electrical equipment in it must be specially designed and tested to ensure it cannot initiate an explosion through arcing or a high equipment surface temperature. In the sanitary industry, these areas can be found in distilleries, bakeries, and pharmaceutical and personal care plants, to name a few.

Hazardous Area Classification

In North America, the most widely used system for classifying hazardous areas is defined by NFPA 70, the National Electrical Code (NEC), in the United States and by the Canadian Electrical Code (CEC) in Canada. It uses three terms to succinctly describe an environment:
  • Class – the general nature of the hazard
  • Division – the probability of the hazard being present
  • Group – the type of hazard
The Class defines the general nature of hazardous or ignitable substances present in the atmosphere:
  • Class I – flammable vapors and gases
  • Class II – combustible dust
  • Class III – ignitable fibers or particulates
The Division defines the probability of the hazardous material being present in a flammable concentration:
  • Division 1 – the hazard is present under normal operating conditions, or is frequently present because of maintenance or repair work (high probability)
  • Division 2 – the hazardous materials are handled, processed, or used but are normally confined in closed containers or closed systems, from which they can escape only through rupture or breakdown of the container or system (low probability)
The Group rates a substance's flammable nature relative to other known substances. Materials are placed in groups based on their ignition temperature and explosion pressure. A few typical examples:
  • Group A – acetylene
  • Group B – hydrogen
  • Group C – ethylene
  • Group D – propane, gasoline, ethanol
  • Group E – combustible metal dusts
  • Group F – carbonaceous dusts such as coal
  • Group G – grain, flour, and wood dusts
Temperature classes also exist; they designate the maximum permissible surface temperature of electrical equipment so that it can operate safely in the surrounding atmosphere.

For example, suppose ethanol is used as an ingredient in a batch formulation for a pharmaceutical product. The reactor itself would be classified Class I, Division 1. The remainder of the production area would be rated a Class I, Division 2 hazard area, so any control panels or other electrical equipment in that room would need to meet the requirements of a Class I, Division 2, Group D hazardous area. An office or workstation in a separate room would be classified as a nonhazardous location. (The sketch at the end of this section shows how the three ratings compose into a simple equipment check.)

Our Newell Automation team can help you identify the hazards in your plant and design the control panels and software that allow you to produce safely. To learn more, email us at sales@mgnewell.com or call us at 336-393-0100.
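As an illustration of how the Class, Division, and Group ratings compose, here is a minimal sketch. The data model and the coverage rule are hypothetical, written only to mirror the ethanol example above; they are not part of the NEC or of any product.

```python
# Illustrative only: a simple model of the Class/Division/Group scheme.
from dataclasses import dataclass

@dataclass(frozen=True)
class HazardRating:
    hazard_class: str  # "I" (gases/vapors), "II" (dusts), "III" (fibers)
    division: int      # 1 (high probability) or 2 (low probability)
    group: str         # e.g., "D" covers ethanol vapors

def equipment_covers(equipment: HazardRating, area: HazardRating) -> bool:
    """Class and group must match; a Division 1 rating also covers a
    Division 2 area of the same class and group."""
    return (equipment.hazard_class == area.hazard_class
            and equipment.group == area.group
            and equipment.division <= area.division)

# The production room from the ethanol example: Class I, Div 2, Group D.
room = HazardRating("I", 2, "D")
panel = HazardRating("I", 1, "D")      # panel rated Class I, Div 1, Group D
print(equipment_covers(panel, room))   # True: a Div 1 rating covers Div 2
```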

Calibration Principles

Calibration is the activity of checking, by comparison with a standard, the accuracy of a measuring instrument of any type. It may also include adjusting the instrument to bring it into alignment with the standard. Even the most precise measurement instrument is of no use unless you can be sure it is reading accurately – or, more realistically, unless you know what its measurement error is. Let's begin with a few definitions:
  • Calibration range – the region between the limits within which a quantity is measured, received, or transmitted, expressed by stating the lower and upper range values.
  • Zero value – the lower end of the calibration range
  • Span – the difference between the upper and lower range values
  • Instrument range – the capability of the instrument; it may differ from the calibration range.
For example, an electronic pressure transmitter may have an instrument range of 0-to-750 psig and an output of 4-to-20 milliamps (mA). If the engineer determines the instrument will only be used between 0 and 300 psig, the calibration range would be specified as 0-to-300 psig = 4-to-20 mA. In this example, the zero input value is 0 psig and the zero output value is 4 mA; the input span is 300 psig and the output span is 16 mA.
Be careful not to confuse the range the instrument is capable of with the range for which the instrument has been calibrated.
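The linear relationship between the calibration input and output can be sketched in a few lines. This is a minimal illustration using the range values from the example above; the function name is ours, not from any standard.

```python
def psig_to_ma(pressure_psig, lo_psig=0.0, hi_psig=300.0,
               lo_ma=4.0, hi_ma=20.0):
    """Expected transmitter output (mA) for a given input pressure,
    assuming a perfectly linear instrument over the calibration range."""
    input_span = hi_psig - lo_psig    # 300 psig
    output_span = hi_ma - lo_ma       # 16 mA
    return lo_ma + (pressure_psig - lo_psig) / input_span * output_span

# Midpoint of the calibration range: 150 psig should read 12 mA.
print(psig_to_ma(150))   # 12.0
```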
Plotted as test result versus known sample value, an ideal instrument would produce results that exactly match the sample value, with no error at any point within the calibrated range – a straight "Ideal Results" line. Without calibration, an actual instrument may produce results that differ from the sample value, with a potentially large error. Calibrating the instrument can improve this situation significantly. During calibration, the instrument is "taught," using the known values of two calibrators, what result it should provide. The process eliminates the error at those two points, in effect moving the "Before Calibration" curve closer to the Ideal Results line. The error is reduced to zero at the calibration points, and the residual error at any other point within the operating range should fall within the manufacturer's published linearity or accuracy specification. (A sketch after the definitions below illustrates this two-point correction.)

Every calibration should be performed to a specified tolerance. The terms tolerance and accuracy are often used incorrectly. In ISA's The Automation, Systems, and Instrumentation Dictionary, the definitions for each are as follows:
  • Accuracy – the ratio of the error to the full scale output or the ratio of the error to the output, expressed in percent span or percent reading, respectively.
  • Tolerance – permissible deviation from a specified value; may be expressed in measurement units, percent of span, or percent of reading.
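Here is the two-point "teaching" described above as a minimal sketch. The helper name and the calibrator readings are hypothetical, chosen only to illustrate the idea of fitting a slope and offset from two known points.

```python
def two_point_correction(cal1, cal2):
    """Return a function that corrects raw readings, given two
    (raw_reading, true_value) pairs taken at the calibration points."""
    (raw1, true1), (raw2, true2) = cal1, cal2
    slope = (true2 - true1) / (raw2 - raw1)
    offset = true1 - slope * raw1
    return lambda raw: slope * raw + offset

# Illustrative values: the instrument reads 1.8 at a true 0.0
# and 101.2 at a true 100.0.
correct = two_point_correction((1.8, 0.0), (101.2, 100.0))
print(correct(1.8))    # 0.0   -- error eliminated at the calibration points
print(correct(101.2))  # 100.0
print(correct(51.0))   # residual error between points depends on linearity
```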
It is recommended that tolerance be specified in actual measurement units for the calibrations performed at your facility. Specifying an actual value eliminates mistakes caused by calculating percentages of span or of reading, and it keeps the tolerance in the same units as the measurement itself. Calibration tolerances should be determined from a combination of factors. These factors include:
  • Requirements of the process
  • Capability of available test equipment
  • Consistency with similar instruments at your facility
  • Manufacturer’s specified tolerance
The term accuracy ratio has historically been used to describe the relationship between the accuracy of the test standard and the accuracy of the instrument under test. A good rule of thumb is to maintain an accuracy ratio of 4:1 when performing calibrations: the test equipment (such as a field standard) used to calibrate the process instrument should be four times more accurate than the process instrument. For example, an instrument checked to a tolerance of ±1 psig should be calibrated against a standard accurate to ±0.25 psig or better. With today's technology, an accuracy ratio of 4:1 is becoming more difficult to achieve.

Why is a 4:1 ratio recommended? It minimizes the effect of the standard's accuracy on the overall calibration accuracy. If a higher-level standard is later found to be out of tolerance by a factor of two, for example, calibrations performed using that standard are less likely to be compromised. The out-of-tolerance standard still needs to be investigated by reverse traceability of all calibrations performed with it, but confidence remains high that the process instrument is within tolerance.

Traceability

Last but not least, all calibrations should be traceable to a nationally or internationally recognized standard. In the United States, for example, the National Institute of Standards and Technology (NIST) maintains the nationally recognized standards. Traceability is defined by ANSI/NCSL Z540-1-1994 as "the property of a result of a measurement whereby it can be related to appropriate standards, generally national or international standards, through an unbroken chain of comparisons." This does not mean a calibration shop needs to have its standards calibrated against a primary standard; it means the calibrations performed are traceable to NIST through all the standards used along the way, no matter how many levels exist between the shop and NIST.

Traceability is accomplished by ensuring that the test standards we use are routinely calibrated by higher-level reference standards. Typically, shop standards are sent out periodically to a standards lab with more accurate test equipment; that lab's standards are in turn checked against still higher-level standards, and so on, until the chain eventually reaches primary standards maintained by NIST or another internationally recognized body. The calibration technician's role in maintaining traceability is to ensure the test standard is within its calibration interval and that its unique identifier is recorded on the applicable calibration data sheet when the instrument calibration is performed. Additionally, when test standards are calibrated, the calibration documentation must be reviewed for accuracy and to confirm it was performed using NIST-traceable equipment.

M.G. Newell offers a variety of calibration services that keep your operations consistent and cost effective. Contact your local account manager for rates and plan options.

A distillery wanted to update and automate many of the manual processes in their plant. See how our Newell Automation team saved them time and money and improved their accuracy.

Are there areas in your plant still tied to manual weighing and recording? A conservative weighing error of 0.05% can cost a plant tens of thousands of dollars every year: on 100 tons of bulk grain, a 0.05% error translates to 100 pounds of unaccounted product.

Our Newell Automation team was contacted by a distillery that wanted to update and automate many of the manual processes in the plant. One of the most manual areas was the granary, where they received and stored the various grains (corn, wheat, barley, and rye) that are cooked, fermented, and then distilled into whiskey. The existing granary process used manual scales – some of which dated back nearly a hundred years! Grains were stored in bulk, and augers transferred the desired grain into an intermediate weigh bin. An operator would read the weights visually and document every measurement in a written notebook. Opportunities for error were numerous: old scales possibly out of calibration, manual recording of weights, manual starting and stopping of the augers, unknown levels of grain in the bulk tanks, and so on.

Newell Automation provided the distillery with a new PLC/controls platform. The system recorded weights from load cells on both the bulk tanks and the intermediate weigh bin, and the information was relayed to the operator in real time via an HMI display. We also provided automated starts and stops for all the augers via the PLC, and a motor control center with VFDs gave the operator further control of auger speed to dose the grains more accurately for each recipe. The system automated the documentation and recording, leading to better accuracy and improved cost savings. It also freed up a significant amount of the operator's time for other tasks in the plant.

More whiskey production through improved automation – that's something we can raise a toast to! If you want to streamline and automate some of your processes, contact one of our associates to see how We Make It Work Better.

What the Heck is SQL? – Understanding data storage and visualization

A customer on the East Coast wanted to understand more about the downtime in their process. What was causing the downtime? What equipment was involved? How much time was actual downtime? Like many manufacturers, they were hoping to improve their process time and get more production out the door. They turned to Andy Baughman, M.G. Newell Control Systems Engineer. Andy upgraded their controls and programmed their system to collect all of the relevant data about their process. He then developed a solution based on SQL programming that allowed them to data-mine the collected process information. The end result:
  • The customer was able to see trends about the equipment and process that had not been detected previously
  • They were able to tailor their Preventive Maintenance using these trends
  • Because the entire system was web-based, users could integrate web features such as talk-to-text
  • Their West Coast office was able to monitor the process in “real time”
  • The SQL program gives them the versatility to add other options such as bar-coding in the future
  • With improved monitoring and data collection, the customer realized that their process was actually running over 50 units per minute instead of the 30 units per minute previously thought
Why SQL?

Structured Query Language (SQL) is a programming language for storing, manipulating, and retrieving data from databases. Database tables collect and store process and product data in a way that can be retrieved and used later. SQL allows users to describe, define, and manipulate that data, and it can even be embedded within other programming languages.

"Flat files" store data in columns and rows, like an Excel® spreadsheet. It is preferable to collect data in database tables rather than flat files. Why? Let's use the example above to explain. Downtime doesn't fit the flat-file model: for any given production run, the number of downtime events can vary from zero to many. A flat file would need enough columns to store the maximum possible number of downtime events. This would be neither efficient nor practical, because any production run that concluded with no downtime would leave zeros or null values in every column assigned to downtime events. The file would be huge yet contain little or no information, and retrieving anything from it would be tedious.

A better solution is to store the data in multiple tables. A Job table could store all the information about a unique production run. A job marries a specific product to a specific production line and documents details such as start/stop times, product information, and production rate. A Downtime table could store the information about each downtime occurrence. The two tables can then be linked to each other in the database, and a field can be added to the Job data to represent the number of downtime events that occurred during each production run. (A sketch of this two-table design appears at the end of this article.) Database tables can also be added or deleted at any time; for example, our customer plans to add bar-coding functionality in the future.

One never knows today what data might be important five years from now. By collecting as much data as possible now, the information can be "data mined" in the future. Data mining is an analytic process designed to explore large amounts of data in search of consistent patterns or systematic relationships between variables; the findings are then validated by applying the patterns to new subsets of data. The ultimate goal of data mining is prediction. Data mining is now common in the insurance industry, for example, as insurers gather ever more sophisticated information about car accidents, and it is widely used by retailers to analyze consumer buying trends.

Want to know more? M.G. Newell has a team of control and automation specialists on staff to help with PLC design and programming, control panel design and fabrication, and software programming. As a UL-certified facility, our staff averages 20+ years of experience with backgrounds in electrical engineering, systems controls, and automation design. Need an impartial assessment of your control systems? Do you wish your controls were more flexible? Contact us at sales@mgnewell.com and we can schedule an audit or just come in to answer any questions you may have.
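Here is the two-table design described above as a concrete sketch, with SQL embedded in Python via the standard sqlite3 module. The table and column names, and the sample rows, are illustrative only, not the customer's actual schema or data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database for the example
conn.executescript("""
    CREATE TABLE job (
        job_id      INTEGER PRIMARY KEY,
        product     TEXT,
        line        TEXT,
        start_time  TEXT,
        stop_time   TEXT
    );
    CREATE TABLE downtime (
        event_id  INTEGER PRIMARY KEY,
        job_id    INTEGER REFERENCES job(job_id),
        reason    TEXT,
        minutes   REAL
    );
""")
conn.execute("INSERT INTO job VALUES (1, 'Widget A', 'Line 1', "
             "'2023-01-05 08:00', '2023-01-05 16:00')")
conn.executemany(
    "INSERT INTO downtime (job_id, reason, minutes) VALUES (?, ?, ?)",
    [(1, 'jam at filler', 12.5), (1, 'label change', 4.0)])

# Each job row stays small no matter how many downtime events occur;
# a join retrieves the event count and total downtime per production run.
for row in conn.execute("""
        SELECT j.job_id, j.product,
               COUNT(d.event_id)          AS events,
               COALESCE(SUM(d.minutes), 0) AS downtime_minutes
        FROM job j LEFT JOIN downtime d ON d.job_id = j.job_id
        GROUP BY j.job_id"""):
    print(row)   # (1, 'Widget A', 2, 16.5)
```

A run with no downtime simply has no rows in the downtime table, rather than a string of empty columns, which is exactly the inefficiency of the flat-file model that the article describes.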