Over the last three decades, power quality monitors have gone through significant changes. Their processing power, flexibility, and usability have increased, and their costs per monitoring point have decreased. These changes, driven by market demands, standardized measurement techniques, large-scale integrated circuits, and software improvements, have made today's power quality monitors better than ever. The latest models feature multiple processor architectures (such as digital signal processors), the ability to communicate over high-speed Ethernet links, and Web browser capabilities that permit visual data analysis from remote locations. What were once future possibilities in the realm of PQ monitors are now present-day realities.
Microprocessor-based power quality monitors first appeared on the market in the mid-1970s. These early monitors produced text messages that identified a disturbance by event type and magnitude, such as CHA LO 100.1V 10:15:42.21 04/24/76. Voltage was the only parameter monitored. Second-generation monitors included graphical outputs of captured waveforms. The next generation of instruments, produced in the mid-1980s, featured megahertz sampling rates that provided detailed information on medium- and high-frequency transients. With fourth-generation power quality monitors, automated setups and versatile software with report writers were commonplace. Today's monitors employ the hardware, firmware, and communication technology necessary for condition- and performance-based, predictive maintenance.
While the speed, power, and analytical capabilities of PQ monitors have dramatically improved over the years, the overall architecture has remained constant. Data acquisition equipment converts analog signals into digital representations, which processors then manipulate. A database system helps manage the waveforms and calculated parameters by determining what information should be stored and when. After software analyzes and characterizes the data, the information outputs to a local display or remote visualization tool, such as a Web browser.
With the newest PQ monitors, you can obtain power and energy measurements, harmonic measurements, sequencing components, and with some systems, the physical parameters of torque, strain, pressure, temperature, and humidity. You can then measure and calculate many of these parameters by using standards developed by the Institute of Electrical and Electronics Engineers (IEEE) and the International Electrotechnical Commission (IEC).
For example, let's consider harmonic measurements. The latest PQ monitors watch much more than simple total harmonic distortion (THD) numbers. Today, you can monitor the full harmonic spectrum.
In 60 Hz systems, standardized IEEE and IEC calculation methods can yield values, or bins, every 5 Hz. The bins on either side of each integer harmonic frequency (115, 120, and 125 Hz for h2; 175, 180, and 185 Hz for h3; and so on) are combined to produce the individual harmonic magnitudes.
The remaining bins between each harmonic are combined into the interharmonic values (h2-3, h3-4, etc.). Frequencies below the fundamental are the subharmonics, which the flicker algorithm combines to generate Pst and Plt values.
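The grouping described above can be sketched in a few lines of Python. This is a simplified illustration of the IEC-style 5 Hz binning for a 60 Hz system, not a vendor implementation; the function names and the root-sum-square combining convention shown here are assumptions for the example.

```python
# Sketch of 5 Hz bin grouping for a 60 Hz system, assuming `spectrum` holds
# RMS magnitudes of 5 Hz bins, where index i corresponds to i * 5 Hz.
# Names and the RSS combining rule are illustrative, not a product API.

def harmonic_magnitude(spectrum, h, fund_hz=60, res_hz=5):
    """RSS-combine the bin at h * fund_hz with its two adjacent 5 Hz bins."""
    center = (h * fund_hz) // res_hz          # index of the harmonic's own bin
    bins = spectrum[center - 1:center + 2]    # e.g. 115, 120, 125 Hz for h2
    return sum(b ** 2 for b in bins) ** 0.5

def interharmonic_magnitude(spectrum, h, fund_hz=60, res_hz=5):
    """RSS-combine the bins strictly between harmonic groups h and h+1."""
    lo = (h * fund_hz) // res_hz + 2          # first bin past the h group
    hi = ((h + 1) * fund_hz) // res_hz - 1    # slice stops before the h+1 group
    return sum(b ** 2 for b in spectrum[lo:hi]) ** 0.5
```

For h2, the slice picks up the 115, 120, and 125 Hz bins, matching the grouping in the text; the interharmonic h2-3 value combines the 130 Hz through 170 Hz bins.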
The latest PQ monitors also can track harmonic summary quantities, including total harmonic distortion (THD), total demand distortion (TDD), total interharmonic distortion (TID), telephone influence factor (TIF), and transformer derating factor (TDF). Some of these summary quantities have established limits, especially the parameters for electromagnetic devices, where harmonic content causes increased heating losses, reduced capacity, and shortened life cycles.
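Of these summary quantities, THD is the most common: the root-sum-square of the higher-order harmonic magnitudes, expressed as a percentage of the fundamental. A minimal sketch, assuming a list of per-harmonic RMS magnitudes indexed by harmonic order:

```python
# Illustrative THD calculation from individual harmonic RMS magnitudes,
# assuming harmonics[1] is the fundamental and harmonics[2:] the higher orders.

def thd_percent(harmonics):
    fundamental = harmonics[1]
    distortion = sum(h ** 2 for h in harmonics[2:]) ** 0.5
    return 100.0 * distortion / fundamental

# Example: 120 V fundamental with 6 V of 3rd harmonic and 2.4 V of 5th
volts = [0.0, 120.0, 0.0, 6.0, 0.0, 2.4]
```

TDD is computed the same way for current, except the denominator is the maximum demand current rather than the fundamental, which is why TDD limits (as in IEEE 519) are more meaningful for lightly loaded circuits.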
The condition-based, predictive monitoring possible today requires adequate historical information, especially during times of equipment operation or stress. Fault occurrences that cause breakers (or other equipment) to operate are highly unpredictable. That's why monitoring equipment must run continually. But data cannot be saved on every cycle for every parameter indefinitely, even if memory costs are less than in the past. That's where trigger engines come into play.
All of the new power quality monitors include some form of threshold-based trigger engine. These flexible but complex engines combine hardware and firmware to determine what data, and how much of it, will be saved. They maximize the probability that important data will be captured and stored. Without them, a monitor would fill 10 megabytes of storage in less than 15 sec.
Trigger methods include fixed and floating limits and sensitivities, waveshape changes, specific event characteristic parameters, and repetitive and time-based adaptive thresholds.
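Two of the trigger methods listed above, fixed limits and waveshape changes, can be illustrated with a short sketch. The class name, the default thresholds, and the cycle-to-cycle comparison rule are assumptions for the example, not the logic of any particular product.

```python
# Hypothetical trigger-engine sketch: a fixed RMS limit plus a
# cycle-to-cycle waveshape-change trigger. Thresholds are illustrative.

class TriggerEngine:
    def __init__(self, low_v=108.0, high_v=132.0, shape_tol=5.0):
        self.low_v, self.high_v = low_v, high_v  # fixed limits (+/-10% of 120 V RMS)
        self.shape_tol = shape_tol               # max sample-by-sample deviation, volts
        self.prev_cycle = None

    def check(self, cycle):
        """Return the list of trigger reasons for one cycle of voltage samples."""
        reasons = []
        rms = (sum(s * s for s in cycle) / len(cycle)) ** 0.5
        if rms < self.low_v:
            reasons.append("rms low")
        elif rms > self.high_v:
            reasons.append("rms high")
        if self.prev_cycle is not None:
            deviation = max(abs(a - b) for a, b in zip(cycle, self.prev_cycle))
            if deviation > self.shape_tol:
                reasons.append("waveshape change")
        self.prev_cycle = cycle
        return reasons
```

A real engine layers many such tests, including the adaptive and time-based thresholds mentioned above, and only passes data downstream for storage when at least one of them fires.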
When it comes to storing captured data, the latest power quality monitors allow for compression and other storage optimization techniques. They store steady-state data as a baseline for comparison and can handle the variable-length, binary records of waveform data. This is necessary because the duration and magnitude of a disturbance are as unpredictable as the occurrence itself. Memory sizes are finite, and even larger storage devices aren't the answer, because queries against such immense databases would take impractically long to process.
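One way to picture the baseline-comparison idea: keep a stored steady-state cycle and write a full variable-length waveform record only for cycles that deviate from it. The function name and tolerance below are illustrative assumptions, not a description of any specific monitor's storage format.

```python
# Sketch of baseline-exception storage: record only the cycles that
# deviate from a stored steady-state baseline, tagged by cycle index.
# The tolerance and record layout are illustrative.

def compress_capture(cycles, baseline, tol=5.0):
    """Return (cycle_index, samples) records for cycles that differ from baseline."""
    records = []
    for i, cycle in enumerate(cycles):
        if max(abs(a - b) for a, b in zip(cycle, baseline)) > tol:
            records.append((i, cycle))   # store the anomalous cycle verbatim
    return records
```

Under this scheme, long stretches of normal operation cost almost nothing to store, while every cycle of a disturbance is preserved at full resolution.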
Time synchronization, online visualization, and notification advancements in PQ monitors have also helped usher in the condition-based maintenance era. For example, using the Global Positioning System (GPS) and the Network Time Protocol (NTP) with PQ monitors can time-synchronize data down to the millisecond, allowing facility engineers to accurately observe and analyze system-wide effects.
The current crop of online visualization tools can display information intuitively and hierarchically. They provide answers to questions (such as what happened and where) quickly and clearly. Successive levels of increasing detail are available to those who need to conduct additional analyses by “drilling down” within the system.
In addition, Web- or client-server-based systems allow multiple persons to access the data simultaneously. And stand-alone software programs have been developed to encapsulate the visual analysis process of PQ experts. These programs may be incorporated into the monitoring system itself, as is the case with the AnswerModules in the Dranetz-BMI Signature System.
Finally, notification schemes (e.g., pager, e-mail, or contact closures) immediately tell users what happened, where, and why. This greatly reduces the time it takes to get the substation, production line, or data center back online, minimizing the financial impact and heading off system instability.
Monitors in Action
The new generation of power quality monitors has enhanced capabilities you can use in a number of applications. These applications may involve transformers, power factor capacitor banks, circuit breakers, and cables.
For example, it takes a significant amount of money to disassemble and overhaul circuit breakers used on electrical distribution or transmission systems. Each time a circuit breaker operates, it generates data that, when processed, reveals the breaker's condition. New PQ monitors can measure the fault current that triggered the breaker, record the sequence of events from the relay's trip signal to the breaker's subsequent operation, and determine the degree of contact degradation from the interruption's current signature.
The latest monitors also can assess the quality of manufacturing processes, especially continuous stream processes such as textile or optical fibers. And they can accurately correlate the changes in transducer outputs to those in supply voltage characteristics.
The graphs in Fig. 1 and Fig. 2 show how DataNodes on a Dranetz-BMI Signature System monitored various parameters during a sag. The monitoring system captured changes in supply voltage and current for an adjustable-speed drive's 300VDC bus, a 24VDC sensor/actuator bus system, and speed and tension sensors.
As you can see, power quality monitoring systems have come a long way. They have progressed from providing basic information to supplying additional measurements in an enhanced way for diagnostic and predictive purposes.
Thanks to the improved capabilities of these systems, condition-based monitoring programs can trigger equipment overhauls, predict pending failures, or even feed automatic mitigation equipment. In other words, you can foresee tomorrow's power quality problems on your system today.
Richard P. Bingham is the director of product planning for Dranetz-BMI in Edison, N.J. You can reach him at [email protected].