PQ Monitoring: Where Do We Go From Here?

On the edge of a shelf, high above my workstation, sits one of my favorite power quality monitors — an impulse recorder built years ago by François Martzloff. The instrument shows its age: its zip-type power cord is cracked and frayed, the last date on its log label is March 21, 1968, and its only display is an electromechanical counter. Recently, I plugged the monitor into a 1980s vintage Schaffner disturbance generator. When I pushed the generator's trigger, the mechanical counter clicked politely, recording its first impulse in 34 years. What a wonderfully reliable monitor! Future PQ monitors need that kind of reliability, but they also need functionality well beyond yesterday's and today's capabilities (see Photo 1, on page 14). The following discussion addresses the current state of PQ monitoring and identifies areas that require further development.

When power disturbances interrupt facility operations, engineers need sophisticated monitoring tools to get the facility up and running as quickly as possible. In the following sections, I'll discuss the current state of affairs and the near-term outlooks for the most important aspects of power quality monitoring. As you'll see, the journey to advanced PQ monitoring systems is far from complete.

Clarity

Although many people receive PQ monitoring data, they still don't understand power. I was reminded of this point recently when a highly skilled engineer and power systems designer asked me, “What's a swell?”

It's even worse outside the engineering field. Most large-scale decisions are made by executives who don't know how a light switch works, let alone the consequences of a 50% sag for 12 cycles. We can be dismayed by this fact or, more practically, we can accept it and make sure that the outputs of our power quality instruments are at least acceptable to nonspecialists. This means having automated systems that can convert raw data into information that compares past and present events and intelligently solves problems. In addition, the systems need to express this data in basic terms.

The near future

The range of problems suitable for analysis will expand rapidly. Reports will describe specific causes and solutions — based on power quality data — in simpler terms. Reports with neutral recommendations (i.e., “don't do anything”) will become more common because most suspected power quality problems are not PQ problems at all.

Completeness

If engineers only monitor and record the data they think they'll need, they may miss the data that's actually required. For years, industry experts thought it was enough to record the depth and duration of sags, until they discovered that rms-versus-time plots could reveal the cause of a sag. It took several more years of recording rms-versus-time plots to discover that point-on-wave data could reveal the likely effects of a sag.

I don't know what will come next, but I do know it's important to keep raw voltage and current waveforms, especially in research projects. These voltage and current waveforms will undoubtedly yield more information.

The near future

The industry will witness abrupt, nonincremental changes in what it considers necessary data, similar to the text-to-graphics shift of the mid-1980s.

Robustness

Power quality monitors need to function well in nasty environments. Their very reason for existence is to record what happens when other electronic equipment fails. But equipment doesn't fail just because of a lapse in power quality. It also fails because of temperature extremes, vibrations, mechanical shocks, huge magnetic fields, radio interference, voltage swells, and electric impulses.

Manufacturers must construct PQ monitors that sneer at these violent disruptions. Unfortunately, the monitors produced by inexperienced manufacturers just aren't tough enough, and they fail at the same time as the equipment they're monitoring.

The near future

Experienced manufacturers have the test equipment and the staff to duplicate and solve the problems that PQ monitors have to deal with (i.e., sags, swells, high frequency impulses, etc.), including those in corrosive, vibrating, or even hazardous locations (see Photo 2).

Conciseness

The big challenge with PQ instruments is deciding what data to keep. Too much data can be as useless as too little data. A 16-bit instrument that monitors 3-phase voltages and currents and performs 2 MHz sampling accumulates 24 MB of data per second, or a little over 2 terabytes of data per day. That's more than anyone can reasonably store, even in the near future. Most of that data is dull and uneventful as perfect sine waves slew up and down in regular patterns. Either you toss out most of the data, trying to come up with algorithms that correctly identify the interesting parts, or you compress the data and try to keep the most useful information.
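The article's storage figures are easy to verify with back-of-the-envelope arithmetic. A minimal sketch (the constant names are mine; the figures are from the text: six channels of 3-phase voltage and current, 16-bit samples, 2 MHz sampling):

```python
# Back-of-the-envelope storage estimate for the figures in the text.
CHANNELS = 6                  # 3-phase voltages plus 3-phase currents
BYTES_PER_SAMPLE = 2          # 16-bit samples
SAMPLE_RATE_HZ = 2_000_000    # 2 MHz per channel

bytes_per_second = CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE_HZ
bytes_per_day = bytes_per_second * 86_400  # seconds in a day

print(f"{bytes_per_second / 1e6:.0f} MB/s")   # prints 24 MB/s
print(f"{bytes_per_day / 1e12:.2f} TB/day")   # prints 2.07 TB/day
```

The result matches the article's 24 MB per second and "a little over 2 terabytes" per day.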

Typical compression algorithms include rms measurements that can be further compressed with minimum/average/maximum values, resulting in typical compression ratios of a billion-to-one. Of course, this directly conflicts with the completeness requirement previously addressed. In addition, there are some bad compression algorithms, such as 15-minute averages, that engineers must avoid.
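The billion-to-one figure follows directly from the arithmetic. A rough sketch, in which the summary interval and the storage format are my own illustrative assumptions, not figures from the article:

```python
# Why rms-plus-min/avg/max compression approaches a billion to one.
# The 10-minute interval and 4-byte floats are illustrative assumptions.
RAW_BYTES_PER_SECOND = 24_000_000   # from the 2 MHz, 6-channel example
INTERVAL_S = 600                    # one summary per 10 minutes
STORED_BYTES = 3 * 4                # min, avg, max as 4-byte floats

raw = RAW_BYTES_PER_SECOND * INTERVAL_S
ratio = raw / STORED_BYTES
print(f"compression ratio ~ {ratio:.1e} to one")   # prints ~ 1.2e+09 to one
```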

The near future

New compression and selection algorithms will lead to incremental improvements. Algorithms that compare present waveforms to prior waveforms can quickly detect any change in quality. In addition, wavelets may be used to pick out interesting disturbances and describe their key characteristics.
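The compare-to-prior-waveform idea can be sketched in a few lines. This is a minimal illustration, not any vendor's algorithm: the 5% threshold, the 128-sample cycle, and the synthetic notch are all my own assumptions.

```python
import math

# Sketch: keep the previous cycle, and flag the new cycle if any sample
# deviates from it by more than a threshold fraction of the nominal peak.
SAMPLES_PER_CYCLE = 128
THRESHOLD = 0.05  # 5% of nominal peak (illustrative)

def cycle(peak, glitch_at=None):
    """Generate one synthetic sine cycle; optionally notch one sample."""
    w = [peak * math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
         for n in range(SAMPLES_PER_CYCLE)]
    if glitch_at is not None:
        w[glitch_at] *= 0.5  # simulate a notch
    return w

def changed(prev, curr, nominal_peak):
    """True if any sample deviates from the prior cycle by > THRESHOLD."""
    return any(abs(a - b) > THRESHOLD * nominal_peak
               for a, b in zip(prev, curr))

clean = cycle(170.0)  # roughly a 120 V rms nominal peak
print(changed(clean, cycle(170.0), 170.0))                 # prints False
print(changed(clean, cycle(170.0, glitch_at=32), 170.0))   # prints True
```

A real monitor would also have to tolerate slow drift in amplitude and phase; this sketch only shows the core comparison.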

Other averaging techniques, such as peak-sense-equivalent, may supplement the rms values that reflect the way resistors respond to waveforms. (Peak-sense-equivalent is a technique that produces values that represent the way electronic loads react to voltage waveforms.)

Correlation

There's a big difference between a voltage sag in an empty building and one in a semiconductor plant. The latter will cost someone a lot of money.

By some respectable definitions, an event isn't a “power quality” event unless a load is disturbed. Some even argue that disrupted equipment — not power quality events — should trigger PQ monitors. It makes sense, then, to record the effects of a power quality disturbance as well as the disturbance itself. If the equipment is operating perfectly, then there's not much pain in discarding power quality data.

The near future

Look for rapid improvements in this area. Work is just beginning, and it has a great potential for economic return. (See the sidebar, on page 18.)

Communication

In most instances, obtaining real-time, onsite power quality data is a challenge. When something goes wrong with the power, the monitor is usually located in a grimy factory, tucked underneath an airplane's landing gear, or deep inside a peanut-sorting machine. Engineers need the data to appear immediately in an office, on a pager, or even at a consultant's lab halfway around the world.

The near future

Internet use will increase, but firewalls will continue to restrict the easy flow of data. More power quality data will be “piggy-backed” on other communication channels, such as meter-reading and e-diagnostics channels.

Correctness

Sometimes engineers measure the wrong thing. For example, they might measure the electric power when, in actuality, sabotage, humidity, or trucks striking the loading dock are to blame. They also might measure in the wrong location, such as downstream from a UPS or on the wrong phase.

I remember one case in a peanut-processing facility. Sensitive loads failed only when the huge sorting machine was running. The problem related to power quality only indirectly. The sorting machine vibrated the whole building and shook loose the connections on the circuit breakers.

At another site, a minicomputer failed at the same time each day. A power monitor revealed nothing. It turned out that the computer operator would sit at the computer at that time each day, smoking a cigar. The smoke from the cigar corrupted the read head on the hard drive.

A fish processing plant in Canada serves as a final example. An intermittent computer failure was traced to pulsed radiation from an arcing bug zapper. A power quality monitor was actually useful because it recorded the pulses even though it wasn't connected to the power line.

Even when electric power is the culprit, it's possible to measure the wrong parameter. For example, total harmonic distortion for current measurements is a useless parameter because it uses the amount of fundamental current as a reference, and the amount of fundamental current wanders up and down dramatically.
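The point about current THD is easy to show numerically. A minimal sketch, where the `thd` helper and the ampere values are invented for illustration: the same absolute harmonic content reads as wildly different THD once the fundamental drops with the load.

```python
import math

# THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.
def thd(fundamental, harmonics):
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Fixed harmonic currents (3rd, 5th, 7th), in amps -- illustrative values.
harmonics = [4.0, 2.0, 1.0]

print(f"{thd(100.0, harmonics):.1%}")  # full load: prints 4.6%
print(f"{thd(10.0, harmonics):.1%}")   # light load, same harmonics: prints 45.8%
```

Nothing about the distortion itself changed between the two readings; only the reference did.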

The near future

We need to broaden the vision of PQ specialists. Field engineers who solve real-world problems know that power quality is only one of many mysterious, hard-to-duplicate issues. Practical, hands-on reports explaining how suspected problems were solved (even if they weren't power quality problems in the end) would be helpful.

Consistency

Two power quality monitors connected to the same wires should record the same results, but they usually don't. This is a real problem for enforcing power quality contracts. Perfectly accurate instruments, with perfectly correct algorithms, can produce wildly different readings.

For example, true rms meters can differ on their averaging interval. Remember, the “m” in rms stands for “mean,” and nobody knows if that average should be 1 cycle, 1 second, or 15 minutes.
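The averaging-interval problem can be made concrete with a synthetic sag. In this sketch, the sag depth, duration, and sample counts are my own illustrative choices: the same 3-cycle, 50% sag reads very differently through a 1-cycle window than through a 1-second window.

```python
import math

SAMPLES_PER_CYCLE = 64

def samples(cycles, sag_start, sag_end, depth=0.5):
    """Unit-amplitude sine, scaled down to `depth` during the sag cycles."""
    out = []
    for c in range(cycles):
        scale = depth if sag_start <= c < sag_end else 1.0
        out += [scale * math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
                for n in range(SAMPLES_PER_CYCLE)]
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

wave = samples(60, 10, 13)   # one second at 60 Hz, sag on cycles 10-12
one_cycle = wave[10 * SAMPLES_PER_CYCLE:11 * SAMPLES_PER_CYCLE]

print(f"1-cycle rms during sag: {rms(one_cycle):.2f}")  # prints 0.35
print(f"1-second rms:           {rms(wave):.2f}")       # prints 0.69
```

Both instruments would be "correct"; they simply averaged over different intervals.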

The near future

Some standards are beginning to address this issue. For instance, IEC 61000-4-30 specifies quite precisely how engineers should measure voltage sags, dips, and swells, which is good news in a way. But standards that specify how to do a task, instead of specifying the required result, always end up discouraging innovation. As a result, the industry must do some careful balancing.

Cost

By the time you need a monitor, the problem has already happened. The only solution is to have lots of low-cost monitors in place before a problem shows up. This implies that the ideal PQ monitor is more like an inexpensive accessory buried inside productive devices, and one that communicates through the device's data channel.

The near future

Look for accessory-type power quality monitors at costs that are one to two orders of magnitude lower than traditional monitors and embedded in larger systems.


To date, no power quality monitor meets the goals of clarity, completeness, robustness, conciseness, correlation, communication, correctness, consistency, and cost. Indeed, it may not be possible because some of the goals contradict each other.

Your challenge, as a user, is to figure out why you need to monitor and what you plan to do with the results. Next, review the goals outlined in this article and select the most important ones for your application. The chances are good that you'll find an available power quality monitor that meets your needs. Chances are even better that, in the near future, improved power quality monitors will meet needs that you didn't even know you had.

Alex McEachern is the president of Power Standards Laboratory in Emeryville, Calif. You can reach him at [email protected] or through his Web site, www.alex.mceachern.com.

Productivity Monitoring

Managers don't spend money to measure power quality; they spend money to keep their manufacturing processes running. Here's an interesting example from a textile mill. Voltage sags don't matter at this location, but broken threads do. As it turns out, some voltage sags — but not all — cause broken threads.

In addition to monitoring AC voltages and currents, this particular monitoring system also monitors motor speeds, tensions, temperatures, and other parameters.

The upper graph shows a traditional presentation of the voltages and currents during a sag. Experts transmitted these waveforms through the Web for analysis. Note the huge increase in current at the end of the sag — typical behavior for electronic power supplies. Although these graphs are meaningful to power quality engineers, they mean nothing to the people who matter: the managers of this factory.

The lower graph shows the process parameters recorded by the power quality monitor during the sag. The arrow points to an overshoot in the tension-control backup signal, a real problem to the factory managers, and one they know how to fix.

The key here is getting the right data, in the right form, to the right people. The power engineers need the voltages and currents to identify their problem, and the factory managers need the process parameters to solve theirs. A power quality monitor should record both.
