Power Quality: How Bad is Bad?

Aug. 1, 1998

Are your equipment failures due to "bad power"? How bad can your power be before it becomes a problem? What electrical yardsticks can you use, and how reliable are they? Often, using these yardsticks will reveal simple, inexpensive solutions.

Power problems seem to be everywhere. So, too, do incorrect fixes for these problems. For example, people often blame harmonics, frequently without justification. Even if excessive harmonics are present, fixing them may not fix your "bad power" problems. It's also true you can resolve some harmonics-related problems without expensive countermeasures. Correctly identifying power quality problems requires methodical detective work, a solid understanding of power basics, and good instruments.

Power quality problems vary in complexity. Often, however, the solution depends on going back to basics and eliminating misapplications. This does not mean such things as special training or safety precautions are superfluous. Just the opposite is true: qualified, trained personnel are essential for working on energized systems. Simply buying and using expensive digital power analyzers is not the answer either, although you may need such equipment to know what you are looking at. You must be able to interpret the data and understand the system you are troubleshooting.

Let's look at some recent examples of allegedly "bad" power. As you read these, you should be able to draw your own conclusions, and apply the lessons learned to your own situation.

Case #1: Waste at a treatment plant; let's get physical. At a wastewater treatment plant, three effluent sleeve valve motor assemblies controlled treated sewage flow. Erratically. Three-phase 480V powered the valve motors, and a programmable logic controller (PLC) controlled them.

The valve motors occasionally tripped on "overtorque" and lost valve travel direction (open vs. closed) information. We suspected this random tripping was accompanied by relatively long periods of excessive current draw, excessive starting current, and excessive "starts per minute."

We used a recording power disturbance analyzer (with a modem link). In less than a day, the data revealed exactly what was going on. The recordings showed us the duty cycle of the sleeve valve, enabling us to count the motor starts and profile the magnitude of the inrush current. Using the profiles, we decided to move the holding tank level sensor to a more effective position for Valve #2. This gave an obvious reduction in duty cycle, which was part of the cure.

Fig. 1 of the original article showed Sleeve Valve #1 drawing excessive current for several seconds; Fig. 2 showed an expanded start sequence captured shortly before the machine tripped on overtorque. Because the horizontal sleeve was at an angle to the flow of effluent, opening required more current than closing. By viewing the time plots collectively, we determined valve direction, which helped us understand the operation.
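For readers who want to reproduce this kind of duty-cycle analysis, here is a minimal sketch of the start-counting step, assuming a recorded RMS-current trend like the one the analyzer produced. The threshold values and sample data below are illustrative assumptions, not the plant's numbers.

```python
# A sketch of the duty-cycle analysis: given an RMS-current trend from a
# disturbance analyzer, count motor starts and locate the inrush events.
def count_starts(current_a, idle_threshold=5.0, start_threshold=50.0):
    """Count rising edges where current jumps from idle to inrush levels."""
    starts = []
    running = False
    for i, amps in enumerate(current_a):
        if not running and amps >= start_threshold:
            running = True
            starts.append(i)            # index where an inrush begins
        elif running and amps <= idle_threshold:
            running = False             # motor stopped; ready for next start
    return starts

# Hypothetical one-sample-per-second trend containing two starts.
trend = [0, 0, 120, 80, 30, 25, 0, 0, 0, 115, 70, 28, 24, 0]
starts = count_starts(trend)
print(f"{len(starts)} starts at sample indices {starts}")
# Starts per minute = len(starts) / (len(trend) / 60) for a 1 Hz trend.
```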

This nagging "power quality problem" had nothing to do with power quality. Why? With the information provided by a power disturbance analyzer, the engineers determined the following vital information:

  • Tank level sensor required relocation.

  • Motor starts per unit of time were excessive.

  • Starting inrush current was appropriate.

  • Running load current met specifications until ambient conditions caused an overload.

Case #2: An education can be shocking. A school's telephone equipment failed often, and the facility's lights flickered routinely. People got shocked when touching ordinary objects and surfaces. Monitoring the incoming power showed nothing remarkable. However, a visual inspection revealed several deficiencies, including the absence of grounding at the main switchboard. Ground resistance measurements at various locations, where "grounding" was provided only by conduits, read 80 ohms to 100 ohms even during wet conditions. Several circuits in the 25-year-old aluminum cable system had low insulation resistance, and one phase conductor read as a direct short to ground. At the panelboards, all neutral busbars connected to the equipment grounds. The "shocking" objects and surfaces lacked grounding.
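As a rough illustration of why those readings were alarming, here is a sketch comparing measured ground resistance against the 25-ohm figure the NEC uses as the acceptance point for a single rod electrode. The location labels below are hypothetical; only the 80-ohm to 100-ohm range comes from the case.

```python
# Compare ground resistance readings against the NEC's 25-ohm benchmark
# for a single rod electrode (a rod reading higher must be supplemented).
NEC_SINGLE_ROD_MAX_OHMS = 25.0

readings = {
    "main switchboard (conduit ground only)": 100.0,  # hypothetical labels;
    "remote panel (conduit ground only)": 80.0,       # measured range from case
}
for location, ohms in readings.items():
    verdict = "FAILS" if ohms > NEC_SINGLE_ROD_MAX_OHMS else "passes"
    print(f"{location}: {ohms:.0f} ohms {verdict} the 25-ohm benchmark")
```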

This project quickly changed from a power quality investigation to a safety investigation and grounding survey. Fortunately, the school upgraded its system before incurring serious or lethal injury.

Case #3: Reach out and filter the UPS chargers? A telecommunications company installed four new 2000kW diesel generators in a building, as part of a "bulletproof" automated system.

On the heels of the generator project, they completed a large UPS installation. The generators carried the whole building and the UPS. Someone decided the UPS would be the perfect test load for the weekly generator run.

Unfortunately, the generator voltage and current were unstable at times, fluctuating by 8% to 10%. Also, the UPS chargers would turn off and on in response to the voltage fluctuation. What caused the instability? After a series of unsuccessful adjustments, they replaced all four generator voltage regulators. Still, the problem remained.

It's common, when facing a "bad power" problem, to try to fix the harmonics. So, a consulting engineer convinced them to install a harmonic filter at the switchboard feeding the UPS chargers. Thousands of dollars later, the problem persisted.

An in-depth power quality study, using a high-resolution power quality analyzer, revealed what was really going on. The UPS chargers created notches in the leading and trailing edges of the sine waves, due to the firing of their thyristor controls. At certain loads on the UPS inverters, the notches fell at points where, reflected back to the generator bus, they confused the sensing circuitry of the generators' voltage regulators.
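To make the mechanism concrete, here is a minimal sketch of commutation notching, assuming illustrative firing angles, notch width, and depth (the site's actual values weren't published):

```python
import math

# Each time a thyristor (SCR) in the charger commutates, it briefly loads
# the line and carves a notch into the voltage wave near the firing point.
SAMPLES = 360                       # one sample per electrical degree
FIRING_DEG = (30, 210)              # assumed firing points in the cycle
NOTCH_WIDTH_DEG = 5
NOTCH_DEPTH = 0.6                   # fraction of the voltage lost in the notch

def notched_sine():
    wave = []
    for deg in range(SAMPLES):
        v = math.sin(math.radians(deg))
        if any(f <= deg < f + NOTCH_WIDTH_DEG for f in FIRING_DEG):
            v *= (1.0 - NOTCH_DEPTH)        # commutation notch
        wave.append(v)
    return wave

wave = notched_sine()
clean = [math.sin(math.radians(d)) for d in range(SAMPLES)]
worst = max(abs(c - w) for c, w in zip(clean, wave))
print(f"worst-case deviation from a clean sine: {worst:.2f} per unit")
# A regulator whose sensing keys on the wave shape near these points can
# misread the voltage, as the generator regulators did in this case.
```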

To cure the problem, they added a 500VA isolation transformer at the sensing input terminals of each voltage regulator, at a cost of less than $1,000 per generator.

Case #4: We have met the enemy... When a military testing facility's generator ran, the charger for a lighting system transferred back and forth between inverter and bypass. The effect on the lighting was a major security issue.

The only other loads on the 60kW generator were the building power, a 20kVA UPS, and the air conditioning system. To develop a plan for curing this "unsolvable" problem of "bad power," we installed a digital data acquisition device.

When the generator ran the site load, the power factor was 0.45 leading (harmonics were minimal). Why did this happen? The UPS rectifier drew a high-peak input current early in the sine wave, creating the excessive phase angle between voltage and current. The solution was a small 20kW resistive load bank. Operating at a power factor of 1.0, the resistive load shifted the overall system power factor to 0.79 leading, which was acceptable to the lighting system charger. As a bonus, the additional resistive load improved the ability to perform the routine diesel generator test.
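The arithmetic behind the fix is simple enough to sketch: a purely resistive load adds real power without adding reactive power, which pulls the combined power factor toward 1.0. The UPS figures below are assumptions back-calculated from the 0.45 leading reading; the site's measured 0.79 also reflected the building and air conditioning loads.

```python
import math

# Combined power factor of several loads, each given as (kW, kvar).
def combined_pf(loads_kw_kvar):
    p = sum(p for p, q in loads_kw_kvar)     # total real power, kW
    q = sum(q for p, q in loads_kw_kvar)     # total reactive power, kvar
    return p / math.hypot(p, q)

ups = (9.0, -17.9)       # ~20kVA at 0.45 leading (negative = leading kvar)
print(f"UPS alone: PF = {combined_pf([ups]):.2f}")              # ~0.45
load_bank = (20.0, 0.0)  # 20kW resistive load bank, PF 1.0, adds no kvar
print(f"With load bank: PF = {combined_pf([ups, load_bank]):.2f}")
```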

Case #5: Take care with critical applications. A hospital had frequent failures of its critical care monitors. The failures consisted of blown power supply fuses and damage to printed circuit boards. This was not only expensive, but it also worked against the hospital's mission to provide the best possible care to its patients.

Though it seemed to be a case of "bad power," it wasn't. The failures usually occurred after testing of the emergency generators. Tests during the power transfers showed that the distribution system carried voltage transients severe enough to cause the critical care monitors to fail. Installing transient voltage suppressors at the monitors eliminated the problem.

Case #6: Separating loads by type is part of the whole picture. A newspaper facility used photo-imaging machines to produce high-resolution pictures. The operators had problems with unwanted shutdowns and poor picture quality. The distribution panel feeding the photo-imaging machines also supplied other equipment with large surge current demands. This resulted in voltage sags and transient disturbances on the lines feeding the photo-imaging machines.

The cure? Simply providing dedicated feeders to supply the photo-imaging equipment.

Case #7: Watch where you plug in. A school's computer laboratory had excessive computer crashes, even during exams. A field engineer discovered a laser printer was the culprit. The laboratory had several laser printers; however, this one (remotely located away from the power distribution panel) produced voltage disturbances severe enough to interfere with computers sharing the same power wiring. We moved the printer to a circuit other than the one feeding the computers, which eliminated the problem.

Case #8: Grid fixes the gridiron. A sports arena had problems with timers, alarms, lighting, and other vital equipment, all computer-controlled. Plus, the audio equipment performed poorly. Various power sources supplied power to the collection of computerized equipment.

Further, the power sources had varying ground configurations, depending on equipment location. However, the communication lines were common among the equipment, producing ground loop potential differences.

To minimize the ground loop potential differences, we created a low-resistance ground grid system for the facility. After we implemented the grid, verification with a power quality analyzer confirmed the solution was successful.

Case #9: Wye did you disconnect that? A facility engineer showed our field engineer a connector (for modular furniture) in which the neutral of a four-wire connecting plug had burned open. He had cut the connector in half, so you could see the melted neutral.

A harmonic analyzer showed a voltage distortion that would come and go about every 30 sec. The control system of a nearby UPS went unstable every time the voltage became distorted. Total harmonic distortion (THD) was climbing to 28%!

What caused this? A disconnected ground connection at the wye point of the transformer allowed the center point to float. The voltage distortion was third-harmonic voltage adding in series with each phase voltage, because the floating wye presented a high impedance path (instead of a low impedance path) to third harmonics. Once we reattached the ground, the voltage distortion disappeared, and the facility stopped melting modular furniture connectors!
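Numerically, the analyzer's finding reduces to this: with only a third harmonic present, THD is simply the ratio of third-harmonic voltage to fundamental. A minimal sketch, using the 28% figure from the case and an assumed near-zero residual once the bond was restored:

```python
import math

# THD as the ratio of the rms of the harmonics to the fundamental.
def thd(fundamental, harmonics):
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

v1 = 1.0    # fundamental phase voltage, per unit
v3 = 0.28   # third harmonic riding on each phase with the bond open
print(f"Bond open:   THD = {thd(v1, [v3]):.0%}")    # ~28%, as measured
print(f"Bond intact: THD = {thd(v1, [0.01]):.0%}")  # assumed residual, ~1%
```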

Editor's note: Webb Kee (a past author for EC&M) encountered an identical situation. A client company asked Webb to recommend a power conditioner. Rather than rattle off a model number, Webb investigated the distribution system and found a disconnected ground on a wye transformer.

Case #10: Calculation versus true sensing. A semiconductor plant's 400A circuit breaker feeding a critical panel nuisance-tripped frequently. The plant engineer had measured the highest phase current at only 70% of the pickup value. A field engineer connected a power quality analyzer to the cable feeding the panel and recorded a harmonic snapshot of the load. He also installed a digital multimeter with a current transformer on B-phase to capture the single-cycle peak value.

The 400A circuit breaker had an electronic trip unit with a peak-sensing device. This means the breaker measured the peak value and then divided it by 1.414 to get an rms figure, which would be correct only if the current were a pure sine wave. The peak current was 624A on B-phase; dividing this value by 1.414 gives an apparent rms current of 441A.

The 400A circuit breaker's setting allowed it to pick up at 1.1 x 400A (440A). The B-phase current had significant harmonic distortion, causing the high peak value. This made the circuit breaker trip unit "see" an overload when there wasn't one: the highest actual rms current on B-phase was 305A. Changing the trip unit to a true rms device saved thousands of dollars in downtime.
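A worked sketch makes the trap obvious. The harmonic mix below is an assumption chosen so the synthesized B-phase current reproduces the case's measurements (624A peak, 305A true rms); the actual spectrum wasn't published.

```python
import math

# One cycle of current built from (order, amplitude) harmonic pairs.
def sample_cycle(harmonics, n=720):
    return [sum(a * math.sin(h * 2 * math.pi * i / n) for h, a in harmonics)
            for i in range(n)]

def true_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Distorted load current: fundamental plus assumed 3rd and 5th harmonics.
wave = sample_cycle([(1, 380), (3, -199), (5, 45)])
peak = max(abs(s) for s in wave)
print(f"peak = {peak:.0f}A")                                # 624A
print(f"peak-sensing 'rms' = peak / 1.414 = {peak / 1.414:.0f}A")  # 441A
print(f"true rms = {true_rms(wave):.0f}A")                  # 305A
# The peak-sensing trip unit reads 441A against a 440A pickup and trips,
# while the true rms current never exceeds 305A.
```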

Editor's note: It's common to find an out-of-spec condition, fix it, and erroneously assume you've solved the problem. However, this field engineer (like the others whose cases we discuss) was thorough in his troubleshooting. The amount of data behind this article is staggering.

Case #11: Watch that signal wiring. A commercial building experienced erratic operation of the lights, which turned on and off on their own. Often, the building would be in the dark. The lighting control system used a signal carrier in the power lines. The facility had 22 variable frequency motor drives (VFDs) distributed throughout the building, a likely source of the interference.

Per the lighting control system manual, a signal of 40mV at a frequency of 80 kHz or higher would cause interference with the control system. Readings with a spectrum analyzer showed interference of 50mV to 100mV at frequencies of 80 kHz to 100 kHz, with the drives operating. The interference dropped by 10mV to 25mV with the drives off.
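A minimal sketch of that threshold check, assuming hypothetical spectrum bins consistent with the ranges reported above:

```python
# Flag spectrum bins at or above the lighting control system's stated
# interference threshold: 40mV at 80 kHz or higher.
THRESHOLD_MV, MIN_FREQ_KHZ = 40.0, 80.0

def offenders(spectrum_mv_by_khz):
    """Return the bins at or above the interference threshold."""
    return {f: mv for f, mv in spectrum_mv_by_khz.items()
            if f >= MIN_FREQ_KHZ and mv >= THRESHOLD_MV}

drives_on  = {80: 95.0, 90: 70.0, 100: 50.0}   # mV, VFDs running
drives_off = {80: 70.0, 90: 50.0, 100: 40.0}   # mV, 10-25mV lower, VFDs off
print("drives on: ", offenders(drives_on))
print("drives off:", offenders(drives_off))
```

Even the hypothetical drives-off floor hovers around the 40mV limit, which is consistent with abandoning the power-line carrier rather than filtering the drives.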

Because filters might have reduced the lighting control signals to an unacceptable level, we installed a directly wired energy management system instead.

Case #12: Drop voltage-drop myths. A manufacturing plant had problems starting a 3000hp motor. It appeared that voltage drop on the 2400V utility infeed was the culprit.

A field engineer monitored two motor starts. For the first, he left the existing transition time (to full load) at a fixed 5 sec. At this setting, the transition occurred at approximately 1880A (300% of rated current), and it was not smooth: the motor vibrated loudly. He performed a second start with the transition time set at 8 sec. This start had a smooth transition at approximately 1785A (285% of rated current).

The motor's solid-state controller required the current to drop below 150% of rated current to enable the transition to full load. The existing 65% tap on the reduced-voltage starter did not allow the motor to accelerate to rated speed, due to the reduced torque at the lower voltage. So the current never dropped to a level where the transition relay would operate on current.
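A quick sketch of the torque arithmetic shows why: with motor torque varying roughly as the square of applied voltage (the standard approximation), the 65% tap delivers well under half of full-voltage starting torque.

```python
# Per-unit starting torque at each autotransformer tap, using the
# standard approximation that torque varies as the square of voltage.
for tap in (0.65, 0.80, 1.00):
    torque = tap ** 2
    print(f"{tap:.0%} tap -> ~{torque:.0%} of full-voltage starting torque")
# 65% tap -> ~42%: not enough to accelerate this load to rated speed, so
# line current never fell below the 150% transition threshold.
```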

We proposed two possible cures:

1. Run the motor at the existing autotransformer tap (65%), and set a fixed transition time of 8 sec.

2. Set the autotransformer at 80% tap, transition current at 149%, and transition time at 10 sec to 15 sec. Start the motor and time the transition relay output. If it is less than 10 sec, the motor can run at this tap. If the motor appears to stall and the transition relay operates at 10 sec, the motor can't run with a current-operated transition.

The client chose the second alternative. This eliminated the recurring problems, as well as the "utility-infeed-voltage-drop" myth!

About the Author

Jean-Pierre Wolffe
