
Archives for Company Blog

The PTI Song

Click here to play: PTI

Lyrics:

PTI, we sing to thee, scene of peace and harmony; Watch the mind of each PE, spouting forth technology; We will toil from dawn til night that our brethren may delight; In a life of wealth and luxury, in a life of wealth and luxury

As you all can plainly see we have grave responsibility; In our role supervisory making corporate policy; Yet despite this awesome charge we must yet our jobs enlarge; Doing work upon computer keys, doing work upon computer keys

See the rich TAG retirees, engineers we also please; Money goes to employees like it grew on Christmas trees; Precious left for we PEs, there is not to us appease; In our life of pain and misery, in our life of pain and misery!


The Future of Transient Analysis is Bright

Yes, or at least, it’s brightening.
We make this bold observation after attending the 2012 users’ group meeting for the PSCAD/EMTDC software, held March 27-30 at a little gem of a coastal town named Castelldefels in Spain. About 60 participants (by eyeball count) from universities, manufacturers, utilities, system operators, sales reps and consultants gathered for techno-talk on the decidedly geeky subject of power system transients and PSCAD applications.


On Engineering Software

As an analytical consulting firm, Pterra regularly uses about half a dozen engineering software packages, and about the same number on an occasional basis, to conduct its services. The software is necessary to simulate complex physics, market conditions and/or large-scale databases. In addition, we try to use the same software that our clients use, so that part of our deliverable is an updated system model or database.


An Anniversary

Yesterday was the 7th anniversary of the founding of Pterra, LLC. The original team of 5 who started this journey remains, with some worthy additions. All have grown somewhat older, hopefully wiser, and after all the contingencies encountered through the years, more resilient and united than ever.

Our core competencies remain the same: power engineering analysis, new technologies, modeling and simulation. But our service applications have grown: from the initial focus on transmission planning and interconnection of new generation, Pterra now also offers distributed generation studies, solar photovoltaic and wind power modeling, applications training, assessments for high voltage direct current transmission and expert witness services, among others.

No seven-year itch here. Just some wistful reminiscing and cautionary tales for the next 70 years. Overall, one can say that it is possible to follow the dream, to have a workplace adapted to family, health, faith and other life situations. Or, to use an electric power analogy: to be like a lightning arrester, withstanding the normal and continuous challenges and allowing the extraordinary surges to flow.


Report from the 2011 PSLF Users’ Group Meeting

by Ric Austria

If only for this one new feature, the trip to attend the meeting (held April 28-29 in sunny Orlando, Florida) was worth it. The new feature is …


PSLF now allows “continuous” tap solutions for phase angle regulators, or PARs. Why does this matter? It matters a lot to those who work in the U.S. Eastern Interconnection (EIC) where most utilities use the competing software package, PSS/E.
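To see why the solution method matters, here is a minimal sketch (not PSLF’s actual algorithm, and with entirely hypothetical numbers) of how a PAR steers flow. The active power over a path follows P = (V1·V2/X)·sin(delta + phi), where phi is the PAR’s phase shift; a continuous solution picks phi exactly, while a discrete solution must snap to a physical tap step and so misses the scheduled flow slightly:

```python
import math

# Toy PAR illustration; all values are hypothetical, in per unit.
V1, V2 = 1.0, 1.0        # terminal voltage magnitudes (assumed)
X = 0.1                  # path reactance (assumed)
delta = math.radians(5)  # natural angle difference across the path (assumed)
P_target = 3.0           # scheduled flow the PAR must hold (assumed)

# Continuous solution: solve P_target = (V1*V2/X) * sin(delta + phi) for phi.
phi_cont = math.asin(P_target * X / (V1 * V2)) - delta

# Discrete solution: snap to the nearest physical tap (0.6-degree steps here).
tap_step = math.radians(0.6)
phi_disc = round(phi_cont / tap_step) * tap_step

for name, phi in [("continuous", phi_cont), ("discrete", phi_disc)]:
    P = (V1 * V2 / X) * math.sin(delta + phi)
    print(f"{name:10s}: phi = {math.degrees(phi):6.3f} deg -> P = {P:.4f} pu")
```

The small mismatch from tap rounding is exactly the kind of discrepancy that shows up when the same case is solved in two different packages.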


Lights Out at Copacabana

Itaipu hydro-electric dam (photo)

by R. Austria

On Nov 10, 2009, a massive power failure blacked out Brazil’s two largest cities and other parts of Latin America’s biggest nation, leaving millions of people in the dark. Transmission connecting the large Itaipu dam to Brazil and Paraguay apparently tripped, disconnecting some 17,000 megawatts of power. I was on Copacabana Beach years ago for a training course and can only imagine the disruption that the outage may have caused. A blackout in a major city is not a fun time.

But blackouts are interesting to study. More often than not, the initiating cause is something innocuous, such as the infamous overgrown trees in the 2003 Northeastern US-Canada blackout. (An announcement just came out that the 2007 Brazil blackout that was blamed on hackers was due to sooty insulators!) So when the news report says, “A storm near the hydro dam apparently uprooted some trees that caused the blackout,” I am inclined to consider that the trees hit some transmission lines which could have led to the isolation of Itaipu. That’s not so far-fetched. You never know what a failure-bunching event such as the major storm that hit Itaipu could do to redundancy and good planning practice. Reliability is only as good as the next blackout!

By happenstance, this event occurred while we were conducting our HVDC course in Albany, so this picture of the Itaipu transmission lines came in handy. A bipole high-voltage DC (direct-current) line is on the right, while two double-circuit AC (alternating-current) lines are on the left. They are too far apart to fall on each other, so most likely more than one failure occurred in separate incidents, but the failures were “bunched,” or overlapping in time, due to the heavy storm.


Frequently asked questions

Are transmission systems designed to withstand this type of failure bunching?

Generally not. Loss of a single line is the usual design criterion, and this event appears to involve at least the loss of two lines, with the subsequent, dependent failure of a third.

Can transmission systems be designed to withstand this type of event?

Yes, but at some cost penalty. A planner can add double or triple redundancy to the transmission path and locate the lines so that there is reduced exposure to the same weather pattern, all at a price. A more common practice is not to provide withstand capability for the extreme event, but to design the system so that any outages will be short and manageable. This is accomplished by providing reserve, backup and startup generation capacity for the large load centers, and by isolating faults quickly and efficiently through system protection equipment, among other measures.
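As a back-of-the-envelope illustration of the answers above, consider a minimal sketch (all figures hypothetical) of a transfer path with three identical parallel circuits sized to survive the loss of any one circuit, but not a storm-bunched loss of two:

```python
# Hypothetical transfer path: n identical parallel circuits share the flow
# equally; each has a fixed continuous rating. All numbers are made up.
P_total = 6000.0   # total transfer on the path, MW
n = 3              # identical parallel circuits
P_rating = 3150.0  # continuous rating per circuit, MW

for lost in range(n):
    remaining = n - lost
    flow_each = P_total / remaining   # equal sharing among identical circuits
    status = "secure" if flow_each <= P_rating else "OVERLOAD -> cascading risk"
    print(f"N-{lost}: {remaining} circuits at {flow_each:6.1f} MW each: {status}")
```

The path rides through the design contingency (N-1) comfortably, but a bunched double outage dumps the entire transfer on one circuit, well past its rating.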

Would wind power have helped avoid this blackout?

If located at Itaipu, wind generators would have dropped off as soon as the wind speed rose above the typical cut-out threshold of about 25 m/s (roughly 55 mph). If located near the urban centers, such as Rio de Janeiro and Sao Paulo, they could have provided backup power if the wind was blowing at the time.

Can this blackout happen again during the next Summer Olympics?

Objectively, the likelihood remains small. But viewed subjectively, if an improbable event happens once, it can happen again. The engineering answer would thus depend on whether you would consider yourself a scientist or a populist.


Two Views of Power System Reliability

A science fiction writer once tried to sell the idea of a trip inside the sun as an “opportunity to view sunspots from behind.”  It may not be comfortable, but the observations would be unique and would undoubtedly contribute to a better understanding of the phenomena.  In a more practical sense, being able to examine complex structures from different vantage points — inside and outside, or close-up and from a distance — makes new insight possible, and hopefully, better understanding.

Let’s try this method on the very mundane subject of power system reliability.

The internal observer

When the bulk power system (BPS) is described as “reliable,” it implies that the system has passed a whole battery of tests, which for the US might mean standards such as NERC Categories A, B, C and D and other associated NERC standards, as well as the standards of regions, regulatory bodies and individual utilities. In this sense, calling a bulk power system “reliable” is a yes-or-no qualification that belies the complexity of the power system, its various probable and improbable failure modes, and the tests and criteria applied to make the assessment.

Though the description may imply that every part of the system is “reliable,” what it actually means is that the system, taken as a whole, is reliable. Reliability is not homogeneous, but chunky, or to use a more common term, “locational.” A thought experiment may help clarify this. If we allowed load to grow in the BPS, simply assuming more output from the existing generating resources, and then applied the reliability tests, we would find at some point that the BPS fails the reliability test. The failure may initially be a single slight thermal overload, a voltage violation or a generator losing synchronism. But the number of failures would increase as demand is increased further, perhaps developing into cascading outages, widespread instability or voltage collapse. [1] We might then contend that where the initial failures are observed is where the BPS is least reliable, and as more and more failures are observed we might develop a profile of the varying levels of reliability across the various locations of the BPS.
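A toy version of this thought experiment can be run with a DC power flow. In the hedged sketch below (a hypothetical 3-bus network with made-up susceptances, limits and loads), all loads are scaled up uniformly until the first branch limit is violated; the line that overloads first marks where the system is least reliable:

```python
import numpy as np

# Hypothetical 3-bus system, DC power flow, bus 0 = slack.
lines = [(0, 1, 10.0, 1.0),   # (from, to, susceptance pu, flow limit pu)
         (0, 2, 10.0, 1.0),
         (1, 2,  5.0, 0.3)]
base_load = np.array([0.0, 0.6, 0.8])  # pu load at each bus (made up)

def flows(scale):
    """Solve the DC power flow and return signed branch flows."""
    B = np.zeros((3, 3))
    for i, j, b, _ in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(B[1:, 1:], -scale * base_load[1:])
    return [(i, j, b * (theta[i] - theta[j]), lim) for i, j, b, lim in lines]

scale = 1.0
while True:
    over = [(i, j, f, lim) for i, j, f, lim in flows(scale) if abs(f) > lim]
    if over:
        i, j, f, lim = over[0]
        print(f"First violation at load scale {scale:.2f}: "
              f"line {i}-{j} carries {f:.2f} pu vs limit {lim:.2f} pu")
        break
    scale += 0.01
```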

Hence, the specific character of reliability within the BPS varies by location, and tends to change over time (as demand grows), regardless of whether the BPS itself is kept “reliable.”  The locational characteristic is primarily a function of the dispatch of generation (which in turn is a function of market rules), outage rates of various BPS equipment such as transmission lines, transformers, power conditioning equipment, maintenance schedules of the same equipment and random external factors that can lead to extended outages.

A hypothetical internal observer, perhaps a system planner, would thus see power system reliability as a quantitative measure applied to the BPS, one with both a general value (yes-or-no reliability for the whole system) and a locational value, such as the frequency and duration of service outages for customers connected at a specific location.

The external observer

When the bulk power system is described as “reliable,” it is able to accept incoming power from a power plant and transfer the same to a point of delivery. This implies that the BPS, together with the power plant, has passed the impact study testing for initial operation in accordance with the interconnection standards of FERC [2], if in the US, and of any governing regional organization, such as an RTO or ISO [3], public service commission or local utility. This is a generic qualification that allows approval for construction, interconnection and/or energization without indicating the complexity of the energy market in which the power plant operates.

“Reliable” in this context means meeting specific criteria and standards under certain assumed operating conditions. In practice, “reliability” is subject to energy market price fluctuations, demand changes, firm and non-firm transactions, and scheduled outages. [4] A simple example would be to consider power plants delivering power over a single path to a load center. As more power plants collocate with the existing plants to deliver power over the path, the reliability standards would require that the total power eventually be constricted, or to use a more common term, limited by the “available transfer capability” (ATC). ATC would change by time of day, by any scheduled outages, and by any prior reservations made with the owners of the transmission path. On the other hand, if a power plant locates on the other side of the path, nearer to the load center than the other power resources, it may not face the same ATC constraints. In this sense, reliability is “locational,” since the point of receipt of power affects the ability to transfer the power to points of delivery.
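A simplified, illustrative ATC calculation (real tariff ATC also nets out transmission reliability margin, capacity benefit margin and prior reservations) takes, over all monitored lines, the minimum of the remaining headroom divided by the power transfer distribution factor (PTDF) of the proposed transaction. The figures below are hypothetical:

```python
# Illustrative ATC screen for one proposed transaction.
# PTDF = fraction of the transaction that flows on each monitored line.
monitored = [
    # (name,        base flow MW, rating MW, PTDF of this transaction)
    ("path line 1",  850.0,       1000.0,    0.45),
    ("path line 2",  900.0,       1000.0,    0.40),
    ("parallel tie", 300.0,        500.0,    0.15),
]

atc = min((rating - flow) / ptdf
          for _, flow, rating, ptdf in monitored if ptdf > 0)
print(f"ATC for this point-of-receipt/delivery pair: {atc:.0f} MW")
for name, flow, rating, ptdf in monitored:
    print(f"  {name}: headroom {rating - flow:5.0f} MW / PTDF {ptdf:.2f} "
          f"= {(rating - flow) / ptdf:6.0f} MW limit")
```

A plant located on the load-center side of the path would see near-zero PTDFs on these lines, and hence little ATC constraint, matching the locational argument above.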

Hence, to a hypothetical external observer, perhaps a power marketer representing several power generators, the “reliable BPS” is a generic designation which may not impact the actual dispatch of his portfolio.  The key factors, to this observer, would be the price of energy at which he is selling relative to anyone else in the same market, and the specific location of each power plant in his portfolio relative to ATCs.  He would consider this for various timeframes, from long-term sales, to intermediate term, to monthly, to daily and hourly (in some systems, even on a quarter-hour basis).  Each transaction remains subject to reliability specific to the timeframe.  The reliability assessment would establish whether each specific transaction is allowed or not, or if constrained to a lower amount.

Commonalities

We deliberately chose observers to whom a “reliable” BPS may not mean very much, to illustrate a point. In fact, BPS reliability would impact the chosen observers in a less direct manner. For instance, the reliability of the BPS may be used to determine investments in the transmission grid, which would eventually impact the specific reliability concerns of the observers. Or, BPS reliability may be used to revise standards and criteria, which would also eventually filter down to the observers.

Both observers recognize the locational nature of reliability, although each would measure it in different ways; i.e., the internal viewer measures service interruptions, while the external viewer measures constraints on transfer. The analytical basis for the measurements also differs, in that the dispatch and outage assumptions would be different even if the actual criteria are the same. For both observers the impact of local reliability can be measured in cost/benefit terms, and the two views are, in this sense, comparable. (Although real attempts at making this comparison have been difficult.)

Conclusion

There is a possibility that when different observers refer to “power system reliability,” they actually mean different things, leading to confusion and miscommunication. It is important to recognize that the observations are common only in terms of the reliability of the BPS taken as a whole and the specific criteria applied to establish it. Other terms, such as ATC and local reliability, may differ in their basis, analytical character and measurement.

Notes

  1. There is a lot of simplification in the preceding illustration; for instance, we are ignoring that load growth implies different dispatches or, if applying specific rules such as the California ISO’s G-1/N-1 criteria, different generators on outage. We are also assuming deterministic tests rather than probabilistic ones. But this is only a short article!
  2. FERC – in the United States, the Federal Energy Regulatory Commission.
  3. RTO – regional transmission organization; ISO – independent system operator. Both are industry organizations common in the United States, tasked with overseeing power system operations and markets across interconnected neighboring utilities.
  4. A curious term arises from having to take all these factors into consideration, “dispatchability.”  A possible definition of dispatchability is the ability to provide energy to the grid both as a sale into the energy market and as a reliable power transfer to point/s of receipt.

References

  1. N. D. Reppen, “Increasing utilization of the transmission grid requires new reliability criteria and comprehensive reliability assessment,” in Proc. 2004 International Conference on Probabilistic Methods Applied to Power Systems, 12-16 Sept. 2004.
  2. H. K. Clark, F. P. de Mello, N. D. Reppen and R. J. Ringlee, “The grid in transition – facts or fiction when dealing with reliability?” IEEE Power and Energy Magazine, vol. 1, no. 5, Sep.-Oct. 2003.
  3. E. P. Khan, C. Marnay and D. Berman, “Evaluating dispatchability features in competitive bidding,” IEEE Transactions on Power Systems, vol. 7, no. 3, Aug. 1992.

For questions, comments and further discussion, contact us at info@pterra.us


Open Source or Proprietary Data: the Model Dilemma

Data creep is an ugly name for a common practice: adding special models to community databases. It is no less prevalent in the databases of interconnected grids. In the Eastern Interconnection, a planning database will contain on the order of 70,000 buses and 30,000 dynamic models, representing everything from Florida to the Texas panhandle, from Idaho to New Brunswick, at voltage levels from 34.5 to 765 kV. Whether all of this data is needed for any specific analysis is debatable; in practice, studies frequently carry the full dataset even when the focus is on localized phenomena. Equivalents, or reduced models, were necessary in the past when computers had memory restrictions and low-speed performance. Nowadays, reduced models may still be justified for the sake of simplicity and convenience, as long as accuracy is not compromised.
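For readers unfamiliar with equivalencing, the classic reduction step is Kron reduction of the bus admittance matrix: buses with no injections are eliminated, while the terminal behavior seen from the kept (boundary) buses is preserved exactly. A minimal sketch, using a hypothetical 4-bus network:

```python
import numpy as np

def kron_reduce(Y, keep):
    """Eliminate all buses not in `keep` from admittance matrix Y."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(Y.shape[0]), keep)
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Y_eq = Ykk - Yke * inv(Yee) * Yek
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Hypothetical 4-bus, purely reactive network; keep boundary buses 0 and 1.
Y = 1j * np.array([[-30.,  10.,  10.,  10.],
                   [ 10., -25.,   5.,  10.],
                   [ 10.,   5., -20.,   5.],
                   [ 10.,  10.,   5., -25.]])
Y_eq = kron_reduce(Y, keep=[0, 1])
print(np.round(Y_eq, 3))  # 2x2 equivalent seen from the boundary buses
```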

The Dilemma

Part of the dilemma is due to the highly meshed nature of the grid: using equivalents comes with the concern of reduced accuracy.  Another part of the dilemma comes from widely varying modeling practices – the level of detail, the bases for data such as ratings, and the standards for analysis.  But perhaps most concerning of all is the recent popularity of the practice of adding closed special models for equipment such as HVDC converters, wind turbines, exotic storage devices and composite loads, among others.  These models come as executables or DLLs that are closed to anyone but the providers of the models.  Users of the models must basically trust the providers that the models faithfully represent the equipment response for the study purpose, and that the models will not introduce intractable numerical errors in simulations.  Users have the alternative of using a generic or standard model in place of the special model, at the risk of misrepresenting the equipment’s characteristics and calling into question the accuracy of the analysis.

The basic arguments for the practice are clear enough.  Those favoring closed models point out the need to keep model information strictly confidential: manufacturers would like to secure proprietary data for special equipment, avoid misrepresentation of its performance, and retain rights to key parameters and functions.  Those who favor open models stress the need to debug the special models when they produce unexpected results, and to understand the basis for the modeling – critical to designing system solutions that can actually work.

There are obvious misconceptions and derived malpractices.  Among these:

  • That the models provide comparative performance with other models in field applications – Since each model is proprietary and tested in accordance with specific conditions defined by the developers, there is no direct comparison of their response characteristics to a common set of tests.
  • That the models are the most accurate representation for all applications – Every model has a limited range of application. There is such a wide variety in planning and operations analyses that it is impractical, if not impossible, to model response for all conditions and disturbances.  Developers will aim for the expected applications and wait for user feedback.
  • That a specific model is better than a generic one – If an analyst is running a transient stability simulation with quarter-cycle time steps, how significant is the twelfth-of-a-cycle response of a set of controlled thyristors in, say, an HVDC model?
  • That a model supplied by a software vendor is foolproof – Not to open a broader debate on the subject, but good software comes from users helping debug it over the course of man-days, months and years of application.
  • That a model will carry forward or be backward compatible for other versions of the software – Guess again.

Horror stories abound.  One is the case of the black-box model of a new wind generator that was actually no more than a Norton equivalent: a current source behind an impedance.  This model went the rounds, being embedded in a few studies.  How many planning decisions were made with this contrived model?
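To see why a bare Norton equivalent is such a poor stand-in for a wind generator, consider this minimal sketch (all per-unit values hypothetical): a fixed current source behind an impedance responds to a voltage sag purely passively, with no current limiting, ride-through logic or control dynamics at all:

```python
# Norton equivalent: fixed internal current source I_src in parallel with
# impedance Z; delivered current is I = I_src - V/Z. Values are made up.
Z = 0.05 + 0.25j              # Norton impedance, per unit (hypothetical)
V_nom = 1.00 + 0j
I_nom = 1.00 + 0j             # deliver ~1 pu current at nominal voltage
I_src = I_nom + V_nom / Z     # size the fixed internal source accordingly

for label, V in [("normal voltage ", 1.00 + 0j), ("50% voltage sag", 0.50 + 0j)]:
    I = I_src - V / Z         # delivered current: a purely passive response
    print(f"{label}: |I| = {abs(I):.2f} pu")
```

During the sag, the passive model happily delivers well over twice its rated current, whereas a real converter-based turbine would limit current to roughly 1.1-1.2 pu and switch into a ride-through control mode.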

Possible Solutions

In the Western Interconnection, the providers of the widely used stability software PSLF seem to have nipped the problem in the bud by insisting on, and standing by, a policy of open modeling.  This is not to say that the pressure to allow closed models isn’t there, but asserting the planner’s right to use the database as intended deserves a good cheer.

Unfortunately, this solution seems to be beyond reach in many other cases.  The practice of closed user models may have become too widespread and, in some instances, may even have the support of software suppliers who are themselves manufacturers.  An alternative in this case may be to set standards for the models in terms of tests, comparisons, numerical performance, allowable ranges of input voltages and currents, allowable time step size, allowable duration of simulation, and so on.  It would also be important to require developers to clearly specify what “alpha” or “beta” versions of a model really mean.

A novel way to resolve the closure/disclosure dilemma is to patent each model and disclose the block diagram and all associated information, including software code. The patent would protect the manufacturer from being copied while disclosing the information the power industry needs for the pertinent analysis. This assumes, of course, that the information and technology are patentable.  The approach may provide users with broader leeway in applying models in studies.  Additionally, a regulatory body could help with the official update of the models and associated software.

As new technologies enter the landscape of grid models, it is clear we will need a better way of handling them that fits the needs of planners, regulators, analytical providers and equipment suppliers.

For questions, comments and further discussion, contact us at info@pterra.us

Feedback from Readers

  • “Let me just say that nothing has hindered our purchases of models more than software patents. We have spent thousands and thousands and thousands of dollars on legal fees rewriting contracts to avoid the legal liabilities of software patents. If you patent software, we are exposed to patent liability. If you use someone else’s patented software and then sell us your software, then we are exposed to the patent liability. We have no control over that exposure and it can be very large. In contrast, copyright liability is much more limited. We can handle that. Wherever possible it is better to use copyrights rather than patents.” – Name withheld upon request.

