Jeffrey D. Taft
The use of electronic communications in conjunction with electric power systems dates back almost to the beginning of electricity in the United States, which is not surprising given the geographically dispersed nature of electric power infrastructure. As power system technology advanced, communications shifted from analog to digital, and uses expanded to include grid state telemetry transport, protection and control, metering data transport, electricity market participation, interorganizational coordination, and customer communication.
The advent of Internet Protocol (IP)-based communications and the wide adoption of digital and computing technologies at all levels of the power system hierarchy coincided with the increasingly ubiquitous nature of communications connectivity. By 2010, a number of models existed for the use of digital communications in electric utilities, including the model introduced by Cisco Systems. That model has 11 tiers of communications (not physical networks) that span the electric system from local distribution to regional operation. Due to the structure of the electric sector, no single electric utility contains all 11 tiers, but increasingly, the evolution of the electric grid requires connectivity and interoperability across those tiers. Individual electric utilities often had (and some still have) multiple physical communication systems for any given tier, and so it is not unusual for a distribution utility to have eight to 10 or more communication systems. IP communication technology offered the opportunity to converge disparate networks, and so utilities could and did move from special-purpose proprietary networks to more standardized multipurpose multiservice networks. At the same time, most utilities took advantage of IP networks operated by telecommunications service providers (SPs) for some of their communication needs while reserving other critical functions, such as teleprotection, for private utility networks.
Wireless communications providers have transitioned through multiple generations of increasingly capable technologies since the days of the Global System for Mobile Communications (GSM), to the point where 5G is widely available, and 6G is being planned. Edholm's Law of Bandwidth, articulated back in 2004, predicted a doubling of communication bandwidth about every 18 months, with wireless speeds initially lagging behind wireline speeds but rising somewhat faster. Applying that rule to the data transfer rates available back then suggests we would have mobile data rates of 400–600 Mb/s now, and in fact, that is roughly where deployed mobile rates stand today (although the carriers say theoretical rates go much higher: 10–20 Gb/s). The advent of 5G has been hailed in some quarters as solving all electric utility communication issues, including connectivity with nonutility-owned energy resources and all manner of Internet of Things (IoT) devices.
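The Edholm's Law projection can be checked with a few lines of arithmetic. The 2004 starting rates used below (GPRS/EDGE-class mobile service, roughly 50–100 kb/s) are assumptions chosen for illustration, not figures from this article:

```python
# Illustrative projection of Edholm's Law: bandwidth doubles roughly
# every 18 months. Starting rates are assumed 2004 mobile data rates
# (GPRS/EDGE class, ~50-100 kb/s), not figures from the article.

def projected_rate_mbps(rate_2004_kbps: float, year: int) -> float:
    """Project a 2004 data rate forward, doubling every 1.5 years."""
    doublings = (year - 2004) / 1.5
    return rate_2004_kbps * 2 ** doublings / 1000  # kb/s -> Mb/s

for start_kbps in (50, 100):  # assumed 2004 mobile rates
    rate = projected_rate_mbps(start_kbps, 2023)
    print(f"{start_kbps} kb/s in 2004 -> {rate:.0f} Mb/s in 2023")
```

Starting from those assumed rates, the rule lands in the hundreds of megabits per second by 2023, consistent with the 400–600 Mb/s range cited above.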
Technology time horizons, especially for consumer technologies (which include mobile telephony), are driven by relentless competition for better-faster-cheaper just to survive, let alone flourish and achieve market leadership. It is not so much that technology products cease to function; it is more that their value propositions are overwhelmed by the cost/benefits of the next generation. The pace of development eventually leads to the wholesale abandonment of legacy technology as focus and adoption shift, leaving no manufacturers, no replacement parts, and little or no value in the previous generation of technology. This sequence plays out constantly among wireless carriers as they tout their network reliability, faster speeds, better coverage, and the latest unlimited plans.
A technology must be mature, secure, and widely available before electric utilities can begin to roll it out, which means the technology is in volume production and more than 25% through its lifecycle by the time a utility can realistically make a decision to deploy. Utility rollout over an entire service territory typically takes more than five years, including a planning cycle that selects available technology, accomplishes the feasibility testing, and obtains regulatory approval. That timeline leaves about 10 years of useful life for deployments that often have 20-year life expectations. Imagine a mythical grid device that used the original iPhone as a core component. By the time a utility would have been able to place its first purchase order, the original iPhone would already be unavailable.
Additional challenges exist for the utility use of 5G. A good portion of 5G functionality—the number of devices that can be supported, support for the IoT, support for peer to peer, high bandwidth, low latency, and connectionless services—relies on 5G’s “small cell” architecture and use of high radio frequencies called millimeter waves. Small cells are portable miniature base stations intended to be placed every 250 m or so throughout cities. The use of the millimeter-wave radio spectrum means that the radio signals do not propagate well through walls and purposefully have a very limited range so that frequencies can be reused with manageable interference. Some SPs have chosen to use somewhat lower frequencies for coverage reasons while sacrificing data rates.
A challenge for the electric utilities is that the SP deployment of small cells is not based on the locations and requirements of the electric utilities and so may not provide the necessary coverage for electric utilities until late in the 5G deployment cycle. Another challenge is that in dense urban areas, where small cells are most likely to be deployed, some of the electric utility infrastructure is below ground and inaccessible to millimeter waves.
The cost of deployment of 5G is another significant consideration for electric utilities. It is far more than just the cost of the technology—consider the labor involved in touching millions of electric meters in a service area. Add to this the lifecycle meshing friction problem (utility upgrade cycles generally do not align with telecommunications technology lifecycles). The combination of these issues has caused some electric utilities to declare that they would skip 5G entirely. A few have resorted to the deployment of private 4G networks.
Because most electric utilities make use of a combination of private communication networks, telecommunication SP networks, and the Internet, the issue of cybersecurity is ever present. The cybersecurity problem is compounded for electric utilities due to the connectivity principle: any connectivity, even intermittent, represents a potential cybersecurity vulnerability. The situation is worse for electric power systems because of the dual connectivity issue: electric power systems have both communication connectivity and electric connectivity, and the combination presents more complex vulnerability issues than communication connectivity alone.
The complications in the use of digital communications for electric utilities and the grid are increasing due to several important trends.
If these changes seem unlikely, consider that the Australian Energy Market Operator (AEMO) has adopted the concept of a step change grid as outlined in the Pacific Energy Institute white paper “A Gambit for Grid 2035.” AEMO is using this concept as the basis for dealing with the very high levels of solar photovoltaic penetration in Australia as well as public policy goals regarding decarbonization. Already new forms of grid control are being tested there, leading to new grid data transport requirements.
Despite their obvious importance, communication networks are often treated as an afterthought in grid modernization, under the presumption that once the other aspects of a grid are fixed, the communications network is automatically defined to fit. This flawed thinking has led to brittle electric grid system architectures and the apparently never-ending device interoperability standards debacle, as planners and power system engineers leave the communications component until last and then often hand it off to IT groups.
The foregoing points to what I see as the largest challenge facing the electric utility industry regarding communication networks—the lack of whole-of-system architectural understanding, thinking, and appropriate methodology being applied to the grid and grid communications together. This issue is especially noticeable in the grid research/development community, where knowledge of modern communication network technology is lacking, and architecture methodology is too often represented by nothing more than the Smart Grid Architecture Model (SGAM). SGAM is a paradigm that was developed for the purpose of comparing smart grid implementations while ignoring all aspects of grid and system structure, which makes it unusable for architecture work despite its name.
To clarify, consider these basic architectural principles.
Failure to apply good architectural thinking often results in unnecessary entanglement, high costs, stranded investments, and unrealized benefits. It is also a leading reason why so many grid products are improperly scoped, which leads to interoperability and system integration issues. One of the many benefits of proper architecture is that a well-planned grid structure simplifies downstream decisions and frees up engineers and developers working on individual components or systems to employ creativity with the assurance that unintended consequences will not crop up to hamper or even invalidate their work. In other words, it provides a framework that coordinates the efforts of many organizations, in planning, in design, and in operation to ensure system cohesiveness and efficiency.
Consider an example of how architectural methods resolve the problem of brittle siloed systems that rely on back-end integration. Siloing is a common way to structure a set of distribution system applications, with each application having its own set of data sources and even its own communication network. Such architectures provide interapplication data exchange via back-end point-to-point connections or service buses. However, this arrangement is antiresilient in the extreme; problems or failures with one system can easily cascade to the others, and any changes or additions require extensive reintegration efforts and expense (with integration costs often reaching or exceeding five times the system product cost).
This flawed arrangement can be remedied by an architectural approach that breaks the siloes up into layers, with the electrical infrastructure, the grid sensors, and a multiservice communication network making up the layers of a distribution platform [the same can be done at the transmission level, as we showed in the NASPInet 2.0 meta-architecture developed for the North American Synchrophasor Initiative (NASPI)]. Authorized applications connect to the platform to obtain needed data. The communications network can provide direct publish-and-subscribe functionality without the need for a service bus or data broker. This new structure completely changes the location and nature of interfaces and consequent interoperability requirements, resulting in simpler integration and improved resilience because the applications are decoupled from each other.
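The decoupling that the layered platform provides can be illustrated with a toy publish/subscribe sketch. Everything here (the `DataBus` class, the topic names, the message fields) is hypothetical and chosen for illustration; a real deployment would use the pub/sub facilities of the multiservice network itself:

```python
# Toy sketch of topic-based publish/subscribe decoupling. All names
# (DataBus, topic strings, message fields) are hypothetical illustrations
# of the layered-platform idea, not an actual utility API.
from collections import defaultdict
from typing import Callable

class DataBus:
    """Minimal pub/sub fabric: publishers and subscribers never reference
    each other directly, only shared topic names."""
    def __init__(self):
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: dict) -> None:
        for handler in self._subs[topic]:
            handler(msg)

bus = DataBus()
received = []

# Two independent applications subscribe to the same sensor topic;
# neither knows the other exists, so a failure or change in one
# cannot cascade into the other.
bus.subscribe("feeder42/voltage", lambda m: received.append(("volt-var", m)))
bus.subscribe("feeder42/voltage", lambda m: received.append(("outage-mgmt", m)))

# The sensor layer publishes once; every authorized application receives it.
bus.publish("feeder42/voltage", {"bus": 7, "v_pu": 0.98})
```

The key structural point is that the interfaces now sit between each application and the platform, not between pairs of applications, which is what simplifies integration and removes the cascade paths.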
As the uses of communications for grid data transport increase (due to the trends mentioned previously), the problem of effective transportability of the data via the communication network becomes increasingly crucial. The usual approach to performing transportability evaluation is based on the analysis of individual use cases. That method is fundamentally weak because it is not possible to ensure that a complete set of use cases has been defined. Architectural methods provide a better alternative: advanced structural analyses based on graph theory. Such methods can examine the transportability of a network for all possible use cases simultaneously, even ones that have not yet been recognized.
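One simple way to see the "all use cases at once" idea is to compute the bottleneck bandwidth between every pair of nodes in a communication graph simultaneously (a widest-path variant of the Floyd–Warshall algorithm), rather than tracing enumerated use cases one at a time. The topology and link rates below are assumptions for illustration, not a claim about any particular utility network or about the specific structural analyses the article refers to:

```python
# Sketch: bottleneck (widest-path) capacity between EVERY pair of nodes,
# computed at once via a max-min Floyd-Warshall relaxation. Topology and
# link rates (in Mb/s) are illustrative assumptions.

def widest_paths(nodes, links):
    """links: {(a, b): capacity}. Returns the best achievable bottleneck
    capacity between every ordered pair of nodes (0 if unreachable)."""
    cap = {(u, v): 0.0 for u in nodes for v in nodes}
    for n in nodes:
        cap[(n, n)] = float("inf")
    for (u, v), c in links.items():        # treat links as bidirectional
        cap[(u, v)] = cap[(v, u)] = max(cap[(u, v)], c)
    for k in nodes:                        # max-min relaxation
        for u in nodes:
            for v in nodes:
                cap[(u, v)] = max(cap[(u, v)],
                                  min(cap[(u, k)], cap[(k, v)]))
    return cap

nodes = ["sensor", "field_router", "substation", "control_center"]
links = {("sensor", "field_router"): 5,
         ("field_router", "substation"): 100,
         ("substation", "control_center"): 1000}
cap = widest_paths(nodes, links)
# Any pair whose bottleneck falls below a service's required rate fails
# for that use case, including use cases no one has defined yet:
print(cap[("sensor", "control_center")])  # limited by the 5 Mb/s field hop
```

Because the analysis covers every source–destination pair, a future application's data flow is evaluated implicitly, which is the advantage over use-case enumeration.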
The resilience of communication networks can also be treated analytically using a direct product of grid architectural methods called resilience algebra (structural analytic resilience quantification). This method provides a practical way to evaluate network resilience and the resilience consequences of changes to the network. It applies equally well to electric networks and communication networks and to the combination of both, along with other connected codependent networks, such as for water and gas distribution and transportation. It explicitly takes component characteristics and structure into account, making it useful as a design tool, which is not true of the resilience evaluation methods in common use.
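Resilience algebra itself is beyond the scope of a short sketch, but the flavor of structural resilience quantification can be conveyed with a crude proxy: count how many node pairs remain connected as components are removed. The topologies and the metric below are illustrative assumptions only and are explicitly not Taft's resilience algebra:

```python
# Crude structural-resilience proxy (NOT the resilience algebra described
# in the article): count node pairs still connected after component loss.
from itertools import combinations

def connected_pairs(nodes, edges, removed=frozenset()):
    """Count node pairs that remain connected after deleting `removed`."""
    alive = [n for n in nodes if n not in removed]
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    count = 0
    for s, t in combinations(alive, 2):
        stack, seen = [s], {s}     # depth-first reachability check
        while stack:
            n = stack.pop()
            if n == t:
                count += 1
                break
            for m in adj[n] - seen:
                seen.add(m)
                stack.append(m)
    return count

# Illustrative comparison: a ring degrades gracefully under a single
# node loss, while a star collapses when its hub is lost.
ring = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
star = [("h", "a"), ("h", "b"), ("h", "c")]
print(connected_pairs("abcd", ring, removed={"b"}))  # ring: pairs survive
print(connected_pairs("abch", star, removed={"h"}))  # star: all pairs lost
```

Even this toy version shows why a structure-aware metric is usable as a design tool: it distinguishes topologies that a component-count or outage-statistics metric would treat as equivalent.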
The grid we need for the 21st century does not just use communications; it depends on communications as a core infrastructure. Communications technologies will continue to advance, mostly driven by consumer and enterprise business needs. Given that electric utilities represent fewer than 0.5% of SP revenues, it is not reasonable to expect that SPs will cater to utility needs (for example, SPs do not give electric utilities the first priority during disasters and do not provide guaranteed bounded latencies for data packet delivery). Thus, it is likely that electric utilities will continue to use a mix of private and SP networks for communications. Operational networks increasingly will need deterministic scheduling and message delivery to meet demanding real-time control requirements.
Most importantly, communication network architectures must be intrinsic elements of larger convergent infrastructure architectures. We need such sector-coupled architectures to build 21st-century urban services platforms that can provide the integrated transport of electricity, data, fuels, food, water, people, and things. Only rigorous and proper architectural methodology can yield such results.
Digital Object Identifier 10.1109/MPE.2023.3288597
Date of current version: 21 August 2023
1540-7977/23 © 2023 IEEE