《1. Introduction》

1. Introduction

Climate change concerns are driving electric utilities to find ways to reduce greenhouse gas emissions while continuing to meet the demand for reliable electricity. Primary among these methods is the adoption of renewable generation as a major component in the generation resource portfolio. The growth of renewable resources has reached a level in some electricity interconnections such that existing frequency regulation resources are being called upon to react to deviations more often than in the past [1]. In response, utilities are sometimes forced to schedule and dispatch more costly reserves and/or curtail less costly renewables. This response increases the effective cost of renewables by requiring the purchase of additional reserves at prices that are higher than the marginal cost of the intermittent resources [2].

An alternative to employing additional reserve and regulation resources is to enable load to respond to frequency deviations in a manner that is similar to generation. This general approach was originally proposed more than 30 years ago [3]. Simulation studies [4] and demonstrations [5,6] have shown the potential for loads to serve as short-term fast-acting virtual generators and act as a frequency regulation resource that can contribute to primary regulation.

Conventional direct load control has focused primarily on the use of load as an under-frequency load shedding resource. The control models of this type of resource are primarily based on the impulse response of loads to large deviations in frequency [7]. However, for the purposes of frequency regulation, load control design must examine the small signal stability of the system [8]. The latter approach considers more than just the magnitude of the total installed base of controllable load [9,10]; it also considers the aggregate load control gain, closed-loop control feedback effects, and any load state diversity impacts arising from resource utilization.

The lack of participation by load in organized energy markets is an important barrier to demand response technology [11]. In addition, the cost, capacity, and reliability of the communication systems for controllable loads undermine the confidence utilities have in using loads as a reliable substitute for dispatchable generation [12]. There can also be significant uncertainty regarding the amount of load that will be available to respond, the duration with which it will respond, and the magnitude of the rebound when it is released [13]. Finally, changes in the allocation of generation resources can impact transmission capacity and N−1 contingency reliability resource selection, and can lead to additional operational costs [14].

There is a long history of using load as a resource, beginning with demand-side management (DSM) programs and time-of-use (TOU) rates. DSM programs exploited seasonal long-term demand elasticity through energy efficiency measures in order to defer capacity additions by holding down peak loads as load-growth rates waned in industrialized nations. TOU programs were an effective strategy for obtaining the sustained price-based control of peak load using diurnal mid-term demand elasticity. Some of this capability has been transferred to short-term elasticity by using the pseudo-storage potential of thermostat loads [15]. Peak-time rebates, critical-peak prices, and real-time price signals have been used to more directly reveal the short-term elasticity of demand [16].

In the case of real-time price (RTP) demand response systems, price-discovery is an important challenge [17] that has been addressed through the development of so-called “transactive control.” In these bi-directional systems, information about the available resources and their reservation prices (a reservation price is defined as the price at which a resource will decline to participate; for a producer this is a lower price constraint, and for a consumer it is an upper price constraint) is collected from demand resources and included in a double-auction market where both the supply curve and demand curve are used to discover the price at which supply will equal demand. This mechanism has been used to solve real-time resource capacity allocation problems at the utility scale [18] but has yet to be carefully studied for regulation resource allocation [19].

The main purpose of this paper is to review and synthesize design requirements, implementation considerations, and validation approaches for agent-based simulations that can assist in the design of load control strategies. The simulations can then help address the renewable integration challenges that utilities confront as they try to mitigate the greenhouse gas emissions of their conventional generation fleets. Such simulation environments must capture all the salient features of the electromechanical dynamics of the interconnection, the dispatchable and renewable generation resources, the market designs and market participants, control area and balancing authority operations, and both the unresponsive and responsive loads. At the same time, such an environment must remain computationally tractable in order to study large interconnected regions where inter-jurisdictional interactions are important.

This paper is structured as follows. In Section 2, we review the agent-based methods used to solve quasi-steady models of interconnections, generation resources, and markets, with particular attention to the sub-hourly behavior of the system. Section 3 focuses on the problem of modeling individual and aggregated loads and load control at this time scale. Validation challenges and preliminary results loosely based on the Western Electricity Coordinating Council (WECC) planning model are discussed in Sections 4 and 5.

《2. System model》

2. System model

Modeling the composite behavior of highly complex interconnected systems has been a challenge for engineers since the early days of digital simulators [7]. Recent advances in agent-based computing have helped overcome many of the barriers to simulation, particularly with respect to finding the solution to multiple systems of differential equations where the subsystem models are fundamentally incompatible [20]. GridLAB-D™ is an example of a simulation environment that overcomes some of these challenges, in spite of the fact that its implementation raises issues regarding validation [21]. In particular, the lack of analytic solutions and proofs of stability continue to impair the usability of agent-based time-domain simulation as a control system design tool. Nonetheless, agent-based simulations are very useful as an environment in which to experiment, gain experience and insight, and quickly demonstrate by modus tollens when a particular proposition or strategy fails to work as intended.

It has been previously observed that the bandwidths of renewable intermittency, short-term demand response, and frequency regulation coincide, as shown in Figure 1. This particular alignment between the primary operating bandwidth of demand response and wind intermittency presents both an opportunity and a challenge for system planners. The possible coupling of demand response and intermittent resources means that any feedback mechanisms and delays can give rise to instabilities if controls are not properly designed. However, for the same reasons, well-designed control can give rise to highly efficient performance, both from an economic and a control performance perspective.

《Fig. 1》

Fig.1 Temporal scales for various electricity system processes.

《2.1. Markets》

2.1. Markets

Generating units cannot be started, stopped, or moved through their operating range without incurring additional costs. In general, the output power at which a unit should run in real time is set based on the area control feedback from the system as it tries to follow load and adjust for generator output fluctuations. The determination of what range of output power is possible for the next hour is based on what the unit has been doing in the past few hours. The financial impact of this autocorrelation is addressed through the two-settlement system [22]. This system decouples the real-time trades from any forward trades and guarantees that resources behave in real time as though the forward trades had not taken place, making them indifferent to errors in forward markets and therefore removing any incentive to use the forward markets to increase profits in the real-time markets, or vice versa.

In a standard two-settlement system, contracts-for-differences require loads to pay generators the difference between the contract price and spot prices. This requirement applies even when the difference requires the generator to pay the consumer, and permits either party to deviate from the contract if a profitable opportunity to do so arises without adversely affecting the other party. If they trade over a potentially congested tieline, a financial transmission right provides the same guarantee with respect to transmission prices. The two-settlement system provides assurances that inefficient forward trades are corrected in real time without risk to the traders. Ex-post pricing cases can introduce spot price differences, which impose transmission costs on traders that cannot be hedged. Contracts-for-differences do not avoid these inefficiencies, and risks remain. This subject is an area of ongoing research in market design.
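To make the settlement arithmetic concrete, the following sketch works through a contract-for-differences under the two-settlement system. All prices, quantities, and variable names are hypothetical; the example only illustrates the direction of the payments described above.

```python
# Hypothetical two-settlement contract-for-differences (CfD) example.
# All names and numbers are illustrative and not taken from any market in this paper.
contract_qty_mwh = 100.0   # quantity agreed in the forward (day-ahead) trade, MWh
contract_price = 35.0      # forward contract price, $/MWh
spot_price = 42.0          # real-time (spot) price, $/MWh
actual_qty_mwh = 90.0      # energy actually delivered in real time, MWh

# Real-time settlement: all physical energy is settled at the spot price.
spot_settlement = actual_qty_mwh * spot_price

# CfD settlement: the load pays the generator (contract - spot) * contracted quantity.
# When the spot price exceeds the contract price this is negative, i.e., the
# generator pays the consumer, exactly as described above.
cfd_settlement = (contract_price - spot_price) * contract_qty_mwh

# Deviations from the contracted quantity are exposed only to the spot price,
# so the forward position remains hedged and neither party is harmed by the other's deviation.
generator_revenue = spot_settlement + cfd_settlement
print(f"spot settlement = ${spot_settlement:,.2f}")
print(f"CfD settlement  = ${cfd_settlement:,.2f}")
print(f"net revenue     = ${generator_revenue:,.2f}")
```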

A unit commitment exercise determines which generating units and demand response resources to allocate and what level of production or demand is physically feasible and most economically efficient over any given time interval. The market affects how this problem is solved because only incentive-compatible market designs will induce generators and loads to voluntarily and accurately provide the data needed to correctly solve the unit commitment problem. Power pools solve the unit commitment problem directly by ensuring an incentive-compatible market design, whereas power exchanges ignore the unit commitment problem, forcing the generators and loads to take up the problem, and thus avoiding the incentive-compatibility question altogether.

In most organized markets, generating unit commitment schedules are developed hourly for each control area one day ahead. Generation resource availability is described using supply bids, and the combined supply curve for all the dispatchable generation is added to the forecast of intermittent generation. Each unit, or fraction thereof, is committed in merit order from the lowest to the highest cost. Demand response resources are described using demand bids, and the process is similar for the demand curve, except that demand response is committed in merit order from the highest to the lowest willingness to pay. Additionally, tieline exchanges are incorporated from unit commitment (hourly scheduling) through economic dispatch (five-minute redispatch) to the regulation process.
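As an illustration of the merit-order commitment just described, the sketch below clears stylized supply and demand bid stacks, serving unresponsive demand first and stopping when the marginal buyer values energy below the marginal seller's cost. The bids, the price cap, and the convention of clearing at the marginal supply cost are assumptions for illustration; a real market engine would also handle ties, tielines, losses, and ramp limits.

```python
# Minimal merit-order clearing sketch (hypothetical bids and price cap).
supply_bids = [(20.0, 400.0), (35.0, 300.0), (55.0, 200.0), (90.0, 100.0)]  # ($/MWh, MW)
demand_bids = [(120.0, 150.0), (80.0, 150.0), (40.0, 100.0)]                # ($/MWh, MW), responsive load
unresponsive_mw = 300.0                                                     # QU, taken at any price
price_cap = 1000.0

def clear(supply_bids, demand_bids, qu, cap, step=1.0):
    """Sweep quantity in small steps, committing supply from lowest to highest cost
    against demand from highest to lowest willingness to pay, and stop when the
    marginal buyer values energy below the marginal seller's cost."""
    supply = sorted(supply_bids)                              # cheapest generation first
    demand = sorted([(cap, qu)] + demand_bids, reverse=True)  # most valuable load first

    def marginal(curve, q):
        for price, qty in curve:
            if q < qty:
                return price
            q -= qty
        return None                                           # curve exhausted

    q, price = 0.0, None
    while True:
        s, d = marginal(supply, q), marginal(demand, q)
        if s is None or d is None or d < s:
            return price, q
        # One convention: clear at the marginal supply cost; a double auction may
        # clear anywhere between the marginal cost and the marginal value.
        price, q = s, q + step

price, quantity = clear(supply_bids, demand_bids, unresponsive_mw, price_cap)
print(f"clearing price ~ ${price}/MWh for ~{quantity:.0f} MW")
```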

The combined effect of these processes is illustrated in Figure 2. The supply and demand curves are cleared, given the available supply and both the responsive demand QR and unresponsive demand QU. The solution to the economic dispatch and optimal power flow is the interconnection-scale power flow when the global surplus is maximized over all control areas, which may require non-zero tieline flows. The main scheduler is not the subject of this paper, but its hourly outputs are vital to dispatch and regulation problems.

《Fig. 2》

Fig.2 Hourly market for resource scheduling with exports.

The unit commitment problem can lead to the absence of classical equilibrium in a power exchange, which advocates of power pools point to as a serious flaw [22]. The problem is a precursor of the renewable revenue adequacy problem in the sense that some generators that are part of the optimal dispatch may not cover their start-up costs. The power pool solves the problem by setting the price to the variable cost at all times and offering side-payments to cover any start-up costs needed to follow the dispatch. Regulation reserve markets offer a suitably structured market mechanism to determine these side-payments separately from the primary energy market.

The control area scheduling problem includes the unit commitment and dispatch problems, which are central to the operation of bulk power systems. Intermittent resources are generally regarded as problematic for operations because of their limited predictability. The optimal selection of which conventional units to run (unit commitment) and the optimal output levels (dispatch) change in the presence of renewable resources [23] and can be expected to change further in the presence of significant demand response. Most solutions to this problem address supply intermittency only and use Monte Carlo methods [24], probability density functions for combined load and wind [25], or probabilistic methods of cost assessment [26]. Unit commitment with demand response has been considered as an optimization problem [27] and in conjunction with wind power [28,29]. Stochastic unit commitment has been proposed as well [30] and can address the combined impact of wind power and demand response uncertainty.

These methods all require a time-domain simulation to solve the scheduling problem explicitly. However, for the purposes of an agent-based simulation, it appears to be sufficient to discover the optimal outcome using distributed methods and avoid altogether the forecasting and central day-ahead unit commitment problem. Solving these problems “just in time” using market-based methods such as transactive control allows a simulation to be constructed under the assumption that the system will maximize global surplus, provided that such an assumption is not itself part of the hypothesis being tested.

It is useful to realize that the two-settlement system of energy market operation assures us that the simulation is indifferent to the absence of a day-ahead market model. Any inaccuracies up to and including the complete absence of a forward price are cancelled by the real-time market operation [22], provided the market design is incentive compatible and all resources bid their true costs. Unless we seek to study incentive compatibility or strategic bidding, it is only necessary to model the real-time markets. The same principle can be extended to all multi-settlement methods of allocating regulation resources—it is not necessary to model tertiary (e.g., hourly) or secondary (e.g., five-minute) dispatch markets because only the cost of primary (e.g., four-second) regulation responses to actual frequency deviations will result in direct payments. It is on this basis that we consider regulation control in relation to market-based dispatch problems.

《2.2. Regulation》

2.2. Regulation

In most systems today, the energy markets we discussed above are not connected to the regulation process, and ancillary service markets are sometimes implemented to address this shortcoming. While it is our goal to change this situation, it is necessary that we review how regulation is currently done before discussing how it might be connected to energy markets.

The interconnection frequency is determined by the balance of supply and demand together with the system inertia and damping. Control areas are operated separately as wholesale energy markets with multiple time horizons converging on real time. Generation units under primary frequency control (governor/speed droop) react to frequency deviations outside their deadbands, while units under secondary frequency control respond to tieline deviations as well. The role of the secondary control system is to restore tieline flows to their schedule and to zero out the steady-state frequency deviation using the most economical generating units. For this purpose, the area control error (ACE) is computed and used by selected generators to regulate their power output. Loads are operated as retail transactive energy markets with at least a capacity dispatch and possibly forward energy markets. The dispatch markets are similar to those demonstrated in the Olympic and Columbus demonstrations [16,18], that is, a distribution capacity market for customer load and distributed generation, given a feeder constraint on bulk power supply. The system frequency control diagram is shown in Figure 3. Control area regulation is divided into three components when load is responsive to frequency: grid-friendly load (L), droop-controlled generation (GD), and ACE-controlled generation (GA). Both load and droop are driven exclusively by deviations in frequency, while ACE generation is driven by both tieline flow error and frequency deviations.

《Fig. 3》

Fig.3 System frequency and control area export regulation control diagram.

Regulation control is based on deviations in frequency and tieline flows from the hourly unit commitment, economic dispatch, and optimal power flow schedules, the details of which are beyond the scope of this paper. Primary regulation control is implemented in part as generator droop control, under-frequency load shedding, and so-called grid-friendly loads; and in part as a response to the ACE signal. The ACE signal is updated roughly (in general, supervisory control and data acquisition (SCADA) systems do not guarantee that all devices are sampled at exactly the same time or rate) every four seconds in each control area using the formula

ACE = (eA − eS) + B(f − fS)

where (eA − eS) is the deviation of the actual net exports eA over tielines from the scheduled net exports eS; B is the frequency bias of the control area; and (f − fS) is the deviation of the interconnection frequency f from the scheduled frequency fS. Note that the ACE signal is typically filtered. This filter can be modeled using the transfer function 1/(1 + sTA), where the value of TA is typically greater than 10 s.
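A minimal sketch of this computation and filtering follows. The four-second update and the first-order filter follow the description above; the bias, schedule, and filter values are illustrative, and the sign convention on B is an assumption that varies between jurisdictions.

```python
# Illustrative control-area parameters (not taken from any table in this paper).
B = 1200.0        # frequency bias, MW/Hz
f_sched = 60.0    # scheduled frequency fS, Hz
e_sched = 500.0   # scheduled net exports eS, MW
T_A = 15.0        # ACE filter time constant TA, s (typically > 10 s)
dt = 4.0          # ACE update interval, s

def raw_ace(e_actual, f_actual):
    # ACE = (eA - eS) + B(f - fS); the sign convention on B is an assumption here
    # and differs between jurisdictions.
    return (e_actual - e_sched) + B * (f_actual - f_sched)

def filtered_ace(prev, e_actual, f_actual):
    # Discrete first-order filter equivalent to 1/(1 + s*TA), sampled every dt seconds.
    alpha = dt / (T_A + dt)
    return prev + alpha * (raw_ace(e_actual, f_actual) - prev)

# Example: a sustained 0.02 Hz under-frequency event with exports 30 MW below schedule.
ace = 0.0
for _ in range(10):
    ace = filtered_ace(ace, e_actual=470.0, f_actual=59.98)
print(f"raw ACE = {raw_ace(470.0, 59.98):.1f} MW, filtered ACE after 40 s = {ace:.1f} MW")
```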

《2.3. Agent-based modeling》

2.3. Agent-based modeling

As numerical simulation complexity grew with advances in computing power during the 1990s, agent-based modeling became more popular. Today, it represents a departure from the classical simulation approach, in which the expected equilibrium is embedded in the time-domain solution of systems of differential equations representing the behaviors of the individual elements. Agent-based simulations instead represent the individual component and subsystem behaviors, which allows the outcome to emerge from the interactions between endogenous and exogenous conditions. Agent-based models allow for a more natural “bottom-up” description and are more flexible in how complex they can be and what can be observed during the simulation [31]. In particular, they allow different levels of aggregation and approximation to be utilized concurrently, which makes them particularly well-suited for inter-disciplinary simulation studies. While these advantages are important and typically drive the choice of agent-based simulation over more classical simulation methods, model validation is a very important challenge with agent-based simulation.

Within 20 years of the advent of practical agent-based simulations, hundreds of articles and publications on the subject of agent-based modeling methods had appeared, and a consensus began to emerge on current practices regarding fields of study, software use, simulation purposes, and, in particular, validation techniques and criteria appropriate to the specification of a simulation. In their survey of the literature, Heath et al. [32] found six key challenges inherent in using agent-based modeling tools that are independent of the field, tool, or problem:

(1) The development of agent-based modeling tools needs to be independent of the software that implements the simulation, and results need to be published with details of the software and numerical methods used to obtain them so that others can reproduce the results.

(2) The development of agent-based modeling needs to progress as an independent discipline within the simulation discipline, with a common language that extends across domains.

(3) Simulation designers need to set expectations for their agent-based models so that these match their intended purposes.

(4) Complete descriptions of the simulation must be available so that others can independently evaluate the appropriateness and effectiveness of the models at supporting the results.

(5) The models used must be completely validated and documented in the article.

(6) Statistical and non-statistical validation techniques need to be specifically designed and developed in order to convey performance objectives to those building the models.

These challenges are made all the more difficult to address because of issues that are intrinsic to agent-based simulation. The first is the dichotomy between the ease with which we capture the macroscopic behavior of the system and the difficulty of capturing microscopic behavior characteristics for individual agents. The second is that agent-based simulations are particularly useful for simulating highly non-linear transient phenomena, for which analytic methods are not always available and are often difficult to apply generally. Finally, the amount of data that can be collected from an agent-based simulation typically far exceeds the amount of data available from the real-world systems that it simulates, making comparison challenging even with the most robust statistical and analytical methods [33]. In spite of these considerations, agent-based simulations are generally considered to be well-suited to problems involving power dispatch using market-based mechanisms [21].

《3. Resource modeling》

3. Resource modeling

The multi-layer/multi-temporal model of supply and demand that we seek requires scheduling information from the wholesale market clearing at the hourly level to be incorporated into the five-minute dispatch market bids. Similarly, information from the five-minute dispatch market clearing must be incorporated into the regulation control. In this section, we examine the supply and demand models for the scheduling, dispatch, and regulation in order to discern what information needs to be exchanged.

《3.1. Supply》

3.1. Supply

Supply bidding behavior is the same for scheduling and dispatch, and is represented using base and marginal prices for different unit classes (e.g., renewable, base-load, mid-load, and peak-load), as shown in Table 1.

《Tab.1》

Tab.1 Generating unit dispatch prices and capacity mix.

These values are used to construct an aggregate asymptotic supply curve [34]:

where qw is the amount of wind power dispatched; qm is the maximum available generating capacity, including wind power and reserves; and the curve parameters c0, c1, and c2 are fitted to the price and resource mix shown in Table 1.
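Because the functional form of the asymptotic supply curve is not reproduced here, the sketch below uses one plausible hyperbolic form in which the price rises without bound as dispatch approaches the available capacity. Both the form and the parameter values are assumptions chosen only to illustrate the roles of c0, c1, c2, qw, and qm; they are not the fitted values of Table 1.

```python
import numpy as np

# Assumed (illustrative) asymptotic supply curve: the marginal price rises without
# bound as the dispatched quantity q approaches the maximum available capacity qm.
# The functional form and parameter values are placeholders, not the fitted values of Table 1.
def supply_price(q, qw, qm, c0=10.0, c1=25.0, c2=2.0):
    """Marginal supply price in $/(MW.h) at total dispatch q (MW).

    qw: non-dispatchable (wind) generation already running at near-zero marginal cost
    qm: maximum available capacity, including wind power and reserves
    """
    q = np.asarray(q, dtype=float)
    scarcity = np.clip((q - qw) / (qm - q), 0.0, None)  # dispatchable output vs. remaining headroom
    return c0 + c1 * scarcity**c2

q = np.linspace(2000.0, 9500.0, 5)  # MW, illustrative dispatch levels
print(supply_price(q, qw=2000.0, qm=10000.0))
```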

Three types of generating units must be modeled for the regulation system: hydraulic, thermal reheat, and thermal non-reheat. The plant transfer functions (power output with respect to power control) of the controlled units, including their governors, are as follows.

Hydraulic units:

Thermal reheat units:

Thermal non-reheat units:

where TG = 0.2 s, TR = 0.5 s, RT = 0.38, TW = 1.0 s, TRH = 7.0 s, TCH = 0.3 s, RP = 0.05, and FHP = 0.3 are typical values [7]. The transfer function of renewable units is zero because they do not provide either droop or ACE response. The combined response of all controlled generation types is described by the transfer function G = ωh Gh + ωs Gs + ωc Gc, where ωh, ωs, and ωc are the fractions of hydraulic, reheat, and non-reheat generating units under control.
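The individual transfer functions are not reproduced above, so the sketch below assembles the combined response G = ωh Gh + ωs Gs + ωc Gc from the standard textbook governor-turbine models that the listed parameters suggest. The specific forms, and the fleet fractions used in the example, are assumptions rather than the exact models of this paper.

```python
import numpy as np
from scipy import signal

# Typical parameter values quoted in the text [7].
TG, TR, RT, TW = 0.2, 0.5, 0.38, 1.0
TRH, TCH, RP, FHP = 7.0, 0.3, 0.05, 0.3

def tf_mul(*factors):
    """Multiply transfer functions given as (numerator, denominator) coefficient pairs."""
    num, den = np.array([1.0]), np.array([1.0])
    for n, d in factors:
        num, den = np.polymul(num, n), np.polymul(den, d)
    return num, den

# Assumed standard governor/turbine models (textbook forms implied by the parameter names).
G_hydro = tf_mul(([TR, 1.0], [TG, 1.0]),            # governor with transient droop compensation
                 ([1.0], [(RT / RP) * TR, 1.0]),
                 ([-TW, 1.0], [0.5 * TW, 1.0]))     # water column (non-minimum phase)
G_reheat = tf_mul(([1.0], [TG, 1.0]),               # governor
                  ([1.0], [TCH, 1.0]),              # steam chest
                  ([FHP * TRH, 1.0], [TRH, 1.0]))   # reheater
G_nonreheat = tf_mul(([1.0], [TG, 1.0]), ([1.0], [TCH, 1.0]))

def combined_step(weights, t=np.linspace(0.0, 30.0, 3000)):
    """Step response of G = wh*Gh + ws*Gs + wc*Gc for fleet fractions (wh, ws, wc)."""
    y = np.zeros_like(t)
    for w, (num, den) in zip(weights, (G_hydro, G_reheat, G_nonreheat)):
        _, yi = signal.step(signal.TransferFunction(num, den), T=t)
        y += w * yi
    return t, y

# Illustrative fleet mix: 30% hydro, 50% reheat, 20% non-reheat.
t, y = combined_step((0.3, 0.5, 0.2))
print(f"combined response at t = 10 s: {y[np.argmin(np.abs(t - 10.0))]:.3f} pu")
```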

Note that the incorporation of the dispatch clearing into the regulation system is not specified, because this is an area of ongoing research and there is no consensus in the literature regarding how this should be done. Indeed, models such as the present one are required to support this type of research.

《3.2. Price responsive demand》

3.2. Price responsive demand

The composite load model was introduced to represent the aggregate load on feeders and correctly reflect the impact of changing end-use load composition [35]. While this model reproduces many of the load behaviors seen in distribution systems, including motor stalling and thermal protection, it does not include some important behaviors related to demand response control that can lead to large-scale system dynamics when loads are used as a reliability resource. In particular, it does not represent the feedback effect of state-based bidding in real-time pricing systems, nor grid-friendly frequency response behaviors such as those demonstrated in the Olympic project. Unfortunately, many aggregate demand response behaviors are too complex to capture with low-order linear models [36], although some alternative load control designs offer the possibility of modeling fast-acting aggregate demand response using very low-order load models [37].

Demand dispatch behavior can be represented in part using the random-utility model [38]. This model has been used in consumer valuation studies and comparative judgment consumer problems [39]. It seems appropriate to use the random-utility model for transactive control systems because the model makes two key assumptions that hold for transactive systems:

• The consumer’s choice is a discrete event in the sense that a consumer (or a device acting on the consumer’s behalf) must make an all-or-nothing decision, such as to run or not to run the air-conditioner. The consumer (or device) cannot choose to run at part-load for the next interval.

• The consumer’s attraction to a particular choice is a random variable that changes very slowly in time and in this case corresponds to the comfort preference. We use the term “attraction” in the retailing sense, but we could just as well use the term “utility” to be consistent with economic theory. Regardless, it is the randomness of the comfort preference that is essential to this assumption, and it is assumed that the devices acting on behalf of consumers will rationally choose the outcome with the highest utility based on the consumer’s indicated preference for comfort.

In the absence of prior knowledge of the quantities demanded by consumers, the derivation of the aggregate demand curve is based on discrete choice statistics for thermostats whose temperatures are constrained to a finite domain. For example, thermostats must choose a bid price as the reservation price above which they are willing to forgo demand. This is an exclusive choice of the bid price to submit, and it is a necessary choice insofar as it is required to enable consumption at the clearing price. For a dichotomous choice, the reasoning is as follows: U is the consumer benefit (utility in economic theory) that the thermostat obtains from taking a particular action given the consumer’s preferences. This net benefit can be assumed to depend on an unobservable characteristic α and an observable characteristic β, both of which have logistic distributions. The net benefit is defined as U = α + βx + ϵ, where x is the consumer’s decision and ϵ is the random independent error. The action corresponding to that choice will be taken if U > 0. Under the logistic assumption, the relative probability of taking the action is then

Pr(U > 0) = e^(α + βx) / [1 + e^(α + βx)]

The optimal consumer bid has a price that maximizes the benefit while minimizing the cost of a positive outcome. This condition is satisfied when the marginal benefit of the positive outcome equals the marginal cost of a negative outcome. In the absence of a reliable price forecast, the probability of this condition is 1/2 when x = −α/β, given that the consumer’s present condition is 50% satisfied. Put in terms of a thermostat, this is equivalent to bidding the price p corresponding to the current observed temperature Ta, given the desired temperature Td and comfort setting K, for an expectation of the mean price PA and its variance PD², given recent history; that is, p = PA ± K(Td − Ta)/PD, where the sign of K will depend on whether a heating or cooling regime prevails. The consumer’s comfort preference is the dominant term in the quantity β, which the thermostat uses to make choices on behalf of the consumer. Thus the parameters for each consumer should be given as a function of the difference ∆T = Td − Ta between the household’s actual indoor air temperature and the consumer’s desired temperature. A consumer’s utility is

where pc is the clearing price of power and, for a large number of customers, the random independent error is assumed to be normal with ϵ → 0.
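The bid-price rule described above can be written compactly, as in the sketch below, which implements p = PA ± K(Td − Ta)/PD for a single thermostat. The sign convention for cooling versus heating and the numerical values are assumptions for illustration.

```python
def thermostat_bid(Ta, Td, K, PA, PD, cooling=True):
    """Bid price for a thermostatic load following p = PA ± K*(Td - Ta)/PD.

    Ta, Td : actual and desired indoor air temperatures (degC)
    K      : consumer comfort gain (larger K bids further from PA per degree of discomfort)
    PA, PD : expected mean and standard deviation of the clearing price ($/(MW.h))
    cooling: assumed sign convention so that discomfort raises the willingness to pay
    """
    sign = -1.0 if cooling else 1.0
    return PA + sign * K * (Td - Ta) / PD

# Illustrative values only: the house is 1.5 degC above its cooling setpoint,
# so the device bids above the expected mean price.
bid = thermostat_bid(Ta=25.5, Td=24.0, K=2.0, PA=30.0, PD=5.0)
print(f"bid price = ${bid:.2f}/MWh")
```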

The transactive control system used in the demonstration projects is at equilibrium when device state diversity is maximized and the total load is steady. This quasi-steady state occurs when the distribution of bids is symmetric about the mean price with the same relative variance. Rescaling for the physical quantities of an arbitrary system with unresponsive demand QU subject to the prices p and responsive demand QR, the total demand at the prices p is [34]

where η < 0 is the short-term elasticity of demand. Here the values 2η and −2η/PA represent the aggregate values of α and β, respectively, taken over all the consumers.
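A sketch of an aggregate demand curve consistent with this description is shown below. The logistic form, built from the aggregate parameters α = 2η and β = −2η/PA so that half of the responsive load is dispatched at the expected price, is our reading of the text and should be treated as an assumption.

```python
import numpy as np

# Assumed logistic aggregate demand curve consistent with the aggregate parameters
# alpha = 2*eta and beta = -2*eta/PA described above; the published form may differ.
def total_demand(p, QU, QR, PA, eta):
    """Total demand (MW) at price p, with unresponsive load QU, responsive load QR,
    expected mean price PA, and short-term demand elasticity eta < 0."""
    p = np.asarray(p, dtype=float)
    return QU + QR / (1.0 + np.exp(2.0 * eta * (1.0 - p / PA)))

# At the expected price, half of the responsive load is dispatched (maximum state
# diversity); demand falls as the price rises above PA. Values are illustrative.
print(total_demand(p=[15.0, 30.0, 60.0], QU=800.0, QR=200.0, PA=30.0, eta=-0.5))
```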

This curve does not accurately represent the non-steady behavior of demand response. In particular, when the diversity of load states is disturbed by a large price deviation, the curve skews to the left then to the right as the loads respond and recover from the price disturbance. Modeling the aggregate behavior of demand response following diversity disturbances is an ongoing area of research, but the overall behavior is to cancel the effect of any disturbance within the time constant of the state diversity decay.

《3.3. Grid-friendly load》

3.3. Grid-friendly load

Grid-friendly loads such as those studied in the Olympic demonstration provide very fast frequency response. A variety of under- and over-frequency grid-friendly strategies have been proposed over the years [3,40−42]. The specifics of these strategies vary, and no single model can be developed for the purpose of this paper. But in general, we can summarize the expected characteristics of any grid-friendly response as follows.

• The initial response is very fast, reaching its peak in about one second.

• The peak response is largely proportional to the frequency deviation and continues to be proportional for more than 10 seconds.

• The response decay corresponds to a zero integral error feedback load recovery delay that is typically less than two minutes, although under certain conditions the decay can take longer.

A transfer function for load that exhibits these behaviors is of the form:

where the fast response time constant takes a typical value of TL = 0.2 s and the integral error feedback gain is KL = 0.02.
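Because the transfer function itself is not reproduced above, the sketch below uses one form that exhibits the three listed characteristics: a fast first-order lag in series with a washout whose recovery is governed by the integral feedback gain, L(s) = [1/(1 + sTL)]·[s/(s + KL)]. This form is an assumption chosen for illustration.

```python
import numpy as np
from scipy import signal

TL, KL = 0.2, 0.02   # fast response time constant (s) and integral feedback gain

# Assumed grid-friendly load transfer function: a fast lag in series with a washout,
# L(s) = [1/(1 + s*TL)] * [s/(s + KL)]. The washout returns the load to its normal
# consumption with a time constant of roughly 1/KL = 50 s. Illustrative form only.
L = signal.TransferFunction(np.polymul([1.0], [1.0, 0.0]),
                            np.polymul([TL, 1.0], [1.0, KL]))

t = np.linspace(0.0, 180.0, 18000)
t, y = signal.step(L, T=t)   # response to a sustained frequency deviation
for mark in (1.0, 10.0, 120.0):
    val = y[np.argmin(np.abs(t - mark))]
    print(f"t = {mark:5.1f} s: response = {val:.2f} pu of the sustained deviation")
```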

《3.4. Joint resource dispatch》

3.4. Joint resource dispatch

The economic dispatch of short and mid-term demand response is designed to occur sequentially in multiple markets. The hourly expected price PA of energy is determined from the wholesale energy markets and is used to set the expected price in the five-minute retail capacity market. Demand response resources use this average price and an expectation of price volatility to submit bids for curtailable capacity to the dispatch market, which is cleared against the available supply, as shown in Figure 4.

《Fig. 4》

Fig.4 Real-time (five-minute) resource dispatch double auction (left) and demand resource control (right).

This dispatch price is transmitted every five minutes to all controllable resources within the control area. Although resources respond according to their bids, they should do so in a manner that is designed to avoid a pure step response to the price change. For loads that respond faster than generators, this may be achieved by adding a filter to the incoming dispatch price signal such that in the aggregate we have

where tC is the time at which the market was cleared; QC(tC) is the dispatch quantity; and TL is a decay time constant, which should not exceed the rate at which the control area can follow load (for example, about 98% response in 10 s using only generation resources). This suggests that a reasonable value is TL ≈ 2.5 s. As fast-acting demand resources are added, this value decreases. In the Olympic demonstration, the aggregate frequency response was on the order of 90% in 0.4 s, or TL = 0.2 s [5], which is determined by the time constant of the local frequency measurement filter, and is the value used in Section 4.

Note that we opt not to use a constant ramp because, as with the step input, the response may create undesirable marginal stability problems with the load control system. While step or ramp inputs introduce one or two poles at the origin, respectively, the decaying exponential input introduces a single negative real pole at s = −1/TL with no stability concerns.
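The filtered response to a dispatch change can be sketched as below, where the cleared quantity is approached exponentially with time constant TL rather than as a step. The exact aggregate expression is not reproduced above, so this first-order form and the example values are assumptions.

```python
import numpy as np

def dispatched_load(t, t_clear, q_before, q_cleared, TL):
    """Aggregate responsive load following a dispatch clearing at time t_clear (s).

    The step to the cleared quantity q_cleared (MW) is smoothed by an assumed
    first-order response with time constant TL (s), avoiding a pure step input
    to the regulation control system."""
    t = np.asarray(t, dtype=float)
    ramp = 1.0 - np.exp(-np.clip(t - t_clear, 0.0, None) / TL)
    return q_before + (q_cleared - q_before) * ramp

# Illustrative: 50 MW of responsive load released at t = 300 s with the
# Olympic-demonstration value TL = 0.2 s (about 90% of the response within 0.4 s).
t = np.array([300.0, 300.2, 300.4, 301.0, 310.0])
print(dispatched_load(t, t_clear=300.0, q_before=400.0, q_cleared=350.0, TL=0.2))
```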

《3.5. Regulation costs》

3.5. Regulation costs

The price of regulation control using so-called “grid-friendly” loads is based on the marginal prices of demand and supply energy dispatch, RD and RS, respectively, in units of $•(MW²•h)−1:

 

These marginal dispatch energy prices provide linearized prices of energy per MW of supply and demand for their respective contributions to regulation control over the coming five minutes. As slopes of the supply and demand curves, they are the basis for pricing supply and demand regulation resource responses as a function of the magnitude of response required to return the system to schedule. This regulation price is

where ∆Qreg is the amount of additional power needed to bring both the frequency and the tieline exchanges back to schedule; the current value of ACE is a reasonable approximation of this quantity. The marginal energy prices are also used to compute participation factors for supply and demand regulation resource allocation:


These participation factors are the gains on the supply and demand regulation control that would result in economically optimal regulation. The question of how these are incorporated into regulation controls remains an open area of research.
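The expressions for the regulation price and the participation factors are not reproduced above. The sketch below uses one economically intuitive construction in which the response is allocated in inverse proportion to the marginal dispatch prices, so that supply and demand see the same marginal price for their shares of the movement; both the allocation rule and the resulting price expression are assumptions.

```python
def regulation_allocation(dQ_reg, RS, RD):
    """Allocate a regulation requirement dQ_reg (MW) between supply and demand.

    RS, RD : marginal dispatch prices of supply and demand, $/(MW^2.h)
    Returns (rho_S, rho_D, P_R): participation factors and the regulation price
    in $/(MW.h). Allocating in inverse proportion to the marginal prices makes
    the marginal price of the supply share equal to that of the demand share,
    which becomes the common regulation price; this construction is an assumption.
    """
    rho_S = RD / (RS + RD)                 # supply share of the regulation response
    rho_D = RS / (RS + RD)                 # demand share of the regulation response
    P_R = (RS * RD / (RS + RD)) * dQ_reg   # common marginal price of the combined movement
    return rho_S, rho_D, P_R

# Illustrative: 20 MW of regulation needed (ACE ~ -20 MW) with supply twice as
# expensive at the margin as demand response.
print(regulation_allocation(dQ_reg=20.0, RS=0.04, RD=0.02))
```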

Although the marginal dispatch energy prices for regulation are often different for supply and demand resources, for any given frequency deviation, the price of regulation energy for all resources responding to that deviation will be the same, regardless of whether it is a supply or demand resource, as illustrated in Figure 5.

《Fig. 5》


Fig.5 Regulation resource response price when eS = 0.

Any change in frequency ∆f will result in a change of net exports from the control area ∆Q, which corresponds to a change in energy price ∆P. The change in energy price will always be that which is required to induce the total change in supply and demand needed to adjust control area exports such that the area provides the expected 5% frequency droop response. Supply resources that provide regulation services through droop alone should be paid no more than the total regulation cost:

to provide regulation response at that time.

Because the original dispatch cost PC QC has already been paid at dispatch time, it does not need to be collected again, so only the cost of the regulation energy deviation from dispatch is considered. For speed droop control units, the payment is only for actual regulation performance ∆QSDC:

The compensation provided to demand response resources is similarly computed based on the actual regulation control response ∆QDR:

For supply resources that respond to ACE, the computation must also include compensation for rectifying tieline deviations, so we use the actual response of ACE control units ∆QACE:

Taking all these into account, for each dispatch interval (0, T) the total regulation cost is

This mechanism acts like a Dutch auction, insofar as the fastest moving resources capture the highest prices and slower resources can only receive payment for their lower value and delayed response. This mechanism is the essential foundation of the downward substitutability that is necessary for real-time regulation markets, with respect to five-minute dispatch and hourly scheduling.
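To illustrate the accounting over a dispatch interval, the sketch below accumulates regulation payments to speed droop, demand response, and ACE-controlled resources from sampled responses. The per-sample payment form and all numerical values are assumptions that show only how performance-based payments might be totaled, not the exact settlement formulas of this paper.

```python
import numpy as np

def regulation_settlement(dt_h, P_R, dQ_sdc, dQ_dr, dQ_ace):
    """Accumulate regulation payments over one dispatch interval.

    dt_h  : sample interval in hours (e.g., 4 s = 4/3600 h)
    P_R   : regulation price series, $/(MW.h)
    dQ_*  : actual regulation responses (MW) of speed droop, demand response,
            and ACE-controlled units at each sample.
    Payments are for actual performance only (the dispatched energy PC*QC has
    already been settled); the per-sample form |dQ|*P_R*dt is an assumed sketch.
    """
    pay = {name: float(np.sum(np.abs(q) * P_R * dt_h))
           for name, q in (("droop", dQ_sdc), ("demand response", dQ_dr), ("ACE", dQ_ace))}
    pay["total"] = sum(pay.values())
    return pay

# Illustrative five-minute interval sampled every 4 s with a constant regulation price.
n = 75
rng = np.random.default_rng(0)
P_R = np.full(n, 0.27)   # $/(MW.h)
print(regulation_settlement(4 / 3600, P_R,
                            dQ_sdc=rng.normal(5.0, 1.0, n),
                            dQ_dr=rng.normal(8.0, 1.0, n),
                            dQ_ace=rng.normal(7.0, 2.0, n)))
```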

The regulation cost is not necessarily collected from the units that cause regulation action, such as fluctuating loads, intermittent renewables, and generating units that do not follow redispatch. To correctly account for these costs, the regulation price must be applied as a penalty to native resources that deviate from the schedule and/or the five-minute redispatch. Having such a deviation penalty eliminates the necessity to implement separate imbalance markets. However, local regulation prices cannot be simply applied to tieline deviations, because the prices may differ on each end of the tieline. This problem appears to be an area that requires further research.

The overall structure of the regulation model in the context of scheduling and dispatch is summarized in Figure 6. In summary, supply and demand bids arrive hourly from energy, capacity, and regulation resources to construct the supply and demand curves that are used to determine the hourly average price PA and the tieline schedule eS. The average price and tieline schedule are used by the five-minute dispatch to determine the price and quantity for redispatch every five minutes, as well as the regulation marginal prices for supply and demand, RS and RD, respectively. Regulation responses are measured each second in order to determine ① the quantity deviation ∆Q required to maintain the tieline schedule and ② the regulation price PR required to obtain that quantity. Any fluctuations in the regulation price PR are used to estimate the price standard deviation PD for the next dispatch interval, and the quantity deviation ∆Q is used to adjust the next dispatch and the next schedule so that step responses to schedule and dispatch changes account for the existing state of the system.

《Fig. 6》

Fig.6 Inter-temporal information flow diagram.

《4. Validation》

4. Validation

Simulation validation is often considered using Zeigler’s hierarchy of model validity [43], that is, replicative, predictive, and structural. Variations on this taxonomy exist [44,45], but for the purposes of validating agent-based simulations, Klügl proposes using only two levels [33].

• Face validation. This is an assessment of the model that is completed in three steps: ① Animations are observed by human experts in order to assess whether the macroscopic behaviors of the simulation replicate those of the real-world system; ② the outputs of the simulation are assessed by a human expert in order to determine whether they are plausible, given the conditions; and ③ a human expert assesses whether the system’s interaction with any particular agent is appropriate from the agent’s perspective.

• Empirical validation. This is also performed in three steps: ① sensitivity analysis to show the effects of different parameters; ② calibration to determine the appropriate values to use; and ③ statistical validation using different data sets to ensure that the model is not just highly tuned to a particular scenario.

Three alternative methodological approaches have been developed in agent-based economics and are probably applicable to agent-based engineering: indirect calibration, the Werker-Brenner approach, and the history-friendly approach [46]. Indirect calibration is more microscopic in its focus and performs validation first and then indirectly calibrates the model by focusing on parameters that are consistent with output validation. The Werker-Brenner approach is perhaps the most relevant for calibrating agent-based engineering models, because it includes a Bayesian inference procedure to validate output [47], which allows each model specification to be assigned a likelihood based on the compatibility of the theoretical realization with the empirical realization. This method is called “methodological abduction” and allows only shared characteristics that hold for both the model and the real system to be used, provided that the model is not based on any false premises. Windrum et al. argue that this approach has the advantages of reducing the number of degrees of freedom, avoiding the pitfalls of validation based on a small number of historical datasets, and providing a more rigorous methodology for simulations that are grounded in empirical data [46]. As with the Werker-Brenner method, the history-friendly method also performs calibration first, but can more readily incorporate anecdotal or casual knowledge. However, it tends to be more microscopic in its focus, like the indirect calibration method.

Windrum et al. [46] also point out some important research gaps in the current research on agent-based model validation. In particular, none of the current methods overcome the problem of over-parameterization of the agent-based models. Realistic assumptions at the individual agent level often lead to many degrees of freedom at the macroscopic level, allowing the model to generate any result and therefore reducing the explanatory power of the model to little more than a random walk. Causality between assumptions and results also becomes very difficult to study. Typically, this problem is addressed by reducing the number of degrees of freedom, which leaves the modeler with many alternative worlds from which to choose.

A second problem that has not been satisfactorily addressed is the interpretation of the counterfactual outputs of the model. It is not clear that the probability of observing a particular output from the model is at all representative of the probability of observing the same output in the real world, nor are we certain how to go about assessing whether or to what degree the model is explanatory.

Finally, the availability, quality, and bias of empirical datasets are significant considerations in the model validation process. Not all records are retained, and it is quite typical to find that only “interesting” events were recorded or that the “uninteresting” data was deleted, essentially embedding a potentially critical bias in the empirical data.

《5. Results and discussion》

5. Results and discussion

The validation of agent-based models of joint economic-power system models is still an immature science. We use the discussion in Section 4 as a general guide to help illustrate the approaches to validation we present in this section. Three elements of this model are examined in order to illustrate some of the validation methods discussed. We first examine the open-loop response of a single control area to disturbances in frequency and tieline exchanges resulting from fluctuations in renewable generation output throughout the interconnection. Next, we examine the closed-loop response of the system to a loss-of-generation contingency in another control area. Finally, we examine the change in system cost of regulation in the presence of demand response.

《5.1. Control area response》

5.1. Control area response

The validation of regulation dispatch was conducted on the control area, generation, and load models working open loop in an interconnection—that is, such that the frequency and tieline flows are affected by the interconnection as a boundary condition and do not affect the interconnection itself. The control area operating assumptions are shown in Table 2, and the simulation results are shown in Figure 7.

《Tab.2》

Tab.2 Validation parameters for a single control area.

《Fig. 7》

Fig.7 Open-loop control area test. (a) Frequency and ACE; (b) power regulation.

The results indicate that the model presents an acceptable regulation response at the control area level, given reasonable assumptions regarding five-minute redispatch, renewable intermittency, and the availability of demand response resources. In particular, a significant loss of renewable generation corresponding to a wind overspeed cutout is shown starting around minute 35 and lasting about 20 min. The availability of additional demand response produces a significant decrease in both the magnitude and variance of the ACE signal in response to identical exogenous frequency and tieline fluctuations received by the control area, indicating an improvement in control system performance in the presence of 11% versus <1% demand response.

《5.2. System response》

5.2. System response

The validation of under-frequency response was conducted on the peak hour of the peak day by observing a single control area response to a 1% (of system) generation loss in another control area in the interconnection, given a closed-loop response for all control areas in the interconnection. The interconnection and control area model parameters are shown in Table 3.

《Tab.3》

Tab.3 Interconnection model parameters.

The ten-second and five-minute closed-loop system responses are shown in Figures 8 and 9, respectively, for different levels of demand response availability. The increasing amount of fast response in load shedding is observed in Figure 8(c) as increasing demand response (DR) is dispatched. The corresponding recovery over the following two minutes is observed in Figure 9(c). In addition, it is also apparent from Figure 8(a) and (b) that increasing demand response dispatch decreases the magnitude of the frequency excursion and the amount of generation that is required to maintain exports. Overall, the total exports remain consistent for all demand response dispatch levels, indicating that the overall impact on the system can be expected to be relatively insensitive to redispatch every five minutes.

《Fig. 8》

Fig.8 Interconnection under-frequency response (ten-second window). (a) Frequency; (b) ΔGeneration; (c) ΔLoad; (d) ΔExports.

《Fig. 9》

Fig.9 Interconnection under-frequency response (five-minute window). (a) Frequency; (b) ΔGeneration; (c) ΔLoad; (d) ΔExports.

《5.3. Regulation cost》

5.3. Regulation cost

The regulation costs for the closed-loop system scenario above are shown in Table 4. The introduction of an additional 10% demand response resource has a significant impact on regulation costs, reducing the overall cost of regulation by 65%. In addition, there is a significant increase in the regulation payments to demand response, from 2.4% to 22% of the total regulation payments from the control area.

《Tab.4》

Tab.4 Regulation costs by resource type.

The dispatch, regulation, and deviation penalty prices are shown in Figure 10 for the study control area. The downward substitutability of resources is clearly visible, as five-minute dispatch prices are lower than real-time regulation prices. The deviation penalties correspond strongly, but not exactly, to the regulation price. This difference is caused by tieline deviations that cannot be accounted for by local dispatch deviation penalties collected within the control area. The mechanism for determining the penalties for tieline deviations requires reconciling the penalty prices in the areas linked by the tieline, a capability that is not yet supported by this model.

《Fig. 10》

Fig.10 Demand response impact on (a) dispatch and regulation prices, and on (b) deviation penalty prices.

《6. Conclusions》

6. Conclusions

This paper has presented an overview of the technical modeling requirements, implementation structure and algorithms, and validation techniques necessary for quasi-steady agent-based simulation of interconnection-scale models, which is needed in order to perform regulation response studies with integrated renewable generation and controllable loads. We present approaches for modeling aggregate controllable loads that can be implemented in the same economic and control modeling framework as generation resources when performing interconnection planning and operations research with significant demand response deployed in the presence of intermittent renewable generation.

Agent-based simulations are increasingly expected to be the basis of extensive system research in demand response control design, renewable integration studies, control area performance optimization strategies, and market design studies. Model performance and system parameters typical of an interconnection approximately the size of the WECC and a control area about 1/100 of the size of the system are used to validate the methods presented. The results demonstrate that modeling approaches using agent-based methods produce the expected macroscopic system and control area behavior both in the absence of and in the presence of varying amounts of demand response.

The following open research questions have yet to be addressed by the present model. First, computing the hourly schedule for optimal flows that maximizes global surplus remains an unmodeled process and must be provided as a boundary condition. Second, the interconnection is currently modeled as a monolithic machine, but in fact many individual control areas have links of varying electromechanical and economic strength between them. Third, regulation of tieline deviations is assumed to be provided by generation only at the other end of the tieline, when in fact it is most likely based on a similar mix of generation and demand response. Finally, tieline deviation costs cannot be fully recovered if schedule and dispatch deviation penalties are not levied against all participants, including load and intermittent generation.

《Acknowledgements》

Acknowledgements

This work was funded in part by Natural Resources Canada and by the US Department of Energy’s Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the US Department of Energy under Contract DE-AC05-76RL01830.

《Compliance with ethics guidelines》

Compliance with ethics guidelines

David P. Chassin, Sahand Behboodi, Curran Crawford, and Ned Djilali declare that they have no conflict of interest or financial conflicts to disclose.

Nomenclature
ACE: area control error (MW)
B: frequency bias of the control area (MW•Hz−1)
c0: supply curve cost parameter ($•(MW•h)−1)
c1: supply curve scaling parameter ($•(MW•h)−1)
c2: supply curve scarcity parameter (unitless)
D: system damping coefficient (unitless)
eA: actual tieline exports from the control area (MW)
eS: scheduled tieline exports from the control area (MW)
f: current interconnection frequency (Hz)
fS: scheduled interconnection frequency (Hz)
G(s): generation control transfer function (MW/MW)
KA: generation ACE control fraction (pu•PG)
KL: load recovery response integral error feedback gain (unitless)
L(s): demand response transfer function (MW/MW)
M: system inertial constant (s)
p: price function variable ($•(MW•h)−1)
PA: expected average energy price for the current scheduling interval ($•(MW•h)−1)
PB: bid price in five-minute dispatch market ($•(MW•h)−1)
PG: total firm generation (MW)
PL: dispatched responsive load (MW)
Pload: local load disturbances (MW)
Psystem: system disturbances (MW)
Pwind: local wind generation disturbances (MW)
P(q): supply price function ($•(MW•h)−1)
QA: expected hourly average (i.e., scheduled) total dispatch quantity in a control area (MW)
QC: actual five-minute total dispatch quantity in a control area (MW)
QD: demand dispatch quantity in a control area (MW)
QR: total responsive load (MW)
QS: supply dispatch quantity in a control area (MW)
QU: total unresponsive load (MW)
Q(p): demand quantity function (MW)
qm: maximum generation capacity (MW)
qw: renewable (non-dispatchable) generation quantity (MW)
RD: marginal price of demand response at dispatch quantity ($•(MW²•h)−1)
RS: marginal price of supply at dispatch quantity ($•(MW²•h)−1)
s: transfer function complex frequency variable (Hz)
TA: area control error filter time constant (s)
Ta: actual indoor air temperature (°C)
Td: desired indoor air temperature (°C)
TG: generation speed governor time constant (s)
Th: maximum indoor air temperature (°C)
T1: minimum indoor air temperature (°C)
TL: load frequency response time constant (s)
TR: generation reset time (s)
TS: indoor air temperature set-point (°C)
t: time variable (s)
tC: market clearing time (s)
x: consumer utility function decision variable ($•(MW•h)−1)
α: unobservable consumer utility decision parameter (unitless)
β: observable consumer utility decision parameter (MW•h•$−1)
∆P: price impact of net quantity deviation in control area ($•(MW•h)−1)
∆Q: net control area dispatch deviation (MW)
η: short-term (i.e., five-minute) elasticity of demand (unitless)
ρD: demand regulation participation factor (unitless)
ρS: supply regulation participation factor (unitless)