Extracting Information from Financial Market Instruments

Abstract

Financial market prices contain information about market expectations for economic variables, such as inflation or the cash rate, that are of interest to policymakers. This article describes four financial market instruments that are particularly useful for this, and documents how market expectations and other useful information can be derived from them. In particular, it describes how overnight indexed swap rates and government bond yields can be used to estimate a zero-coupon yield curve and infer market expectations for risk-free interest rates, and how inflation swap rates and inflation-indexed government bond yields can be used to infer market expectations for the inflation rate.

Introduction

Financial market data are often used to extract information of interest to policymakers, such as market expectations for economic variables. The prices of interest rate securities are particularly useful for obtaining information about expectations of future risk-free interest rates and future inflation rates, as well as for estimating risk-free zero-coupon yield curves.

The first part of this article discusses how data from the overnight indexed swap (OIS) market and the government bond market can be used to estimate risk-free zero-coupon yield curves and obtain information about market expectations of the path of risk-free rates. OIS contracts directly reference the cash rate, making it relatively easy to extract market expectations from them, but they are only liquid out to around one year in maturity. To obtain estimates of zero-coupon risk-free interest rates beyond one year, models can be used to estimate a zero-coupon yield or forward curve from the yields on Commonwealth Government securities (CGS). The yield curve gives the interest rate agreed today for borrowing until a date in the future, while the forward curve gives the interest rate agreed today for overnight borrowing at a date in the future. The forward curve can be used as an indicator of the path of expected future cash rates, but importantly it becomes less reliable as the tenor lengthens because of the existence of various risk premia, for example term premia. No attempt is made in this article to adjust for these risk premia and so they will affect the estimated zero-coupon curves.[1]

The second part of this article discusses how data from inflation swaps and the inflation-indexed Treasury capital indexed bond (CIB) market can be used to obtain estimates of inflation expectations. Conceptually, inflation swaps can be used in a similar way to OIS contracts, and CIBs can be used in a similar way to CGS, to extract information on expected inflation. In practice, inflation swaps tend to be the more useful source of information as there are very few inflation-indexed bonds on issue and the CIB market is somewhat less liquid than CGS. Inflation swaps are also traded at a larger number of tenors and have maturities extending from 1 to 30 years. Again risk premia, including liquidity and term premia, are present in the CIB and inflation swap markets, and so will affect the estimates.

Extracting Information on Cash Rate Expectations

Overnight indexed swaps are frequently traded derivative instruments where one party pays another a fixed interest rate on some notional amount in exchange for receiving the average cash rate on the notional amount over the term of the swap. The cash rate is the rate on unsecured loans in the overnight interbank market, which is the Reserve Bank’s (RBA) operational target for monetary policy. Banks and other market participants use trades in OIS to manage their exposure to interest rate risk. For example, a market participant expecting a reduction in the cash rate may choose to trade on this expectation by entering an OIS contract where they receive a fixed rate and pay the actual cash rate over the period of the swap; a party with a lower expectation of a reduction in the cash rate may enter the opposite transaction. OIS rates therefore provide direct information on market expectations of monetary policy.

The OIS market has grown considerably since its inception in 1999. As at June 2011 there were $3.2 trillion of OIS contracts outstanding, and turnover in the year to June 2011 was around $6.6 trillion (Graph 1). Since OIS rates reflect the return from investing cash overnight over the term of the swap, and there is only an exchange of interest – not notional principal amounts – these transactions involve very little term or counterparty credit risk. An important point, however, is that these risks in OIS are not zero, as is often assumed, and are likely to increase, along with the associated risk premia, in times of stress.[2] Generally though, OIS rates tend to be lower and less volatile than other money market rates of similar maturity. For example, bank bill futures contracts, which reference the 90-day bank bill swap (BBSW) reference rate, are liquid but are less useful for extracting unbiased cash rate expectations because they incorporate a greater degree of credit risk which can change, and has changed, over time.

Graph 1: OIS Outstanding

OIS contracts trade for relatively short terms, generally of less than one year. Of the total amount of OIS contracts outstanding in June 2011, around 40 per cent was for contracts with a term of less than 3 months, 26 per cent was for contracts with terms of between 3 and 6 months and 33 per cent was for terms of between 6 and 12 months (Graph 2).

Graph 2: OIS Outstanding by Tenor

OIS have advantages over the 30-day interbank cash rate futures contracts trading on the ASX. These contracts are similar in concept to OIS, but they are exchange-traded and have fixed maturity dates as opposed to fixed tenors. Also, less trading occurs in these contracts than in OIS, especially for contracts of over three months. The relatively high level of liquidity that usually exists in OIS markets means that they are typically quoted with small bid-offer spreads, which helps users to derive more accurate measures of market expectations of the cash rate. Another theoretical advantage of OIS is that, being a derivative instrument, the supply of OIS contracts is not fixed; supply factors can influence the pricing of physical securities, such as bank bills and certificates of deposit.

The use of the OIS market to gauge cash rate expectations does, however, present some challenges. OIS rates can sometimes be distorted by a lack of liquidity as well as positioning from market participants, for example those wishing to trade on the basis of views about the likelihood of large and unexpected ‘tail events’ adversely affecting economic conditions. They also incorporate some term and counterparty credit risk as discussed earlier. These distorting factors are more likely to be relevant during times of heightened uncertainty about the economic and financial outlook, as has been the case recently.

OIS rates nonetheless provide a useful and simple source of data for estimating cash rate expectations out to one year. If, for example, the fixed rate in an OIS is trading below the current cash rate, this would indicate that, on average, market participants are expecting the RBA to ease monetary policy over the term of the swap. By comparing the fixed rates for swaps of different maturities, it is possible to assess both the magnitude of the expected change in the cash rate and the timing of these changes. As a simplified example, assume that the day before an RBA Board meeting:

  • the current cash rate is 4.25 per cent;
  • the 30-day OIS rate (i.e. the fixed rate) is 4.00 per cent; and
  • the 60-day OIS rate is 3.875 per cent.

The 30-day OIS rate of 4.00 per cent suggests that market participants are, on balance, expecting the cash rate over the next 30 days to average that rate. If for the sake of simplicity it is assumed that the Board will only move the cash rate in 25 basis point increments – although the market can, of course, price in larger adjustments – then it follows that financial market participants expect the RBA to cut the cash rate by 25 basis points at the next day’s Board meeting.[3] Comparing the 30-day and 60-day OIS rates also indicates what markets are expecting to happen to the cash rate at the subsequent RBA meeting. If the market is expecting that the cash rate will average 4.00 per cent for the next 30 days and 3.875 per cent for the next 60 days, then the market must be expecting the cash rate during the second 30-day period to average 3.75 per cent (that is, (4.00 + 3.75) / 2 = 3.875).
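The arithmetic in this example can be written out directly. A minimal sketch, using the illustrative rates above rather than market data (the simple-average approximation ignores day counts and compounding):

```python
# Back out the cash rate implied for the second 30-day period from the
# 30-day and 60-day OIS rates quoted in the example above.

def implied_second_period_rate(ois_30d: float, ois_60d: float) -> float:
    """Expected average cash rate over days 31-60, in per cent.

    Approximates the 60-day rate as the simple average of the rates
    over the two 30-day sub-periods, as in the text.
    """
    return 2 * ois_60d - ois_30d


if __name__ == "__main__":
    print(implied_second_period_rate(ois_30d=4.00, ois_60d=3.875))  # 3.75
```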

Market expectations of the cash rate can vary substantially over time. At the time of writing this article, expectations of the cash rate for the middle of 2012 were around 4 per cent, up from around 3 per cent late last year when concerns stemming from the European sovereign debt crisis weighed heavily on sentiment about the economic outlook (Graph 3).

Graph 3: Forward Cash Rates

While OIS rates provide information about the short end of the yield curve, they are less useful for the longer end, as they cease to be regularly traded for maturities beyond around one year. At longer maturities, the natural risk-free interest rates to consider are those on CGS (other ‘risk-free’ bonds exist, such as government-guaranteed bank bonds, but such bonds typically trade with a significant liquidity premium relative to CGS so they are not considered here). There are currently 18 CGS lines on issue, with remaining terms to maturity ranging from less than 1 year to a little over 15 years.

There are a number of factors to consider when using CGS yields to calculate longer-term risk-free interest rates. First, investors in a 10-year bond with coupons receive a cash payment not only in 10 years time, when the bond matures, but every 6 months leading up to maturity. This in turn means that the interest rate associated with the bond – the yield to maturity – is not the risk-free interest rate for borrowing for 10 years, but rather a combination of the 10-year interest rate, which applies to the principal payment, as well as the various interest rates applying to the coupons paid over the life of the bond. Second, the limited number of CGS on issue also means that one can only look at interest rates to certain dates in the future. Estimating zero-coupon yield and forward curves resolves these problems: the impact of coupons on bond prices is explicitly modelled and removed, and the estimated curves allow the gaps in between bond maturities to be ‘filled in’.
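As a stylised illustration of the first point, the sketch below prices a hypothetical coupon bond off an assumed set of zero-coupon rates; the single yield to maturity implied by that price blends all of the zero-coupon rates used, so it is not the pure long-horizon risk-free rate (all figures are invented for illustration):

```python
# Price a hypothetical two-year bond paying semi-annual coupons off an
# assumed zero-coupon curve. The bond's yield to maturity is a blend of
# all four zero-coupon rates, not the two-year rate alone.

def bond_price(face, annual_coupon_rate, zero_rates):
    """zero_rates[i] is the annualised zero-coupon rate (as a decimal)
    for the cash flow at the end of semi-annual period i+1."""
    coupon = face * annual_coupon_rate / 2
    price = 0.0
    for period, zero in enumerate(zero_rates, start=1):
        years = period / 2
        cash_flow = coupon + (face if period == len(zero_rates) else 0.0)
        price += cash_flow / (1 + zero) ** years
    return price


if __name__ == "__main__":
    zeros = [0.040, 0.042, 0.044, 0.046]      # hypothetical zero-coupon rates
    print(round(bond_price(100.0, 0.05, zeros), 2))
```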

Details of the estimation method are provided in Appendix A. In terms of data, prior to 2001 Treasury notes with maturities extending up to one year into the future are used, and from 2001 onwards OIS rates with maturities extending up to one year are used (the OIS market became liquid enough to provide reliable pricing around this time, while Treasury notes were not issued between mid 2002 and early 2009). CGS yields are used for maturities greater than 18 months into the future (bonds with short maturities can be relatively illiquid in comparison with longer-dated CGS).

As such, the yield curves that are estimated combine data from both the OIS and CGS markets, with the implicit assumption that the interest rates attached to all instruments in both markets are largely free of credit and liquidity risk premia, and therefore comparable. To the extent that this does not hold, it will flow through to the estimated curves. The existence of term premia, being the extra compensation demanded for investing for a longer period of time, is another complicating factor. Again no attempt is made to account for term premia and so any term premia in OIS rates or bond prices will be incorporated in the estimated curves.

Notwithstanding these caveats, estimated zero-coupon forward, yield and discount curves as at 21 February 2012 are given in Graph 4. The discount curve gives the value today of receiving one dollar in the future; it starts at one (one dollar today is worth one dollar) and slopes down (one dollar today is worth more than one dollar in the future). Although the discount curve looks linear at this scale, it is not. The forward and yield curves start at the prevailing cash rate. As discussed earlier, abstracting from the existence of risk premia, the forward rate can be read as giving a rough indication of the market-implied expectation for the cash rate. On this basis, as at 21 February 2012, OIS rates and CGS prices implied that market participants expected the cash rate to fall over the year ahead before rising again over subsequent years. The yield curve is essentially an average of the forward curve and so looks broadly similar to, but is generally smoother than, the forward curve.
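The relationships between the three curves can be made concrete with a small sketch. Assuming continuous compounding and made-up discount factors (not the estimated RBA curves), zero-coupon yields and forward rates follow directly from the discount curve:

```python
import math

# Recover zero-coupon yields and forward rates from discount factors,
# using continuous compounding: d(t) = exp(-y(t) * t).

def zero_yield(discount_factor: float, t: float) -> float:
    return -math.log(discount_factor) / t


def forward_rate(d1: float, d2: float, t1: float, t2: float) -> float:
    """Rate agreed today for borrowing between times t1 and t2 (t2 > t1)."""
    return (math.log(d1) - math.log(d2)) / (t2 - t1)


if __name__ == "__main__":
    d_1y, d_2y = 0.9600, 0.9180               # hypothetical discount factors
    print(round(zero_yield(d_1y, 1.0), 4))    # 1-year zero-coupon yield
    print(round(zero_yield(d_2y, 2.0), 4))    # 2-year zero-coupon yield
    print(round(forward_rate(d_1y, d_2y, 1.0, 2.0), 4))  # 1-year rate, 1 year forward
```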

Graph 4: Zero-coupon Curves

Graph 5 provides a longer perspective on the data, showing zero-coupon forwards since 1993 at the 1-, 3- and 5-year horizons. These discount, yield and forward curves are available to the public on the RBA website.

Graph 5: Zero-coupon Forwards

Zero-coupon discount, yield and forward curves can be used in a number of applications. A common way to use this kind of data is as an input for discounting future cash flows, be they cash flows from real assets such as toll roads or power stations, or cash flows from financial assets such as shares or bonds. This discounting essentially assigns a current dollar value to future payments or receipts and is most easily achieved using a discount curve, although to discount risky cash flows a discount curve that incorporates an appropriate risk premium should be used.
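A minimal sketch of this discounting step, using a hypothetical discount curve and cash-flow schedule (a risky cash flow would require a curve incorporating an appropriate risk premium, as noted above):

```python
import numpy as np

# Present value of a cash-flow schedule, discounting each payment with a
# discount factor read off (here, linearly interpolated from) a discount curve.

def present_value(payment_times, cash_flows, curve_times, curve_discounts):
    discount_factors = np.interp(payment_times, curve_times, curve_discounts)
    return float(np.dot(cash_flows, discount_factors))


if __name__ == "__main__":
    curve_times = [0.0, 1.0, 2.0, 3.0]
    curve_discounts = [1.00, 0.96, 0.92, 0.88]   # hypothetical discount curve
    payment_times = [0.5, 1.5, 2.5]              # years until each payment
    cash_flows = [100.0, 100.0, 100.0]
    print(round(present_value(payment_times, cash_flows, curve_times, curve_discounts), 2))
```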

Zero-coupon yield curves are also useful for analysing the government bond market itself; for example, the deviation of traded bond prices from prices implied by the fitted zero-coupon yield curve (that is, the pricing error made in fitting the model) may indicate that certain bonds are cheap or dear relative to other bonds with similar maturities.

Another use is in economic modelling. Economists are interested in the interaction of financial markets and the real economy, including the effect that interest rates have on the real economy. To study these relationships zero-coupon yields should be used, not yields to maturity (see, for example, Spencer and Liu (2010) for a recent study of economic and financial linkages).

There is also a large literature on the estimation of the term premia present in government bonds. This literature attempts to decompose zero-coupon yields into pure cash rate expectations and a term premium component, and thereby derive better estimates of expectations (this article does not attempt to adjust for term premia). Term premia are also of interest in their own right, as they give an indication of the excess return an investor can expect from investing for a longer time period. Term premia estimation requires zero-coupon yields as its basic input (see, for example, Duffee (2002) for a US study on term premia, or Finlay and Chambers (2008) for an Australian study).

Extracting Information on Inflation Expectations

Reliable and accurate estimates of inflation expectations are important to central banks given the role of these expectations in influencing future inflation and economic activity. These expectations are also important for organisations that manage inflation-linked assets or liabilities. Although surveys provide some guidance on the expected path of inflation, inflation-linked securities have the advantage of providing more timely and frequently updated information on market expectations of inflation.

A widely used market-based measure of inflation expectations is a break-even inflation (BEI) rate calculated as the difference between the yields of nominal CGS and CIBs.[4] The current BEI rate at the 10-year horizon is around 2¾ per cent, suggesting that the market expects average inflation over the next 10 years to be within the RBA’s 2–3 per cent inflation target (Graph 6). For shorter maturities, markets currently expect inflation to be closer to 2½ per cent.
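A break-even rate of this kind follows from the Fisher relation between nominal and real yields. The sketch below uses illustrative yields, not actual CGS or CIB quotes:

```python
# Break-even inflation from a nominal yield and an inflation-indexed (real)
# yield via the Fisher relation: (1 + nominal) = (1 + real) * (1 + BEI).

def break_even_inflation(nominal_yield: float, real_yield: float) -> float:
    """Yields as decimals, e.g. 0.041 for 4.1 per cent."""
    return (1 + nominal_yield) / (1 + real_yield) - 1


if __name__ == "__main__":
    nominal_10y = 0.0410   # hypothetical 10-year CGS yield
    real_10y = 0.0130      # hypothetical 10-year CIB yield
    print(round(break_even_inflation(nominal_10y, real_10y), 4))  # roughly 0.0276
```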

Graph 6: Break-even Inflation Rate

One limitation with using the bond market to gauge inflation expectations is the small number of CIBs on issue; there are only five bonds currently on issue, with maturities around every five years from 2015 to 2030. In comparison, there are 18 CGS lines on issue with maturities spanning 2012 to 2027. Hence, the bond market offers a limited number of pricing points from which to extract measures of inflation expectations for a broad range of tenors. This lack of pricing points also makes it more difficult to derive forward measures of expected inflation, which measure expectations of inflation at some point in the future.[5]

In addition, there are maturity mismatches between CGS and CIBs. For example, the current 10-year CGS matures in July 2022 whereas the closest CIB matures in February 2022. As a result, a 10-year BEI rate must be derived by interpolation. Further adjustments must also be made to account for compounding effects on yields since CGS pay semi-annual coupons while CIBs pay quarterly coupons.
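A rough sketch of these two adjustments, interpolating across a maturity mismatch and putting semi-annually and quarterly compounded yields onto a common effective-annual basis before taking the break-even spread, is given below; all yields and maturities are invented for illustration:

```python
import numpy as np

# Two adjustments mentioned above: (i) interpolate nominal yields across a
# maturity mismatch to a 10-year point, and (ii) convert semi-annually and
# quarterly compounded yields to an effective annual basis before taking
# the break-even spread. All inputs are hypothetical.

def effective_annual(rate: float, periods_per_year: int) -> float:
    return (1 + rate / periods_per_year) ** periods_per_year - 1


if __name__ == "__main__":
    # Hypothetical CGS maturing either side of the 10-year point
    cgs_maturities = [9.4, 10.4]                # years to maturity
    cgs_yields = [0.0405, 0.0415]               # semi-annual compounding
    nominal_10y_semi = float(np.interp(10.0, cgs_maturities, cgs_yields))

    cib_10y_quarterly = 0.0130                  # hypothetical CIB yield

    nominal_eff = effective_annual(nominal_10y_semi, 2)
    real_eff = effective_annual(cib_10y_quarterly, 4)

    bei_10y = (1 + nominal_eff) / (1 + real_eff) - 1
    print(round(bei_10y, 4))
```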

However, the most serious shortcoming of the BEI rate derived from bonds is that it captures investors’ liquidity preferences for different types of bonds. With the outstanding stock of CIBs only around one-thirteenth that of CGS, CIBs can be less liquid than CGS, and investors who wish to hold highly liquid assets will have a stronger preference for CGS. This liquidity preference effect can be very pronounced during periods of heightened uncertainty such as in 2008, when ‘flight-to-safety’ bids put significant downward pressure on nominal bond yields (as noted earlier, any such distortion will also be incorporated in the estimated nominal zero-coupon curves) (Graph 7). More broadly, with CGS yields trading with a liquidity premium relative to CIBs, BEI rates can be artificially compressed and so give a distorted measure of inflation expectations. The low BEI rates in 2008 and 2009 were not all driven by liquidity effects, however, since the financial crisis had led market participants to become more pessimistic about future economic conditions.

Graph 7: Break-even Inflation Rate

Because of these limitations, inflation swaps have become an increasingly popular alternative source of information on inflation expectations. Their key advantage is that they provide direct and readily available measures of inflation expectations with no need for interpolation, since swaps are traded at the main tenors of interest, such as 3, 5 and 10 years. Also, as derivatives, the supply of inflation swaps is not constrained, meaning that, in theory, inflation swap rates are generally not distorted by liquidity preference effects.

An inflation swap is a transaction whereby the inflation payer pays the actual inflation rate in exchange for receiving a fixed payment (Figure 1). The actual inflation payment is based on the most recently available quarterly consumer price index at the maturity of the swap. The fixed payment approximates the expected value of inflation over the term of the swap and is analogous to the BEI rate derived from bond prices. In this sense, inflation swaps operate in a similar fashion to OIS contracts, but with a different reference rate (CPI inflation instead of the overnight cash rate) and longer terms to maturity. Fixed rates for inflation swaps are readily available for terms out to 30 years.

Figure 1: Example of Cash Flows of a Zero-coupon Inflation Swap

The most common form of inflation swap in the market is the zero-coupon inflation swap. Here only one cash payment is made at the maturity of the swap, representing the difference between the fixed rate and actual inflation over the term of the swap. This means that counterparty credit risk is minimal and inflation swap rates are not affected by periodic coupon payments. Zero-coupon inflation swaps have become more popular over recent years, especially between 2003 and 2009 when CIB issuance ceased.
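A minimal sketch of the single payment exchanged at maturity of a zero-coupon inflation swap, from the perspective of the party paying the fixed rate and receiving actual inflation; the annual compounding of the fixed leg and all inputs below are assumptions for illustration:

```python
# Net payment at the maturity of a zero-coupon inflation swap, from the
# point of view of the fixed-rate payer (who receives actual inflation).
# The fixed leg is assumed to compound annually; all inputs are hypothetical.

def zc_inflation_swap_payoff(notional, fixed_rate, cpi_start, cpi_end, years):
    """Positive result: the fixed-rate payer receives the net payment."""
    inflation_leg = notional * (cpi_end / cpi_start - 1)
    fixed_leg = notional * ((1 + fixed_rate) ** years - 1)
    return inflation_leg - fixed_leg


if __name__ == "__main__":
    print(round(zc_inflation_swap_payoff(
        notional=10_000_000,
        fixed_rate=0.025,     # 2.5 per cent fixed rate
        cpi_start=100.0,      # reference CPI at the start of the swap
        cpi_end=114.0,        # most recently available CPI at maturity
        years=5), 2))
```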

In terms of hedging flows, the main receivers of inflation in the inflation swap market are pension funds that use swaps to match their long-term inflation-linked liabilities. Liability matching has had a significant impact on making the inflation swap market in Australia a more recognised alternative to inflation-indexed bonds. Demand to pay inflation in swaps (and receive a fixed rate) mainly stems from infrastructure project providers that want to hedge their inflation-linked assets or revenue streams. This can be done by issuing a nominal bond and entering into an inflation swap with an investment bank. This has boosted the size of the inflation swap market, which is an over-the-counter market where intermediaries such as prime brokers play an important market-making role.

Investors can also trade inflation swaps based on their views about future inflation. For example, if an investor expects a higher rate of inflation than that implied by the fixed rate of a swap, the investor would enter a swap contract, receive actual inflation and pay the fixed rate. This is achieved through a single transaction instead of separate trades in nominal and inflation-indexed bonds, which bear funding costs and suffer from maturity mismatches. Inflation swaps are also used in conjunction with nominal bonds to replicate an inflation-indexed bond. This allows investors to overcome bond maturity mismatches as well as any potential shortage of inflation-indexed bonds.

Despite the recent growth in inflation swaps, the market remains small compared with those for other derivatives such as interest rate swaps. There are no official data to measure the total size and activity levels in the inflation swap market accurately, although a survey by the Australian Financial Markets Association (AFMA) estimated that as at May 2011 there were $24 billion of inflation swaps outstanding, and turnover over the year to June 2011 was $11.6 billion (AFMA 2011).

Since 2008, measures of implied inflation captured by 3-, 5- and 10-year inflation swaps have ranged between 1¼ per cent and 4 per cent (Graph 8). Mimicking the pattern observed for the BEI rate from the bond market, inflation swap rates over 2008 also fell to low levels, suggesting that market participants were moderating their inflation expectations. Over recent years, however, these inflation expectations have reverted to around 2–3 per cent.

Graph 8: Inflation Swap Rates

Since inflation swap rates are zero-coupon, it is simple to use the framework in the previous section to derive forward inflation rates, which measure expectations of inflation at some point in the future (Graph 9). Forward inflation rates derived from swaps at the 3-, 5- and 10-year horizons have also fluctuated in a wide range over recent years; as these forward rates represent expected inflation at a point in the future, they are generally more volatile than the (zero-coupon yield) measures shown in Graph 8, which represent expected inflation over a period up until a point in the future. Overall, current forward measures of inflation are also around 2 to 3 per cent, albeit slightly above 3 per cent at the 10-year horizon.
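Because the swap rates are zero-coupon, the forward calculation is the same compounding identity used for interest rates. A minimal sketch with illustrative 5- and 10-year swap rates:

```python
# Forward inflation rate implied by two zero-coupon inflation swap rates,
# e.g. the five-year rate starting five years ahead ('5y5y') from the
# 5- and 10-year swap rates. Rates below are illustrative.

def forward_inflation(z_short, z_long, t_short, t_long):
    growth = (1 + z_long) ** t_long / (1 + z_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1


if __name__ == "__main__":
    z_5y, z_10y = 0.0260, 0.0290
    print(round(forward_inflation(z_5y, z_10y, 5, 10), 4))  # about 0.032
```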

Graph 9: Forward Inflation Swap Rates

Inflation expectations in the swap market broadly track the BEI rate in the bond market, but current 5- and 10-year measures appear to show that inflation expectations in the swap market are somewhat higher than those in the bond market; over the first half of 2009 the divergence of the swap market from the bond market was even greater, with inflation swap rates being up to 50–70 basis points higher than BEI rates implied by bonds (Graph 10). One reason for this lower BEI rate from the bond market is the liquidity preference effect discussed earlier. This effect was particularly pronounced over the first quarter of 2009 when inflation swap rates normalised faster in the aftermath of the financial crisis than bond yields, which retained a large liquidity premium.

Graph 10: Break-even Inflation from Bond and Swap Pricing

Another reason swap rates could be higher relates to hedging. Intermediaries in the swap market, who play an important market-making role, sometimes hedge their positions in the inflation-indexed bond market. This market can be relatively less liquid and compensation for this hedging risk may bias up inflation swap rates.

Term premia also tend to cause structurally higher inflation swap rates because the fixed-rate payer will demand compensation for the inherent uncertainty about the expected amount of inflation over the term of the swap. This premium can change for a variety of reasons including an increase in uncertainty about the inflation rate or changes in investors’ inflation tolerance (term premia can also affect CIBs).

Conclusion

Financial markets provide a significant amount of information about expectations of the cash rate, risk-free rates and inflation. Extracting expectations from market measures is not always straightforward, however, and results should be viewed with some caution. Measures derived from the government bond market can contain liquidity preference effects that are particularly problematic in times of heightened uncertainty. Some measures, such as zero-coupon interest rates, are not directly observable and must be estimated from bond yields using a variety of assumptions. Nonetheless, as well as providing some information on risk-free rates, estimates of zero-coupon rates are useful in economic modelling, in estimating risk premia and for discounting cash flows. The RBA will be publishing a constructed series of zero-coupon yield, forward and discount curves on its website. While derivative instruments such as OIS and inflation swaps provide more straightforward measures of market expectations, and are regularly updated as these markets are actively traded, the prices of these instruments contain various risk premia, which tend to bias implied expectations.

Appendix A

There are a number of established methods for estimating zero-coupon curves, which all give broadly similar results (see, for example, Bolder and Gusba (2002)). The method used in this article – the Merrill Lynch Exponential Spline model – does not estimate the yield or forward curve directly, but instead estimates the discount curve, from which the zero-coupon yield and forward curves can be recovered.[6] The discount curve is modelled as a linear combination of a number of underlying curves, called basis functions, which are fixed functions of time. That is, it is assumed that the discount curve can be written as:

(A1)
d(t) = \sum_{j} a_j b_j(t)

where b_j(t) are basis functions, and a_j are the (to be estimated) coefficients that, when multiplied by the basis functions, give the discount curve. The price of a bond, which can be observed, is simply each cash flow (consisting of coupon payments and principal) multiplied by the appropriate discount curve value. For example, if the cash flows of a bond are denoted by c_t then the bond price, P, can be written as:

(A2)
P = \sum_{t} c_t d(t)

Taking the two equations above together, the cash flows c_t are known, and the basis functions b_j(t) are fixed functions of time, so the only unknowns are the coefficients attached to the basis functions, a_j. The same discount curve is used to price all bonds in the market, which allows the coefficients to be estimated. The model allows this estimation to be done within a standard regression framework, which is simple and fast (see Appendix A of Finlay and Chambers (2008) for further details).
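A stylised sketch of this regression step is given below. It uses simple polynomial basis functions in place of the exponential splines of the Merrill Lynch model, and three invented bonds, purely to show how equations (A1) and (A2) combine into a linear least-squares problem:

```python
import numpy as np

# Stylised version of the regression implied by equations (A1) and (A2):
# the discount curve is a linear combination of basis functions, and the
# coefficients a_j are estimated by least squares on observed bond prices.
# Simple polynomial basis functions stand in for the exponential splines of
# the Merrill Lynch model, and the bonds and prices below are invented.

N_BASIS = 3

def basis(t):
    """b_j(t) = t**j for j = 0, 1, 2."""
    return np.array([t ** j for j in range(N_BASIS)])


def price_row(cash_flow_times, cash_flows):
    """Regression row for one bond: sum over t of c_t * b_j(t), for each j."""
    return sum(c * basis(t) for c, t in zip(cash_flows, cash_flow_times))


if __name__ == "__main__":
    # (cash-flow times in years, cash flows), observed price
    bonds = [
        (([1.0], [104.0]), 99.5),
        (([1.0, 2.0], [4.0, 104.0]), 99.0),
        (([1.0, 2.0, 3.0], [4.0, 4.0, 104.0]), 98.0),
    ]
    X = np.array([price_row(times, flows) for (times, flows), _ in bonds])
    prices = np.array([price for _, price in bonds])

    coeffs, *_ = np.linalg.lstsq(X, prices, rcond=None)   # estimated a_j
    fitted_discount = lambda t: float(basis(t) @ coeffs)
    print([round(fitted_discount(t), 4) for t in (1.0, 2.0, 3.0)])
```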

 



The Median Voter: Fact or Fiction?
The History of a Theoretical Concept
Prepared for Presentation at the Annual Meeting of the Western Political Science Association
March 25-27, 1999
Robert G. Boatright
Department of Political Science
The University of Chicago
5828 S. University Avenue
Chicago, IL 60637

robb@polisci.spc.uchicago.edu

        To an extent that many political scientists are only dimly aware, the median voter theorem has infiltrated much of American political science. Even among those who do not work in the area of formal modelling, the predictions of candidate convergence and proximity voting govern much of both theoretical and empirical literature on electoral competition. This is not to say that we always find what we predict; instead, it is to say that we frequently look for these two occurrences, even if only to take note of our failure to find them.

Bernard Grofman notes of Anthony Downs’s An Economic Theory of Democracy, the first political science text to explicate the logic of spatial candidate competition, that

As a seminal work, An Economic Theory of Democracy suffers from the triple dangers of (1) being forever cited but rarely read, with its ideas so simplified as to be almost unrecognizable, (2) being regarded as outmoded or irrelevant, (3) having its central ideas so elaborated by ostensible refinements that what was good and sensible about the original gets lost amidst the subsequent encrustations (Grofman 1993: 3).

In this essay, I certainly do not dispute Grofman’s claims. Grofman’s words are contained in the introduction to an edited volume designed to reread Downs with an eye towards correcting wayward interpretations of his theory. In this essay, however, I seek to assess the very effects of the “calamities” of which Grofman speaks upon the study of political parties. Furthermore, I seek to clarify means by which lack of empirical support for Downs’s candidate convergence prediction can be used not to dismiss his claims but to second them.

In pursuing this exercise, it is necessary to treat the median voter theorem not as a mathematical proof but as a theory – as a theory which, despite the mathematical rigor that has been applied to explication of its various facets, should be considered on level ground with its predecessors. The median voter model should be read as a response to the “responsible parties” theory propounded by the 1950 American Political Science Association report and other normative theories of political party behavior dating back into the early years of the twentieth century. Downs’s work effectively put an end to such normative theorizing about what political parties should do; if it could be demonstrated that political parties would never take political scientists’ advice seriously, what was the point in offering advice at all?

Few have considered, however, means by which this debate might be re-addressed by the very tenets of the Downsian model. Downs and many of his successors have argued that disputation of the empirical predictions of his theory does not undermine the theory itself. They have claimed that to find that any of the theory’s predictions are not borne out brings into question the empirical support for one or more of the theory’s assumptions, but such a finding has no effect upon the internal validity of the theorem itself (Downs 1959). This seems a fair claim, but adherence to this claim has not stopped formal theorists from tinkering with various components of the model in order to prescribe variants or close relatives of the model which have greater empirical support than does the “pure” median voter model itself.

This type of activity, however, runs the risk of making the median voter model unfalsifiable. If we limit its application only to events in which it occurs, we have effectively established a theory with no empirical import at all. As Martin Diamond points out in his early review of Downs, a weakened median voter hypothesis is no model at all:

The revised “fundamental hypothesis” would have to read: Some politicians formulate policy only for the rewards of office and some do not, and which behavior is decisive is a matter for study each time, all of which would leave political science in the difficult but fascinating position it was in before economic models were offered in succor (Diamond 1959: 210).

Diamond’s claim might be read in two ways. The quantitative political scientist may read it as a statement that “the outliers are what is of most interest,” that Diamond’s claim is that if we cannot explain nonconvergence in a systematic way, the outliers – the candidates who do not adopt “rational” positions – will be the candidates who are of the most interest and have the most effect upon politics. A student of 1950s political and sociological theory – a student of Leo Strauss, for instance – might read Diamond’s claim as a broader statement that the scientific study of politics cannot explain political change or innovation. It is a claim that “rational” political behavior is uninteresting, and political “action” cannot be subsumed under theories of rationality (See Arendt 1958: 41-42).

Diamond’s argument also poses a tremendous obstacle to those who would seek to adapt Downs for the sake of empirical inquiry. We cannot merely say that some candidates behave in accordance with Downs’s precepts and some do not, nor can we say that Downs’s theory holds when the tenets of his theory can be shown to exist and it does not when such tenets do not hold. Instead, a theory of candidate convergence must demonstrate that there is a systematic logic to nonconvergence as well as to convergence – that we can predict when convergence will occur and when it will not occur without resorting to ex post facto analysis.

I recognize that such a task is a formidable one, and in this paper I do not purport to have discovered such a theory. Instead, I argue that the roots for such a theory may be located in one of the least explored of Downs’s assumptions – that of simultaneity in candidate positioning. Where candidates adopt positions sequentially, the logic of candidate competition and convergence is altered, but it is altered in ways that can be systematically identified and explained, and it can be amended in ways that can lead to accurate predictions of candidate divergence.

In order to arrive at this argument, I proceed in this paper first to restate the historical context of Downs’s theory, with particular attention to debates about responsible political parties and to debates about pluralism and the definition of political power. Second, I briefly note the fundamental assumptions of the median voter theory, the level of empirical support for these assumptions, and the refinements or revisions to empirical findings which formal modelers have undertaken in order to adapt economic modeling to better testing. Third, I discuss the lack of attention which has been paid to the simultaneity assumption and ways in which discarding or limiting this assumption re-opens many of the theoretical and normative debates which Downs’s theory closed. I do not seek to provide a formal theory myself because I believe that the results of a sequentiality assumption should and can be stated, at least for the purposes of this essay, without the “encrustations” of which Grofman speaks.

The Historical Context: Closing a Debate

        The study of political parties is at least as old as the discipline of political science in America. In the late nineteenth and early twentieth century, Woodrow Wilson, A. Lawrence Lowell, Henry Jones Ford, and others debated how best to conceive of political parties’ function and membership. Ford (1914: 295-296) argued that parties were somewhat democratic organizations, oligarchically controlled but with the tacit support of the voters. In this period, only the Russian political scientist Moisei Ostrogorski (1902) confined party membership to those actually employed by the party. Ostrogorski’s work appears to have been relegated largely to the fringes of this debate at the time, although it was rediscovered in the 1950s and is now frequently cited.

This discussion of parties was, as was much of contemporaneous political science, highly normative. It revolved around the question of how political parties should behave, and it was taken – especially in the case of Wilson – as prescription for how parties should behave and who should control them. It raised, however, a somewhat more empirical question which has persisted – is democracy best served when parties strive to appear identical, or is the practice of democracy restricted by party similarities, insofar as voters are given no real choice between platforms?

By the 1950s, several leading political scientists had concluded that Ostrogorski was correct – that voters and parties were best conceived of as two distinct entities. V. O. Key (1958: 378-380) conceived of parties in three parts – voters who supported and identified with the party, the party organization, and those members of the party who held governmental office. E. E. Schattschneider (1942: 35-64) argued that democracy existed between parties, but not within parties; party “membership” was a facade. Parties nonetheless had a duty to “frame political questions” for consumption, and were thus driven by forces of the political “market” to create a product that reflects public opinion, even without the direct input of the public in framing the issues.

Oddly, Schattschneider’s introduction of the market metaphor did not stop him from chairing the American Political Science Association working group which produced Toward a More Responsible Two-Party System, one of the few direct political statements published under the imprimatur of the American Political Science Association. This report, published in 1950, called for the parties to present coherent, yet divergent, packages of policy proposals to the public. The public could then make an informed choice about the direction in which it wished American public policy to go. Furthermore, it called upon parties to design long-range plans that would “cope with the great problems of modern government.” In a 1992 retrospective on Schattschneider’s work, John Kenneth White cites several leading political scientists of later decades who attested to the report’s status as the most significant work in the area of political parties of its time. The report also played a role in reviving interest in earlier debates on political parties. Austin Ranney’s summary of the views of early twentieth century theorists of political parties appeared soon afterwards (Ranney 1954).

To a large extent, Downs’s An Economic Theory of Democracy, published only seven years later, put an end to this normative debate. If the APSA report was formulated in response to a perceived crisis in party government, Downs’s work seems to have arisen from no such concern. Downs seems blissfully unaware of, or uninterested in, the “responsible parties” debate. His bibliography does include Key, but he makes no reference to Schattschneider, the APSA report, or any of the report’s antecedents. If we are to trust his recollection of the development of his project (Downs 1993), An Economic Theory of Democracy was written very rapidly, and it was inspired more by his own personal political experiences and his encounters as an economics graduate student with Schumpeter’s analysis of party competition than it was by current trends in political science.

Downs’s work exposes, however, the inconsistency of pairing a market theory of political parties with normative calls for the parties to espouse contrasting viewpoints and to design long-range plans for government. Employing Hotelling’s theory of economic competition, Downs demonstrated that a rational political party would, in two-party competition, seek out an ideological position in the middle of the electorate’s preference distribution. The two parties would then, under something approximating full information conditions, mimic each other, thus encouraging voters to make decisions not about policy, but about non-issue traits. The parties would, among other things, be ambiguous about their positions on controversial issues or avoid addressing such issues entirely; incorporate seemingly incompatible positions into their platforms; and seek to avoid long-run solutions to problems in order to maximize their present electoral fortunes. In such a scenario, there is complete separation of the voter and the party. The party operates as the producer of policy, and insofar as the two-party system functions in an oligarchical manner, the voter, or consumer, would have to take what was offered by the parties. Normative arguments such as those contained in the APSA report were rendered somewhat moot by this line of reasoning; the fault, if there was one, lay with the median voter himself, and no amount of exhortation by an elite cadre of political scientists would sway the parties from their vote-maximizing strategies.

The Downsian disputation of the APSA report’s tenets did not stop there, however. Riker, in recounting the differences between Downs and the APSA report, notes that “political science and political events have passed the adherents of ‘responsible parties’ by” (Riker 1982: 63). In Riker’s view, not only was the report wrong on empirical and logical grounds; it was wrong on normative or moral grounds:

Its implicit purpose was to sharpen the partisan division as it then existed and thus to ensure that the winners kept on winning. As the status quo was then in favor of the Democrats, the report should be regarded as a plan for a political system in which Democrats would always win and Republicans always lose. . . Although some people saw that the report was bad description, almost no one saw that it was profoundly immoral – a sad commentary on the state of the profession (Riker 1997: 191-192).

These are, perhaps, words only a political scientist could write; the call for political parties to differentiate themselves has largely disappeared from political science, but it is still common on newspaper editorial pages. A brief perusal I undertook shows editorialists as diverse as George Will, Barbara Ehrenreich, and E. J. Dionne lamenting the lack of difference between party platforms.

Responsible party theorists are conspicuously absent from the response which greeted Downs’s work. The most glowing review of An Economic Theory of Democracy was penned by Charles Lindblom, who had also been instrumental in securing a publisher for the book. Lindblom writes that

While economists have made the most of a seriously defective system, political scientists have permitted a kind of perfectionism to inhibit serious, explicit system-building. In talking with political scientists, I am often struck by their dissatisfaction with theoretical proposals that do not promise a rough fit to the phenomena to be explained, while economists have happily elaborated, to take an example, a theory of the firm that is still a caricature of the phenomena described (Lindblom 1958: 241).

While Lindblom hailed Downs for bringing into political science a model that was largely free of concern for empirical support, most reviews predictably dwelt upon the model’s fit with empirical data. Almond (1993) summarizes several of these reviews; with the exception of the above-quoted Diamond review, most voiced rather qualified support for Downs but expressed doubt that his theory would find much support in political phenomena. In a debate with W. Hayward Rogers, Downs responds to several questions Rogers raises about empirically testing his predictions by noting that lack of empirical support does not invalidate his model as a deductive proposal; instead, it indicates that one or more of the assumptions is not borne out in the population upon which the test is being conducted (Downs 1959; Rogers 1959). Johnson (19xx) reiterates this claim, disputing the notion that lack of empirical support dooms the model. After all, few of the tenets of responsible parties theory are even conducive to empirical tests.

The fact that Downs’s theory purports to be positive rather than normative did at least shift the debate over political parties to his own turf. As Rabinowitz and MacDonald (1989) note, the most evident example of this is the introduction of scaling questions about political candidates on the National Election Study.

Downs’s work bears an uneasy relationship, however, to one dominant strain of contemporaneous political science. He adopts numerous tenets of pluralism. Most notably, he directly cites two statements of Dahl and Lindblom regarding both descriptive and normative issues. In setting out definitions early in the book, he explicitly borrows Dahl and Lindblom’s definition of “governments” as

organizations that have a sufficient monopoly of control to enforce an orderly settlement of disputes with other organizations in the area. . . Whoever controls government usually has the “last word” on a question (Downs 1957: 22, citing Dahl and Lindblom 1953: 42).

Later, Downs notes that democratic control over government, a normative precept, can be tested in his model. He approvingly cites Dahl and Lindblom’s further definition of “political equality” as a circumstance in which

Control over governmental decisions is shared so that the preferences of no one citizen are weighted more heavily than the preferences of any other one citizen (Downs 1957: 32, citing Dahl and Lindblom 1953: 41).

At the time Downs was writing, however, the task of pluralists, to identify and define political power, was also being brought into question. In the economic model, the relationship between the parties is relatively simple – one party has power, the other wants it. Bachrach and Baratz (1962) propose a somewhat more complicated version of power. In a representative government, the exertion of power is manifested in the establishment of an agenda. In the pluralist approach, all popular grievances are recognized and acted upon, and all may thus participate to some degree in decision-making. According to Bachrach and Baratz, and as conceptualized later by Gaventa (1980), power may be exercised by the exclusion of some ideas from the political agenda entirely, and also by “influencing, shaping, or determining [one’s] very wants.” (Gaventa 1980: 12) By extension, the convergence of policy options presented to the voters has profound normative implications, insofar as the very preferences of voters are shaped by it. If this holds true, party convergence may not even be a result of parties catering to voters, but of a tacit collusion by parties in policies which will be offered to them.

Power theorists such as Bachrach and Baratz did not take on the normative implications of the median voter theorem directly. In taking issue with the pluralist definition of power, however, they were implicitly taking issue with the ability to draw any sort of normative inferences about the comparative normative status of party convergence or divergence. They were also, however, creating a significant measurement problem for pluralist theory. Baumgartner and Leech (1998: 60) note that in the wake of this debate,

the concept of power was not banished from political science, but scholars for the most part reacted by abandoning their interest in those questions. . . Scholars moved on to other fields that did not have at their core such a difficult concept.

Perhaps because the median voter theorem has so infrequently been the subject of normative debate, or because its conception of power is rarely considered by those who explore the ramifications of the model, this particular aspect of the model and the questions it raises have rarely been considered.

These three strains of political science, then – the developing field of formal models, responsible party theories, and pluralism – and the conflict between them created a context for Downs of which Downs himself may have been unaware. To a significant extent, debate about the median voter theorem has been about empirical accuracy; the other debates that preceded Downs have largely been left behind by political science. Those who have sought to develop Downs’s ideas further, or to present alterations of his model, may have sought to defend themselves against charges of being uninterested in empirical accuracy, but the major refinements of Downs have all taken as their starting point propositions which have greater empirical support than do those of Downs. Because of these efforts, however, it can be shown that altering any of Downs’s assumptions brings his entire model into question. And in doing so, many of the debates which his work appears to have closed off may be re-opened. In the next section, I examine the empirical roots of work that has tinkered with his model, and I illustrate ways in which these adjustments collectively work to re-open questions of party responsibility and of the exercise of power.

The Median Voter Model and its Refinements

        As articulated by Downs (1957: 114-141), the median voter model is a model of party, not candidate, competition. Party convergence is predicated upon seven claims about party and voter behavior:

1) A political party is a “team of men seeking to control the governing apparatus by gaining office in a duly constituted election.” (Downs 1957: 25) Each member within the party thus shares the same goals, and each member takes policy positions as a means towards gaining office.

2) Voters judge parties based upon the proximity of the parties on policy issues to the voters’ own preferred position. Voter preferences can be reduced to a unidimensional policy space. They are single-peaked and monotonically declining from the voter’s ideal point. Voters prefer the party closest to them, the party that maximizes their utility (or minimizes their disutility) in this function. Voter preferences are exogenous to the actions of parties.

3) All potential voters vote; there are no abstentions.

4) Parties are free to position themselves at any point along the preference distribution.(1)

5) Parties have full information regarding the distribution of voter preferences.

6) Parties choose positions simultaneously. One party cannot know ex ante where the other party will position itself, although following Assumption One, each party should presume the other to take positions rationally.

7) Party utilities are defined by the number of votes they receive; parties are vote maximizers.

Given these seven assumptions, the result in a two-party election will be convergence at the median of the distribution of voter preferences.
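A small simulation illustrates the result under these assumptions (proximity voting on one dimension, no abstention). The voter distribution and rival positions below are invented for illustration:

```python
import random

# Two-candidate, one-dimensional proximity voting: every voter supports the
# nearer candidate, so the candidate located at the median ideal point wins
# (or ties) against any rival position. Voter ideal points are randomly
# generated purely for illustration.

def vote_share(pos_a, pos_b, voters):
    """Share of voters closer to candidate A (exact ties split evenly)."""
    score = 0.0
    for v in voters:
        if abs(v - pos_a) < abs(v - pos_b):
            score += 1.0
        elif abs(v - pos_a) == abs(v - pos_b):
            score += 0.5
    return score / len(voters)


if __name__ == "__main__":
    random.seed(0)
    voters = sorted(random.gauss(0.0, 1.0) for _ in range(1001))
    median_position = voters[len(voters) // 2]

    for rival in (-1.5, -0.5, 0.2, 1.0):
        share = vote_share(median_position, rival, voters)
        print(rival, round(share, 3), share >= 0.5)
```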

Throughout both these assumptions and those refinements or alterations that follow, only three basic variables are in play: information about voter preferences or other candidates’ strategies; expected or potential outcomes of a given pairing of party positions; and the location of candidate issue positions themselves. These definitions themselves have been relatively uncontroversial in work that has followed Downs, but the assumptions outlined above have been disputed and altered. Empirical questions about each of the above assumptions have preceded theoretical work on the effects of each alternate assumption.
Assumption One: The Composition and Function of Parties

Assumption One was among the first tenets of the median voter model to be questioned. Most studies have shown that, at least in the American case, parties are not unified teams (see, for instance, Mayhew 1986). In addition, geographical representation and the heterogeneity of the American electorate would give the lie to the notion that a unified party platform would be in the interest of vote-maximizing politicians. It thus seems inconsistent for Downs to describe parties as unified “teams” yet also to posit that their members are election-oriented.

At first glance, this might seem to be merely a small terminological problem. If we substitute candidate competition for party competition and if we then use the median voter model to study only individual elections we can proceed through the remainder of the model. Downs himself notes that the presumption of a unitary actor is necessary to avoid messy discussions of intra-party conflict; that is, he does not deny that intra-party dissension over policy exists, but it is not a concern of his model. Spatial models that have followed Downs’ assumptions rather faithfully have either referred solely to candidates rather than parties (see Shepsle 1972) or have discussed both without inconsistency of results (Page 1978).

The candidate/party distinction has not been easily finessed by others, however. As Schlesinger (1975, 1994) points out, the Downsian party is composed solely of office-holders and office-seekers. It is only one wing of Key’s (1958) tripartite division of the party in office, the party organization, and the party in the electorate. Downs’s parties emphatically do not include the electorate. This exclusion is necessary to maintain the relationship of parties as producers to voters as consumers. Voters exert a discipline upon parties by making their preferences known and choosing among two products, but they are unable to act in concert to allow themselves differentiated products.

In addition, voters are not presumed by Downs to be motivated by the same concerns as are politicians. Downs assumes that all voters vote sincerely; that is, they vote for the party whose policies they most prefer, and their benefit derives from seeing these policies enacted, not from the spoils of holding office. Voters have far less to gain from having their preferred party hold office than does the party itself.

Both prominent critics of Downs and proponents of alternate models have questioned the empirical applicability of this distinction between the preferences of voters and those of the Downsian party. Riker (1963) and Riker and Ordeshook (1968) have proposed models in which parties divide the benefits of office amongst themselves – in which the positions taken by parties are not positions of ideology, but positions regarding the optimal division of benefits amongst those within the party. Similarly, Aldrich (1995) and Aldrich and Rohde (1997) propose a “conditional party government” model in which party members collude in order to divide all benefits amongst themselves at the expense of the opposing party. Neither of these theories explicitly includes voters within the party, but they can, as Schlesinger notes, be read as attempts to include voters within the party. They are, he claims, “shareholder” models in which the voters have a stake in the party’s fortunes.

This framework, in which individual benefits – slices of a distributional pie – are the goal of voters rather than satisfaction of ideological preferences, does not necessarily yield different results than does the median voter model. An optimal strategy for parties is still to take the position which spreads benefits to a bare majority of voters. That is, if voters are arrayed unidimensionally in terms of their specific demands, the voter in the middle of this distribution holds the most leverage over both parties, and both parties will cater to this voter. Such a conception has implications for Assumptions Six and Seven, however. First, if Assumption Six is relaxed, if the parties move sequentially and if the first party does not take its position rationally, the second party would, in the Downsian conception, take a position right next to that of the first party in order to maximize votes. In the Riker and Ordeshook conception, however, the second party still would seek out the median voter; allocating benefits among a bare majority would maximize the benefits to each member. Thus, the Riker and Ordeshook model predicts a median position for the victorious party (and thus a median outcome) regardless of whether the strategy of the opposing party is known or unknown. Second, considering voters as party shareholders means that parties are not, as Assumption Seven states, vote-maximizers; instead, they seek to maximize benefits, which they do by maximizing their probability of winning.

Aldrich (1995) and Aldrich and Rohde (1997) utilize a similar allocation-of-benefits model to illustrate reasons for party divergence. Although again their model considers parties in government – more specifically, parties in the legislature – they argue that a model in which log-rolling exists will produce divergence in that it is the party median rather than the general median which governs the policy positions offered. This model relies upon relatively strong parties and a two-step process in which positions are first generated within the party through a median voter process, and then are offered to the general legislature. The voter at the legislative median still votes for that position closest to him, but he is choosing between two policies which are somewhat far from his ideal point. Such a model may also be used to explain the production of party platforms and the process by which party primaries or caucuses produce candidates. It does not, however, allow updating of strategies between stage one and stage two. Aldrich (1995: 20-21) notes that such a model must include at least some voters in the conception of party – it is the party activists, who are motivated by policy benefits rather than by pure office-seeking, who will be most active in developing the positions between which the median voter must choose.

Both of these models rely in part upon analyzing the intra-party conflict which Downs so studiously sought to avoid as a precedent to investigating the positions offered to voters. While the Riker and Ordeshook model breaks sharply with Downs in that its predictions do not depend on the presumption of simultaneous movement, neither model makes explicit claims about simultaneity. Both, however, can be read as models which derive from empirical criticisms of the strict market relationship between parties and voters specified by Downs, and both introduce dynamics which alter Downs’s assumptions about the composition and goals of political parties.
Assumption Two: Proximity Voting, Unidimensionality, and Single-Peakedness

Another early line of empirical criticism of Downs was raised by adherents of the Michigan school of voting behavior study. In one of the most trenchant critiques of Downs, Stokes (1963) took issue with the assumption of proximity voting. In The American Voter, Campbell, Converse, Miller, and Stokes (1960) had found that voters had relatively ill-defined policy preferences; that they had scant information about candidates’ policy positions; that they frequently voted for candidates based upon party identification, personal attributes of the candidates, and other heuristics that were not necessarily related to ideological proximity; and that they rarely considered policy alternatives in a unidimensional liberal-conservative framework. Although these findings have been debated by public opinion scholars, they raise questions about whether single-peaked preferences, unidimensionality, and proximity voting are realistic assumptions for a model of voting behavior.

Of these three empirical issues, the argument against proximity voting is by far the most significant for reconsidering the model. Single-peakedness is, as Hinich and Munger (1996: 35) note, a necessary condition for proposing unidimensional equilibrium. One could certainly propose “all or nothing” situations in which preferences are not single-peaked. A voter might, for instance, prefer to allocate a large amount of resources to solve a particular policy problem, but this voter’s second most-preferred position might be to allocate no resources at all to this problem rather than to allocate an amount which is not large enough to solve the problem. Such situations may well exist, but if policy positions are to be averaged by the voter and placed upon a single liberal-conservative dimension, it seems far-fetched to propose that single-peakedness does not occur.
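As an illustration of the kind of “all or nothing” preference described above, the following Python sketch defines a toy utility function over a single spending dimension. The spending scale, the threshold, and the utility values are invented for illustration only; the point is simply that such a preference has two local peaks and so is not single-peaked.

```python
# Hypothetical illustration of a non-single-peaked ("all or nothing")
# preference over spending on a single programme.

def utility(spending: float, threshold: float = 100.0) -> float:
    """Toy utility: full funding is best, no funding is second best,
    and partial funding that cannot solve the problem is worst."""
    if spending >= threshold:
        return 1.0                                # problem fully solved
    if spending == 0.0:
        return 0.5                                # nothing spent, nothing wasted
    return 0.5 - spending / (2 * threshold)       # money spent, problem unsolved

for s in (0.0, 25.0, 50.0, 100.0):
    print(s, utility(s))
# Utility dips between the two preferred extremes, so this voter's
# preference along the spending dimension is not single-peaked.
```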

The specific claim above is only relevant if the policy space is unidimensional. Again, this is an empirical issue which has little import for the internal coherence of the unidimensional model. Much of the work in spatial modeling since Downs has been devoted to the quest for equilibrium in multi-dimensional models. Enelow and Hinich (1984) have published the most comprehensive investigation of multidimensional models. Where there are two or more dimensions, convergence does not occur, as one position can always be defeated by another (McKelvey and Ordeshook 1976). This cycling problem would, if the other conditions of the Downsian model held, ensure that incumbents are always defeated. It has brought about numerous studies of the process of agenda-setting, especially in small groups such as legislative committees. At heart, however, the dimensionality of the policy space is an empirical issue. As Iverson (1994) and Klingemann, Hofferbert, and Budge (1994) argue in comparative studies of politics in several countries, the actual number of policy dimensions in mass elections appears to be quite small. There may be more than one dimension, but there are rarely more than two.
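A small numerical sketch can illustrate the cycling problem to which the McKelvey and Ordeshook result points. The three ideal points below are hypothetical and Euclidean (closer-is-better) preferences are assumed; the sketch only shows that even a centrally located status quo can be beaten by some alternative under majority rule in two dimensions.

```python
# A minimal two-dimensional sketch (assumed example) of majority-rule
# cycling: with three voters and distance-based preferences, even a
# central status quo can be defeated by some alternative.

import numpy as np

ideals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # hypothetical ideal points

def beats(x, y):
    """True if a majority of voters strictly prefers point x to point y."""
    prefers_x = np.linalg.norm(ideals - x, axis=1) < np.linalg.norm(ideals - y, axis=1)
    return prefers_x.sum() >= 2

status_quo = ideals.mean(axis=0)   # even the centroid is vulnerable
for i in range(3):
    for j in range(i + 1, 3):
        challenger = (ideals[i] + ideals[j]) / 2   # midpoint of two ideal points
        if beats(challenger, status_quo):
            print("status quo", status_quo, "is beaten by", challenger)
```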

Ferejohn (1993) argues that there is compelling theoretical reason for unidimensionality in mass elections as well. Positing a multidimensional space seems inconsistent with Downs’s work on voters’ information costs. Voters may be psychologically unable or unwilling to process multidimensional information, and they may prefer to seek to place candidates’ positions into a unidimensional space even if candidates do not seek to frame their positions in such a manner. Because of their own limited resources, candidates must economize on the transmission of information to voters, and will thus seek to transmit unidimensional information. Ferejohn notes, however, that this is a somewhat ad hoc argument. He finds more compelling the notion that unidimensionality is the only way for voters to enforce discipline upon candidates, to hold them responsible for their policy commitments. It is the only way that candidates can be accountable to voters, and as such, unidimensional ideologies may be created not by candidates but by the public as a means of framing policies. This is also not an airtight defense of the unidimensional model – it reads as a rather normative defense – but it is a compelling argument for remaining open to its viability in mass elections.

Concomitant with the debates over unidimensionality and single-peakedness is concern over the assumption of proximity voting. If there truly is a single dimension, then single-peakedness seems relevant, or at least empirically testable. If the policy space is multidimensional and an empirical study does not account for this, preferences which are truly single-peaked over each individual dimension but are taking multiple dimensions into account may appear not to be single-peaked in the unidimensional model. Questions also exist about the identification of these dimensions. The unidimensional liberalism/conservatism dimension may, for instance, be broken down into an economic liberalism/conservatism and a social liberalism/conservatism dimension; voters may prefer government regulation of economic matters yet be against government regulation on social issues (Enelow and Hinich 1984). Dimensions which are not strictly ideological may also exist; for instance, voters may evaluate candidates on a liberalism/conservatism dimension but then also evaluate them on a “leadership” or “charisma” dimension. In such a case, the second dimension ought not to exhibit anything approaching a normal distribution – voters may differ in their evaluation of a candidate’s charisma or the importance they place upon it, but it seems problematic to assume that voters would not prefer more charisma to less charisma, for instance.

The question in such models of how voters weight different dimensions has also been held to be of importance in unidimensional models. In a series of articles over the past decade, Rabinowitz and colleagues (Rabinowitz and MacDonald 1989; MacDonald and Rabinowitz 1993a, 1993b, 1997, 1998; Rabinowitz and Listhaug 1997; Morris and Rabinowitz 1997) have proposed a “directional theory of issue voting” which dispenses entirely with the proximity voting assumption. Instead of voting based on proximity, they argue, voters have only a diffuse “for or against” sentiment over ideological alternatives (although they make some allowance for proposals that are too extreme) and a particular degree of intensity about their preferences on these issues. Rabinowitz and MacDonald review developments in National Election Survey questions and conclude that there is not strong evidence that voters do array issues spatially. If voters only take a directional pro/con position on policy proposals, candidates have a “realm of acceptability” within which they may take issue positions. Voters may be more attracted to a candidate far from their “true” ideal point but on the same side as the voter than to a candidate who is closer to their ideal point but on the opposite side of an issue. There is little middle ground here; issues are framed in a yes-or-no manner, and voters will evaluate candidates’ positions based on which side they are on and weight these positions according to how intensely they feel about the particular issue. Thus, parties will converge on an issue where there is consensus but will diverge where the electorate is polarized.
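The contrast between proximity and directional evaluation can be sketched with a toy calculation. The two utility rules below are simplified stand-ins – a negative-distance rule for proximity and a signed product for direction and intensity, roughly in the spirit of Rabinowitz and MacDonald rather than their exact specification – and the positions are hypothetical.

```python
# Hedged sketch contrasting proximity and (simplified) directional utility
# on a single left-right scale centred at 0; all positions are hypothetical.

def proximity_utility(voter: float, candidate: float) -> float:
    return -abs(voter - candidate)      # closer is better

def directional_utility(voter: float, candidate: float) -> float:
    return voter * candidate            # same side and intensity are what matter

voter = 0.2                             # mildly right-of-centre voter
candidates = [("far, same side", 0.9),  # same side, far away
              ("near, other side", -0.1)]  # opposite side, but closer

for name, pos in candidates:
    print(name,
          round(proximity_utility(voter, pos), 2),
          round(directional_utility(voter, pos), 2))
# Under proximity voting the nearer candidate wins this voter; under the
# directional rule the distant same-side candidate does.
```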

This model also raises empirical problems. Gilljam (1997) disputes Rabinowitz et al.’s empirical support for their argument, and Merrill and Grofman (1997) join Gilljam in arguing that the directional model mixes voters’ subjective evaluations of parties with an attempt to place parties objectively on a policy dimension. The fact that voters may make errors in evaluating candidates does not discredit the proximity voting model, nor does the introduction of a preference intensity dimension. Merrill and Grofman also contend, in an argument which may bring Assumption Six into question, that tests for directionality in voting actually measure attempts voters and candidates make to confront uncertainty or lack of information.

In sum, these debates about voters’ behavior seem compelling in evaluating voting and election outcomes, but they have limited import for studying candidate strategies if candidates do not share these models’ quarrels with unidimensionality and proximity voting. That is, if candidates believe that their ideological statements will be evaluated solely on the liberalism/conservatism dimension, they will take positions that accord with a unidimensional model whether or not voters truly do evaluate them along these lines.
Assumption Three: Abstentions

The Downsian claim that there are no abstentions may be relaxed without affecting the model if either (a) the position of abstainers can be known, or (b) abstentions are not systematic – i.e. if candidates converge at the median in a single dimension then those voters on the extreme left and right have the same probability of abstention and will cancel each other out. Research on differences between voters and nonvoters has generally supported the second of these conditions. Wolfinger and Rosenstone (1980) have found, for instance, that if all Americans voted in presidential elections the outcome would be little different than it is in practice, where a large minority of eligible voters choose not to vote. The possibility exists, however, that candidates may mobilize disenchanted voters by taking noncentrist positions, and this phenomenon may indeed occur in some elections. Mobilization of potential supporters is certainly a goal of most candidates’ campaigns for office.

Downs himself does devote attention to the effects of abstention upon electoral outcomes (Downs 1957: 260-276). It is significant, however, that the Hotelling model upon which he draws in the median voter model is generally viewed as a model of competition between producers of goods with an inelastic demand function – for instance, of grocery stores or gasoline stations. Given equivalence of product, consumers will prefer the business located closest to them, but they cannot do without food, for instance, even if the grocery store is farther from their home than they would like. Likewise, one might argue, all voters are subject to their government’s laws; they cannot opt out of citizenship if their government does not enact policies they prefer. To extend a Hotelling model with barriers to entry to an unnecessary good – ice cream, for instance – would not alter its results unless consumers on one side of town were able to punish the ice cream stand for moving far away from them by declining to purchase ice cream while consumers on the other side of town were not.

This possibility is explored by Hirschman (1970) in his description of the problems of exit and voice in politics. If some consumers exit – or if some voters abstain – from supporting a firm or a party, the firm or party may not notice if it attracts as many new customers or voters as it loses by shifting its position. If, however, we have a two-stage process in which these individuals can make threats to exit without actually doing so, they may force the firm or party to take a position closer to their ideal point. This is the exercise of voice – an attempt by customers to change the practices of a firm rather than to escape from it. This can only occur where consumers have some sort of bargaining power. To return solely to the political context, such bargaining power may entail the threat of abstention or the threat of supporting an alternate candidate en bloc. It also may involve inspiring activists and mobilizing voters to pressure the party into taking a particular position. Because, somewhat paradoxically, the individuals most likely to exercise voice are those most loyal to the party and least likely to exit without warning, their threats may well be taken seriously by the party. These threats to punish the party in the short run in order to exact benefits in the long run spell trouble for office-seekers, whose time horizon is shorter than that of activists. Hirschman’s conception still utilizes differences in motivations for office-seekers and other party members, but it certainly includes these activists within the party in the initial stage where voice occurs.

We know from empirical research on party conventions and caucuses that the most extreme members of the American parties are those most likely to attempt to exercise voice prior to the election or the selection of candidates (see, for instance, Bartels 1988 on primary voters and Sullivan, Pressman, Page, and Lyons 1974 on convention delegates). The Hirschman model seems somewhat inapplicable to a one-shot game, but if there is a multi-stage process occurring, where voice can be exercised prior to the adoption of issue positions, his model does produce a “curvilinear disparity” (May 1973) in which members attempt to exact benefits from leaders prior to the establishment of positions, and in which divergent positions may result. As Stokes (1998) points out, the leaders themselves must come from somewhere, and they are more likely than not to come from the activist ranks within the parties and to share some of these individuals’ ideological preferences.

Because these members have a longer time horizon than do office-seekers, they may remain loyal to the party even in a losing effort. Indeed, they may prefer a losing effort to a winning effort if it enhances the long-run prospects of having their preferences satisfied. Again, where the simultaneity assumption is discarded and where voters or candidates are able to gauge their ex ante probability of victory in an election at Time A, candidates who gauge their probability of winning to be equivalent across a number of different positions may be expected to take the position among them that maximizes their proximity to those party members who are exercising voice. This may be the case for a candidate certain of victory or a candidate certain of defeat. Election at Time A would certainly be presumed to be the most important goal for a candidate, but election (or re-election) at Time B may also carry some weight in the candidate’s calculus.
Assumption Four: Freedom of Party Movement

The threat of abstention imposes some limitations on party movement, but these are limitations of a particular type – they hamper movement toward the median because of strictly ideological preferences of party members. A somewhat different concern that has been raised by students of political mandates and political credibility is that candidates may not appear credible in the adoption of particular ideological positions. Voters may not believe that a candidate will actually pursue the policies claimed (that is, will remain at the issue positions taken prior to the election) if that candidate is elected. This may preclude a candidate from taking a median position.

This may occur in two ways, both of which are dependent upon a multi-stage game. First, an incumbent may be evaluated based upon her record. If voters vote retrospectively – that is, based upon what a candidate has done in the past and how well her past record compares with her campaign pronouncements – they may punish or fail to believe a candidate who advocates positions which differ from her past record. Comparative studies such as those of Klingemann, Hofferbert, and Budge (1994) and Przeworski and Stokes (1995) have evaluated the mechanisms by which voters may enforce accountability upon parties or candidates to ensure that once candidates are elected they actually seek to enact the policies they propose in their campaigns. An incumbent may be constrained by her past record from taking some positions.

This is not a major concern for the Downsian model; after all, even if one candidate is an incumbent, she presumably was elected in the first place because she took issue positions which satisfied the median voter. A candidate may be judged by voters to be inept or dishonest, but this ought not to alter the nature of issue competition. Another concern, however, is that if candidate emergence is itself considered a multi-stage process, a candidate may already have established a record as an advocate of a particular ideological position. Candidates may not actually be able to move towards the median; doing so may damage their credibility. This line of reasoning is frequently used to explain the failures of presidential candidates – it is said that candidates cannot shed the positions they have taken to win nomination once they proceed to the general election (Aldrich 1980). It is also used to explain the problems of office-holders who seek an office with a different constituency and thus a different preference distribution – for instance, members of the House of Representatives seeking election to the Senate. These candidates may seek to move towards the positions preferred by their prospective new constituency (see Rohde 1979), but they run the risk of losing credibility through “flip-flopping,” through taking contradictory positions at different points in time.

Finally, the movement of parties or candidates may be limited by party reputation; this seems to accord with Downs’s prohibition of “leap-frogging.” I noted above that leap-frogging poses no problems for a simultaneous movement model with two parties. If we again look at elections as a multi-stage process, however, party reputation may impose limitations upon movement. A Democrat may not, for instance, be able to take a position to the right of a Republican opponent because she would lose credibility. Suppose, for example, a relatively liberal Republican incumbent who has established a position to the left of the electorate’s median. Were there no restraints on movement, the Democrat should win by establishing a position slightly to the right of the Republican, thereby conceding normally Democratic votes to the Republican and garnering Republican votes in return. If credibility is an issue, however, Republican voters might not believe this Democrat to truly be more conservative than her opponent and might discount her issue positions.

Yet again, these criticisms suggest the problems of a simultaneous movement, one-stage issue competition model. They do no damage to the internal consistency of the median voter model, but they raise empirical questions about its ability to describe mass elections.
Assumption Five: Full Information

Perhaps the strongest assumption of the median voter model is its dual requirement regarding information – that voters know where candidates stand and that candidates know where voters stand. Downs devotes much of his book to arguments about why voters have little incentive to gather information about candidates. It does seem likely that voters will not be particularly well informed, but this should have little import for the basic structure of competition unless voters are systematically uninformed – that is, unless voters who would prefer one candidate have little information while those who would prefer another do have information about the candidates. Low voter information might be another reason for candidates to systematically mobilize or inform particular groups, but in the absence of knowledge of the opposing candidate’s positions, it does not lead to alteration of the convergence prediction. Probabilistic voting theory, as exemplified by the work of Hinich, Ledyard, and Ordeshook (1972), Coughlin (1975), and Hinich and Munger (1995: 168), has made advances in modeling the behavior of voters given beliefs about candidate positions, but it does not affect candidate convergence unless it means that voters use non-ideological heuristics such as candidates’ personal attributes as means of reducing their uncertainty about candidate positions (Hinich 1977).

Of greater import, however, is the assumption of complete information on the part of the candidates about voter preferences. Downs’s model is deterministic – that is, it assumes that candidates know the expected outcome given any particular preference distribution. If candidates cannot know the distribution of voter preferences with certainty, however, they may take suboptimal positions based upon their subjective assessment of voter preferences. Erroneous assessments of voter preferences make a convenient scapegoat for candidates who take non-centrist positions.

For candidate divergence to occur, however, candidates must be completely uninformed, have different amounts of information, or have different types of information. The first of these conditions would, if true, make any sort of formal theory of candidate strategies futile – it would have candidates behaving with no observable election-oriented incentive whatsoever. The second and third, however, seem quite plausible. Ferejohn and Noll (1978) present a theory of information asymmetries in which information about voter preferences is available to each candidate, but is costly. Such would be the case, for instance, for privately held, proprietary public opinion polls. In such situations, the wealthier candidate would obviously have an advantage. Candidates lacking such information might also, however, prefer to avoid policy issues and ideological appeals altogether in their campaigns, so as to entice voters to evaluate them on other grounds.

Such an explanation may again account for divergence on issues, but again, it explains such divergence as a function of errors made by the candidates. Were the candidates in possession of information, they would still follow a median voter strategy. Even if candidates prefer to steer their campaign away from ideology, they must still take some issue positions, and there is no logic to adopting these positions without respect to beliefs about the median voter and the distribution of voter preferences.

Low information might also lead to rhetorical or heresthetical(2) appeals on the part of candidates – that is, if candidates are uncertain what voters’ preferences are, they may seek to influence voters’ preferences in order to bring them more in line with their own. Appeals to social norms, for instance, might influence voters’ beliefs about what their preferences are. Riker (1990) argues that this is, in fact, the function of campaigns. In addition, Kingdon (1993), Stoker (1992), and Hardin (1995) all argue that voters’ or citizens’ beliefs are not strictly self-interested or outcome-oriented, and as such rhetorical appeals may be effective. Certainly voters are not omniscient. However, evidence is lacking that candidates have the resources to actually persuade voters to alter their preferences. If candidates can know voters’ preferences, it certainly seems more cost-effective for them to follow voters’ preferences than to try to change them.

In electoral competition, however, the full information requirement for candidates is not as demanding as it may seem. First, candidates do have means available for gauging public opinion. Some, such as opinion polls, are costly. Others, such as gathering knowledge of the past behavior of the electorate, are not. In addition, if candidates move sequentially, the second mover has the additional advantage of observing the first mover’s positions. The second mover may thus either copy the positions of the first candidate or, if she thinks the first candidate has made an incorrect assessment of voter preferences, take a different position. If we do assume that the liberalism/conservatism dimension is the appropriate dimension along which voters’ preferences are distributed, taking a position on this continuum which roughly approximates the electorate’s median does not require superhuman information-gathering efforts.
Assumption Six: Simultaneity

One relatively unexplored tenet of the Downsian model is the assumption that candidates choose positions simultaneously. In one sense, the simultaneity assumption can be relaxed without altering the outcome of the model. Simultaneous positioning necessarily implies a lack of information about one’s opponent’s position; hence, there is a presumption of rationality for each candidate. This ensures that each candidate will seek a median position regardless of what the other candidate’s position is. As the above discussion of Riker and Ordeshook shows, however, simultaneity is a necessary assumption for a median voter outcome where candidates are vote maximizers. It is not a necessary assumption where candidates seek to build a minimum winning coalition. In the latter circumstance, each candidate should seek a median position even if that candidate knows that her opponent has failed to take such a position.

This seems a rather rare defect in the model, however; it arises only where one candidate is irrational or misinformed and the other candidate knows the first to be irrational or misinformed. Furthermore, the simultaneity assumption may be discarded if the campaign is seen as a repeated give-and-take. If candidates have frequent opportunities to update their strategies, to assess their opponent’s positions, and to revise their own positions, a gradual movement toward the median on the part of both candidates results.
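A brief simulation sketch of this repeated give-and-take, under the assumption that each candidate can move only a limited distance per round (the step size and starting positions are hypothetical), shows the gradual convergence on the median described above. It presumes, as the next paragraph notes, that repeated movement is possible and not prohibitively costly.

```python
# Small simulation sketch (assumed parameters) of a repeated campaign in
# which candidates alternately adjust toward the median voter, moving only
# a limited step each round.

MEDIAN = 0.0
STEP = 0.25           # hypothetical limit on movement per round

def update(position: float) -> float:
    """Move toward the median, but no further than STEP in one round."""
    move = max(-STEP, min(STEP, MEDIAN - position))
    return position + move

left, right = -1.0, 1.0
for round_no in range(1, 6):
    left = update(left)       # one candidate revises first in each round
    right = update(right)     # the other responds
    print(round_no, round(left, 2), round(right, 2))
# Both positions converge on the median over successive rounds.
```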

This circumstance can only happen, however, where there is freedom of movement and where movement is not particularly costly. This presumption seems ill-suited to most campaigns. Changing positions may be costly in terms of candidate credibility, and if one candidate has a pre-existing advantage, as in the case of incumbency, positions taken over a long period of time – over a term in office, for instance – may be difficult to alter. Thus, while simultaneity may seem a rather restrictive assumption, assuming unlimited updating may also be difficult to support.

The tendency documented by Fiorina (1981) and noted by Downs (1957: 41) for voters to vote retrospectively suggests a two-stage game in which the candidate who moves first – generally the incumbent – can “capture” a particular position on the dimension. Other models have sought to account for incumbents’ advantage, but they have not done so in the explicit context of a sequential movement framework. Feld and Grofman (1991; also Grofman 1993, Merrill and Grofman 1997) have developed a theory of “incumbent hegemony” (see Stokes 1998) in which incumbents have a “benefit of the doubt” zone, a zone of invulnerability around their spatial position. Here, voters give the incumbent the benefit of the doubt if the incumbent’s position seems relatively close to their own, because of the incumbent’s nonpolicy attributes. If this zone includes the electorate’s median, the incumbent cannot be defeated. Feld and Grofman extend this model beyond the unidimensional framework to argue that where such a zone exists, the multidimensional instability described by McKelvey and Ordeshook does not arise. In this scenario, the incumbent need not be precisely at the electorate’s median, only somewhat close to it. Thus, an incumbent might also be able to maximize utility in regard to secondary, non-vote-maximizing goals.

The Feld and Grofman model assumes simultaneity, but it hints at a process of two or more stages. They demonstrate that, where this benefit of the doubt accrues to incumbents, “certain centrally located points will defeat any challenger by a substantial margin” (Feld and Grofman 1991: 117). Should a potential challenger suspect that this will be the case, competition and candidate entry will be deterred. Thus, a sort of two-stage process occurs in which an incumbent establishes a central position and a potential challenger decides whether or not to run.

Groseclose (1997) does not make direct reference to Feld and Grofman, but his model of two-candidate competition in which one candidate has a personal advantage is quite reconcilable with theirs. Groseclose notes that any personal advantage, no matter how small, causes the Downsian equilibrium to disappear. Again, candidates choose positions simultaneously, but the advantage held by one candidate is exogenous and is known. In this situation, candidates know that if they do converge, the candidate with the personal advantage will win unanimously. Groseclose assumes “non-policy triviality” – that is, that the personal advantage is not so large that there is no pair of positions at which the disadvantaged candidate wins. Given this, the disadvantaged candidate will gain votes by moving away from the center if the advantaged candidate is at the center, and by moving towards the center if the advantaged candidate moves away from it. There is thus substantial allowance for candidate divergence. Groseclose closes by arguing that as the personal advantage of one candidate grows, the disadvantaged candidate adopts a more and more extreme position. This scenario is equivalent to Feld and Grofman’s benefit-of-the-doubt scenario.
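A hedged numerical sketch in the spirit of this setup may help. The quadratic-loss voter utility, the uniform electorate, and the size of the valence advantage are my assumptions rather than Groseclose's specification; the sketch shows only that the disadvantaged candidate gains nothing by converging and does best by moving away from the center, while still losing.

```python
# Illustrative valence-advantage sketch (all numbers assumed): candidate A
# holds the median with a small non-policy bonus; where does candidate B
# maximize her vote share?

import numpy as np

voters = np.linspace(-1, 1, 2001)   # uniform electorate on [-1, 1]
ADVANTAGE = 0.05                     # A's hypothetical valence bonus
A_POSITION = 0.0                     # A sits at the median

def vote_share_B(b: float) -> float:
    u_A = -(voters - A_POSITION) ** 2 + ADVANTAGE   # quadratic loss plus bonus
    u_B = -(voters - b) ** 2
    return float(np.mean(u_B > u_A))

positions = np.linspace(0.0, 1.0, 101)
shares = [vote_share_B(b) for b in positions]
best = positions[int(np.argmax(shares))]
print("B's share if she converges on A:", vote_share_B(0.0))
print("B's vote-maximizing position:", round(best, 2), "share:", round(max(shares), 3))
# Converging yields essentially no votes; B does best by diverging from the
# center, although her maximal share still falls short of a majority.
```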

Each of these models, as well as the incumbent hegemony model of Snyder (1994), assumes the establishment of a non-ideological advantage but simultaneous establishment of positions. Retrospective voting, however, a factor which has been acknowledged as rational behavior by spatial theorists at least as far back as Downs (1957: 41), must be considered at least in part to be retrospective evaluation of the ideological pronouncements of a party or candidate. As such, it is difficult to imagine the establishment of a personal advantage on the part of an incumbent which is completely devoid of issue positioning. The incumbent must take positions while in office and before the true extent of her “benefit of the doubt” or personal advantage is known; a vote-maximizing incumbent thus has an incentive to adopt a median position as early as possible – before competition arises.
Assumption Seven: Vote Maximization

By this point, it should be evident to the reader that rejection of one of the assumptions stated above has implications for the feasibility of entertaining the subsequent assumptions. For instance, if one disputes the Downsian definition of parties, it is difficult to assume that parties are solely vote maximizers. If parties do not have freedom of movement or if they do not take positions simultaneously, it is difficult to support the idea of parties as vote-maximizers because vote maximization in a losing cause may have scant utility to a party. If voter preferences are entirely known by parties, then the result of any election is virtually assured given a set of policy positions, and a party which cannot adopt a centrist position is a certain loser.

These problems bring into play two objections to the assumption that parties are vote maximizers: first, that parties may not have vote maximization as a primary goal, as opposed to maximizing benefits or the probability of winning election; and second, that even if parties are vote-maximizers, they may not be solely vote-maximizers, to the exclusion of any other, secondary goals.

A common early line of criticism against Downs is that the market analogy has limited utility in describing politics precisely because parties gain little by winning by overwhelming majorities or in losing close elections. Barry (1970) and Przeworski and Sprague (1971) point out that in market competition a firm always benefits from greater sales or more market share, while a party does not necessarily benefit from votes beyond a narrow majority or plurality. This argument is systematized by Riker and Ordeshook, who substitute benefit maximization for vote maximization; in such a scenario, a party seeks to ensure victory, and thus might prefer to seek as many votes as possible where the preferences of voters are somewhat uncertain. In a probabilistic, simultaneous-mover model, maximizing votes and maximizing probability of winning may be coterminous (Coughlin 1975). Where a party’s probability of winning at any particular position can be known, however, that party may have a variety of positions with an equivalent probability of winning.

If simultaneity is not assumed, and where there are exogenous factors such as an incumbency advantage, this condition may occur in two different circumstances. First, an advantaged party may have a range of winning positions. Second, a disadvantaged party may have no winning position. In the first circumstance, a party with a benefit-of-the-doubt zone and full information about voter preferences can take any position within that zone. In the second, a party with knowledge that its opponent has taken a winning position has a choice of many positions, all of whose probability of winning is zero. Where parties position themselves sequentially, the party which chooses a position second may have a range of winning positions if the first mover has taken a suboptimal position, or it may be able to adopt any position without affecting its probability of winning (because it has no chance of winning) if the first mover has an advantage and has taken a position rationally.

These may seem to be relatively extreme circumstances, but they do necessitate the introduction of secondary goals for the parties in order to make any claims at all about rational position-taking. Even if the extreme nature of the above is reduced somewhat – where the probability of winning is not one or zero but is highly constrained – secondary concerns may still affect a party’s decision-making calculus. This raises the question of what these secondary concerns might be.

Relaxation of the first assumption to include activists or voters within the party, as well as considering the threat of voice or exit which results from relaxing the third assumption, introduces noninstrumental policy preferences for the party. That is, in addition to preferring to either maximize votes or to maximize probability of winning, candidates or parties may prefer to maximize proximity to their “true” or ex ante preferred positions. Where vote-maximization is posited, there is always a trade-off between votes and noninstrumental policy concerns; even where one candidate has a significant advantage, there are votes to be gained or lost through movement within the ideological space. Several formal theorists (Groseclose 1997; Wittman 1977, 1983a, 1983b; Chappell and Keech 1986) have sought to model the trade-off between the two, assigning weights to each concern and constructing a utility measurement which accounts for both concerns. If probability of winning is posited as the dominant concern, however, a deterministic, sequential, and full-information model throws such secondary incentives into sharp relief – there is nothing else to guide candidate or party position-taking across a range of positions where probability of winning is equivalent.
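The weighted trade-off these theorists model can be written as a simple utility function. The sketch below is a schematic rendering rather than Wittman's or Chappell and Keech's actual specification; the weight, the positions, and the assumption that every available position loses for certain are illustrative.

```python
# Hedged sketch of a weighted candidate utility mixing office-seeking and
# policy-seeking motives, in the spirit of trade-off models such as
# Wittman's (weights and positions are assumed for illustration).

def candidate_utility(position: float,
                      ideal: float,
                      win_prob: float,
                      office_weight: float = 0.7) -> float:
    """Weighted sum of the probability of winning and policy proximity."""
    policy_payoff = -abs(position - ideal)
    return office_weight * win_prob + (1 - office_weight) * policy_payoff

# Hypothetical case: every position in the feasible range loses for certain
# (win_prob == 0), so only the policy term differentiates the options.
candidate_ideal = 0.8
for pos in (0.0, 0.4, 0.8):
    print(pos, round(candidate_utility(pos, candidate_ideal, win_prob=0.0), 2))
# With no electoral payoff at stake, the candidate's own ideal point (0.8)
# maximizes utility, so position-taking reveals noninstrumental preferences.
```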

Reaching such a point involves disputing Assumptions One, Three, Six, and Seven. The only crucial dispute, however, is with Assumption Six; the other assumptions must necessarily be discarded when simultaneity is not assumed.

Secondary utility concerns have been inserted into hypothetical models, most notably in the work of Wittman and of Chappell and Keech. These concerns are not directly measurable because they are idiosyncratic characteristics of each candidate. We cannot measure the actual preferences of candidates; even if we were to ask them what they “truly” believe about policy issues, it seems unlikely that they would claim to be advocating policies which deviate from their ex ante beliefs for the sake of being elected or gaining votes. As Canon (1990: 27-30) notes, however, a candidate who truly believes he has little chance of winning has less incentive to compromise his position; the very fact that he has chosen to run indicates that he is guided by his devotion to a cause, his desire to bring greater attention to his own ex ante preferences, or his desire to induce his opponent to address these issues. He will only make himself – and his fellow partisans – unhappy by deviating from such positions. Should this candidate find himself in a position to win, however, he may reason that even if he compromises his positions he will still be no worse off in regard to these issues than his opponent. Where candidates position themselves sequentially, such a candidate seems particularly likely to emerge.

Implications of Altering the Simultaneity Assumption

The assumptions of the median voter model are thus a set of dominoes – if one is knocked down, the rest follow. To say this is not to argue that the median voter model contains internal contradictions, nor is it to say that the model should be knocked down. The basic intuition of the model – that given the assumptions enumerated above, candidates will adopt similar ideological positions – has been used to great effect in analysis of committee behavior and other smaller-scale phenomena. It has also been a useful tool for the study of many elections – though not for enough of them to pass empirical muster. It may well be that this is because one or more of the assumptions contained therein rarely hold, but a model cannot be proven or disproven if its failure requires us to test its assumptions.

The purpose of this paper has been to argue that while the introduction of positive models of political behavior ended much of the normative debate in the discipline about appropriate actions of political parties, these questions have not entirely vanished. The introduction of a sequential component into the median voter framework brings about several alterations in the other assumptions:

– Sequential movement implies that political parties must, at times, produce candidates whose primary goal is not to win office, because attaining political office may not be feasible where the first mover holds an advantage.

– Parties in a sequential movement model may, then, share some of the preferences attributed to voters.

– In a sequential movement model, parties who choose positions second have information about the strategies of those candidates who move first. In such circumstances, holding full information about voter preferences is not entirely necessary – there is a threshold beyond which information about voter preferences serves no purpose.

– Vote maximizing strategies yield no benefits for candidates who choose positions second and are at a disadvantage; gaining votes does not alter a candidate’s probability of victory.

These alterations pose several theoretical issues for debate, issues which parallel the normative concerns which were debated prior to the introduction of the median voter model. First, can party preferences be isolated in instances where parties have multiple optimal strategies which maximize their probability of winning? In the case of candidates certain of defeat, can the positions taken be said to reflect the noninstrumental preferences of their party? If so, do these positions represent clear and divergent policy prescriptions? It may be somewhat paradoxical to look for the voice of the party in the campaigns of losing candidates, but these positions are not exclusively the province of losing candidates. Rather, we can be certain that these are positions taken for noninstrumental reasons, while similar positions taken by victorious candidates cannot securely be attributed to anything other than a desire to maximize votes. A victorious liberal candidate who represents an overwhelmingly liberal district may take the same positions as a defeated liberal candidate running in an overwhelmingly conservative district against a conservative incumbent. In the first case, the victorious candidate may be following either his or her true beliefs or merely catering to voter preferences; in the second, the defeated candidate certainly cannot be said to be seeking to gain support through such positions. The relevant question, then, is whether this defeated candidate speaks for his party, or whether his views are idiosyncratic, personal beliefs.

This scenario poses a somewhat paradoxical agenda for advocates of responsible parties. It might lead to a call for increased attention to disadvantaged candidates – to calls for campaign finance reform or public financing of campaigns, for instance. Such a call would not, according to the logic of the adjusted median voter scenario I have proposed, yield significantly different incumbents. First, divergent races occur because one candidate has no chance of victory. If that candidate’s probability of winning is increased, there is no reason not to expect that candidate to eschew his or her positions and adopt a more centrist strategy. In attempting to reward the provision of clear choices, we would have eliminated them. Second, even if this were not to happen, the candidate who is not following a strategy designed to capture the median voter would still be defeated, for the simple fact that her positions do not match those of a majority of voters. We would end up, as Riker’s above criticism of the responsible parties model notes, merely perpetuating the dominance of the party in power. In the end, we are left with the depressing conclusion that divergent party agendas exist right beneath our noses, but the more we seek to reward such strategies, the more they recede from our grasp.

Second, a focus on the similarity between the platforms of disadvantaged candidates – an emphasis, for example, on commonalities between challengers to incumbents which cuts across ideological or partisan lines – may help us to identify which issues are kept off of the agenda. Given the plethora of issues which confront the average member of a large legislative body, it seems difficult to argue that any particular issues are kept off of the agenda. To a large extent, however, one would expect a competitive challenger to be essentially reactive – to be addressing issues on which the incumbent appears to be vulnerable. Campaigns of less competitive candidates are free of this restraint. Candidates who run against popular opponents and who have no chance of victory have the luxury of being able to speak about anything they choose, to adopt any issue stance they choose. Are issues introduced in such campaigns which are not introduced by more competitive candidates? If so, what is the merit of such issues? Do they represent valid or innovative policy proposals, or are they merely idiosyncratic causes of these candidates?

Altering the assumptions of the median voter model thus does reintroduce valid normative questions, albeit in a different form than they took before its emergence. It does seem rather beside the point to argue about whether parties as a whole should present the voters with divergent yet responsible agendas. The question may be, instead: do they have “true” agendas which do diverge? And what is the import of this for policymaking, if indeed such divergence has any import at all? The place to look for responsible parties and electoral choice is not, in an age when overwhelming majorities of incumbents are re-elected, in the campaigns of incumbent office-holders. It may, however, be found in the campaigns of nonincumbents.
Brian Twomey