EUR/USD is not only overbought, but crucial break points are located at 1.1181, 1.1212 and 1.1277. A break of the most vital point at 1.1277 would see volatility increase dramatically, as a new range would materialize from 1.1277 to 1.2204, or 927 pips, as opposed to the current range from 1.1277 to the 1.0300 lows, or 977 pips. The two ranges are nearly equally spaced at 977 and 927 pips.

New interest for EUR/USD longs is located above 1.1277.

This week is EUR correction time and this includes EUR/PLN.

Overall, currency market prices are in their third week of dead, perfectly neutral positions, and this includes all 28 currency pairs. This means no dramatic moves are expected: rallies will be sold, drops will be bought, and the week will end with zero progress toward trend positions. The explanation for the neutral positions is that ranges remain in serious contraction mode.

Both EUR/PLN and USD/PLN prices are located below vital break-point lines at 4.4574 and 4.0679 respectively. A mispositioning exists in either USD/PLN or EUR/PLN, and this mispositioning extends to all USD versus EUR emerging-market currency pairs.

This week again offers multiple trades per currency pair to maximize profit pips.



EUR/USD

Short 1.1109 and 1.1131 to target 1.0980.

Short below 1.0959 to target 1.0895.

Long 1.0895 to target 1.0938.

Long 1.0969 to target 1.1002.

Overall, 259 total pips are available to trade. Retain this number for trade results at week's end.


Short 1.5358 and 1.5379 to target 1.5137.

Short 1.5117 to target 1.5029.

Long 1.5029 to target 1.5109.

Long 1.5128 to target 1.5197.

Total 458 available pips to trade.


GBP/NZD: last week's target at 2.0222 is now 2.0124.


USD/PLN

Long 3.9981 and 3.9894 to target 4.0592.

Long 4.0679 to target 4.1028.

Short 4.1028 to target 4.0766.

Short 4.0669 to target 4.0592.


EUR/PLN

Long 4.4267 and 4.4114 to target 4.4498.

Long 4.4574 to target 4.4957.

Short 4.4957 to target 4.4766.


Brian Twomey

Weekly Trade Results: GBP/USD and GBP/NZD

The weekly GBP/USD trade was instructive as to its weekly rise and drop points.

The trades and results are analyzed below, as posted, by their long and short points.


1. GBP/USD First Trade

Long 1.2124 and 1.2099 to target 1.2395. Must cross 1.2149, 1.2174, 1.2199, 1.2226, 1.2251, 1.2276, 1.2301, 1.2326, 1.2351 and 1.2376.

Lows 1.2165, Highs 1.2362.

Entry missed by 41 pips. Target missed by 33 pips. Price dead-stopped exactly between 1.2351 and 1.2376.

Recall last week's NZD/USD trade: the exact same scenario as GBP/USD this week. NZD/USD missed entry and target, yet the target at 0.6223 was achieved Tuesday. Targets are exact and must achieve their destinations by mathematical law.

The price message: on the first rise above, GBP/USD broke 1.2351 to trade at 1.2352. The 1-pip message was that GBP/USD would trade higher. It did. It stopped at 1.2362 exactly, 1 pip short of the exact midpoint at 1.2363. The message was that GBP/USD would then head lower.

Every traded pip contains deep meaning for the overall price path. Better examples than the above scenario exist involving a more understandable price path; while this point may seem trivial because it involves 1 pip, the message remains clear.
If the entry and target had traded correctly, the trade profit was +271 pips.

2. GBP/USD 2nd Trade

Long above 1.2431 to target 1.2533.
Result: never traded.

3. GBP/USD 3rd Trade

Cautious short 1.2395 to target 1.2293.
Result: the entry at 1.2395 never traded; however, the target achieved its destination. If 1.2395 had traded, the trade result was +102 pips.
Two trades, +373 pips, but never traded.

Despite a 200+ pip rise this week, GBP/USD failed to break 1.2431 for higher. The result is GBP made zero progress along its overall price path.


GBP/NZD

Long 1.9964 and 1.9946 to target 2.0222.

Lows 1.9759, Highs 1.9899

Entry off by 187 pips. How wonderful is this.

Of note: GBP/USD actually rose 197 pips while GBP/NZD dropped 187 pips.

No problem here, as was shown with the EUR/CAD trade.

Trade Options

Add 1 lot and trade to break even from the first lot. Add 1 lot and trade to target. Or do nothing and trade to target. Never a loss. Ever seen a loss from the many weekly trades posted? Nope. Nothing but profits, and trade repair on missed entries with a result of profits.


Brian Twomey

FX Trading Price Path

Charts and Outside Events Are the Enemy of FX Traders
A trade contains an entry and a target price, a start and a stop point. This price path must complete its mission by math laws.
The price path is set on the previous Friday, to enter trades on Sunday and exit with profits by Friday.

Fully, 90% of all information exposed to traders throughout any given week is irrelevant to the price path, because a price path cannot be stopped by any trader, central bank or outside event such as news.

Understanding a price path allows traders to literally set entries and targets on Sunday, exit by Friday with profits, and never watch markets, charts or screens all week.

The chart and the focus on outside events are actually the enemy of the trader, because a chart doesn't show a true price path and because a market price is never correct. What can a chart reveal about an incorrect price?

It's not what a trader earned from a trade that is relevant, but what the trader missed in pips that traded without profit. A 100-pip trade is fine, but if 300 pips traded without profit then, actually, how good was the trade or the trader's expertise in overall trading and strategy?

The vast majority of traders are speculators who set out to earn profits to stay in the game. 1% are experts with full price knowledge. Full price knowledge means every traded pip is known by its location and mathematical context. For the 1%, the irrelevance of a chart is replaced with the ability to eyeball any price and fully understand the overall price context and all relevant information for a profitable trade. The 1% crowd is rarely seen or known to wider trader audiences.
Brian Twomey

Weekly Trades: GBP/USD And GBP/NZD

Another week of continuous trading to maximize profit pips and earn all traded pips available. The most vital points are offered to watch and follow the trades to target, and those so inclined may take profits when satisfied anywhere along the price path. This week GBP/USD is traded from the low end of GBP, along with GBP/NZD at the top end. This week 4 trades per currency pair are offered.
Recall last week's EUR/USD trades offered 2 longs and 2 shorts to trade continuously throughout the week. The 2 longs profited 191 pips while the shorts earned +79 pips, for a total of 270 pips.

The close price forecast for Friday was just above 1.0860, and the last target for the shorts was 1.0836. Based on the close price forecast, it was apparent 1.0836 had a chance to fail its destination. It did, and EUR/USD closed at 1.0898. However, the last short earned 16 pips.

Same old story to the trades: no charts, no graphs, no stops, no market blather talk necessary. Most important, no screen watching is required throughout the week, as it's not necessary. Entries and targets are set on Sunday and exited by Friday. A possible missed entry is quickly repaired to either break even or trade to target and earn profits. Never a loss incurred.

Weekly Trades


GBP/USD

Long 1.2124 and 1.2099 to target 1.2395. Must cross 1.2149, 1.2174, 1.2199, 1.2226, 1.2251, 1.2276, 1.2301, 1.2326, 1.2351 and 1.2376.

Long above 1.2431 to target 1.2533. Must cross 1.2456, 1.2481, 1.2506 and 1.2531.

Short 1.2533 to target 1.2482.

Cautious short 1.2395 to target 1.2293.


GBP/NZD

Long 1.9964 and 1.9946 to target 2.0222. Must cross 2.0001, 2.0036, 2.0072, 2.0108, 2.0144, 2.0180 and 2.0216.

Long above 2.0258 to target 2.0367. Must cross 2.0294 and 2.0330.

Short 2.0367 to target 2.0294.

Cautious short 2.0222 to target 2.0149.


Brian Twomey


Brian Twomey. Contact for trades: brian@btwomey.com


Extracting Information from Financial Market Instruments


Financial market prices contain information about market expectations for economic variables, such as inflation or the cash rate, that are of interest to policymakers. This article describes four financial market instruments that are particularly useful for this, and documents how market expectations and other useful information can be derived from them. In particular, it describes how overnight indexed swap rates and government bond yields can be used to estimate a zero-coupon yield curve and infer market expectations for risk-free interest rates, and how inflation swap rates and inflation-indexed government bond yields can be used to infer market expectations for the inflation rate.


Financial market data are often used to extract information of interest to policymakers, such as market expectations for economic variables. The prices of interest rate securities are particularly useful for obtaining information about expectations of future risk-free interest rates and future inflation rates, as well as for estimating risk-free zero-coupon yield curves.

The first part of this article discusses how data from the overnight indexed swap (OIS) market and the government bond market can be used to estimate risk-free zero-coupon yield curves and obtain information about market expectations of the path of risk-free rates. OIS contracts directly reference the cash rate, making it relatively easy to extract market expectations from them, but they are only liquid out to around one year in maturity. To obtain estimates of zero-coupon risk-free interest rates beyond one year, models can be used to estimate a zero-coupon yield or forward curve from the yields on Commonwealth Government securities (CGS). The yield curve gives the interest rate agreed today for borrowing until a date in the future, while the forward curve gives the interest rate agreed today for overnight borrowing at a date in the future. The forward curve can be used as an indicator of the path of expected future cash rates, but importantly it becomes less reliable as the tenor lengthens because of the existence of various risk premia, for example term premia. No attempt is made in this article to adjust for these risk premia and so they will affect the estimated zero-coupon curves.[1]

The second part of this article discusses how data from inflation swaps and the inflation-indexed Treasury capital indexed bond (CIB) market can be used to obtain estimates of inflation expectations. Conceptually, inflation swaps can be used in a similar way to OIS contracts, and CIBs can be used in a similar way to CGS, to extract information on expected inflation. In practice, inflation swaps tend to be the more useful source of information as there are very few inflation-indexed bonds on issue and the CIB market is somewhat less liquid than CGS. Inflation swaps are also traded at a larger number of tenors and have maturities extending from 1 to 30 years. Again risk premia, including liquidity and term premia, are present in the CIB and inflation swap markets, and so will affect the estimates.

Extracting Information on Cash Rate Expectations

Overnight indexed swaps are frequently traded derivative instruments where one party pays another a fixed interest rate on some notional amount in exchange for receiving the average cash rate on the notional amount over the term of the swap. The cash rate is the rate on unsecured loans in the overnight interbank market, which is the Reserve Bank’s (RBA) operational target for monetary policy. Banks and other market participants use trades in OIS to manage their exposure to interest rate risk. For example, a market participant expecting a reduction in the cash rate may choose to trade on this expectation by entering an OIS contract where they receive a fixed rate and pay the actual cash rate over the period of the swap; a party with a lower expectation of a reduction in the cash rate may enter the opposite transaction. OIS rates therefore provide direct information on market expectations of monetary policy.

The OIS market has grown considerably since its inception in 1999. As at June 2011 there were $3.2 trillion of OIS contracts outstanding, and turnover in the year to June 2011 was around $6.6 trillion (Graph 1). Since OIS rates reflect the return from investing cash overnight over the term of the swap, and there is only an exchange of interest – not notional principal amounts – these transactions involve very little term or counterparty credit risk. An important point, however, is that these risks in OIS are not zero, as is often assumed, and are likely to increase, along with the associated risk premia, in times of stress.[2] Generally though, OIS rates tend to be lower and less volatile than other money market rates of similar maturity. For example, bank bill futures contracts, which reference the 90-day bank bill swap (BBSW) reference rate, are liquid but are less useful for extracting unbiased cash rate expectations because they incorporate a greater degree of credit risk which can change, and has changed, over time.

Graph 1
Graph 1: OIS Outstanding

OIS contracts trade for relatively short terms, generally of less than one year. Of the total amount of OIS contracts outstanding in June 2011, around 40 per cent was for contracts with a term of less than 3 months, 26 per cent was for contracts with terms of between 3 and 6 months and 33 per cent was for terms of between 6 and 12 months (Graph 2).

Graph 2
Graph 2: OIS Outstanding by Tenor

OIS have advantages over the 30-day interbank cash rate futures contracts trading on the ASX. These contracts are similar in concept to OIS, but they are exchange-traded and have fixed maturity dates as opposed to fixed tenors. Also, less trading occurs in these contracts than in OIS, especially for contracts of over three months. The relatively high level of liquidity that usually exists in OIS markets means that they are typically quoted with small bid-offer spreads, which helps users to derive more accurate measures of market expectations of the cash rate. Another theoretical advantage of OIS is that, being a derivative instrument, the supply of OIS contracts is not fixed; supply factors can influence the pricing of physical securities, such as bank bills and certificates of deposit.

The use of the OIS market to gauge cash rate expectations does, however, present some challenges. OIS rates can sometimes be distorted by a lack of liquidity as well as positioning from market participants, for example those wishing to trade on the basis of views about the likelihood of large and unexpected ‘tail events’ adversely affecting economic conditions. They also incorporate some term and counterparty credit risk as discussed earlier. These distorting factors are more likely to be relevant during times of heightened uncertainty about the economic and financial outlook, as has been the case recently.

OIS rates nonetheless provide a useful and simple source of data for estimating cash rate expectations out to one year. If, for example, the fixed rate in an OIS is trading below the current cash rate, this would indicate that, on average, market participants are expecting the RBA to ease monetary policy over the term of the swap. By comparing the fixed rates for swaps of different maturities, it is possible to assess both the magnitude of the expected change in the cash rate and the timing of these changes. As a simplified example, assume that the day before an RBA Board meeting:

  • the current cash rate is 4.25 per cent;
  • the 30-day OIS rate (i.e. the fixed rate) is 4.00 per cent; and
  • the 60-day OIS rate is 3.875 per cent.

The 30-day OIS rate of 4.00 per cent suggests that market participants are, on balance, expecting the cash rate over the next 30 days to average that rate. If for the sake of simplicity it is assumed that the Board will only move the cash rate in 25 basis point increments – whereas the market can often expect larger adjustments – then it follows that financial market participants expect the RBA to cut the cash rate by 25 basis points at the next day’s Board meeting.[3] Comparing the 30-day and 60-day OIS rates also indicates what markets are expecting to happen to the cash rate at the subsequent RBA meeting. If the market is expecting that the cash rate will average 4.00 per cent for the next 30 days and 3.875 per cent for the next 60 days, then the market must be expecting the cash rate during the second 30-day period to average 3.75 per cent (that is, (4.00 + 3.75) / 2 = 3.875).
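The back-to-back averaging in the worked example can be sketched in a few lines of Python. This is a minimal illustration of the arithmetic only, not an RBA tool; the function name is ours:

```python
def implied_second_period_rate(rate_30d: float, rate_60d: float) -> float:
    """Average cash rate implied for days 31-60, given 30- and 60-day OIS rates.

    Uses the simple-average relation from the text:
    rate_60d = (first_30d_average + second_30d_average) / 2.
    """
    return 2 * rate_60d - rate_30d

# Figures from the worked example: 30-day OIS at 4.00%, 60-day OIS at 3.875%.
second_period = implied_second_period_rate(4.00, 3.875)
print(second_period)  # 3.75, i.e. a further 25 basis point cut priced in
```

Comparing rates across the maturity ladder in this way is how a full forward cash-rate path, such as the one shown in Graph 3, is pieced together.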

Market expectations of the cash rate can vary substantially over time. At the time of writing this article, expectations of the cash rate for the middle of 2012 were around 4 per cent, up from around 3 per cent late last year when concerns stemming from the European sovereign debt crisis weighed heavily on sentiment about the economic outlook (Graph 3).

Graph 3
Graph 3: Forward Cash Rates

While OIS rates provide information about the short end of the yield curve, they are less useful for the longer end, as they cease to be regularly traded for maturities beyond around one year. At longer maturities, the natural risk-free interest rates to consider are those on CGS (other ‘risk-free’ bonds exist, such as government-guaranteed bank bonds, but such bonds typically trade with a significant liquidity premium relative to CGS so they are not considered here). There are currently 18 CGS lines on issue, with remaining terms to maturity ranging from less than 1 year to a little over 15 years.

There are a number of factors to consider when using CGS yields to calculate longer-term risk-free interest rates. First, investors in a 10-year bond with coupons receive a cash payment not only in 10 years' time, when the bond matures, but every 6 months leading up to maturity. This in turn means that the interest rate associated with the bond – the yield to maturity – is not the risk-free interest rate for borrowing for 10 years, but rather a combination of the 10-year interest rate, which applies to the principal payment, as well as the various interest rates applying to the coupons paid over the life of the bond. Second, the limited number of CGS on issue also means that one can only look at interest rates to certain dates in the future. Estimating zero-coupon yield and forward curves resolves these problems: the impact of coupons on bond prices is explicitly modelled and removed, and the estimated curves allow the gaps in between bond maturities to be ‘filled in’.
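The coupon-stripping step described above can be illustrated with a small bootstrap. This is a simplified sketch under strong assumptions (face value 100, semi-annual coupons, bonds maturing on a regular six-month grid, hypothetical prices); the RBA's actual estimation fits a smooth curve rather than bootstrapping, as described in Appendix A:

```python
import math

def bootstrap_discount_factors(bonds):
    """Bootstrap discount factors from coupon bond prices.

    bonds: (maturity_years, annual_coupon_rate, price) tuples with face
    value 100, semi-annual coupons and maturities on a regular 0.5-year grid.
    """
    discount = {}
    for maturity, coupon, price in sorted(bonds):
        c = coupon / 2 * 100                  # each semi-annual coupon payment
        n = round(maturity * 2)               # number of remaining payments
        # Value the earlier coupons with already-solved discount factors,
        # then solve for the discount factor at this bond's maturity.
        pv_coupons = sum(c * discount[k * 0.5] for k in range(1, n))
        discount[maturity] = (price - pv_coupons) / (100 + c)
    return discount

def zero_rate(discount_factor, t):
    """Continuously compounded zero-coupon yield implied by a discount factor."""
    return -math.log(discount_factor) / t

# Two hypothetical 6%-coupon bonds priced off discount factors 0.97 and 0.94:
curve = bootstrap_discount_factors([(0.5, 0.06, 99.91), (1.0, 0.06, 99.73)])
```

The bootstrap recovers the 0.97 and 0.94 discount factors exactly, which is the sense in which the impact of coupons is "explicitly modelled and removed".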

Details of the estimation method are provided in Appendix A. For data, prior to 2001 Treasury notes for maturities extending up to one year into the future are used, and from 2001 onwards OIS rates for maturities extending up to one year are used (the OIS market became liquid enough to provide reliable pricing around this time, while Treasury notes were not issued between mid 2002 and early 2009). CGS yields are used for maturities greater than 18 months into the future (bonds with short maturities can be relatively illiquid in comparison with longer-dated CGS).

As such, the yield curves that are estimated combine data from both the OIS and CGS markets, with the implicit assumption that the interest rates attached to all instruments in both markets are largely free of credit and liquidity risk premia, and therefore comparable. To the extent that this does not hold, it will flow through to the estimated curves. The existence of term premia, being the extra compensation demanded for investing for a longer period of time, is another complicating factor. Again no attempt is made to account for term premia and so any term premia in OIS rates or bond prices will be incorporated in the estimated curves.

Notwithstanding these caveats, estimated zero-coupon forward, yield and discount curves as at 21 February 2012 are given in Graph 4. The discount curve gives the value today of receiving one dollar in the future; it starts at one (one dollar today is worth one dollar) and slopes down (one dollar today is worth more than one dollar in the future). Although the discount curve looks linear at this scale, it is not. The forward and yield curves start at the prevailing cash rate. As discussed earlier, abstracting from the existence of risk premia, the forward rate can be read as giving a rough indication of the market-implied expectation for the cash rate. On this basis, as at 21 February 2012, OIS rates and CGS prices implied that market participants expected the cash rate to fall over the year ahead before rising again over subsequent years. The yield curve is essentially an average of the forward curve and so looks broadly similar to, but is generally smoother than, the forward curve.
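The relationships among the three curves described above can be stated compactly. A minimal sketch, assuming continuous compounding (the article does not specify a compounding convention):

```python
import math

def discount_factor(zero_yield, t):
    """Value today of $1 received at time t, from the zero-coupon yield."""
    return math.exp(-zero_yield * t)

def forward_rate(y1, t1, y2, t2):
    """Average forward rate between t1 and t2 implied by two zero-coupon
    yields, via y2 * t2 = y1 * t1 + fwd * (t2 - t1)."""
    return (y2 * t2 - y1 * t1) / (t2 - t1)

# A zero curve rising from 4% at 1 year to 5% at 2 years implies a 6%
# forward rate over year two: the yield averages the forwards beneath it,
# which is why the yield curve looks like a smoothed forward curve.
fwd = forward_rate(0.04, 1.0, 0.05, 2.0)
```

The discount factor starts at one for t = 0 and declines with t, matching the description of the discount curve in Graph 4.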

Graph 4
Graph 4: Zero-coupon Curves

Graph 5 provides a longer perspective on the data, showing zero-coupon forwards since 1993 at the 1-, 3- and 5-year horizons. These discount, yield and forward curves are available to the public on the RBA website.

Graph 5
Graph 5: Zero-coupon Forwards

Zero-coupon discount, yield and forward curves can be used in a number of applications. A common way to use this kind of data is as an input for discounting future cash flows, be they cash flows from real assets such as toll roads or power stations, or cash flows from financial assets such as shares or bonds. This discounting essentially assigns a current dollar value to future payments or receipts and is most easily achieved using a discount curve, although to discount risky cash flows a discount curve that incorporates an appropriate risk premium should be used.
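Discounting with a discount curve, as described above, reduces to a sum of products. A sketch with hypothetical numbers; the flat 5 per cent curve and the toll-road cash-flow amounts are ours, chosen for illustration:

```python
import math

def present_value(cash_flows, discount_curve):
    """Discount (time_in_years, amount) cash flows with a discount curve.

    discount_curve: callable mapping a time in years to a discount factor.
    """
    return sum(amount * discount_curve(t) for t, amount in cash_flows)

# Hypothetical toll-road receipts of $10m a year for three years, valued
# with a flat 5% continuously compounded curve:
flat_curve = lambda t: math.exp(-0.05 * t)
pv = present_value([(1, 10e6), (2, 10e6), (3, 10e6)], flat_curve)
```

For risky cash flows, as the text notes, the curve passed in should embed an appropriate risk premium rather than the risk-free discount factors.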

Zero-coupon yield curves are also useful for analysing the government bond market itself; for example, the deviation of traded bond prices from prices implied by the fitted zero-coupon yield curve (that is, the pricing error made in fitting the model) may indicate that certain bonds are cheap or dear relative to other bonds with similar maturities.

Another use is in economic modelling. Economists are interested in the interaction of financial markets and the real economy, including the effect that interest rates have on the real economy. To study these relationships zero-coupon yields should be used, not yields to maturity (see, for example, Spencer and Liu (2010) for a recent study of economic and financial linkages).

There is also a large amount of literature on the estimation of the term premia present in government bonds. This literature attempts to decompose zero-coupon yields into pure cash rate expectations and a term premia component, and thereby derive better estimates of expectations (this article does not attempt to adjust for term premia). Term premia are also of interest in their own right, as they give an indication of the excess return an investor can expect from investing for a longer time period. Term premia estimation requires zero-coupon yields as the basic input into estimation (see, for example, Duffee (2002) for a US study on term premia, or Finlay and Chambers (2008) for an Australian study).

Extracting Information on Inflation Expectations

Reliable and accurate estimates of inflation expectations are important to central banks given the role of these expectations in influencing future inflation and economic activity. These expectations are also important for organisations that manage inflation-linked assets or liabilities. Although surveys provide some guidance on the expected path of inflation, inflation-linked securities have the advantage of providing more timely and frequently updated information on market expectations of inflation.

A widely used market-based measure of inflation expectations is a break-even inflation (BEI) rate calculated as the difference between the yields of nominal CGS and CIBs.[4] The current BEI rate at the 10-year horizon is around 2¾ per cent, suggesting that the market expects average inflation over the next 10 years to be within the RBA’s 2–3 per cent inflation target (Graph 6). For shorter maturities, markets currently expect inflation to be closer to 2½ per cent.
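The BEI calculation itself is a simple difference of yields. A sketch with illustrative figures (not the article's data), including the slightly more precise Fisher-relation form:

```python
def break_even_inflation(nominal_yield, real_yield):
    """Simple break-even inflation: nominal CGS yield less CIB real yield."""
    return nominal_yield - real_yield

def break_even_inflation_fisher(nominal_yield, real_yield):
    """Fisher-relation version: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + nominal_yield) / (1 + real_yield) - 1

# Illustrative yields only: a 10-year nominal CGS at 5.50% against a CIB
# real yield of 2.75% gives a BEI rate of about 2.75%, the midpoint of a
# 2-3 per cent inflation target range.
bei = break_even_inflation(0.0550, 0.0275)
```

The Fisher form gives a slightly lower figure than the simple difference; for yields of a few per cent the gap is small, which is why the subtraction shorthand is common.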

Graph 6
Graph 6: Break-even Inflation Rate

One limitation with using the bond market to gauge inflation expectations is the small number of CIBs on issue; there are only five bonds currently on issue, with maturities around every five years from 2015 to 2030. In comparison, there are 18 CGS lines on issue with maturities spanning 2012 to 2027. Hence, the bond market offers a limited number of pricing points from which to extract measures of inflation expectations for a broad range of tenors. This lack of pricing points also makes it more difficult to derive forward measures of expected inflation, which measure expectations of inflation at some point in the future.[5]

In addition, there are maturity mismatches between CGS and CIBs. For example, the current 10-year CGS matures in July 2022 whereas the closest CIB matures in February 2022. As a result, a 10-year BEI rate must be derived by interpolation. Further adjustments must also be made to account for compounding effects on yields since CGS pay semi-annual coupons while CIBs pay quarterly coupons.

However, the most serious shortcoming of the BEI rate derived from bonds is that it captures investors’ liquidity preferences for different types of bonds. With outstanding CIB issuance 13 times smaller than CGS, CIBs can be less liquid than CGS, and investors who wish to hold highly liquid assets will have a stronger preference for CGS. This liquidity preference effect can be very pronounced during periods of heightened uncertainty such as in 2008 where ‘flight-to-safety’ bids put significant downward pressure on nominal bond yields (as noted earlier, any such distortion will also be incorporated in the estimated nominal zero-coupon curves) (Graph 7). More broadly, with CGS yields trading with a liquidity premium relative to CIBs, BEI rates can be artificially compressed and so give a distorted measure of inflation expectations. The low BEI rates in 2008 and 2009 were not all driven by liquidity effects, however, since the financial crisis had led market participants to become more pessimistic about future economic conditions.

Graph 7
Graph 7: Break-even Inflation Rate

Because of these limitations, inflation swaps have become an increasingly popular alternative source of information on inflation expectations. Their key advantage is that they provide direct and readily available measures of inflation expectations with no need for interpolation, since swaps are traded at the main tenors of interest such as 3-, 5- and 10-years. Also, as derivatives, the supply of inflation swaps is not constrained, meaning that in theory, inflation swap rates are generally not distorted by liquidity preference effects.

An inflation swap is a transaction whereby the inflation payer pays the actual inflation rate in exchange for receiving a fixed payment (Figure 1). The actual inflation payment is based on the most recently available quarterly consumer price index at the maturity of the swap. The fixed payment approximates the expected value of inflation over the term of the swap and is analogous to the BEI rate derived from bond prices. In this sense, inflation swaps operate in a similar fashion to OIS contracts, but with a different reference rate (CPI inflation instead of the overnight cash rate) and longer terms to maturity. Fixed rates for inflation swaps are readily available for terms out to 30 years.

Figure 1
Figure 1: Example of Cash Flows of a Zero-coupon Inflation Swap

The most common form of inflation swap in the market is the zero-coupon inflation swap. Here only one cash payment is made at the maturity of the swap, representing the difference between the fixed rate and actual inflation over the term of the swap. This means that counterparty credit risk is minimal and inflation swap rates are not affected by periodic coupon payments. Zero-coupon inflation swaps have become more popular over recent years, especially between 2003 and 2009 when CIB issuance ceased.
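The single payment at maturity described above can be written out directly. A sketch with hypothetical notional, fixed rate and index values; real contracts also involve indexation-lag conventions that are ignored here:

```python
def zc_inflation_swap_payoff(notional, fixed_rate, cpi_start, cpi_end, years):
    """Net payment to the fixed-rate payer at maturity of a zero-coupon
    inflation swap: realised index growth less the compounded fixed leg."""
    inflation_leg = notional * (cpi_end / cpi_start - 1)
    fixed_leg = notional * ((1 + fixed_rate) ** years - 1)
    return inflation_leg - fixed_leg

# Hypothetical: CPI rises from 100.0 to 116.0 over a 5-year swap struck at
# a 2.5% fixed rate. Realised inflation beat the fixed rate, so the
# fixed-rate payer receives the positive difference at maturity.
payoff = zc_inflation_swap_payoff(1_000_000, 0.025, 100.0, 116.0, 5)
```

Because only this net difference changes hands, and only once, counterparty credit exposure stays small, as the text notes.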

In terms of hedging flows, the main receivers of inflation in the inflation swap market are pension funds that use swaps to match their long-term inflation-linked liabilities. Liability matching has had a significant impact on making the inflation swap market in Australia a more recognised alternative to inflation-indexed bonds. Demand to pay inflation in swaps (and receive a fixed rate) mainly stems from infrastructure project providers that want to hedge their inflation-linked assets or revenue streams. This can be done by issuing a nominal bond and entering into an inflation swap with an investment bank. This has boosted the size of the inflation swap market, which is an over-the-counter market where intermediaries such as prime brokers play an important market-making role.

Investors can also trade inflation swaps based on their views about future inflation. For example, if an investor expects a higher rate of inflation than that implied by the fixed rate of a swap, the investor would enter a swap contract, receive actual inflation and pay the fixed rate. This is achieved through a single transaction instead of separate trades in nominal and inflation-indexed bonds, which bear funding costs and suffer from maturity mismatches. Inflation swaps are also used in conjunction with nominal bonds to replicate an inflation-indexed bond. This allows investors to overcome bond maturity mismatches as well as any potential shortage of inflation-indexed bonds.

Despite the recent growth in inflation swaps, the market remains small compared with those for other derivatives such as interest rate swaps. There are no official data to measure the total size and activity levels in the inflation swap market accurately, although a survey by the Australian Financial Markets Association (AFMA) estimated that as at May 2011 there were $24 billion of inflation swaps outstanding, and turnover over the year to June 2011 was $11.6 billion (AFMA 2011).

Since 2008, measures of implied inflation captured by 3-, 5- and 10-year inflation swaps have ranged between 1¼ per cent and 4 per cent (Graph 8). Mimicking the pattern observed for the BEI rate from the bond market, inflation swap rates over 2008 also fell to low levels, suggesting that market participants were moderating their inflation expectations. Over recent years, however, these inflation expectations have reverted to around 2–3 per cent.

Graph 8
Graph 8: Inflation Swap Rates

Since inflation swap rates are zero-coupon, it is simple to use the framework in the previous section to derive forward inflation rates, which measure expectations of inflation at some point in the future (Graph 9). Forward inflation rates derived from swaps at the 3-, 5- and 10-year horizons have also fluctuated in a wide range over recent years; as these forward rates represent expected inflation at a point in the future, they are generally more volatile than the (zero-coupon yield) measures shown in Graph 8, which represent expected inflation over a period up until a point in the future. Overall, current forward measures of inflation are also around 2 to 3 per cent, albeit slightly above 3 per cent at the 10-year horizon.
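Deriving a forward inflation rate from two zero-coupon swap rates follows the same logic as the OIS example earlier, but with compounding rather than simple averaging. A sketch with hypothetical quotes, assuming annual compounding:

```python
def forward_inflation(rate_near, t_near, rate_far, t_far):
    """Forward inflation between t_near and t_far implied by two zero-coupon
    inflation swap rates, via
    (1 + rate_far)**t_far = (1 + rate_near)**t_near * (1 + fwd)**(t_far - t_near).
    """
    growth = (1 + rate_far) ** t_far / (1 + rate_near) ** t_near
    return growth ** (1 / (t_far - t_near)) - 1

# Hypothetical quotes: a 5-year swap at 2.5% and a 10-year swap at 3.0%
# imply 5-year-forward 5-year inflation of roughly 3.5%.
fwd_5y5y = forward_inflation(0.025, 5, 0.030, 10)
```

Note how the forward sits above both zero-coupon rates, which is why forward measures are the more volatile series in Graph 9.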

Graph 9: Forward Inflation Swap Rates

Inflation expectations in the swap market broadly track the BEI rate in the bond market, but current 5- and 10-year measures appear to show that inflation expectations in the swap market are somewhat higher than those in the bond market; over the first half of 2009 the divergence of the swap market from the bond market was even greater, with inflation swap rates being up to 50–70 basis points higher than BEI rates implied by bonds (Graph 10). One reason for this lower BEI rate from the bond market is the liquidity preference effect discussed earlier. This effect was particularly pronounced over the first quarter of 2009 when inflation swap rates normalised faster in the aftermath of the financial crisis than bond yields, which retained a large liquidity premium.

Graph 10: Break-even Inflation from Bond and Swap Pricing

Another reason swap rates could be higher relates to hedging. Intermediaries in the swap market, who play an important market-making role, sometimes hedge their positions in the inflation-indexed bond market. This market can be relatively less liquid and compensation for this hedging risk may bias up inflation swap rates.

Term premia also tend to cause structurally higher inflation swap rates because the fixed-rate payer will demand compensation for the inherent uncertainty about the expected amount of inflation over the term of the swap. This premium can change for a variety of reasons including an increase in uncertainty about the inflation rate or changes in investors’ inflation tolerance (term premia can also affect CIBs).


Financial markets provide a significant amount of information about expectations of the cash rate, risk-free rates and inflation. Extracting expectations from market measures is not always straightforward, however, and results should be viewed with some caution. Measures derived from the government bond market can contain liquidity preference effects that are particularly problematic in times of heightened uncertainty. Some measures, such as zero-coupon interest rates, are not directly observable and must be estimated from bond yields using a variety of assumptions. Nonetheless, as well as providing some information on risk-free rates, estimates of zero-coupon rates are useful in economic modelling, in estimating risk premia and for discounting cash flows. The RBA will be publishing a constructed series of zero-coupon yield, forward and discount curves on its website. While derivative instruments such as OIS and inflation swaps provide more straightforward measures of market expectations, and are regularly updated as these markets are actively traded, the prices of these instruments contain various risk premia, which tend to bias implied expectations.

Appendix A

There are a number of established methods for estimating zero-coupon curves, which all give broadly similar results (see, for example, Bolder and Gusba (2002)). The method used in this article – the Merrill Lynch Exponential Spline model – does not estimate the yield or forward curve directly, but instead estimates the discount curve, from which the zero-coupon yield and forward curves can be recovered.[6] The discount curve is modelled as a linear combination of a number of underlying curves, called basis functions, which are fixed functions of time. That is, it is assumed that the discount curve can be written as:

d(t) = Σj aj bj(t)

where bj(t) are basis functions, and aj are the (to be estimated) coefficients that, when multiplied with the basis functions, give the discount curve. The price of a bond, which can be observed, is simply each cash flow (consisting of coupon payments and principal) multiplied by the appropriate discount curve value. For example, if the cash flows of a bond are denoted by ct then the bond price, P, can be written as:

P = Σt ct d(t)

Taking the two equations above together, the cash flows ct are known, and the basis functions bj(t) are fixed functions of time, so the only unknowns are the coefficients attached to the basis functions, aj. The same discount curve is used to price all bonds in the market, which allows the coefficients to be estimated. The model allows this estimation to be done within a standard regression framework, which is simple and fast (see Appendix A of Finlay and Chambers (2008) for further details).
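The regression described above can be sketched in a few lines of Python. This is a stylised illustration, not the RBA's implementation: it assumes simple exponential basis functions bj(t) = exp(−jαt) in the spirit of the Merrill Lynch model, and the bond universe and "true" curve are invented so the fit can be checked.

```python
# Stylised sketch of the Appendix A regression: estimate the coefficients
# a_j of the discount curve d(t) = sum_j a_j * b_j(t) from observed bond
# prices. Basis functions, bonds and the "true" curve are all invented.
import numpy as np

alpha, n_basis = 0.1, 4
def basis(t):
    """Exponential basis functions b_j(t) = exp(-j * alpha * t)."""
    return np.array([np.exp(-j * alpha * t) for j in range(n_basis)])

# Invented market: annual-coupon bonds priced off a flat 5 per cent
# (continuously compounded) curve, so the fitted curve can be verified.
def true_discount(t):
    return np.exp(-0.05 * t)

bonds = []  # each bond is a list of (time, cash flow) pairs
for maturity, coupon in [(1, 4), (2, 4), (3, 5), (5, 6), (7, 5), (10, 6)]:
    flows = [(t, coupon) for t in range(1, maturity)] + [(maturity, 100 + coupon)]
    bonds.append(flows)

# Observed prices and design matrix X[i, j] = sum_t c_t * b_j(t), so that
# price_i = sum_j a_j * X[i, j], combining the two equations above.
prices = np.array([sum(c * true_discount(t) for t, c in bond) for bond in bonds])
X = np.array([sum(c * basis(t) for t, c in bond) for bond in bonds])

a, *_ = np.linalg.lstsq(X, prices, rcond=None)  # estimated coefficients a_j

# Recover the fitted discount factor and zero-coupon rate at 4 years.
d4 = a @ basis(4.0)
zero4 = -np.log(d4) / 4.0
print(round(zero4, 4))  # close to the true 5 per cent zero rate
```

Because every bond is priced off the same discount curve, one linear regression across the whole bond sample pins down the coefficients, which is what makes the approach simple and fast.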


Brian Twomey

Anthony Downs’s An Economic Theory of Democracy

The Median Voter: Fact or Fiction?
The History of a Theoretical Concept
Prepared for Presentation at the Annual Meeting of the Western Political Science Association
March 25-27, 1999
Robert G. Boatright
Department of Political Science
The University of Chicago
5828 S. University Avenue
Chicago, Il 60637


        To an extent of which many political scientists are only dimly aware, the median voter theorem has infiltrated much of American political science. Even among those who do not work in the area of formal modelling, the predictions of candidate convergence and proximity voting govern much of both theoretical and empirical literature on electoral competition. This is not to say that we always find what we predict; instead, it is to say that we frequently look for these two occurrences, even if only to take note of our failure to find them.

Bernard Grofman notes of Anthony Downs’s An Economic Theory of Democracy, the first political science text to explicate the logic of spatial candidate competition, that

As a seminal work, An Economic Theory of Democracy suffers from the triple dangers of (1) being forever cited but rarely read, with its ideas so simplified as to be almost unrecognizable, (2) being regarded as outmoded or irrelevant, (3) having its central ideas so elaborated by ostensible refinements that what was good and sensible about the original gets lost amidst the subsequent encrustations (Grofman 1993: 3).

In this essay, I certainly do not dispute Grofman’s claims. Grofman’s words are contained in the introduction to an edited volume designed to reread Downs with an eye towards correcting wayward interpretations of his theory. In this essay, however, I seek to assess the very effects of the “calamities” of which Grofman speaks upon the study of political parties. Furthermore, I seek to clarify means by which lack of empirical support for Downs’s candidate convergence prediction can be used not to dismiss his claims but to second them.

In pursuing this exercise, it is necessary to treat the median voter theorem not as a mathematical proof but as a theory – as a theory which, despite the mathematical rigor that has been applied to explication of its various facets, should be considered on level ground with its predecessors. The median voter model should be read as a response to the “responsible parties” theory propounded by the 1950 American Political Science Association report and other normative theories of political party behavior dating back into the early years of the twentieth century. Downs’s work effectively put an end to such normative theorizing about what political parties should do; if it could be demonstrated that political parties would never take political scientists’ advice seriously, what was the point in offering advice at all?

Few have considered, however, means by which this debate might be re-addressed by the very tenets of the Downsian model. Downs and many of his successors have argued that disputation of the empirical predictions of his theory does not undermine the theory itself. They have claimed that to find that any of the theory’s predictions are not borne out brings into question the empirical support for one or more of the theory’s assumptions, but such a finding has no effect upon the internal validity of the theorem itself (Downs 1959). This seems a fair claim, but adherence to this claim has not stopped formal theorists from tinkering with various components of the model in order to prescribe variants or close relatives of the model which have greater empirical support than does the “pure” median voter model itself.

This type of activity, however, runs the risk of making the median voter model unfalsifiable. If we limit its application only to events in which it occurs, we have effectively established a theory with no empirical import at all. As Martin Diamond points out in his early review of Downs, a weakened median voter hypothesis is no model at all:

The revised “fundamental hypothesis” would have to read: Some politicians formulate policy only for the rewards of office and some do not, and which behavior is decisive is a matter for study each time, all of which would leave political science in the difficult but fascinating position it was in before economic models were offered in succor (Diamond 1959: 210).

Diamond’s claim might be read in two ways. The quantitative political scientist may read it as a statement that “the outliers are what is of most interest,” that Diamond’s claim is that if we cannot explain nonconvergence in a systematic way, the outliers – the candidates who do not adopt “rational” positions – will be the candidates who are of the most interest and have the most effect upon politics. A student of 1950s political and sociological theory – a student of Leo Strauss, for instance – might read Diamond’s claim as a broader statement that the scientific study of politics cannot explain political change or innovation. It is a claim that “rational” political behavior is uninteresting, and political “action” cannot be subsumed under theories of rationality (See Arendt 1958: 41-42).

Diamond’s argument also poses a tremendous obstacle to those who would seek to adapt Downs for the sake of empirical inquiry. We cannot merely say that some candidates behave in accordance with Downs’s precepts and some do not, nor can we say that Downs’s theory holds when the tenets of his theory can be shown to exist and it does not when such tenets do not hold. Instead, a theory of candidate convergence must demonstrate that there is a systematic logic to nonconvergence as well as to convergence – that we can predict when convergence will occur and when it will not occur without resorting to ex post facto analysis.

I recognize that such a task is a formidable one, and in this paper I do not purport to have discovered such a theory. Instead, I argue that the roots for such a theory may be located in one of the least explored of Downs’s assumptions – that of simultaneity in candidate positioning. Where candidates adopt positions sequentially, the logic of candidate competition and convergence is altered, but it is altered in ways that can be systematically identified and explained, and it can be amended in ways that can lead to accurate predictions of candidate divergence.

In order to arrive at this argument, I proceed in this paper first to restate the historical context of Downs’s theory, with particular attention to debates about responsible political parties and to debates about pluralism and the definition of political power. Second, I briefly note the fundamental assumptions of the median voter theory, the level of empirical support for these assumptions, and the refinements or revisions to empirical findings which formal modelers have undertaken in order to adapt economic modeling to better testing. Third, I discuss the lack of attention which has been paid to the simultaneity assumption and ways in which discarding or limiting this assumption re-opens many of the theoretical and normative debates which Downs’s theory closed. I do not seek to provide a formal theory myself because I believe that the results of a sequentiality assumption should and can be stated, at least for the purposes of this essay, without the “encrustations” of which Grofman speaks.

The Historical Context: Closing a Debate

        The study of political parties is at least as old as the discipline of political science in America. In the late nineteenth and early twentieth century, Woodrow Wilson, A. Lawrence Lowell, Henry Jones Ford, and others debated how best to conceive of political parties’ function and membership. Ford (1914: 295-296) argued that parties were somewhat democratic organizations, oligarchically controlled but with the tacit support of the voters. In this period, only the Russian political scientist Moisei Ostrogorski (1902) confined party membership to those actually employed by the party. Ostrogorski’s work appears to have been relegated largely to the fringes of this debate at the time, although it was rediscovered in the 1950s and is now frequently cited.

This discussion of parties was, as was much of contemporaneous political science, highly normative. It revolved around the question of how political parties should behave, and it was taken – especially in the case of Wilson – as prescription for how parties should behave and who should control them. It raised, however, a somewhat more empirical question which has persisted – is democracy best served when parties strive to appear identical, or is the practice of democracy restricted by party similarities, insofar as voters are given no real choice between platforms?

By the 1950s, several leading political scientists had concluded that Ostrogorski was correct – that voters and parties were best conceived of as two distinct entities. V. O. Key (1958: 378-380) conceived of parties in three parts – voters who supported and identified with the party, the party organization, and those members of the party who held governmental office. E. E. Schattschneider (1942: 35-64) argued that democracy existed between parties, but not within parties; party “membership” was a facade. Parties nonetheless had a duty to “frame political questions” for consumption, and were thus driven by forces of the political “market” to create a product that reflects public opinion, even without the direct input of the public in framing the issues.

Oddly, Schattschneider’s introduction of the market metaphor did not stop him from chairing the American Political Science Association working group which produced Toward a More Responsible Two-Party System, one of the few direct political statements published under the imprimatur of the American Political Science Association. This report, published in 1950, called for the parties to present coherent, yet divergent, packages of policy proposals to the public. The public could then make an informed choice about the direction in which it wished American public policy to go. Furthermore, it called upon parties to design long-range plans that would “cope with the great problems of modern government.” In a 1992 retrospective on Schattschneider’s work, John Kenneth White cites several leading political scientists of later decades who attested to the report’s status as the most significant work in the area of political parties of its time. The report also played a role in reviving interest in earlier debates on political parties. Austin Ranney’s summary of the views of early twentieth century theorists of political parties appeared soon afterwards (Ranney 1954).

To a large extent, Downs’s An Economic Theory of Democracy, published only seven years later, put an end to this normative debate. If the APSA report was formulated in response to a perceived crisis in party government, Downs’s work seems to have arisen from no such concern. Downs seems blissfully unaware of, or uninterested in, the “responsible parties” debate. His bibliography does include Key, but he makes no reference to Schattschneider, the APSA report, or any of the report’s antecedents. If we are to trust his recollection of the development of his project (Downs 1993), An Economic Theory of Democracy was written very rapidly, and it was inspired more by his own personal political experiences and his encounters as an economics graduate student with Schumpeter’s analysis of party competition than it was by current trends in political science.

Downs’s work exposes, however, the inconsistency of pairing a market theory of political parties with normative calls for the parties to espouse contrasting viewpoints and to design long-range plans for government. Employing Hotelling’s theory of economic competition, Downs demonstrated that a rational political party would, in two-party competition, seek out an ideological position in the middle of the electorate’s preference distribution. The two parties would then, under something approximating full information conditions, mimic each other, thus encouraging voters to make decisions not about policy, but about non-issue traits. The parties would, among other things, be ambiguous about their positions on controversial issues or avoid addressing such issues entirely; incorporate seemingly incompatible positions into their platforms; and seek to avoid long-run solutions to problems in order to maximize their present electoral fortunes. In such a scenario, there is complete separation of the voter and the party. The party operates as the producer of policy, and insofar as the two-party system functions in an oligarchical manner, the voter, or consumer, would have to take what was offered by the parties. Normative arguments such as those contained in the APSA report were rendered somewhat moot by this line of reasoning; the fault, if there was one, lay with the median voter himself, and no amount of exhortation by an elite cadre of political scientists would sway the parties from their vote-maximizing strategies.

The Downsian disputation of the APSA report’s tenets need not be stopped here, however. Riker, in recounting the differences between Downs and the APSA report, notes that “political science and political events have passed the adherents of ‘responsible parties’ by.” (Riker 1982: 63) Not only was the report wrong on empirical and logical grounds, however; it was wrong on normative or moral grounds:

Its implicit purpose was to sharpen the partisan division as it then existed and thus to ensure that the winners kept on winning. As the status quo was then in favor of the Democrats, the report should be regarded as a plan for a political system in which Democrats would always win and Republicans always lose. . . Although some people saw that the report was bad description, almost no one saw that it was profoundly immoral – a sad commentary on the state of the profession (Riker 1997: 191-192).

These are, perhaps, words only a political scientist could write; the call for political parties to differentiate themselves has largely disappeared from political science, but it is still common on newspaper editorial pages. A brief perusal I undertook shows editorialists as diverse as George Will, Barbara Ehrenreich, and E. J. Dionne lamenting the lack of difference between party platforms.

Responsible party theorists are conspicuously absent from the response which greeted Downs’s work. The most glowing review of An Economic Theory of Democracy was penned by Charles Lindblom, who had also been instrumental in securing a publisher for the book. Lindblom writes that

While economists have made the most of a seriously defective system, political scientists have permitted a kind of perfectionism to inhibit serious, explicit system-building. In talking with political scientists, I am often struck by their dissatisfaction with theoretical proposals that do not promise a rough fit to the phenomena to be explained, while economists have happily elaborated, to take an example, a theory of the firm that is still a caricature of the phenomena described (Lindblom 1958: 241).

While Lindblom hailed Downs for bringing into political science a model that was largely free of concern for empirical support, most reviews predictably dwelt upon the model’s fit with empirical data. Almond (1993) summarizes several of these reviews; with the exception of the above-quoted Diamond review, most voiced rather qualified support for Downs but expressed doubt that his theory would find much support in political phenomena. In a debate with W. Hayward Rogers, Downs responds to several questions Rogers raises about empirically testing his predictions by noting that lack of empirical support does not invalidate his model as a deductive proposal; instead, it indicates that one or more of the assumptions is not borne out in the population upon which the test is being conducted (Downs 1959; Rogers 1959). Johnson (19xx) reiterates this claim, disputing the notion that lack of empirical support dooms the model. After all, few of the tenets of responsible parties theory are even conducive to empirical tests.

The fact that Downs’s theory purports to be positive rather than normative did at least shift the debate over political parties to his own turf. As Rabinowitz and MacDonald (1989) note, the most evident example of this is the introduction of scaling questions about political candidates on the National Election Survey.

Downs’s work bears an uneasy relationship, however, to one dominant strain of contemporaneous political science. He adopts numerous tenets of pluralism. Most notably, he directly cites two statements of Dahl and Lindblom regarding both descriptive and normative issues. In setting out definitions early in the book, he explicitly borrows Dahl and Lindblom’s definition of “governments” as

organizations that have a sufficient monopoly of control to enforce an orderly settlement of disputes with other organizations in the area. . . Whoever controls government usually has the “last word” on a question (Downs 1957: 22, citing Dahl and Lindblom 1953: 42).

Later, Downs notes that democratic control over government, a normative precept, can be tested in his model. He approvingly cites Dahl and Lindblom’s further definition of “political equality” as a circumstance in which

Control over governmental decisions is shared so that the preferences of no one citizen are weighted more heavily than the preferences of any other one citizen (Downs 1957: 32, citing Dahl and Lindblom 1953: 41).

At the time Downs was writing, however, the task of pluralists, to identify and define political power, was also being brought into question. In the economic model, the relationship between the parties is relatively simple – one party has power, the other wants it. Bachrach and Baratz (1962) propose a somewhat more complicated version of power. In a representative government, the exertion of power is manifested in the establishment of an agenda. In the pluralist approach, all popular grievances are recognized and acted upon, and all may thus participate to some degree in decision-making. According to Bachrach and Baratz, and as conceptualized later by Gaventa (1980), power may be exercised by the exclusion of some ideas from the political agenda entirely, and also by “influencing, shaping, or determining [one’s] very wants.” (Gaventa 1980: 12) By extension, the convergence of policy options presented to the voters has profound normative implications, insofar as the very preferences of voters are shaped by it. If this holds true, party convergence may not even be a result of parties catering to voters, but of a tacit collusion by parties in policies which will be offered to them.

Power theorists such as Bachrach and Baratz did not take on the normative implications of the median voter theorem directly. In taking issue with the pluralist definition of power, however, they were implicitly taking issue with the ability to draw any sort of normative inferences about the comparative normative status of party convergence or divergence. They were also, however, creating a significant measurement problem for pluralist theory. Baumgartner and Leech (1998: 60) note that in the wake of this debate,

the concept of power was not banished from political science, but scholars for the most part reacted by abandoning their interest in those questions. . . Scholars moved on to other fields that did not have at their core such a difficult concept.

Perhaps because the median voter theorem has so infrequently been the subject of normative debate, or because its conception of power is rarely considered by those who explore the ramifications of the model, this particular aspect of the model and the questions it raises have rarely been considered.

These three strains of political science, then – the developing field of formal models, responsible party theories, and pluralism – and the conflict between them created a context for Downs of which Downs himself may have been unaware. To a significant extent, debate about the median voter theorem has been about empirical accuracy; the other debates that preceded Downs have largely been left behind by political science. Those who have sought to develop Downs’s ideas further, or to present alterations of his model, may have sought to defend themselves against charges of being uninterested in empirical accuracy, but the major refinements of Downs have all taken as their starting point propositions which have greater empirical support than do those of Downs. Because of these efforts, however, it can be shown that altering any of Downs’s assumptions brings his entire model into question. And in doing so, many of the debates which his work appears to have closed off may be re-opened. In the next section, I examine the empirical roots of work that has tinkered with his model, and I illustrate ways in which these adjustments collectively work to re-open questions of party responsibility and of the exercise of power.

The Median Voter Model and its Refinements

        As articulated by Downs (1957: 114-141), the median voter model is a model of party, not candidate, competition. Party convergence is predicated upon seven claims about party and voter behavior:

1) A political party is a “team of men seeking to control the governing apparatus by gaining office in a duly constituted election.” (Downs 1957: 25) Each member within the party thus shares the same goals, and each member takes policy positions as a means towards gaining office.

2) Voters judge parties based upon the proximity of the parties on policy issues to the voters’ own preferred position. Voter preferences can be reduced to a unidimensional policy space. They are single-peaked and monotonically declining from the voter’s ideal point. Voters prefer the party closest to them, the party that maximizes their utility (or minimizes their disutility) in this function. Voter preferences are exogenous to the actions of parties.

3) All potential voters vote; there are no abstentions.

4) Parties are free to position themselves at any point along the preference distribution.(1)

5) Parties have full information regarding the distribution of voter preferences.

6) Parties choose positions simultaneously. One party cannot know ex ante where the other party will position itself, although following Assumption One, each party should presume the other to take positions rationally.

7) Party utilities are defined by the number of votes they receive; parties are vote maximizers.

Given these seven assumptions, the result in a two-party election will be convergence at the median of the distribution of voter preferences.
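The convergence result can be illustrated with a toy simulation: under proximity voting with single-peaked preferences on one dimension, the platform located at the median voter's ideal point wins or ties against any rival platform. The electorate below is randomly generated purely for illustration.

```python
# Toy check of the median voter result: with proximity voting on a
# one-dimensional issue space, a platform at the median voter's ideal
# point is never beaten in a two-party contest. Electorate is invented.
import random
import statistics

random.seed(1)
voters = [random.gauss(0.0, 1.0) for _ in range(1001)]  # voter ideal points
median = statistics.median(voters)

def votes_for(a, b):
    """Votes for platform a against platform b when every voter
    supports the nearer platform (ties split evenly)."""
    wins = sum(1 for v in voters if abs(v - a) < abs(v - b))
    ties = sum(1 for v in voters if abs(v - a) == abs(v - b))
    return wins + ties / 2

# The median platform wins or ties against every rival position we try.
rivals = [-2.0, -0.5, 0.1, 0.5, 2.0]
assert all(votes_for(median, r) >= len(voters) / 2 for r in rivals)
print("median platform wins or ties against all rivals")
```

The intuition matches the text: against any rival to the right of the median, the median platform captures the median voter and everyone to her left, which is already a majority, and symmetrically for rivals to the left.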

Throughout both these assumptions and those refinements or alterations that follow, only three basic variables are in play: information about voter preferences or other candidates’ strategies; expected or potential outcomes of a given pairing of party positions; and the location of candidate issue positions themselves. These definitions themselves have been relatively uncontroversial in work that has followed Downs, but the assumptions outlined above have been disputed and altered. Empirical questions about each of the above assumptions have preceded theoretical work on the effects of each alternate assumption.
Assumption One: The Composition and Function of Parties

Assumption One was among the first tenets of the median voter model to be questioned. Most studies have shown that, at least in the American case, parties are not unified teams (see, for instance, Mayhew 1986). In addition, geographical representation and the heterogeneity of the American electorate would give the lie to the notion that a unified party platform would be in the interest of vote-maximizing politicians. It thus seems inconsistent for Downs to describe parties as unified “teams” yet also to posit that their members are election-oriented.

At first glance, this might seem to be merely a small terminological problem. If we substitute candidate competition for party competition and if we then use the median voter model to study only individual elections, we can proceed through the remainder of the model. Downs himself notes that the presumption of a unitary actor is necessary to avoid messy discussions of intra-party conflict; that is, he does not deny that intra-party dissension over policy exists, but it is not a concern of his model. Spatial models that have followed Downs’s assumptions rather faithfully have either referred solely to candidates rather than parties (see Shepsle 1972) or have discussed both without inconsistency of results (Page 1978).

The candidate/party distinction has not been easily finessed by others, however. As Schlesinger (1975, 1994) points out, the Downsian party is composed solely of office-holders and office-seekers. It is only one wing of Key’s (1958) tripartite division of the party in office, the party organization, and the party in the electorate. Downs’s parties emphatically do not include the electorate. This exclusion is necessary to maintain the relationship of parties as producers to voters as consumers. Voters exert a discipline upon parties by making their preferences known and choosing among two products, but they are unable to act in concert to allow themselves differentiated products.

In addition, voters are not presumed by Downs to be motivated by the same concerns as are politicians. Downs assumes that all voters vote sincerely; that is, they vote for the party whose policies they most prefer, and their benefit derives from seeing these policies enacted, not from the spoils of holding office. Voters have far less to gain from having their preferred party hold office than does the party itself.

Both prominent critics of Downs and proponents of alternate models have questioned the empirical applicability of this distinction between the preferences of voters and those of the Downsian party. Riker (1963) and Riker and Ordeshook (1968) have proposed models in which parties divide the benefits of office amongst themselves – in which the positions taken by parties are not positions of ideology, but positions regarding the optimal division of benefits amongst those within the party. Similarly, Aldrich (1995) and Aldrich and Rohde (1997) propose a “conditional party government” model in which party members collude in order to divide all benefits amongst themselves at the expense of the opposing party. Neither of these theories explicitly includes voters within the party, but they can, as Schlesinger notes, be read as attempts to include voters within the party. They are, he claims, “shareholder” models in which the voters have a stake in the party’s fortunes.

This framework, in which individual benefits – slices of a distributional pie – are the goal of voters rather than satisfaction of ideological preferences, does not necessarily yield different results than does the median voter model. An optimal strategy for parties is still to take the position which spreads benefits to a bare majority of voters. That is, if voters are arrayed unidimensionally in terms of their specific demands, the voter in the middle of this distribution holds the most leverage over both parties, and both parties will cater to this voter. Such a conception has implications for Assumptions Six and Seven, however. First, if Assumption Six is relaxed, if the parties move sequentially and if the first party does not take its position rationally, the second party would, in the Downsian conception, take a position right next to that of the first party in order to maximize votes. In the Riker and Ordeshook conception, however, the second party still would seek out the median voter; allocating benefits among a bare majority would maximize the benefits to each member. Thus, the Riker and Ordeshook model predicts a median position for the victorious party (and thus a median outcome) regardless of whether the strategy of the opposing party is known or unknown. Second, considering voters as party shareholders means that parties are not, as Assumption Seven states, vote-maximizers; instead, they seek to maximize benefits, which they do by maximizing their probability of winning.

Aldrich (1995) and Aldrich and Rohde (1997) utilize a similar allocation-of-benefits model to illustrate reasons for party divergence. Although again their model considers parties in government – more specifically, parties in the legislature – they argue that a model in which log-rolling exists will produce divergence in that it is the party median rather than the general median which governs the policy positions offered. This model relies upon relatively strong parties and a two-step process in which positions are first generated within the party through a median voter process, and then are offered to the general legislature. The voter at the legislative median still votes for that position closest to him, but he is choosing between two policies which are somewhat far from his ideal point. Such a model may also be used to explain the production of party platforms and the process by which party primaries or caucuses produce candidates. It does not, however, allow updating of strategies between stage one and stage two. Aldrich (1995: 20-21) notes that such a model must include at least some voters in the conception of party – it is the party activists, who are motivated by policy benefits rather than by pure office-seeking, who will be most active in developing the positions between which the median voter must choose.

Both of these models rely in part upon analyzing the intra-party conflict which Downs so studiously sought to avoid as a prelude to investigating the positions offered to voters. While the Riker and Ordeshook model makes sharp breaks with Downs in that it does not require the presumption of simultaneous movement, neither model makes explicit claims about simultaneity. Both, however, can be read as models which derive from empirical criticisms of the strict market relationship of parties and voters specified by Downs, and both introduce dynamics which alter Downs’s assumptions about the composition and goals of political parties.
Assumption Two: Proximity Voting, Unidimensionality, and Single-Peakedness

Another early line of empirical criticism of Downs was raised by adherents of the Michigan school of voting behavior study. In one of the most trenchant critiques of Downs, Stokes (1963) took issue with the assumption of proximity voting. In The American Voter, Campbell, Converse, Miller, and Stokes (1960) had found that voters had relatively ill-defined policy preferences; that they had scant information about candidates’ policy positions; that they frequently voted for candidates based upon party identification, personal attributes of the candidates, and other heuristics that were not necessarily related to ideological proximity; and that they rarely considered policy alternatives in a unidimensional liberal-conservative framework. Although these findings have been debated by public opinion scholars, they raise questions about whether single-peaked preferences, unidimensionality, and proximity voting are realistic assumptions for a model of voting behavior.

Of these three empirical issues, the argument against proximity voting is by far the most significant for reconsidering the model. Single-peakedness is, as Hinich and Munger (1996: 35) note, a necessary condition for proposing unidimensional equilibrium. One could certainly propose “all or nothing” situations in which preferences are not single-peaked. A voter might, for instance, prefer to allocate a large amount of resources to solve a particular policy problem, but this voter’s second most-preferred position might be to allocate no resources at all to this problem rather than to allocate an amount which is not large enough to solve the problem. Such situations may well exist, but if policy positions are to be averaged by the voter and placed upon a single liberal-conservative dimension, it seems far-fetched to propose that single-peakedness does not occur.
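The “all or nothing” example can be made concrete with a small check (a hypothetical illustration, not drawn from the literature cited): over an ordered set of alternatives, a single-peaked utility profile has exactly one local maximum, while a voter whose second choice is zero spending has two.

```python
# Illustrative sketch: testing single-peakedness by counting local maxima
# of a voter's utility profile over ordered alternatives.

def count_peaks(utilities):
    """Count local maxima in a utility profile over ordered alternatives."""
    peaks = 0
    n = len(utilities)
    for i, u in enumerate(utilities):
        left_ok = i == 0 or utilities[i - 1] < u
        right_ok = i == n - 1 or utilities[i + 1] < u
        if left_ok and right_ok:
            peaks += 1
    return peaks

# Spending levels: 0%, 25%, 50%, 75%, 100% of what the problem requires.
single_peaked = [1, 2, 3, 4, 5]    # more spending is always better: one peak
all_or_nothing = [4, 1, 2, 3, 5]   # zero spending is second-best: two peaks

assert count_peaks(single_peaked) == 1
assert count_peaks(all_or_nothing) == 2
```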

The specific claim above is only relevant if the policy space is unidimensional. Again, this is an empirical issue which has little import for the internal coherence of the unidimensional model. Much of the work in spatial modeling since Downs has been devoted to the quest for equilibrium in multi-dimensional models. Enelow and Hinich (1984) have published the most comprehensive investigation of multidimensional models. Where there are two or more dimensions, convergence does not occur, as one position can always be defeated by another (McKelvey and Ordeshook 1976). This cycling problem would, if the other conditions of the Downsian model held, ensure that incumbents are always defeated. It has brought about numerous studies of the process of agenda-setting, especially in small groups such as legislative committees. At heart, however, the dimensionality of the policy space is an empirical issue. As Iverson (1994) and Klingemann, Hofferbert, and Budge (1994) argue in comparative studies of politics in several countries, the actual number of policy dimensions in mass elections appears to be quite small. There may be more than one dimension, but there are rarely more than two.
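The cycling problem can be illustrated with a hand-constructed example (the specific voter and platform coordinates are hypothetical, chosen only to exhibit the phenomenon): three voters with Euclidean preferences in a two-dimensional policy space, and three platforms that defeat one another in a majority-rule cycle, so that no platform is unbeatable.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Three voters' ideal points in a two-dimensional policy space.
voters = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def majority_prefers(p, q):
    """True if a strict majority of voters is closer to platform p than to q."""
    return sum(dist(v, p) < dist(v, q) for v in voters) > len(voters) / 2

# Three platforms constructed so that each is beaten by another.
x, y, z = (0.3, 0.0), (0.4, 0.6), (0.1, 0.2)

assert majority_prefers(x, y)  # x beats y (voters 1 and 2)
assert majority_prefers(y, z)  # y beats z (voters 2 and 3)
assert majority_prefers(z, x)  # z beats x (voters 1 and 3): a cycle
```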

Ferejohn (1993) argues that there is compelling theoretical reason for unidimensionality in mass elections as well. Positing a multidimensional space seems inconsistent with Downs’s work on voters’ information costs. Voters may be psychologically unable or unwilling to process multidimensional information, and they may prefer to seek to place candidates’ positions into a unidimensional space even if candidates do not seek to frame their positions in such a manner. Because of their own limited resources, candidates must economize on the transmission of information to voters, and will thus seek to transmit unidimensional information. Ferejohn notes, however, that this is a somewhat ad hoc argument. He finds more compelling the notion that unidimensionality is the only way for voters to enforce discipline upon candidates, to hold them responsible for their policy commitments. It is the only way that candidates can be accountable to voters, and as such, unidimensional ideologies may be created not by candidates but by the public as a means of framing policies. This is also not an airtight defense of the unidimensional model – it reads as a rather normative defense – but it is a compelling argument for remaining open to its viability in mass elections.

Concomitant with the debates over unidimensionality and single-peakedness is concern over the assumption of proximity voting. If there truly is a single dimension, then single-peakedness seems relevant, or at least empirically testable. If the policy space is multidimensional and an empirical study does not account for this, preferences which are truly single-peaked over each individual dimension but are taking multiple dimensions into account may appear not to be single-peaked in the unidimensional model. Questions also exist about the identification of these dimensions. The unidimensional liberalism/conservatism dimension may, for instance, be broken down into an economic liberalism/conservatism and a social liberalism/conservatism dimension; voters may prefer government regulation of economic matters yet be against government regulation on social issues (Enelow and Hinich 1984). Dimensions which are not strictly ideological may also exist; for instance, voters may evaluate candidates on a liberalism/conservatism dimension but then also evaluate them on a “leadership” or “charisma” dimension. In such a case, the second dimension ought not to exhibit anything approaching a normal distribution – voters may differ in their evaluation of a candidate’s charisma or the importance they place upon it, but it seems problematic to assume that voters would not prefer more charisma to less charisma, for instance.

The question raised by such models of how voters weight different dimensions has also been held to be of importance in unidimensional models. In a series of articles over the past decade, Rabinowitz and colleagues (Rabinowitz and MacDonald 1989; MacDonald and Rabinowitz 1993a, 1993b, 1997, 1998; Rabinowitz and Listhaug 1997; Morris and Rabinowitz 1997) have proposed a “directional theory of issue voting” which dispenses entirely with the proximity voting assumption. Instead of voting based on proximity, they argue, voters have only a diffuse “for or against” sentiment over ideological alternatives (although they make some allowance for proposals that are too extreme) and a particular degree of intensity about their preferences on these issues. Rabinowitz and MacDonald review developments in National Election Survey questions and conclude that there is not strong evidence that voters do array issues spatially. If voters only take a directional pro/con position on policy proposals, candidates have a “realm of acceptability” within which they may take issue positions. Voters may be more attracted to a candidate far from their “true” ideal point but on the same side as the voter than to a candidate who is closer to their ideal point but on the opposite side on an issue. There is little middle ground here; issues are framed in a yes or no manner, and voters will evaluate candidates’ positions based on which side they are on and weight these positions according to how intensely they feel about the particular issue. Thus, parties will converge on an issue where there is consensus but will diverge where the electorate is polarized.
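The contrast with proximity voting can be sketched numerically. The directional utility below uses the Rabinowitz and MacDonald scalar-product form, in which utility is the product of the voter's and candidate's positions measured from a neutral point; the issue scale, the specific positions, and the omission of the “realm of acceptability” penalty for extreme candidates are simplifications for illustration.

```python
NEUTRAL = 0.0  # neutral point on a -1 ("con") to +1 ("pro") issue scale

def proximity_utility(voter, candidate):
    """Standard proximity voting: closer is better."""
    return -abs(voter - candidate)

def directional_utility(voter, candidate):
    """Rabinowitz-MacDonald scalar-product form; the penalty for
    candidates outside the realm of acceptability is omitted here."""
    return (voter - NEUTRAL) * (candidate - NEUTRAL)

voter = 0.2       # a mildly "pro" voter
near_con = -0.1   # a candidate close to the voter, but on the "con" side
far_pro = 0.9     # a candidate far from the voter, but on the voter's side

# Proximity voting favors the nearby opposite-side candidate...
assert proximity_utility(voter, near_con) > proximity_utility(voter, far_pro)
# ...while directional voting favors the intense same-side candidate.
assert directional_utility(voter, far_pro) > directional_utility(voter, near_con)
```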

This model also raises empirical problems. Gilljam (1997) disputes Rabinowitz et al.’s empirical support for their argument, and Merrill and Grofman (1997) join Gilljam in arguing that the directional model mixes voters’ subjective evaluations of parties with an attempt to place parties objectively on a policy dimension. The fact that voters may make errors in evaluating candidates does not discredit the proximity voting model, nor does the introduction of a preference intensity dimension. Merrill and Grofman also argue, in a claim which may bring Assumption Six into question, that tests for directionality in voting actually measure attempts voters and candidates make to confront uncertainty or lack of information.

In sum, these debates about voters’ behavior seem compelling in evaluating voting and election outcomes, but they have limited import for studying candidate strategies if candidates do not share these models’ quarrels with unidimensionality and proximity voting. That is, if candidates believe that their ideological statements will be evaluated solely on the liberalism/conservatism dimension, they will take positions that accord with a unidimensional model whether or not voters truly do evaluate them along these lines.
Assumption Three: Abstentions

The Downsian claim that there are no abstentions may be relaxed without affecting the model if either (a) the position of abstainers can be known, or (b) abstentions are not systematic – i.e. if candidates converge at the median in a single dimension then those voters on the extreme left and right have the same probability of abstention and will cancel each other out. Research on differences between voters and nonvoters has generally supported the second of these conditions. Wolfinger and Rosenstone (1980) have found, for instance, that if all Americans voted in presidential elections the outcome would be little different than it is in practice, where a large minority of eligible voters choose not to vote. The possibility exists, however, that candidates may mobilize disenchanted voters by taking noncentrist positions, and this phenomenon may indeed occur in some elections. Mobilization of potential supporters is certainly a goal of most candidates’ campaigns for office.

Downs himself does devote attention to the effects of abstention upon electoral outcomes (Downs 1957: 260-276). It is significant, however, that the Hotelling model upon which he draws in the median voter model is generally viewed as a model of competition between producers of goods with an inelastic demand function – for instance, of grocery stores or gasoline stations. Given equivalence of product, consumers will prefer the business located closest to them, but they cannot forgo food altogether simply because the grocery store is farther from their home than they would like. Likewise, one might argue, all voters are subject to their government’s laws; they cannot opt out of citizenship if their government does not enact policies they prefer. To extend a Hotelling model with barriers to entry to an unnecessary good – ice cream, for instance – would not alter its results unless consumers on one side of town were able to punish the ice cream stand for moving far away from them by declining to purchase ice cream while consumers on the other side of town were not.

This possibility is explored by Hirschman (1970) in his description of the problems of exit and voice in politics. If some consumers exit – or if some voters abstain – from supporting a firm or a party, the firm or party may not notice if it attracts as many new customers or voters as it loses by shifting its position. If, however, we have a two-stage process in which these individuals can make threats to exit without actually doing so, they may force the firm or party to take a position closer to their ideal point. This is the exercise of voice – an attempt by customers to change the practices of a firm rather than to escape from it. This can only occur where consumers have some sort of bargaining power. To return solely to the political context, such bargaining power may entail the threat of abstention or the threat of supporting an alternate candidate en bloc. It also may involve inspiring activists and mobilizing voters to pressure the party into taking a particular position. Because, somewhat paradoxically, the individuals most likely to exercise voice are those most loyal to the party and least likely to exit without warning, their threats may well be taken seriously by the party. These threats to punish the party in the short run in order to exact benefits in the long run spell trouble for office-seekers, whose time horizon is shorter than is the time horizon of activists. Hirschman’s conception still utilizes differences in motivations for office-seekers and other party members, but it certainly includes these activists within the party in the initial stage where voice occurs.

We know from empirical research on party conventions and caucuses that the most extreme members of the American parties are those most likely to attempt to exercise voice prior to the election or the selection of candidates (see, for instance, Bartels 1988 on primary voters and Sullivan, Pressman, Page, and Lyons 1974 on convention delegates). The Hirschman model seems somewhat inapplicable to a one-shot game, but if there is a multi-stage process occurring, where voice can be exercised prior to the adoption of issue positions, his model does produce a “curvilinear disparity” (May 1973) in which members attempt to exact benefits from leaders prior to the establishment of positions, and in which divergent positions may result. As Stokes (1998) points out, the leaders themselves must come from somewhere, and they are more likely than not to come from the activist ranks within the parties and to share some of these individuals’ ideological preferences.

Because these members have a longer time horizon than do office-seekers, they may remain loyal to the party even in a losing effort. Indeed, they may prefer a losing effort to a winning effort if it enhances the long-run prospects of having their preferences satisfied. Again, where the simultaneity assumption is discarded and where voters or candidates are able to gauge their ex ante probability of victory in an election at Time A, candidates who gauge their probability of winning to be equivalent at a number of different positions may be expected to take, from among those positions, the one which maximizes their proximity to the party members who are exercising voice. This may be the case with a candidate certain of victory or a candidate certain of defeat. Election at Time A would certainly be presumed to be the most important goal for a candidate, but election (or re-election) at Time B may also carry some weight in the candidate’s calculus.
Assumption Four: Freedom of Party Movement

The threat of abstention imposes some limitations on party movement, but these are limitations of a particular type – they hamper movement toward the median because of strictly ideological preferences of party members. A somewhat different concern that has been raised by students of political mandates and political credibility is that candidates may not appear credible in the adoption of particular ideological positions. Voters may not believe that a candidate will actually pursue the policies claimed (that is, will remain at the issue positions taken prior to the election) if that candidate is elected. This may preclude a candidate from taking a median position.

This may occur in two ways, both of which are dependent upon a multi-stage game. First, an incumbent may be evaluated based upon her record. If voters vote retrospectively – that is, based upon what a candidate has done in the past and how well her past record compares with her campaign pronouncements – they may punish or fail to believe a candidate who advocates positions which differ from her past record. Comparative studies such as those of Klingemann, Hofferbert, and Budge (1994) and Przeworski and Stokes (1995) have evaluated the mechanisms by which voters may enforce accountability upon parties or candidates to ensure that once candidates are elected they actually seek to enact the policies they propose in their campaigns. An incumbent may be constrained by her past record from taking some positions.

This is not a major concern for the Downsian model; after all, even if one candidate is an incumbent, she presumably was elected in the first place because she took issue positions which satisfied the median voter. A candidate may be judged by voters to be inept or dishonest, but this ought not to alter the nature of issue competition. Another concern, however, is that if candidate emergence is itself considered a multi-stage process, a candidate may already have established a record as an advocate of a particular ideological position. Candidates may not actually be able to move towards the median; doing so may damage their credibility. This line of reasoning is frequently used to explain the failures of presidential candidates – it is said that candidates cannot shed the positions they have taken to win nomination once they proceed to the general election (Aldrich 1980). It is also used to explain the problems of office-holders who seek an office with a different constituency and thus a different preference distribution – for instance, members of the House of Representatives seeking election to the Senate. These candidates may seek to move towards the positions preferred by their prospective new constituency (see Rohde 1979), but they run the risk of losing credibility through “flip-flopping,” through taking contradictory positions at different points in time.

Finally, the movement of parties or candidates may be limited by party reputation; this seems to accord with Downs’s prohibition of “leap-frogging.” I noted above that leap-frogging poses no problems for a simultaneous movement model with two parties. If we again look at elections in a multi-stage process, however, party reputation may impose limitations upon movement. A Democrat may not, for instance, be able to take a position to the right of a Republican opponent because she would lose credibility. We might assume, for instance, a relatively liberal Republican incumbent who has established a position to the left of the electorate’s median. Were there no restraints on movement, the Democrat should win by establishing a position slightly to the right of the Republican, thereby conceding normally Democratic votes to the Republican and garnering Republican votes in return. If credibility is an issue, however, Republicans might not believe this Democrat to truly be more conservative than her opponent and might discount her issue positions.

Yet again, these criticisms suggest the problems of a simultaneous movement, one-stage issue competition model. They do no damage to the internal consistency of the median voter model, but they raise empirical questions about its ability to describe mass elections.
Assumption Five: Full Information

Perhaps the strongest assumption of the median voter model is its dual command regarding information – that voters know where candidates stand and that candidates know where voters stand. Downs devotes much of his book to arguments about why voters have little incentive to gather information about candidates. It does seem likely that voters will not be particularly well-informed, but this should have little import for the basic structure of competition unless voters are systematically uninformed – that is, voters who would prefer one candidate have little information while those who would prefer another do have information about the candidates. Low voter information might be another reason for candidates to systematically mobilize or inform particular groups, but in the absence of knowledge of the opposing candidate’s positions, it does not lead to alteration of the convergence prediction. Probabilistic voting theory, as exemplified by the work of Hinich, Ledyard, and Ordeshook (1972), Coughlin (1975), and Hinich and Munger (1995: 168) has made advances in modeling the behavior of voters given beliefs about candidate positions, but it does not affect candidate convergence unless it means that voters use non-ideological heuristics such as candidates’ personal attributes as means of reducing their uncertainty about candidate positions (Hinich 1977).

Of greater import, however, is the assumption of complete information on the part of the candidates about voter preferences. Downs’s model is deterministic – that is, it assumes that candidates know the expected outcome given any particular preference distribution. If candidates cannot know the distribution of voter preferences with certainty, however, they may take suboptimal positions based upon their subjective assessment of voter preferences. Erroneous assessments of voter preferences make a convenient scapegoat for candidates who take non-centrist positions.

For candidate divergence to occur, however, candidates must either be completely uninformed, have different amounts of information, or have different types of information. The first of these conditions would, if true, make any sort of formal theory of candidate strategies futile – it would have candidates behaving with no observable election-oriented incentive whatsoever. The second and third, however, seem quite plausible. Ferejohn and Noll (1978) present a theory of information asymmetries in which information about voter preferences is available to each candidate, but is costly. Such would be the case, for instance, for privately held, proprietary public opinion polls. In such situations, the wealthier candidate would obviously have an advantage. Candidates lacking such information might also, however, prefer to avoid policy issues and ideological appeals altogether in their campaigns, so as to entice voters to evaluate them on other grounds.

Such an explanation may again account for divergence on issues, but again, it explains such divergence as a function of errors made by the candidates. Were the candidates in possession of information, they would still follow a median voter strategy. Even if candidates prefer to steer their campaign away from ideology, they must still take some issue positions, and there is no logic to adopting these positions without respect to beliefs about the median voter and the distribution of voter preferences.

Low information might also lead to rhetorical or heresthetical(2) appeals on the part of candidates – that is, if candidates are uncertain what voters’ preferences are, they may seek to influence voters’ preferences in order to bring them more in line with their own. Appeals to social norms, for instance, might influence voters’ beliefs about what their preferences are. Riker (1990) argues that, in fact, this is the function of campaigns. In addition, Kingdon (1993), Stoker (1992), and Hardin (1995) all make an argument that voters’ or citizens’ beliefs are not strictly self-interested or outcome oriented, and as such rhetorical appeals may be effective. Certainly voters are not omniscient. However, evidence is lacking that candidates have the resources to actually persuade voters to alter their preferences. If candidates can know voters’ preferences, it certainly seems more cost effective for them to follow voters’ preferences rather than to try to change them.

In electoral competition, however, the full information requirement for candidates is not as demanding as it may seem. First, candidates do have means available for gauging public opinion. Some, such as opinion polls, are costly. Others, such as gathering knowledge of the past behavior of the electorate, are not. In addition, if candidates move sequentially, the second mover has the additional advantage of observing the first mover’s positions. The second mover may thus either copy the positions of the first candidate, or if she thinks the first candidate has made an incorrect assessment of voter preferences she can take a different position. If we do assume that the liberalism/conservatism dimension is the appropriate dimension along which voters’ preferences are distributed, taking a position on this continuum which roughly approximates the electorate’s median does not require superhuman information-gathering efforts.
Assumption Six: Simultaneity

One relatively unexplored tenet of the Downsian model is the assumption that candidates choose positions simultaneously. In one sense, the simultaneity assumption can be relaxed without altering the outcome of the model. Simultaneous positioning necessarily implies a lack of information about one’s opponent’s position; hence, there is a presumption of rationality for each candidate. This ensures that each candidate will seek a median position regardless of what the other candidate’s position is. As the above discussion of Riker and Ordeshook shows, however, simultaneity is a necessary assumption for a median voter outcome where candidates are vote maximizers. It is not a necessary assumption where candidates seek to build a minimum winning coalition. In the latter circumstance, each candidate should seek a median position even if she knows that her opponent has failed to take such a position.

This seems like a rather rare defect in the model, however; this instance only occurs in cases where one candidate is irrational or misinformed and the other candidate knows the first to be irrational or misinformed. Furthermore, the simultaneity assumption may be discarded if the campaign is seen as a repeated give-and-take. If candidates have frequent opportunities to update their strategies, to assess their opponent’s positions, and to revise their own positions, a gradual movement toward the median on the part of both candidates results.
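The gradual movement described above can be sketched as an alternating sequence of small, locally vote-improving revisions (the uniform voter distribution and the fixed step size are illustrative assumptions, not features of any cited model):

```python
# Illustrative sketch: candidates start at opposite extremes and, in
# alternation, take whichever nearby position wins the most votes against
# the opponent's current position; both drift to the median.

def vote_share(p, q):
    """Vote share for a candidate at p against an opponent at q,
    with voters uniform on [0, 1] and sincere proximity voting."""
    if p == q:
        return 0.5
    cut = (p + q) / 2.0          # the voter indifferent between p and q
    return cut if p < q else 1.0 - cut

def best_move(p, q, step):
    """Move to whichever of {p - step, p, p + step} wins the most votes."""
    return max([p - step, p, p + step], key=lambda x: vote_share(x, q))

a, b = 0.1, 0.9                  # opposite extremes
for _ in range(100):             # repeated small revisions
    a = best_move(a, b, 0.01)
    b = best_move(b, a, 0.01)

# Both candidates end up at (approximately) the median of the distribution.
assert abs(a - 0.5) < 0.011 and abs(b - 0.5) < 0.011
```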

This circumstance can only happen, however, where there is freedom of movement and where movement is not particularly costly. This presumption seems ill-suited to most campaigns. Changing positions may be costly in terms of candidate credibility, and if one candidate has a pre-existing advantage, as in the case of incumbency, positions taken over a long period of time – over a term in office, for instance – may be difficult to alter. Thus, while simultaneity may seem a rather restrictive assumption, assuming unlimited updating may also be difficult to support.

The tendency documented by Fiorina (1981) and noted by Downs (1957: 41) for voters to vote retrospectively suggests a two-stage game in which the candidate who moves first – generally the incumbent – can “capture” a particular position on the dimension. Other models have sought to account for incumbents’ advantage, but they have not done so in the explicit context of a sequential movement framework. Feld and Grofman (1991; also Grofman 1993, Merrill and Grofman 1997) have developed a theory of “incumbent hegemony” (see Stokes 1998) in which incumbents have a “benefit of the doubt” zone, a zone of invulnerability around their spatial position. Here, voters give the incumbent the benefit of the doubt if the incumbent’s position seems relatively close to their own because of the incumbent’s nonpolicy attributes. If this zone includes the electorate’s median, the incumbent cannot be defeated. They extend this model beyond the unidimensional framework to argue that where this zone exists, the two-dimensional instability described by McKelvey and Ordeshook does not occur. In this scenario, the incumbent need not be precisely at the electorate’s median, only somewhat close to it. Thus, an incumbent might also be able to maximize utility in regard to secondary, non-vote-maximizing goals.

The Feld and Grofman model assumes simultaneity, but it hints at a two- or more stage process. They demonstrate that, where this benefit of the doubt accrues to incumbents, “certain centrally located points will defeat any challenger by a substantial margin.” (Feld and Grofman 1991: 117) Should a potential challenger suspect that this will transpire, competition and candidate entry will be deterred. Thus, a sort of two-stage process transpires where an incumbent establishes a central position and a potential challenger decides whether or not to run.

Groseclose (1997) does not make direct reference to Feld and Grofman, but his model of two-candidate competition where one candidate has a personal advantage is quite reconcilable with Feld and Grofman. Groseclose notes that any personal advantage, no matter how small, causes the Downsian equilibrium to disappear. Again, candidates choose positions simultaneously, but the advantage held by one candidate is exogenous and is known. Should this transpire, candidates know that if indeed they do converge, the candidate with the personal advantage will be the unanimous winner. Groseclose assumes “non-policy triviality” – that is, that the personal advantage is not so large that there is no pair of positions where the disadvantaged candidate wins. Given this, the disadvantaged candidate will gain votes by moving away from the center if the advantaged candidate is at the center, and by moving towards the center if the advantaged candidate moves away from it. There is thus substantial allowance for candidate divergence. Groseclose closes by arguing that as the personal advantage of one candidate grows, the disadvantaged candidate adopts a more and more extreme position. This scenario is equivalent to Feld and Grofman’s benefit-of-the-doubt scenario.
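Groseclose's comparative-static claim can be checked numerically under stock assumptions (voters uniform on the unit interval, quadratic policy loss, the advantaged candidate fixed at the median with a valence bonus d); in this particular setup the disadvantaged candidate's vote-maximizing position works out analytically to 0.5 + sqrt(d), so it moves outward as the advantage grows.

```python
# Hypothetical numerical sketch of the Groseclose setup: voters uniform on
# [0, 1]; candidate A sits at the median 0.5 with a valence advantage d;
# candidate B picks the vote-maximizing position on a coarse grid.

def b_vote_share(b_pos, d, n=10001):
    """Fraction of voters preferring B (quadratic loss; A at 0.5 with bonus d)."""
    votes = 0
    for i in range(n):
        v = i / (n - 1)
        if -(v - b_pos) ** 2 > -(v - 0.5) ** 2 + d:
            votes += 1
    return votes / n

def best_position(d):
    """Grid-search B's vote-maximizing position on the right half-line."""
    grid = [0.5 + k / 100 for k in range(51)]
    return max(grid, key=lambda b: b_vote_share(b, d))

# B's optimal position diverges from the center (analytically 0.5 + sqrt(d)),
# and a larger advantage pushes B further out.
assert abs(best_position(0.01) - 0.6) < 0.02
assert abs(best_position(0.04) - 0.7) < 0.02
assert best_position(0.04) > best_position(0.01)
```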

Each of these models, as well as the incumbent hegemony model of Snyder (1994), assumes the establishment of a non-ideological advantage but simultaneous establishment of positions. Retrospective voting, however, a factor which has been acknowledged as rational behavior by spatial theorists at least as far back as Downs (1957: 41), must be considered at least in part to be retrospective evaluation of the ideological pronouncements of a party or candidate. As such, it is difficult to imagine the establishment of a personal advantage on the part of an incumbent which is completely devoid of issue positioning. The incumbent must take positions while in office and before the true extent of her “benefit of the doubt” or personal advantage is known; a vote-maximizing incumbent thus has an incentive to adopt a median position as early as possible – before competition arises.
Assumption Seven: Vote Maximization

By this point, it should be evident to the reader that rejection of one of the assumptions stated above has implications for the feasibility of entertaining the subsequent assumptions. For instance, if one disputes the Downsian definition of parties, it is difficult to assume that parties are solely vote maximizers. If parties do not have freedom of movement or if they do not take positions simultaneously, it is difficult to support the idea of parties as vote-maximizers because vote maximization in a losing cause may have scant utility to a party. If voter preferences are entirely known by parties, then the result of any election is virtually assured given a set of policy positions, and a party which cannot adopt a centrist position is a certain loser.

These problems bring into play two objections to the assumption that parties are vote maximizers: first, that parties may not have vote maximization as a primary goal, as opposed to maximizing benefits or the probability of winning election; and second, that even if parties are vote-maximizers, they may not be solely vote-maximizers, to the exclusion of any secondary goals.

A common early line of criticism against Downs is that the market analogy has limited utility in describing politics precisely because parties gain little by winning with overwhelming majorities, or by narrowing the margin in losing elections. Barry (1970) and Przeworski and Sprague (1971) point out that in market competition, a firm always benefits from greater sales or more market share, while a party does not necessarily benefit from votes beyond a narrow majority or plurality. This argument is systematized by Riker and Ordeshook, who substitute benefit maximization for vote maximization; in such a scenario, a party seeks to ensure victory, and thus might prefer to seek as many votes as possible where the preferences of voters are somewhat uncertain. In a probabilistic, simultaneous-mover model, maximizing votes and maximizing probability of winning may be coterminous (Coughlin 1975). Where a party’s probability of winning at any particular position can be known, however, that party may have a variety of positions with an equivalent probability of winning.

If simultaneity is not assumed, and where there are exogenous factors such as an incumbency advantage, this condition may occur in two different circumstances. First, an advantaged party may have a range of winning positions. Second, a disadvantaged party may have no winning position. In the first circumstance, a party with a benefit-of-the-doubt zone and full information about voter preferences can take any position within that zone. In the second, a party with knowledge that its opponent has taken a winning position has a choice of many positions, all of whose probability of winning is zero. Where parties position themselves sequentially, the party which chooses a position second may have a range of winning positions if the first mover has taken a suboptimal position, or it may be able to adopt any position without affecting its probability of winning (because it has no chance of winning) if the first mover has an advantage and has taken a position rationally.

These may seem to be relatively extreme circumstances, but they do necessitate the introduction of secondary goals for the parties in order to make any claims at all about rational position-taking. Even if the extreme nature of the above is reduced somewhat, so that the probability of winning is not one or zero but merely highly restricted, secondary concerns may still affect a party’s decision-making calculus. This raises the question of what these secondary concerns might be.

Relaxation of the first assumption to include activists or voters within the party, as well as considering the threat of voice or exit which results from relaxing the third assumption, introduces noninstrumental policy preferences for the party. That is, in addition to preferring to either maximize votes or to maximize probability of winning, candidates or parties may prefer to maximize proximity to their “true” or ex ante preferred positions. Where vote-maximization is posited, there is always a trade-off between votes and noninstrumental policy concerns; even where one candidate has a significant advantage, there are votes to be gained or lost through movement within the ideological space. Several formal theorists (Groseclose 1997; Wittman 1977, 1983a, 1983b; Chappell and Keech 1986) have sought to model the trade-off between the two, assigning weights to each concern and constructing a utility measurement which accounts for both concerns. If probability of winning is posited as the dominant concern, however, a deterministic, sequential, and full-information model throws such secondary incentives into sharp relief – there is nothing else to guide candidate or party position-taking across a range of positions where probability of winning is equivalent.

Reaching such a point involves disputing Assumptions One, Three, Six, and Seven. The only crucial dispute, however, is with Assumption Six; the other assumptions must necessarily be discarded when simultaneity is not assumed.

Secondary utility concerns have been inserted into hypothetical models, most notably in the work of Wittman and of Chappell and Keech. These concerns are not directly measurable because they are idiosyncratic characteristics of each candidate. We cannot measure the actual preferences of candidates; even if we were to ask them what they “truly” believe about policy issues, it seems unlikely that they would claim to be advocating policies which deviate from their ex ante beliefs for the sake of being elected or gaining votes. As Canon (1990: 27-30) notes, however, a candidate who truly believes he has little chance of winning has less incentive to compromise his position; the very fact that he has chosen to run indicates that he is guided by his devotion to a cause, his desire to bring greater attention to his own ex ante preferences, or his desire to induce his opponent to address these issues. He will only make himself – and his fellow partisans – unhappy by deviating from such positions. Should this candidate find himself in a position to win, however, he may reason that even should he compromise his positions he will still be no worse off in regard to these issues than his opponent. Where candidates position themselves sequentially, such a candidate seems particularly likely to emerge.

Implications of Altering the Simultaneity Assumption

        The assumptions of the median voter model are thus a set of dominos – if one is knocked down, the rest follow. To say this is not to argue that the median voter model contains internal contradictions, nor is it to say that the model should be knocked down. The basic intuition of the model, that given the assumptions enumerated above, candidates will adopt similar ideological positions, has been used to great effect in analysis of committee behavior and other smaller-scale phenomena. It has also been a useful tool for the study of many elections – but certainly not enough to pass empirical muster. It may well be that this is because one or more of the assumptions contained therein rarely hold, but a model cannot be proven or disproven if its failure requires us to test its assumptions.

The purpose of this paper has been to argue that while the introduction of positive models of political behavior ended much of the normative debate in the discipline about appropriate actions of political parties, these questions have not entirely vanished. The introduction of a sequential component into the median voter framework brings about several alterations in the other assumptions:

– Sequential movement implies that political parties must, at some times, produce candidates whose primary goal is not to win office, because attaining political office may not be feasible where the first mover holds an advantage.

– Parties in a sequential movement model may, then, share some of the preferences attributed to voters.

– In a sequential movement model, parties who choose positions second have information about the strategies of those candidates who move first. In such circumstances, holding full information about voter preferences is not entirely necessary – there is a threshold beyond which information about voter preferences serves no purpose.

– Vote maximizing strategies yield no benefits for candidates who choose positions second and are at a disadvantage; gaining votes does not alter a candidate’s probability of victory.

These alterations pose several theoretical issues for debate, issues which parallel the normative concerns which were debated prior to the introduction of the median voter model. First, can party preferences be isolated in instances where parties have multiple optimal strategies which maximize their probability of winning? In the case of candidates certain of defeat, can the positions taken be said to reflect the noninstrumental preferences of their party? If so, do these positions represent clear and divergent policy prescriptions? It may be somewhat paradoxical to look for the voice of the party in the campaigns of losing candidates, but these positions are not exclusively the province of losing candidates. Rather, we can be certain that these are positions taken for noninstrumental reasons, while similar positions taken by victorious candidates cannot securely be attributed to anything other than a desire to maximize votes. A victorious liberal candidate who represents an overwhelmingly liberal district may take the same positions as a defeated liberal candidate running in an overwhelmingly conservative district against a conservative incumbent. In the first case, the victorious candidate may be following either his or her true beliefs or merely catering to voter preferences; in the second, the defeated candidate certainly cannot be said to be seeking to gain support through such positions. The relevant question, then, is whether this defeated candidate speaks for his party, or whether his views are idiosyncratic, personal beliefs.

This scenario poses a somewhat paradoxical agenda for advocates of responsible parties. It might lead to a call for increased attention to disadvantaged candidates – to calls for campaign finance reform or public financing of campaigns, for instance. Such a call would not, according to the logic of the adjusted median voter scenario I have proposed, yield significantly different incumbents. First, divergent races occur because one candidate has no chance of victory. If that candidate’s probability of winning is increased, there is no reason not to expect that candidate to eschew his or her positions and adopt a more centrist strategy. In attempting to reward the provision of clear choices, we would have eliminated them. Second, even if this were not to happen, the candidate who is not following a strategy designed to capture the median voter would still be defeated, for the simple fact that her positions do not match those of a majority of voters. We would end up, as Riker’s above criticism of the responsible parties model notes, merely perpetuating the dominance of the party in power. In the end, we are left with the depressing conclusion that divergent party agendas exist right beneath our noses, but the more we seek to reward such strategies, the more they recede from our grasp.

Second, a focus on the similarity between the platforms of disadvantaged candidates – an emphasis, for example, on commonalities between challengers to incumbents which cuts across ideological or partisan lines – may help us to identify which issues are kept off of the agenda. Given the plethora of issues which confront the average member of a large legislative body, it seems difficult to argue that any particular issues are kept off of the agenda. To a large extent, however, one would expect a competitive challenger to be essentially reactive – to be addressing issues on which the incumbent appears to be vulnerable. Campaigns of less competitive candidates are free of this restraint. Candidates who run against popular opponents and who have no chance of victory have the luxury of being able to speak about anything they choose, to adopt any issue stance they choose. Are issues introduced in such campaigns which are not introduced by more competitive candidates? If so, what is the merit of such issues? Do they represent valid or innovative policy proposals, or are they merely idiosyncratic causes of these candidates?

Altering the assumptions of the median voter model thus does reintroduce valid normative questions, albeit in a different form than they took before its emergence. It does seem rather beside the point to argue about whether parties as a whole should present the voters with divergent yet responsible agendas. The question may be, instead, do they have “true” agendas which do diverge? And what is the import of this for policymaking, if indeed such divergence has any import at all? The place to look for responsible parties and electoral choice is not, in an age where overwhelming majorities of incumbents are re-elected, in the campaigns of incumbent office holders. It may well be found, however, when we consider the campaigns of nonincumbents.
Brian Twomey

Weekly Trade EUR/USD May 18 to 22

As a trade example demonstrating continuous trades all week in order to maximize profit pips, the EUR/USD trade as posted was a perfect trade, yet one that ran all week. A third leg will be added at the end of this post.

The first trade

Long 1.0790 and 1.0771 to target 1.0884
Lows 1.0799, Highs 1.0884
Trade Ran +85 pips

2nd Leg

Long above 1.0902 to target 1.1012. Must cross 1.1009.
Highs 1.1008
Trade Ran +106 pips.

2 trades + 191 pips.

3rd Leg

Short 1.1008 to target 1.0944. Must cross 1.0975.
Recall this option was included in the weekly trade; it remains open.
Short below 1.0902 to target 1.0836.
This leg never materialized, however the week contains a long way to Friday.
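The pip accounting behind these legs can be sketched as follows. A minimal example, assuming standard pip sizes (0.0001, or 0.01 for JPY pairs); the `pips` helper name is mine, not part of the posted trades:

```python
def pips(entry: float, exit_price: float, direction: str, jpy: bool = False) -> int:
    """Pip result of a filled trade; 1 pip = 0.0001 (0.01 for JPY pairs)."""
    pip_size = 0.01 if jpy else 0.0001
    move = (exit_price - entry) if direction == "long" else (entry - exit_price)
    return round(move / pip_size)

# The two completed legs as posted, measured from the traded extremes:
leg1 = pips(1.0799, 1.0884, "long")  # first leg, from the 1.0799 low to the 1.0884 high
leg2 = pips(1.0902, 1.1008, "long")  # second leg, long above 1.0902 to the 1.1008 high
total = leg1 + leg2                  # the +191 pips quoted for the two trades
```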

Friday Close

Friday is a special day for FX trading, as the currency price spends half its day trading toward the close price.
The Friday close price will trade below 1.0975 to an expected 1.0940’s. A list of 18 currency pairs may post as forecasts of closing prices and in turn serve as Friday day trades, and quite easy trades. Knowing the close price then offers indications to next week’s trade lineup, entries and targets.

The point of note to these trades is perfect entries and targets, then continuous trading all week. The new feature is forecasting closing prices.

Last point: no charts, no stops, no graphs, no market talk, as prices don’t respond to market talk. All a price knows and cares about is entry and target, as a trade begins at entry and ends at target.

By my observation, nobody else is doing, trading or forecasting as my posted trades do. No posted trade has taken a loss, and this across many, many trades over many weeks. The gurus held up today as the best actually fail to meet any expectations of trading and profit ability.


Brian Twomey

Weekly Trades: May 18 -22, 2020

NZDJPY This week traded 300 pips Perfect entry  and target 


EURNZD 300 pips Perfect entry and target 

GBPJPY 300 pips Perfect entry and target

EURJPY 300 pips Perfect entry and target

GBPNZD 500 pips No trade no entry


NZDCHF 200 pips Perfect entry and target

EURUSD 200 pips perfect entry and target

AUDJPY 200 pips perfect entry and target

AUDUSD 200 pips entry from break

GBPCAD 100 Pips, 30 ish pips off entry, target not yet achieved 

EUR/AUD No entry, no trade

GBPAUD way off entry

USD/CAD No entry, no trade

CAD/JPY perfect entry, target achieved, 200 pips

EURUSD Perfect entry, target achieved 1st leg, within 30 ish pips to target on 2nd leg.

NZDCHF 200 pips, perfect entry and target

NZDUSD Missed entry on 1st leg by 30 ish pips. +40 pips on 2nd leg , target not yet achieved.

EUR/AUD vs GBP/AUD over the last 4 weeks: running one week perfect, then the next week off.

2 weeks perfect vs 2 weeks off. Off for GBP/AUD means entries 200 to 300 pips away, yet perfect results in profit: 300 pips profit.

AUD/USD is partly responsible, as AUD has been slightly off for weeks. Off means 20 and 30 pips.


Overall, ranges are slowly compressing. AUD/USD for example is now down to 98 pips from a whopping 200 less than 2 weeks ago. GBP ranges are compressing. EUR/USD was already in compression mode over the last 4 weeks; rarely does EUR have a good range week. Wide rangers GBP/AUD and GBP/NZD are compressing as well.

We’re heading to severe range markets. For us, it doesn’t matter what type of market trades, as we have it all completely covered. We actually make more money in range markets, as entries and targets become just as perfect, but we’ll also have more trades per week and per currency pair. Range markets are just as easy to forecast.


Brian Twomey


FXstreet Signal Service

A typical day at the FXstreet signal service. Losses far exceed gains since inception.

16 total trades: 12 losers for minus 998 pips, 4 winners for 71 pips.



Order ID: 43412981
Open: 17:30 05-18
Price: 0.6969
Close: 07:00 05-20
Pips: +18

Order ID: 43412991
Open: 17:30 05-18
Price: 0.6338
Close: 15:00 05-19
Pips: -41

Order ID: 43276873
Open: 11:19 05-15
Price: 130.48
Close: 14:14 05-19
Pips: -183

Order ID: 43153579
Open: 16:25 05-13
Price: 110.1
Close: 12:43 05-19
Pips: -100

Order ID: 43408832
Open: 16:30 05-18
Price: 0.5857
Close: 06:30 05-19
Pips: -52

Order ID: 43276944
Open: 11:21 05-15
Price: 68.8
Close: 00:13 05-19
Pips: -149

Order ID: 43083601
Open: 20:40 05-12
Price: 76.17
Close: 00:05 05-19
Pips: -94

Order ID: 43276798
Open: 11:18 05-15
Price: 63.83
Close: 00:05 05-19
Pips: -118

Order ID: 43333816
Open: 01:30 05-18
Price: 1.8799
Close: 19:50 05-18
Pips: -106

Order ID: 43217711
Open: 14:15 05-14
Price: 0.625
Close: 16:00 05-18
Pips: -66

Order ID: 43308586
Open: 16:59 05-15
Price: 1.4279
Close: 14:45 05-18
Pips: -59

Order ID: 43301212
Open: 15:30 05-15
Price: 1.8251
Close: 15:38 05-15
Pips: 0

Order ID: 43298144
Open: 15:00 05-15
Price: 1.6868
Close: 15:18 05-15
Pips: -14

Order ID: 43288856
Open: 13:30 05-15
Price: 0.5786
Close: 15:18 05-15
Pips: -16

Order ID: 43288870
Open: 13:30 05-15
Price: 1.5262
Close: 14:15 05-15
Pips: +36

Order ID: 43288841
Open: 13:30 05-15
Price: 1.6851
Close: 14:04 05-15
Pips: +17
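As a check on the tallies above, the posted pip results can be summed directly. A minimal sketch, treating the one 0-pip trade as a non-loser; the list simply transcribes the order results in the sequence posted:

```python
# Pip results of the 16 signal-service trades listed above, in order.
results = [18, -41, -183, -100, -52, -149, -94, -118,
           -106, -66, -59, 0, -14, -16, 36, 17]

losses = sum(p for p in results if p < 0)  # total of the 12 losing trades
wins = sum(p for p in results if p > 0)    # total of the winning trades
net = losses + wins                        # overall result in pips
```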
                    Brian Twomey

Peter Wadkins Currency Options and Flow Volume

You just don’t see this type of FX research anymore, and it was common just a few short years ago. Enter and target = profit; today’s market has destroyed that, and the vast majority can’t enter, target and profit correctly.

BUZZ-Asset managers’ alpha thirst drives CAD secs flows

Nov 16 3:52pm By Peter Wadkins

CAD looks vulnerable judging by U.S. and Japanese equity movements since quarter end. Today’s Canadian international securities data highlights the impact of alpha on asset managers’ investment decisions. Canadians invested just 1.7bn in foreign equities in September vs August’s 7.2bn. It’s the August data that’s the outlier. On August 1 USD/CAD was 10% below May’s cycle peak, with the S&P up 5.3% and USD/CAD appearing to form a base above 1.2400. U.S. stocks looked cheap. The S&P’s rally peaked early, August 8. A week later so did USD/CAD. Lack of either FX alpha progress or underlying asset appreciation, prompted fund managers to start hedging U.S. assets. Volume spikes in FX and S&P futures at month end hint at sizeable flow. In September, foreign asset managers bought 18.7bn of mainly short-dated Canadian bonds, double August’s total. Offshore investors were rewarded on Sep 6 when the BoC hiked rates and USD/CAD plunged. By month end CAD/JPY rose 3.7% and EUR/CAD 1.5% more than offsetting fixed income losses. Since quarter end the Nikkei’s +9.0%, CAD/JPY is -1.7%, the S&P +2.7% and USD/CAD +2.1%, the TSX +1.9%. Portfolio rebalancing’s already underway.

RM weekly CAD flows, charts EURCAD & CADJPY: http://reut.rs/2A4ot1D


BUZZ-FX options flows hint USD could still breakout

Nov 16 5:55pm By Peter Wadkins

Yesterday’s Deutsche Bank weekly option flow report noted the USD was the only major currency that benefitted from inflows; Swiss was the most sold, mainly USD/CHF calls, however volumes dropped approximately 25%. That’s surprising because IMM CHF shorts jumped 20% – implying the options market is less convinced the topside’s going to break. Deutsche Bank’s SVACHF report (Skew Volume At-the-money) indicates greater buying of USD/CHF calls over puts at roughly 2.3% of net. The BIS puts Swiss options volume at about USD 5bn a day, so it’s not a huge number vs spot turnover. NZD puts over calls 2.2% of net, turnover USD 8bn (BIS), so USD 900mn for the week. CAD puts over calls around 1.7%, CAD options turnover USD 14bn a day (BIS), so USD 1.2bn for the week. JPY as of Tuesday night was still being net sold, but puts over calls sales dipped below 1% of net; with daily option turnover USD 74bn (BIS) that’s still USD 3.3bn for the week. The options market seems to be saying the buck could still break out.

FX option flows: http://reut.rs/2zHvhlc

BUZZ-EUR/USD options flow, gravestone question rally

Nov 15 2:56pm By Peter Wadkins

Today’s weekly Deutsche Bank FX option flow report reveals the USD was the only currency benefiting from inflows last week (1.1% vs 0.9%). Between Nov 7-14 the DXY dropped 1% while EUR/USD rallied 1.8%, so option flow implies options desks counter-traded the underlying USD move. Deutsche Bank’s SVAEUR report (Skew Volume At-the-money) indicates greater buying of EUR/USD puts over calls for a second week. According to BIS data EUR/USD options turnover is around USD 64bn per day. Deutsche reported put volumes were a net 1.0% over calls, which implies global buying of ATM EUR/USD puts over calls of around EUR 540mn. That smacks more of value buying than sentiment shift but it’s interesting nonetheless. Deutsche Bank notes a correlation between their EUR data and IMM positioning data of 43% since 2014. Last week the IMM added almost 20% to EUR/USD longs. Both Deutsche and HSBC concur options flow and IMM positioning are good indicators of spec positioning. Given the discord between the two this week, something’s amiss. The gravestone Doji on today’s EUR/USD candle is very bearish. EUR/USD bulls should be cautious.

EUR, EUR IMM positions: http://reut.rs/2zE4tm3


BUZZ-EUR flows: HF short squeeze, corp hedgers buy

Nov 14 3:58pm By Peter Wadkins

Global traders added to EUR/USD longs last week judging by Citibank’s weekly flow report. Last week Citibank saw USD net outflows of 3.8% of average weekly volume which, using the BIS 2016 triennial survey data as a benchmark, implies USD/G10 shorts grew roughly USD 180bn. EUR longs jumped 3.2% or approximately EUR60bn by the same criteria. However unlike the prior week, hedge funds were the biggest EUR buyers at 4.1% of their average weekly flow while banks and real money stood aside. Banks and real money remain long, but we estimate using Citibank’s flow data and BIS client group weightings that banks are long EUR/USD roughly E140bn, while RM, the next most active group according to the BIS are long 27-30bn. By the same criteria, corporate hedgers are even longer, having bought some E40-50bn over the past 9 weeks. HFs appear short EUR/USD, having sold some E11-13bn over the same time frame but they may have had residual longs heading into the period. Whatever the case, they’re buying EUR/USD on the short-squeeze. The long EUR/USD position overhang should slow EUR’s ascent.

BIS Triennial survey: http://bit.ly/2hp0thV

FX market turnover by counterparty: http://reut.rs/2zpBv9n



Peter Wadkins and Brian Twomey MAY 2013

“The cycle gurus” refers to big John Taylor of FX Concepts, the largest FX hedge fund in the world, now gone. The head trader was Jonathan Clark. Again, I was a perfect trader dating back 8 and 10 years ago.


The cycle gurus’ weekly missive came out on the day that EUR/USD blasted through their 1.3215 “red flag” level and we waited with bated breath to see what their response would be – “Take ‘Em” – surely not. Well surprisingly enough they were not being contrite; in fact they were almost salivating at the chance to sell higher up: “Our target for this upmove is only the 1.3350 area and if seen this should be a good place to begin selling.” Yesterday’s peak was 1.3243 and that naturally threw them off, after all it was a clean break above their red flag – but as the cycle gurus have frequently cautioned, you don’t chart the extremes to confirm a break, you work off daily closing levels, and EUR/USD closed at 1.3185.
So, from that yardstick the cycle gurus prognostications remain intact, until we close above 1.3215 there’s nothing to worry about – right? Well not quite so, given the fact we’ve been as high as 1.3243 there’s possibly something awry with the cycles – they’ve raised their red flag level … “Only a close above this level (1.3350) means the uptrend will become stretched and it will rally to the 1.3480 area before peaking, but this is less likely. By the week of May 20 and probably sooner the euro should turn lower and decline for several months.”
Now that we’ve had such a shocking reversal today, the cat’s among the pigeons, in fact their first red flag close above 1.3215 may well be correct. We tend to blame the “slings and arrows of outrageous fortune” after all their missive came out as usual ahead of the ECB but more importantly after the May Day holiday – in the midst of Japan’s “Golden Week” – in all probability 1.3243 would not have printed if not for yesterday’s liquidity starved conditions on the back of month-end squaring the day before. We noted last week that extreme volatility is typically a sign that a trend is coming to an end or a violent continuation, after today’s ECB outcome it seems to be the former.
The cycle gurus have some advice as to how to identify that this fresh downtrend is upon us … “By the week of May 20 and probably sooner the Euro should turn lower and decline for several months. A close below the support at the 1.3020 to 1.3035 area is needed to immediately turn the outlook negative. It is then headed directly lower into the middle of June. Our initial target for this downtrend would become the 1.2550 area. The longer-term cycles argue this overall weakness can persist into August and the euro can fall to as low as the 1.2100 area before bottoming. A widening of the Bund/Bonos spread is likely to be an early warning that the downtrend is resuming.”
Our view is that yesterday was exactly as we penned above – a bully boy liquidity squeeze that caught the market wrong-footed. Today’s ECB doves, some of whom wanted a 50bp rate cut, tell us monetary policy will be accommodative going forward but not so loose that growth will pick up dramatically, because the committee remains at odds with itself. German elections and Germany’s role as Europe’s paymaster dictate that there will continue to be bickering over when to loosen the purse strings through the summer, and not to expect a contrite ECB proclaiming mea culpa – we need to embrace QE. So no equity market rally there unless global markets are rallying elsewhere. Banks are talking about the “Draghi put” offsetting the “Bernanke put”, which should allow USD to rise if growth remains positive (relatively).
Our black box friend Brian who just scalped a nice long trade, booked his profit at his target 1.3111 and waited for the dust to settle. Here’s what he has to say now … “EUR/USD. Market is locked between shorter term 1.3128 — 1.3020 and longer term 1.3224 — 1.2924. Longer term targets: 1.3263 and 1.3298 from longer term averages yet overbought at 1.3203 and a good sell point…
Longer term average has forecast 1.3260’s since just before March 5 but has yet to achieve that potential. Forecasts of 1.3300 and above will not be an easy road and doesn’t yet appear in longer range forecasts neither do 1.2800’s. Trend Intensity is at the highest readings and warns of imminent decline. That indicator has risen steadily since March 5 when EUR/JPY embarked on its advance from 119.00 to 131. It appears EUR/USD rises was all EUR/JPY buying related as the trend has warned of decline since March 5. My long trade today has a target of 1.3124 from current 1.3040 lows.”
Our rudimentary moving average model was -5 units EUR/USD last time we updated it, April 25; spot was 1.3015 and we highlighted the fact that the model would turn long by 1.3038, and that is indeed what happened. By yesterday’s close it would have been long 15 units EUR/USD and at maximum vulnerability. We have seen what’s happened since, and that’s why you cannot run a rudimentary M/A system; you need some bells and whistles to book profits and keep you out of harm’s way (like extended Bollinger Bands, stretched average true ranges, oscillators, RSIs etc.)
Updating the rudimentary model this afternoon (m-to-m 1.3058), the model is now 7 units EUR/USD short; some of the longer models carry hefty losses, having only recently been triggered long. Others are surprisingly not too bad, which is why we skew the model to incorporate more shorter term components than we used to. The blended model says spot trades most comfortably 1.2890/1.3210 and gets stretched outside that area. Short term models say 1.2945/75 is oversold, ultra-short term say 1.2995/1.3045 is oversold. 1.3185/1.3300 is overbought from 24-hr M/A thru 55-days.
So from looking at Brian’s long term models (1.3263/98) ours (1.3185/1.3300) and the cycle gurus 1.3350 (only close above allows higher) we seem to be at a similar consensus as last week, 1.3200/40 is overdone, if you get another bite of the cherry, fade it. Peter.Wadkins@ThomsonReuters.com
       Brian Twomey

Brian Twomey and Peter Wadkins 2013

From 2012 to 2014, Peter Wadkins at Thomson Reuters was allowed wide latitude to write FX commentary. And it was brilliantly expressed and true FX commentary in regard to the deep depths of how FX markets operate: central bank longs and shorts, yields, imports and exports. Name it and Peter covered it in detail. Also from 2012 to 2014, Peter included me in his commentaries, which were seen by many hedge funds, central banks and corporate trading departments. Back then I was perfect, as demonstrated by Peter for many years.

As we noted in our piece yesterday, “Models long, Black Box traders say watch out below”, the market was vulnerable to a sharp reversal and it didn’t take much to eradicate the bullish tone. Just a big fig drop and the momentum models have flipped back from heavy long to neutral (ours, as we cautioned, from +11 to -1). If we had closed nearer the day’s low (1.2944) our model would have been even shorter (-7 units EUR/USD). This clearly demonstrates how fluid markets are presently and why the volatility is rampant; the confluence of all these moving averages is driving algorithmic trades in a way that exhausts human traders, and in today’s technically driven markets headline risk is chopping up momentum systems.
We contacted our friend Brian, who was comfortable holding onto his short posis from yesterday; it was more perspiration than inspiration yesterday, but faith in his systems paid off. As Brian works off multi-layered systems (similar to our blended M/A model but tweaked to perfection), he can be long and short at different levels with different targets. His long term shorts remain in place with a target of 1.2820, however profits were booked on the short stuff. He notes…
“EUR/USD overnight target hit for +74 pips, looks like a minor correction on the way to bring us back to 1.3019 then 1.3056 as my targets”… “My intention is to get back long in the low 1.29’s :- 1.2912, 1.2921, 1.2935 are perfect but today’s targets reveal we shouldn’t see lower than 1.2966. A day trade to buy at this low  will prove profitable. A break of 1.2982 should see 1.3011 and as long as we stay above 1.2957, corrective longs are safe.”
We marked to market at 1.2955 and we’re 30 pips higher so that’s already sending some individual components back long, which is why it’s best to wait for the close to make corrections to positions (unless you are an algo and it doesn’t matter, just keeps churning) M-to-M at 1.2985 the blended model would be +3 units EUR/USD, pretty flat, which is what it should be with the market in flux. The biggest change in overbought and oversold levels is naturally in the ultra-short term and short term portfolios which now get oversold from 1.2920 to 1.2805 (reflecting increase in volatility) and overbought 1.3055/1.3105 (ultra S/T) 1.3020/35 (S/T)
Once again we’ll leave Brian with the final word, as he’s been “dead reet” as they say in the old country: “My shorter term market scale 1.3063 – 1.2931″; intermediate/longer term 1.3174 – 1.2820. 1.3200’s and 1.2700’s are not in the forecasts as either side would bring us deeply into either oversold or overbought territory. Averages don’t have the impetus just yet.” That’s how we see it too; it’ll take some serious wood chopping to break this market down or up in any serious way. Thanks Brian, as the Aussies say, “Good on ya cobber”. Peter.Wadkins@ThomsonReuters.com
Brian Twomey

Weekly Trades: EUR/USD, EUR/GBP, NZD/USD

EUR/USD is driven by two factors. The first is the traditional 9-year currency cycle bottom, achieved in 2017 at the 1.0300’s from its cycle high in 2008 at the 1.6000’s.

The second factor is the falling line, currently at 1.1179. In the week of January 4, the line was located at 1.1280. In the week of June 6, 2019, the line was at 1.1394. Week to week, the line moves slowly, yet it moves steadily downward.

The competition to 1.1179 is the 5-year average, currently at 1.1278. In 2008, the 5-year average was located in the 1.3200’s. The formal break of the 1.3200’s in 2012 not only marked the midpoint of the 9-year currency cycle in both price and time but allowed EUR/USD to continue its travels to the 1.0300’s cycle bottom.

EUR/USD remains trapped between the 1.0300’s below and 1.1179 and the 5-year average at 1.1278 above, with a rough midpoint at about 1.0748.
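The range arithmetic above can be sketched quickly. A minimal illustration, using the article's levels (the helper function and rounding choice are mine, not the author's method):

```python
# Illustrative only: midpoints of the ranges described above,
# computed from the article's stated levels, not live market data.

def midpoint(low: float, high: float) -> float:
    """Midpoint of a price range, rounded to 4 decimal places."""
    return round((low + high) / 2, 4)

cycle_low = 1.0300      # 2017 cycle bottom (approximate)
falling_line = 1.1179   # current falling line
five_year_avg = 1.1278  # current 5-year average

# The article quotes a rough midpoint of about 1.0748;
# these two calculations bracket that figure.
print(midpoint(cycle_low, falling_line))
print(midpoint(cycle_low, five_year_avg))
```

Either boundary choice (falling line or 5-year average) lands the midpoint near the quoted 1.0748, which is why the author calls it "rough."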

EUR/USD currently trades above its midpoint, in the 1.0800’s.

The commonality among non-USD currency pairs is that all trade below their 5-year averages, while all USD currency pairs trade above. This includes all EM currencies. It also includes DXY, which trades overbought from 100.36 and above its 5-year average at 96.45.

DXY and USD currency pairs lack any chance to break their 5-year averages anytime soon, nor will non-USD currency pairs such as EUR/USD and GBP/USD break their 5-year averages.

Until a break of the 5-year averages is seen, the vast majority of currency pairs, USD and non-USD alike, are contained in roughly 300-pip ranges.

JPY cross pairs are contained within 200- to 300-pip ranges. Higher-range pairs such as GBP/AUD, EUR/AUD, EUR/NZD and GBP/NZD carry ranges of about 300 to 500 pips.

Wide-range EM currency pairs, to include RON, PLN, HUF and BRL, carry ranges of about 500 to 800 pips.

How 5-year averages translate to currency pairs and prices: the closer prices travel to their 5-year averages, the more the ranges expand. The farther prices trade away from their 5-year averages, the more the ranges compress. The more compression seen in ranges warns not only of higher prices but of vulnerability to violent upswings. The above describes EUR/USD and all non-USD currency pairs.
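A minimal sketch of the idea above (this is an illustration of the distance-from-average concept, not the author's model; the window length and sample numbers are assumptions):

```python
# Illustration only: measure how far price sits from a 5-year average,
# on the stated view that distance from the average and range width
# move together (closer = wider ranges, farther = compressed ranges).

def five_year_average(weekly_closes: list[float]) -> float:
    """Mean of the last 260 weekly closes (~5 years of weeks)."""
    window = weekly_closes[-260:]
    return sum(window) / len(window)

def distance_pct(price: float, average: float) -> float:
    """Signed distance of price from the average, in percent."""
    return (price - average) / average * 100

# Hypothetical numbers for illustration, using the article's levels.
avg = 1.1278    # current EUR/USD 5-year average
price = 1.0800  # EUR/USD trading in the 1.0800's

# Negative: price below its 5-year average, the non-USD pair condition
# the article associates with compressed ranges.
print(distance_pct(price, avg))
```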

The reverse is true for USD currency pairs: the lower prices travel toward their 5-year averages, the more the ranges expand.

It’s May, the traditional time for EUR/USD to begin its seasonal 6-month upswing until it tops in December and January, then falls from January to May. A perennial EUR/USD price caution is the traditional European Parliament budget passed in November.

EUR/USD seasonal effects are assessed by my friend Peter Wadkins, formerly 15 years at Thomson Reuters and a 47-year FX veteran. The Peter Wadkins Upswing Indicator works like this, and it has worked perfectly over the past 15 years for EUR/USD and its predecessor currency, USD/DEM.

Whenever Peter takes his traditional June vacation, EUR/USD travels higher and remains higher throughout the summer. To truly understand the depth of the Peter Wadkins Upswing Indicator, USD/DEM rose upon his June vacations every year in the pre-EUR/USD days.

On the interest rate front, USD interest rates failed to move all last week. Literally failed to move, an event never seen before. If USD interest rates fail to move, then every nation’s interest rates fail to move. Did we truly believe USD could ever trade negative? It’s impossible, because every nation’s interest rates would trade lower or to negative, since every nation prices interest rates from USD.

Weekly Trades

This week’s trades are EUR/USD, again traded continuously throughout the week to maximize profit pips. NZD/USD is included for no particular reason except that it contains good potential for continuous trades. EUR/GBP, a most horrible, horrible currency pair, sits at Richter-scale 0.8900’s and represents a decent trade.


EUR/USD

Long 1.0790 and 1.0771 to target 1.0884. Must cross 1.0827, 1.0845 and 1.0866.
Long above 1.0902 to target 1.1012. Must cross 1.1009.
Short below 1.0902 to target 1.0836. Must cross 1.0866 and 1.0845.
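The "must cross" lists in the trade ladders lend themselves to a simple check. An illustrative helper, not the author's tooling, using the EUR/USD long levels above:

```python
# Illustration only: given the "must cross" levels between an entry
# and its target, report which levels a price move has already cleared.

def levels_cleared(path_high: float, must_cross: list[float]) -> list[float]:
    """Levels from the must-cross list that the move's high has reached."""
    return [lvl for lvl in sorted(must_cross) if path_high >= lvl]

# EUR/USD long from the article: entries 1.0790/1.0771 targeting 1.0884.
must_cross = [1.0827, 1.0845, 1.0866]

# Hypothetical: price has reached 1.0850, so the first two levels
# are cleared and 1.0866 still stands between price and the target.
print(levels_cleared(1.0850, must_cross))
```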


EUR/GBP

Short 0.8942 and 0.8958 to target 0.8774. Must cross and watch 0.8808.


NZD/USD

Long 0.5905 and 0.5892 to target 0.6099. Must cross 0.6005, 0.6030, 0.6055 and 0.6080.
Long above 0.6130 to target 0.6223. Must cross 0.6155, 0.6180 and 0.6205.
Short 0.6223 to target 0.6160.

Cautious short 0.6099 to target 0.6020. Must cross 0.6080, 0.6055 and 0.6030.

Brian Twomey

Weekly Trade Analysis for Week May 18-22

To understand closing prices, view prices starting at 4:00 p.m. EST as prices “head for the close.” This is the same principle as the 4 phases of the hourly price candle: in the last 15 minutes, prices “head for the hourly close.”
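The "head for the close" idea can be sketched as extracting the last traded price within each hour. A minimal illustration with made-up timestamps and prices (the data and the helper are assumptions, not the author's method):

```python
# Illustration only: from timestamped prices, keep the last price
# within each hour as that hour's close. The final ticks before the
# boundary are the ones "heading for the hourly close."

from datetime import datetime

def hourly_closes(ticks: list[tuple[datetime, float]]) -> dict:
    """Map each hour (truncated timestamp) to its last traded price."""
    closes = {}
    for ts, price in sorted(ticks):
        hour = ts.replace(minute=0, second=0, microsecond=0)
        closes[hour] = price  # later ticks in the hour overwrite earlier ones
    return closes

# Hypothetical EUR/USD ticks around a 4:00 p.m. EST boundary.
ticks = [
    (datetime(2020, 5, 18, 15, 45), 1.0832),
    (datetime(2020, 5, 18, 15, 59), 1.0835),  # heads for the hourly close
    (datetime(2020, 5, 18, 16, 10), 1.0838),
]
print(hourly_closes(ticks))
```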


EUR/USD watching the close price at or around 1.0835. Good chance to see a close above.

EUR/JPY watching the close at or around 115.92. Good chance to see a close above.

EUR/NZD a great trade for next week, especially at a close in the 1.8100’s.

EUR/AUD tricky here; depends on an AUD/USD close at or below the 0.6400’s or in the vicinity.

EUR/GBP achieved the 0.8900’s and is a good short next week. EUR/GBP is the opposite pair to EUR/USD, the same principle as EUR/USD vs USD/EUR.


AUD/USD around the 0.6400’s.

AUD/JPY 68.40’s and 68.50 really nice for next week.


USD/CAD watching 1.4083.

CAD/JPY a low-76.00’s close would be good.


All GBP pairs are currently deeply oversold. And agreed, never use 2 adverbs in a row, in violation of long-standing grammatical rules.

The small concern is a failure to bounce significantly to the high 1.2200’s or low 1.2300’s and sustain itself at those levels, and further, a failure of any GBP pair to rise significantly. This situation is not normal. GBP/CHF, the bottom pair, at the low 1.1800’s is a main pair through which to view GBP. GBP/CHF has no business at the low 1.1800’s.

GBP/CAD next week will range, and range again. GBP/JPY likewise has no business at the low 130.00’s. GBP/NZD at the middle to upper 2.0300’s would be wonderful. GBP/NZD and GBP/CHF both oversold informs that GBP/USD should be okay for a significant rise next week. We’ll see how it goes.

Recall GBP/USD traveled from the 1.2900’s to the 1.1900’s, straight down every week without correction. A further drop next week to the 1.2000’s and 1.1900’s reveals we have another continuous drop on our hands. This means we may eliminate GBP for a week or two. EUR pairs are much better and offer clear price paths.

As it stands, all GBP pairs contain significant ability to rise and offer many pips higher. The trades are terrific if prices hold correctly.

NZD/USD below the 0.6000’s is really good.


Brian Twomey