Australia Money Supply and Interest Rates

   Exchange Rate vs Money Supply
  Notice that money supply, inflation/CPI and GDP forecasts are determined by an interest rate rather than an exchange rate. Inflation and/or interest rate targeting in relation to the money supply and the pricing of interbank money and capital market instruments became the norm for most central banks in the 1990s. The effect balanced an interest rate against the money supply. The two share a seesaw, inverse relationship, a methodology easier for central banks to manage because paper after paper found that a single economy shows a high correlation among its money supply, interest rates and inflation.
A proper economic forecast can be made from the balance of all three rather than a single focus on, for example, GDP output. Notice the historic CHART of AUSTRALIA'S CASH RATE, INFLATION AND GDP GROWTH as one example representative of many economies today. The Cash Rate tracks inflation within the stated parameters of the Cash Rate target, while GDP grows when inflation and the Cash Rate are low.
  Managing the money supply in relation to a spot price or exchange rate ties two economies together, so one nation attaches its economic fortunes to another. These arrangements were called pegs, fixed or crawling, and were abandoned as economic practice by the major nations in exchange for the money supply, inflation and interest rate target method, because pegs caused wild swings in spot prices and became unmanageable. Under inflation targeting, economies became separate entities with an inward focus on their own historic economic culture, and the exchange rate became a function of each nation's economic factors, primarily based on a money market interest rate.
Even if one side of a currency pair were priced on a single nation's supply of money, the forecasting, pricing and management function would be difficult and time consuming. To follow these methodologies and trade financial instruments under inflation targeting, the indicator order works as money supply, then interest rates, then inflation and GDP. A further management function is found in international reserves.
  The management function of exchange rates likewise became a separate matter for each central bank. International reserves appear on the balance sheet of every central bank, highlighted by the composition of currencies in the reserve, income, and profit or loss from foreign currency transactions and swaps, to name a few categories. From a market and trading perspective, the composition of currencies in the reserve is most important because it determines cross-border flows, the currency employed to facilitate those flows through swaps, which currency to replenish at month end, and whether an economy employs its own currency to fund itself. Sweden, for example, relies primarily on US Dollars to fund its economy, so USD/SEK deserves prominent attention from a trader's perspective in terms of Sweden's money supply, interest rate and exchange rate with the US. EUR/CHF, in terms of German and Swiss economic and cultural relationships, is profoundly important in terms of cross-border flows.
                                               Australia
 Australia lacks not only a maintenance period but a reserve requirement as well. The method employed is the target for the Cash Rate, an unsecured overnight lending rate, and interest is paid on balances at 25 basis points below the Cash Rate target (Gray 2010).
The Cash Rate is determined by an end-of-day survey of only the largest banks, 25 at last count. Survey questions ask about borrowed and lent funds, and the rate is a weighted average by value (RBA).
  The Cash Rate is the target for the overnight rate and is not only the policy rate that determines a loan rate but the basis for all other interest rates in the Australian system. It represents a floor of interest and moves inversely with the money supply.
From a market perspective, the Cash Rate forms the basis of the money market yield curve, then the capital market yield curve. What follow the Cash Rate as tradeable money market instruments are Bank Accepted Bills with durations of 30, 90 and 180 days.
  Bank Accepted Bills are bills of credit, drawn by customers and extended by banks to business. The market comprises 20 percent of loans, against 80 percent for Certificates of Deposit (Matthew Boge and Ian Wilson, "The Domestic Market for Short-Term Debt Securities", RBA Bulletin, Sept 2011).
Bank Accepted Bills then determine Overnight Index Swap rates with 1, 3 and 6 month durations. As Bank Bill Swap Rates, Aussie Dollars are borrowed by prime banks at the 10:00 a.m. Sydney, 6:00 p.m. New York Fix.
   Interest rates then move to the capital market to price 1, 3 and 6 month Treasury Notes as well as longer-term Commonwealth Government Bonds. See the CHART CASH RATE, BANK BILLS AND OVERNIGHT RATES and notice the overnight rate priced at 4.23; compare that rate to the targeted interbank Cash Rate. Further, view the CHART CREDIT TO MONEY GROWTH HISTORY.
  The current Cash Rate is 4.25 and the inflation rate is 3.50. The RBA maintains an inflation target of 2-3 percent over the medium term. The medium term is an average rather than a rate. Stability of the currency and full employment are the foundation of Monetary Policy.
  The economic situation in Australia has been fairly steady since 2006, so money supplies and interest rates have held equally consistent. For the short term, it's vitally important to watch the 90-day Bank Bill because it's the one rate that can't be controlled by future money supply predictions and therefore trades with volatility. This point was outlined by Bob Rankin in his 1992 paper "The Cash Market in Australia", a highly recommended read (Bob Rankin, "The Cash Market in Australia", 1992, RBA Research Discussion Papers).
                                               TRADE STRATEGY
  For AUD/USD, a rise in the money supply is a sell, provided interest rates move opposite. In US markets, AUD/USD comprises the overnight rate/Effective Fed Funds rate until a trading rate is established in the capital markets.
  In Europe, Eonia/Aussie overnight rates comprise EUR/AUD. Currency pairs can rearrange, for example as Effective Fed Funds/Aussie overnight for USD/AUD and Aussie overnight/Eonia for AUD/EUR.
When European markets close, EURIBOR/Aussie overnight rates comprise EUR/AUD, and Aussie overnight/EURIBOR comprises AUD/EUR.
 In Japanese trading, AUD/JPY comprises the actual Cash Rate and/or Bank Bills to Yen Tibor when Yen Tibor is fixed in Japanese markets, with the opposite arrangement for JPY/AUD.
Euroyen can factor as Euro currency to Yen Tibor for EUR/JPY.
  USD/CHF factors as Effective Fed Funds/Swiss repo, and as Effective Fed Funds/SARON when Swiss markets close.
                                                   CRAWLING PEG
  The true definition of a crawling peg is the link between two nations' money supplies. For Australia, before formal operation of its central bank, formally named the Reserve Bank of Australia, in 1960, reserves of Australian Dollars were held in Sterling accounts. Aussie Dollars moved in the markets based not only on British Pound movements but on United Kingdom interest rates until December 1983, when the Aussie Dollar achieved its free float status.
2011 European Banking Federation Newsletter
Brian Twomey

 

 Maintenance Periods and the Money Supply: Europe, United States, Japan

 Maintenance Periods and the Money Supply: Europe vs the United States
Published 2011 in the European Banking Federation Newsletter. Seems incomplete, as Australia and other sections are missing.
   Maintenance periods are an old issue, no longer discussed, yet their import should be understood in order to determine the demand and supply of money, interest rate direction and the purpose employed through market mechanisms such as Eonia and Euribor in Europe and Fed Funds in the United States. Maintenance periods define monetary policy for central banks by pricing a nation's demand and supply of reserves to an interest rate. Yet each central bank that employs a maintenance period deploys those periods in different ways.
   A maintenance period for the Eurozone is the time between Governing Council meetings, an ECB policy rate meeting in market parlance, with a purpose to price excess reserves placed by banks on account at the European Central Bank. Reserves are averaged monthly over the maintenance period term. The ECB then provides liquidity to the banking system to bankers who bid for Euros through weekly and three-month auctions at the Main Refinance Rate, also termed the Minimum Bid Rate.
   The Refi rate is the most important interest rate in Europe because it influences market interest rates such as Eonia, Eurepo (the repurchase agreement rate) and Euribor, satisfying ECB policy to bring price stability to the European system under Article 127 of the Treaty on the Functioning of the European Union.
 Price stability for the European consumer is measured by the Harmonized Index of Consumer Prices, an economic release that measures prices in relation to inflation and the Refi Rate. More importantly, the Refi rate is also termed the base rate because it establishes the interest rate at which reserves are expected to grow or contract and appears in M3 money supply forecasts. Reserves are also termed the money supply, interest rates are termed reserve rates, and reserve requirements are the terms of reserves vs deposit liabilities, sometimes called cash ratios due to the focus on balances in accounts.
   Excess reserves are paid an interest rate to manage liquidity. For the European system, the rate termed Euribor, the Euro Interbank Offered Rate, is paid on excesses for the short term. Reserve deficits are charged an interest rate termed Eonia, the Euro Overnight Index Average, a weighted average of overnight interbank lending. Eonia is the rate bankers charge each other for overnight loans and represents the floor of interest rates. Eonia is an effective rate to Euribor and trades contracts from one day to 24 months, with a Fix time of 19:00 CET, 1:00 p.m. New York, after European markets close.
  Excess reserves, once priced, are loaned throughout the banking system to multiply Euros by borrowing at the Eonia rate and lending at Euribor. Euribor is a term deposit rate, fixed at 11:00 a.m. Central European Time, 5:00 a.m. New York, with 1 week to 12 month lending terms and a T+2 two-day spot value. Spot value dates enable foreign banks in different time zones to adjust their own books within their own trading hours.
Notice that Euribor not only establishes loan rates for mortgages, consumers and major companies but is also indexed through Fixed Rate Tenders offered by the ECB since 2008. A Fixed Rate Tender simply allows the ECB to offer money, Euros, at a fixed price. This represents what is known as the interest rate corridor or channel for the European system, which establishes a floor and ceiling of daily interest rates.
The Fixed Rate Tender forces money into the interest rate channel by pricing liquidity at a desired price and allows a maintenance period to maintain its existence from policy meeting to policy meeting.
   The purpose of the 1999 introduction of these short rates is to price reserves on a daily basis and give an indication of the demand and supply of Euros to meet the three-month money supply refi target rate. Above the target means inflation and an erosion of prices; below means deflation. Both are an uncertain cost to reserves.
  Euribor rates are established through an average of bids and offers for Euros by the bankers who comprise the panel at the European Banking Federation in Brussels. Once the rate is released, bankers with excess reserves loan Euros at OIS Euribor-Libor plus 100 basis points while bankers with deficits borrow.
The 100 basis point figure derives from the size of the interest rate corridor, and corridors change with the expansion and contraction of interest rates over time. If the price of Euribor interest is not sufficient to meet a deficit nor sufficient to meet Euro supply, bankers can swap Euribor interest rates into another currency.
  Euribor then transforms in terminology to an Overnight Indexed Swap rate. If the swap were completed with OIS Euribor-Libor and OIS-Fed Funds, bankers would buy/sell or sell/buy. With interest rates presently lower in the US than in the Eurozone, bankers would sell OIS-Fed Funds and buy OIS-Euribor. The proceeds of OIS-Fed Funds are added to OIS-Euribor for one week to 3 months or longer in order to meet the insufficiency of interest in the Eurozone.
  What is funded within the demand/supply, deficit/surplus equation from a banker's perspective are deposits not only in Euros but in Swiss Francs, US Dollars and British Pounds. CHART of EURO DEPOSITS
   Euribor and Eonia are tradeable market rates that trade as futures contracts on Euronext based on a 360-day count.
The Euribor/Eonia Swap Index future is one such contract. Euribor prices are quoted in percent to three decimal places as 100 minus the rate of interest; a 0.005 move equals 12.50 Euros. Eonia trades as 100 minus the traded average effective Eonia rate; again, 0.005 equals 12.50 Euros.
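As a check on the tick arithmetic, the 12.50 Euro value of a 0.005 move follows from the contract's 360-day count once a notional is assumed; the EUR 1,000,000 three-month notional below is an assumption for illustration, not something stated in the text:

```python
# Sketch of the Euribor futures tick-value arithmetic
# (assumed EUR 1,000,000 three-month notional, 360-day count convention)
notional = 1_000_000
tick = 0.005            # half a basis point, quoted in percentage points
days = 90               # three-month contract

tick_value = notional * (tick / 100) * (days / 360)
print(tick_value)       # 12.5 Euros per 0.005 move
```

The same formula reproduces the Eonia contract's tick value, since both contracts quote 100 minus the rate on the same day count.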
Since 2008, the one-month Eonia contract and the three-month Eonia swap index have been aligned in terms of trade and settlement to central bank maintenance periods. The Eonia three-month contract trades normal International Money Market dates while the one-month contract is aligned to European maintenance periods. Eonia swap rates are a measure of future overnight rates.
  Interest rates and Euribor contract prices share an inverse relationship: a rise in the Refi rate will see a decrease in Euribor contract prices, while a decrease will see an increase in Euribor contract prices.
  A trader will find that as the money supply increases, Euribor rates decrease, while a money supply decrease will see a Euribor increase. Euribor bids and offers are based on one week to 12 month terms across 15 maturities, with a Fix time of 11:00 CET, 5:00 a.m. New York. Most important, Euribor and Eonia are unsecured loans, hence unsecured interest rates. CHART EURIBOR and EURO MONEY SUPPLY.
   Eurepo is a secured loan rate, employed as the repurchase agreement rate. The range of European interest rates is measured by the spread of unsecured rates to secured rates. CHART EUREPO VS EURIBOR
   Figures 6.8 and 6.9 are charts of the interest rate corridor for the ECB, US, Japan and the BOE. Each nation has various means to achieve the desired three-month money supply target. For the short term, the Japanese employ Call rates, the BOE remunerates and indexes reserves at the Bank Rate with Sonia rates as the daily operational guide, and the US employs the Fed Funds rate. Japan, the ECB and the US employ maintenance periods while Australia, Canada and New Zealand do not.
  Most important for the Japanese, ECB and US maintenance period time is that short rates establish the start of the day's yield curve. The interest rate release at the predetermined Fix time answers the question of long or short a currency pair and/or which pair to swap in interest rate terms. More important, the fix time establishes the pricing of financial market instruments such as government bonds in each respective market.
                        United States Maintenance Periods
  The Federal Reserve Board established in 2008 what it calls maintenance periods. These are periods during which banks must maintain required reserves, each covering 14 consecutive days.
Required reserves began as Reg D, which appeared first in the Federal Reserve Act of 1978 and then carried forward to the 1980 Monetary Control Act that imposed mandatory reserve requirements. All governments then imposed reserve laws, and this gave rise to the need for the British Bankers Association, as reserves had to be priced and balanced in line with central bank target rates.
  Paying interest on reserves was scheduled for 2011 with passage of the Financial Services Regulatory Relief Act of 2006 but was moved forward by Congress to 2008 with passage of the Emergency Economic Stabilization Act of 2008 (Federal Reserve 2010).
  Overdrafts are charged, and reserves are credited, an Effective Fed Funds rate defined as a volume-weighted average rate of trades. Required reserves are credited the average targeted Fed Funds rate minus 10 basis points while excess reserves are credited the lowest targeted Fed Funds rate minus 75 basis points (Federal Reserve 2010).
    With a Fed Funds target rate currently 0.25 and hardly a possibility of sustaining itself above that due to crisis conditions, the Fed had to ensure this rate would never trade to zero or, worse, trade negative.
Instead, the Fed had to shift reserves toward or in line with the target rate. See Exhibits 5.5 and 5.6 and notice that without required reserves, Fed Funds rates would have traded to 0. Exhibit 5.7 provides actual Fed Funds trades before, during and after maintenance periods. Notice Fed Funds target rates are 0.00 to 0.25, so trading ranges will normally fall within this band unless crisis occurs or the market prices in an interest rate hike.
    Fed Funds rates trade the same as Euribor: money supply increases in US Dollars will see a decreased Fed Funds rate, while a decrease in the money supply will see an increase in Fed Funds rates. Fed Funds is a single rate and is measured against an effective Fed Funds rate. The money supply base measured against Fed Funds is released monthly. This time is reserved for money managers to rebalance their books in relation to the money supply and an interest rate.
 In conclusion, maintenance periods are an important time between the US and Europe because lend and borrow rates, bids and offers, yield curves and the direction of financial market instruments are established for a particular day's trade, not only in the US and Europe but between the US and Europe in terms of a EUR/USD exchange rate. The most important aspect of maintenance periods is the determination of the demand and supply of Euros vs US Dollars. It's an indicator and a valued market tool that should be a regular focus for those involved in the markets.
  So a EUR/USD spot price begins a trading day based on Euribor/Fed Funds or Eonia/Effective Fed Funds.
                                  Bank of Japan Overnight Rates
  The unsecured Overnight Call Rate represents the base rate that determines Japanese Yen money supply forecasts. As the money supply increases, Call rates decline, while a money supply decrease sees an increase in the Overnight Call Rate. It's a daily weighted average of uncollateralized loans in the overnight market between Japanese bankers.
  The Complementary Deposit Facility, a 2008 invention of the BOJ and the heart of the maintenance period, is the conduit through which interest on excess reserves, not required reserves, is paid.
Euroyen is another call market rate with the same maturities, same unsecured structure but trades offshore as actual/360.
Uncollateralized Call rates establish daily loan rates and form the interest rate yield curve in the Japanese market to price Japanese Government Bonds, the Japanese Yen, repurchase agreements and other financial market instruments.
   The current interest rate in the Complementary Deposit Facility is 0.10, the top interest rate, and it has remained fixed since Nov 2008. This rate serves as the risk-free rate. The trading range/target for the Uncollateralized Call Rate is presently 0.00-0.10.
The December 1, 2011 rate traded an average of 0.082% with a maximum of 0.150% and a minimum of 0.050%.
December 2, 2011 saw Uncollateralized Call rates trade an average of 0.077%, a maximum of 0.125% and a minimum of 0.060%. Notice the reduction in the daily averages. In order for the Yen to multiply, bankers must borrow short and lend long to profit, or the economic system stagnates. It's the point that determines economic growth.
    Important is that the Overnight Call Rate is an "end of day" rate, meaning after Japanese markets close.
It prices Yen in the overnight market until a new price is established. That new price will be found when the British Bankers Association releases EUROYEN Libor at 11:00 a.m. London, 5:00 a.m. New York.
More important is that bank accounts must balance, and that is the purpose of Uncollateralized Call rates. While this may conclude the Japanese maintenance period time to factor reserves and overnight rates, Japanese interest rates continue during market trading hours to price not only the Yen but financial instruments such as the JGB and shorter-term T-Bills. It's an extension of banking activities not only to loan funds but to raise and invest funds. CHART UNCOLLATERALIZED CALL RATE HISTORY.
                                                      Overnight Call Rate Futures
  Contracts are quoted as 100 minus the rate of interest; 0.005 = ¥1,250 (300,000,000 notional × 0.005% × 30/360 = 1,250). One basis point = ¥2,500, so ¥1,250 is a half basis point and ¥625 a quarter basis point.
The contract settles as 100 minus the average uncollateralized overnight Call Rate rounded to the nearest 3rd decimal place and trades between BOJ policy rate meetings.
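The tick values above can be reproduced directly from the stated 300,000,000 notional and 30/360 day count (a minimal arithmetic check, not exchange documentation):

```python
# Overnight Call Rate futures: yen value of price moves on the
# 300,000,000 notional stated in the text, 30/360 day count
notional = 300_000_000
day_count = 30 / 360

half_bp_value = round(notional * (0.005 / 100) * day_count)   # a 0.005 move
full_bp_value = round(notional * (0.010 / 100) * day_count)   # one basis point

print(half_bp_value, full_bp_value)   # 1250 2500 yen
```

A quarter-basis-point move (0.0025) is therefore worth ¥625 by the same arithmetic.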
                                                    TIBOR, EUROYEN and TOKYO REPO RATE
    During market hours, three interest rates have profound importance: Yen Tibor, Euroyen Tibor and the Tokyo Repo Rate. This period changes the Japanese yield curve due to market trading. CHART UNCOLLATERALIZED CALL RATE VS TOKYO REPO RATE.
   Japanese Yen Tibor, the Tokyo Interbank Offered Rate, is an unsecured Call market interest rate fixed at 11:00 a.m. Tokyo time, offered for 1 week and 1-12 months against 13 maturities; it trades actual/365 and establishes loans during market hours.
The 11:00 a.m. Tokyo time release falls within market opening hours, so this published rate is called a Mid Rate.
Tibor is a loan rate, but it also represents a deposit rate. A loan/deposit rate funds not only mortgages and consumer loans but the Yen and other currencies held in Japanese and foreign banks. CHART of TIBOR RATES.
    Important is that Tibor is the day rate banks employ in a currency swap to satisfy a deficit or surplus, earn interest income and invest funds. The swap currency for the Japanese banking system is USD, a vitally important currency to the Japanese economic system. YEN FUNDING CHART.
A high or low Tibor rate determines which currency to employ as the funder: sell Yen, buy USD, or buy Yen, sell USD. Yet Tibor is released as a rate specifically to price the Yen.
   The historic problem with Tibor is that it is offered at the ask or offered rate, a single rate without a bid rate. The released rate is the day's trading rate for lending and borrowing in the interbank money market and is called a cash rate because it represents the day's risk-free deposit rate.
The Japanese Bankers Association manages Tibor through 18 Japanese banks that submit bids and offers. The highest and lowest are discarded, the remaining bids and offers are averaged, and that rate is released at 12:00 Tokyo time, shortly after the Tokyo morning session closes.
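The trimming-and-averaging step described here can be sketched in a few lines (a sketch only; the submitted rates are invented for illustration, and only the single highest and lowest quotes are dropped, per the description in the text):

```python
def tibor_fix(submissions):
    """Drop the single highest and lowest submission and average the rest,
    per the trimming procedure described in the text."""
    ordered = sorted(submissions)
    trimmed = ordered[1:-1]
    return sum(trimmed) / len(trimmed)

# hypothetical 3-month rate submissions from panel banks, in percent
quotes = [0.330, 0.331, 0.332, 0.333, 0.335, 0.340]
print(round(tibor_fix(quotes), 5))   # 0.33275
```

Trimming the extremes makes the published fix less sensitive to a single outlying submission.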
   The 18 Japanese banks represent the largest and most capitalized, yet thousands of banks of many types, such as city, agricultural, rural and cooperative banks, comprise the Japanese banking system, and all are members of the JBA.
Currently 64.9% of market share represents fund raising and 64.2% represents loans. In 2001, JBA membership comprised 141 full members, 46 associates and 72 special members, so the JBA has grown exponentially (BIS official reports). The JBA's proper Japanese name is Zenginkyo, which translates roughly as complete bank pride.
                                                               Euroyen Tibor
    Euroyen Tibor is an invention of post-World War 2 reconstruction, implemented by the US to allow offshore finance of food and supplies to enter Japan (Fukao 2006).
After a rocky 60-year interval of exchange control laws and early establishment as a bond, Euroyen Tibor was formally established in the offshore market in Dec 1986. It was internationalized in 1998 by an amendment to Japan's Foreign Exchange Law that allowed a liberalization of international financial transactions, following the 1980 amendment that allowed freedom of transactions with exceptions and was offered only to certain foreign banks (Fukao 2006).
   With the addition of the 1997 introduction of Japanese derivatives, single stock options and screen-based trading between the Osaka and Tokyo Stock Exchanges, it appeared Euroyen was completely liberalized, but transaction taxes, low interest rates and regulations saw "lethargy" in Euroyen trade (Ito and Lin 1996).
  When Nikkei 225 futures arbitrage between the Osaka and Singapore exchanges hindered Euroyen's function, due to volatility swings from separate margin requirements between the two exchanges, Euroyen would not become free until the formal introduction of the Euro.
    Euroyen Tibor is an offshore deposit interest rate that serves Japanese banks overseas, derived from an average of bids and offers from 18 Japanese banks, released at the same time as Yen Tibor by the Japanese Bankers Association as a Mid Rate, and trades as actual/360.
The actual/360 count classifies Euroyen as an offshore rate due to alignment with the European and US 360-day count convention. More important was the purposeful alignment by WW2 allies of Euroyen to US Dollars to serve as a funding and/or reserve currency. Yet it is strictly a Japanese interest rate and operates within the same parameters as Yen Tibor.
When the Yen money supply increases, Euroyen Tibor drops; it shares an inverse relationship with the money supply.
Euroyen has two profound purposes: an insight into future Yen Tibor deposit rates and into BBA Yen Libor, the London Interbank Offered Rate.
    As an insight into future Tibor, Euroyen Tibor trades as a futures and options contract not only at the Tokyo Financial Exchange but also at the Chicago Mercantile Exchange, Euronext and the Singapore Exchange, the SGX, once termed the Singapore International Monetary Exchange or SIMEX.
    The Singapore Exchange opens 7:40 a.m.-7:05 p.m. and again 8:00 p.m.-2:00 a.m. the next day, which is 7:40 p.m.-7:05 a.m. and 8:00 a.m.-2:00 p.m. New York time, the second session coinciding with the CME close.
Tokyo trades 8:45 a.m.-11:30 a.m. and 12:30 p.m.-8:00 p.m., which is 8:45 p.m.-11:30 p.m. and 12:30 a.m.-8:00 a.m. New York time. Singapore is one hour behind Tokyo, so Euroyen futures and options are already trading when Tokyo opens.
Therefore, arbitrage and hedging opportunities occur in Singapore markets because Singapore also trades Japanese Government Bonds. JGBs trade based on yield, so a long JGB position can be hedged against Euroyen Tibor.
   Euroyen Tibor is a deposit rate and at times trades as future Yen Tibor because Yen Tibor serves as a daily interbank trading rate rather than a capital market rate.
Euroyen Tibor serves as the key indicator to determine not only future Yen Tibor interest rates but capital market rates, the money market yield curve and the supply of Yen. A valid intellectual argument can easily refute these claims for many reasons.
   The Japanese Overnight Index Swap derives from internal interest rates and connects to Tibor for currency swaps; a 360-day Euroyen yield curve may not always align to Japanese 365-day financial and money market instruments; and Euroyen may not forecast Yen money supplies.
Therefore, as a key indicator, it fails. Both arguments must be analyzed in the context of Japanese companies establishing themselves offshore since 2006 for access to cheaper-priced commodities and greater profit potential.
USD/JPY was the premiere pair arrangement in this context, so Japanese banks offshore could buy US Dollars to fund offshore investments, especially when the interest rate differential favored the Japanese.
Euroyen, FX swaps, Libor and Eurodollars were employed as traditional funding mechanisms. During Japanese trading hours, JPY/USD must be traded with Yen Tibor as the focus, so the profound conundrum is that Japanese markets are closed during overseas Euroyen trade. It's a fascinating interest rate that serves a broad usage.
                                                   3 Month Euroyen Futures and 3 Month Options
  Trades as 100 minus the rate of interest; 0.005 = ¥1,250, settled to the 3rd decimal place. To calculate, from the Tokyo Financial Exchange: if Tibor is 0.12786%, the final settlement price is 99.872, i.e. 100 minus 0.128.
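The settlement rounding in the exchange's example can be reproduced in a few lines (a sketch of the stated calculation only):

```python
def euroyen_settlement(tibor_pct):
    """Final settlement price: 100 minus TIBOR, to the 3rd decimal place."""
    return round(100 - tibor_pct, 3)

print(euroyen_settlement(0.12786))   # 99.872, i.e. 100 minus 0.128
```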
  Options are quoted in Euroyen futures points of 0.005, with a strike price interval of 0.125; 0.005 = ¥1,250.
Options settle American style. The Singapore and CME Euroyen futures contracts trade based on LIBOR, the London Interbank Offered Rate, released at 11:00 a.m. London, 5:00 a.m. New York.
     Brian Twomey

FX Points, Spot Effect and Hedging

  The Great Peter Wadkins

 

just be aware of something that I learned at my own expense as chief dealer … I had a guy working for me who used to arbitrage forward points against AUD bill futures and USD Eurodollar interest rate futures. He used to hedge 100% of the face value (E.G. $1,000,000 ) with the equivalent amount of Euro futures (1 contract I think it was) the problem with futures is that they tend to out-perform or under-perform cash – always … a function of the market’s bias … so if everybody thinks US interest rates are going up and LIBOR is 1.0000% for arguments sake, the implied rate on the futures contract will be anywhere from that to 1.25% (and vice versa) you also have to calculate the impact of margin requirements in futures … this is called “tailing” at the time (1988/89 – whenever the AUD contract started trading on the IMM) we found that it was better to hedge something like 85-90% of face.
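The "tailing" Wadkins describes is commonly implemented by discounting the cash face value at the financing rate over the hedge's remaining life, since futures variation margin settles daily while the cash position's gain or loss is realized at maturity. A minimal sketch, with an invented rate and tenor chosen to land in his 85-90% range:

```python
def tailed_hedge_face(face, rate, days):
    """Discount the cash face value by the financing rate to expiry,
    so the daily-settled futures hedge matches the cash exposure."""
    return face / (1 + rate * days / 360)

# hypothetical numbers: $1,000,000 face, 12% short-term rate, ~1 year to expiry
print(round(tailed_hedge_face(1_000_000, 0.12, 360)))   # 892857, roughly 89% of face
```

The tail shrinks as expiry approaches, so in practice the hedge ratio is re-trued periodically rather than set once.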

It’s a little known fact that many traders are unaware of … another issue is “spot effect” on forward points … as spot moves up and down the future value of the forward contracts are impacted by the value of foreign currency – e.g. AUD points are worth US$100 per 1 point on spot, or presently AUD 99.78 (because AUD/USD is quoted USD per AUD). If AUD goes higher, say 1.10, then those forward points are only worth AUD 90; if AUD goes down 10% they are worth AUD 110.
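The spot effect Wadkins describes reduces to one division: a forward point on AUD/USD is worth a fixed USD amount, so its AUD value falls as spot rises. A minimal check of his numbers:

```python
def point_value_in_aud(usd_per_point, spot):
    """AUD value of a forward point whose USD value is fixed;
    it moves inversely with the AUD/USD spot rate (the 'spot effect')."""
    return usd_per_point / spot

print(round(point_value_in_aud(100, 1.0022), 2))   # 99.78 AUD near parity
print(round(point_value_in_aud(100, 1.10), 2))     # 90.91 AUD with spot at 1.10
```

On a multi-billion dollar forward book, those per-point differences are what the spot-effect hedge neutralizes.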

On small books it doesn’t matter but if you are carrying multi-billion dollar forward books those differences add up. Therefore any forward swap trader worth his salt has to hedge spot effect.

The best way to look at arbitrage is to understand that a forward swap is, as you say, simply a loan and a borrowing simultaneously … just in different currencies … you borrow 1,000,000 AUD at 5.5% for one year, it costs AUD 55,000 interest; you lend USD 1,002,200 for 1 year at 1.0% and you receive USD 10,000 plus 1% on the 2,200, $22, so $10,022. If you were to do that via interest rates you would have to convert the AUD you borrowed at 1.0022 and in a year’s time you would convert back at the spot rate applicable at the time.
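The loan-and-borrowing arithmetic above can be laid out explicitly. The final line adds the standard covered-interest-parity forward rate implied by those two interest rates (simple annual interest, no compounding; an added illustrative step, not something stated in the quote):

```python
spot = 1.0022                    # USD per AUD, from the example
aud_rate, usd_rate = 0.055, 0.010
aud_notional = 1_000_000

aud_interest = aud_notional * aud_rate       # AUD 55,000 paid on the borrowing
usd_lent = aud_notional * spot               # USD 1,002,200 lent for one year
usd_interest = usd_lent * usd_rate           # USD 10,022 received

# forward rate that locks in the exchange for the round trip
forward = spot * (1 + usd_rate) / (1 + aud_rate)
print(round(forward, 4))                     # below spot: AUD trades at a forward discount
```

The difference between spot and this forward rate is exactly the interest rate differential, which is the point of the next paragraph.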

As they did not want that risk they invented FX forward swaps so that the exchange rate was locked in at the same time, and the difference between spot rates and outright forward rates was purely and simply the interest rate differentials, or the difference in what you paid in interest in one currency versus what you would receive in another.

What complicates the issue is when you do actually arbitrage and interest rates are no longer theoretical but based upon what you can actually borrow at and what the investment rules (if any) allow you to invest in. I.E. if you can invest in muni bonds or AA corporate paper instead of “LIBOR” (or whatever is implied by the bank pricing the trade) you can make a spread … that’s when the repayment schedules become involved (monthly, quarterly, or annual interest payments) because then you have to calculate the impact of interest upon interest (net present valuing or compounding). That is why long dated forwards (one year or more) are trickier to price than one year or less …

 

Brian Twomey

Continuing Claims

 Continuing Claims is a United States weekly economic report released every Thursday morning at 8:30 Eastern Standard Time from Form r539cy that determines the number of workers who filed initial claims and who continue to receive unemployment insurance.
Data are compiled by the Employment and Training Administration (ETA), an agency within the Department of Labor, from all 50 states, Puerto Rico, the Virgin Islands and Washington, D.C. The ETA calculates rates of unemployment for seasonally adjusted workers, non-seasonally adjusted workers, newly discharged veterans, former federal civilian employees and Railroad Retirement Board employees.
The railroad industry was granted a special provision in 1938 under HR 10127, titled the Railroad Unemployment Insurance Act, that allows railroad employees to collect unemployment benefits exempt from federal tax. The ETA further calculates not only initial and continuing claims for each of these categories but also initial and continuing claims for workers whose benefits were first-time claims and for those whose claims were extended or received under emergency unemployment compensation, disaster relief, and the various Trade Readjustment Acts. Modern-day trade acts began with HR 10710, termed the Trade Act of 1974, where benefits were extended to 52 weeks; allowances have increased since this original passage.
   Trade Adjustment Allowances are income paid to persons who exhausted unemployment compensation and whose jobs were affected by foreign imports. The Trade and Globalization Adjustment Assistance Act of 2009 is just one example where benefits were extended to workers, with provisions for retraining and extension of cash benefits. Workers can now collect up to 120 weeks of cash benefits along with retraining provisions.
    Unemployment insurance for disaster relief began in 1974 with the Disaster Relief Act but has been expanded over the years to include pregnant women, sickness and hardship cases. Federal civilian employees began coverage in 1954 with HR 9707, while Korean War veterans began receiving unemployment insurance in July 1952 with HR 7656, termed the Veterans Readjustment Assistance Act of 1952. Since 1952, the collection of benefits has expanded to all service personnel with passage of HR 4717, termed the Revenue Act of 1982.
  The program called Unemployment Insurance (UI) that is measured in the Initial and Continuing Claims report today began in 1935 with passage of HR 6635, called the Social Security Act. Coverage was initially extended to national and state banks that were members of the Federal Reserve System and to “instrumentalities” not owned by state and local governments.
As time progressed and more laws were passed, more industries gained coverage, with benefit weeks that expanded and contracted. For example, 1946 saw expansion of UI benefits to maritime workers, while Korean War vets received 26 weeks of benefits yet federal civilian workers received 20 weeks.
With passage of HR 12065 in 1958, benefits began extensions of 13 weeks for those that exhausted their claim but who still were not gainfully employed. Extensions were granted to 39 weeks in the early 60’s with passage of HR 4806.
With passage of HR 14705 called the Employment Security Amendments of 1970, UI benefits were extended to employees for 39 weeks who were affected by recession. This law introduced triggers for the first time.
For example, UI was extended to 39 weeks if the unemployment rate exceeded 4.5 % for 3 consecutive months. As recession ended, benefits were scaled back to whatever the norm was during that time. Today, all industries are eligible under the UI program with a larger expansion in not only benefit weeks but extensions.
  The collection of data on employed and unemployed workers in the UI program by the federal government dates to the original passage of the Social Security Act of 1935. Then, UI information was collected on an annual basis. Yet as a formal economic release and as a means to compile data for formal distribution, the continuing and initial claims report began in 1967 as a weekly release.
The Department of Labor was responsible for collection and dissemination of data. All 50 states, the Virgin Islands, Puerto Rico and Washington, D.C. participate. These states electronically send their weekly UI claims to the ETA. This information is disseminated as initial claims and continued claims and separated by state to report trends and differences in trends.
The report released Thursday is quite detailed and reports any changes to states such as an increase or decrease in reported filings for initial or continuing claims. The ETA doesn’t have the means to report on industries that experience an increase or decrease in claims. They only report initial and continuing claims on the full report along with any changes to states.
 Further, the ETA reports on seasonal and non-seasonal trends, marked by a 4-week moving average. The term seasonally adjusted first entered the lexicon with passage of HR 12987, which established the National Commission on Employment and Unemployment Statistics.
The purpose of this commission was to measure employment and unemployment trends to find possible deficits in industries that might need help in the future. Since then, the ETA has adopted seasonally adjusted figures in its report, marked for the first time by a 4-week moving average to smooth the data. Seasonal spikes occur during holiday periods such as Christmas, Thanksgiving and Easter. So initial and continuing claims are factored for both seasonal and non-seasonal factors.
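The 4-week moving average smoothing the ETA applies can be sketched in a few lines; the weekly claims figures below are hypothetical:

```python
# Trailing 4-week moving average of weekly claims, the smoothing used to
# damp seasonal spikes (claims figures are hypothetical).

def four_week_average(weekly_claims: list) -> list:
    """Average of each week and the three before it; defined from week 4 on."""
    return [sum(weekly_claims[i - 3:i + 1]) / 4 for i in range(3, len(weekly_claims))]

claims = [452_000, 461_000, 448_000, 455_000, 470_000]
print(four_week_average(claims))  # two smoothed values for the five raw weeks
```

Note that the smoothed series only begins once four weeks of data exist, which is why a single holiday-week spike moves the average far less than the raw number.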
 Calculation of employment rates works from a covered denominator of 130,128,328. Where does this number come from?
The Bureau of Labor Statistics, in its Quarterly Census of Employment and Wages (QCEW), counts the total number of jobs in the US as 130,128,328.
The QCEW tracks employment, unemployment and wages as factors to determine how much employers must pay for UI claims.
Approximately 96% of all US employers are covered under the UI program. Between a small tax called FUTA (the Federal Unemployment Tax Act tax) paid by employers and a small tax paid by recipients, UI program expenses and benefits can be paid to the next set of recipients.
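The covered denominator above is what turns raw continuing claims into a rate. A minimal sketch, where the claims figure is hypothetical and the denominator is the QCEW count cited in the text:

```python
# Insured unemployment rate: continuing claims as a percentage of covered
# employment. The denominator is the QCEW figure from the text; the claims
# number is hypothetical.

COVERED_EMPLOYMENT = 130_128_328

def insured_unemployment_rate(continuing_claims: int) -> float:
    return continuing_claims / COVERED_EMPLOYMENT * 100

print(f"{insured_unemployment_rate(4_500_000):.2f}%")
```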
  Not covered under the UI program and not factored separately in the initial claims reports are those who collect benefits under emergency compensation claims; this number is lumped together with initial and continuing claims. Exhaustion rates are not factored by the ETA or reflected in the initial claims reports either; that is the domain of the QCEW. Initial and continuing claims is a straight number that records claimants of initial and continuing claims in the UI program.
February 2010
  Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

Market Technicians Association

 The Market Technicians Association is an organization incorporated in 1973 as a not-for-profit with the intention to propagate the study of technical analysis among present and future market professionals. During that period, however, the study of technical analysis wasn’t the all-pervasive study it is today, so its full acceptance took time as market professionals moved from pencil and graph paper to computers.
This may explain part of the reason the first Certified Market Technician (CMT) designation was not awarded by the MTA until 1989. Despite a slow beginning, the MTA today claims over 3,000 members worldwide, 1,900 in the United States, with 930 carrying the now-coveted CMT moniker beside their name and 817 of those active in the industry as working professionals. Approximately 2,070 can claim affiliate status as many await their chance to achieve the high status of Certified Market Technician.
    When the MTA confers the honor of the CMT, it certifies to the exchanges, the investment community and the public that candidates have a full and comprehensive body of technical knowledge, past and present, enabling them to conduct research, sign their name to a research report, recommend trades and investment programs across a wide variety of financial instruments and markets, and even trade their own accounts with proficiency.
With trillions of dollars flowing through the financial markets on any given day as opposed to millions from days past, the importance of the CMT designation bestows a greater consequence to those that are charged with the management of that money.
   The CMT encompasses much more than the ability to read a chart. Candidates will learn to read and fully understand point and figure, line and candlestick charts from price perspectives past, present and future. Candidates will learn the relationship between these prices and price patterns. Candidates will learn trends and what they mean, how to draw trend lines, how to determine if trends will continue or fade.
Candidates will then advance their knowledge to indicators: how they work, how they are calculated, and the meaning and purpose behind those calculations. Candidates will learn implied volatilities, put/call ratios, and inferential statistics from correlational analysis to t-tests to regression analysis. Candidates will learn volume, breadth, short selling, sentiment gauges and intermarket analysis. While these examples name only a small portion of the technical skills covered by the three exams offered to become a CMT, the exams themselves test a much wider body of technical skills and analysis. Each exam carries a level of difficulty; the historical pass rate is about 60%.
    For example, the Level 1 exam tests definitions and measures basic concepts of terminology, charting methods and ethics. Ethics is not only a recent focus due to past scandals; it has been given greater weight on each exam. Failure to pass the ethics portion of the CMT Level 3 exam will mean failure. Ethics encompasses factors of public trust, inside information and research reports, to name just a small portion of the tested material for each exam; it’s comprehensive and shouldn’t be treated lightly by candidates.
For definitions, any idea what the true meaning of a dead cat bounce is, what an increase in the VIX means, the meaning and purpose of Bollinger Bands, or how to read a line or point and figure chart? Any idea how to apply technical analysis to bonds, currencies, options and futures?
The first exam, 132 multiple choice questions of which 120 are graded, will ask these and much more within the two-hour allotted testing period. The cost is $500; a $250 program fee covers the whole program and allows a candidate five years to complete all three exams. Exams are offered in the spring and fall at many locations, not only in the US but at 300 worldwide sites.
    Level 2 is a four-hour, 160-question multiple choice exam, of which 150 questions are graded, that measures application of technical analysis, ethics, Dow Theory and intermarket analysis, to name a few categories. The cost is $450, offered spring and fall.
Any idea what the general term for rate of change is, or the difference between relative strength and the RSI, the Relative Strength Index? The theories of Charles Dow, or the phases of cycles? This and much more will be tested, as well as ethics.
   Level 3 was recently changed from an outside written research project that demonstrated a high level of technical analysis skill to strictly essay exam questions that demonstrate well-thought-out research opinions, portfolio analysis and theory, and the ability to integrate a high level of technical analysis skills.
Candidates are allotted four hours to complete this last and most rigorous exam. The pass rate is about 60% historically. The cost is $450, offered spring and fall. Passage of the ethics portion and a 70% passing score will qualify a candidate to become a CMT. The designation will not be conferred until a candidate has achieved three years of work experience, joins the MTA and maintains $300 a year in annual dues.
  To help candidates with their independent study, the MTA recommends books from masters such as Edwards and Magee, Technical Analysis of Stock Trends; Martin Pring, Technical Analysis Explained; Charles Kirkpatrick and Julie Dahlquist, Technical Analysis: The Complete Resource for Financial Market Technicians; David Aronson, Evidence-Based Technical Analysis; Perry Kaufman, New Trading Systems and Methods; and Frost and Prechter, Elliott Wave Principle.
  All three exams require independent study. The MTA recommends 100 hours of study for exam 1, 140 hours for exam 2 and 160 for exam 3. It also offers forums, mentorships and webinars to help candidates further their comprehension and study.
  Passage of exams 1 and 2 qualifies a candidate for a Series 86 exemption. The Series 86 is a Research Analyst designation that addresses research reports and a candidate’s ability to conduct research; candidates no longer need to take this exam as of 2005, thanks to the National Association of Securities Dealers’ submittal of a rule that was accepted by the SEC and recognized by the exchanges.
 Many CMTs have moved forward with rich and rewarding careers. Some invented indicators or a unique trading methodology; some became teachers, analysts, mentors, authors. Some became independent traders, while many work for the exchanges, hedge funds, firms and brokerages that cover many different markets. The opportunities are immense, the rewards great.
February 2010 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University.

 

National Futures Association

  When the National Futures Association was created in 1982 as a self-regulatory body by congressional passage of an amendment to the Commodity Exchange Act, its original purpose, established in 1983, was to register introducing brokers, commodity pool operators and commodity trading advisors.
The introduction of many new financial instruments in the late 1970s, such as T-Bill futures, Ginnie Mae certificates and options on sugar and coffee, and of stock index futures in 1982, created the need to register new brokers and salespeople in the futures industry. With the introduction of new financial instruments and the International Money Market that traded many of these instruments, the NFA assisted the CFTC as a complement in its oversight of the futures industry.
Since 1982, the mandate of the NFA has expanded to compliance and the issuance of regulations, arbitration and mediation and an online registration system for brokers with the function that allows the public to check the registration status of their broker as well as a means to check the status and outcomes of dispute resolutions.
   Earlier registration was on a voluntary basis for brokers. With time and the passage of new laws to protect market integrity and eliminate fraud, registration became mandatory. Today mandatory registration of brokers and principals encompasses even broader categories, now including five new sections: supervision, ethics training, business continuity and disaster recovery, privacy rules and promotional materials.
Principals who have a 10% or more ownership stake in a brokerage or who oversee client communications, sales processes or trading activity must register. The term broker is defined as an IB, or introducing broker; these are firms that offer self-trading accounts, particularly to the general public. Supervisors, or Associated Persons, are employees who supervise communications, sales forces or trading activities and must obtain a Series 3 license to maintain employment as well as register. Ethics training, business continuity and disaster recovery, privacy rules and promotional materials must be included in registration applications but fall more under the category of regulations; all are focused on the total accountability of the industry and the NFA’s oversight function.
   Today the NFA claims 3,816 total firms registered, with 1,446 registered as Introducing Brokers, 975 CTAs and 377 CPOs. 52,941 Associated Persons are registered, all reported from 9 different exchanges in the United States. Annual dues can range from $750 for CTAs and CPOs to $125,000 for firms with $5 million and above in annual revenues. Registration fees can range from a $200 application fee for CPOs and CTAs, along with a $750 membership fee reported on Form 7R, to $85 for IBs and APs, also reported on Form 7R. Futures Commission Merchants pay a $500 application fee with membership fees ranging from $1,500 to $5,625, reported on Form 8R.
 The CFTC Reauthorization Act of 2008 mandated that forex solicitors, account managers, CTAs and pool operators register with the NFA as Introducing Brokers to allow more accountability in foreign exchange trading, so reported membership figures will be much higher in the future.
   Laws passed by Congress must be codified into a set of uniform regulations. Regulations issued by the NFA for 2008-2009 primarily addressed foreign exchange due to the many frauds and abuses reported over those years.
For example, the CFTC Reauthorization Act increased net capital requirements for foreign exchange brokers from $10 million in 2008 to $20 million in 2009. Membership requires compliance.
For example, NFA Rule 41 states brokers must certify net capital requirements and report weekly account balances, while CTAs and CPOs must maintain a $45,000 net capital requirement at all times as well as file disclosure documents. NFA Rule 2-36, issued in April 2009, addressed provisions on false reports, supervision of personnel and submission of promotional materials.
Regulations issued in June 2009 addressed written confirmations, which must be issued within one business day, and monthly and quarterly statement submittal. The NFA not only issues uniform regulations; all regulations are monitored through a system of compliance to perform the function of oversight through surveillance. For example, violators can be censured, face expulsion or be fined $250,000 for each violation, depending on the severity and number of infractions.
    Dispute resolution at the NFA began in 1983 with an arbitration program while mediation began in 1991. Parties that agree to mediation usually report claims less than $150,000. Cases that are arbitrated are filed either as customer to member, member to customer or member to member.
Historically the majority of claims filed have been customer to member. For example, 160 customer complaints were filed in 2009 with an average close of cases of about five months with 44 awards granted.
In 2008, 193 customer complaints were filed with an average of six months to close and 43 awards granted. Awards granted and arbitration decisions are final and cannot be filed in the court system. Awards are administered through the Restitution program.
    What began in 1991 as a system called DIAL, or Disciplinary Information Access Line, allowing customers to check the registration status of their broker or salesperson by telephone, soon became an online system called BASIC, or Background Affiliation Status Information Center.
BASIC allows online users to check the registration status of their broker and the principals of the firm, commodity pool operators and CTAs 24 hours a day. BASIC also allows users to check a broker’s disciplinary actions, arbitration status and awards granted.
 Since the creation of the NFA, many firms have become members through a series of laws, yet as the functions and duties of the NFA expanded, the number of claims filed has decreased. One reason may be that the NFA offers investor alerts, education and an enhanced system to allow the markets to function properly.
January 2010 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

Taylor Rule

 The Taylor Rule is an interest rate forecast model invented and perfected by famed economist John Taylor in 1992 and outlined in his landmark 1993 study, Discretion Versus Policy Rules in Practice. Taylor operated in the early 1990s with credible assumptions that the Federal Reserve determined future interest rates based on the rational expectations theory of macroeconomics, to name one major component.
This is a backward-looking model that assumed that if workers, consumers and firms believe future expectations for the economy are good, interest rates don’t need an adjustment. The model is not only backward looking but doesn’t take into account long-term economic prospects.
The Phillips Curve was the last of the discredited rational expectations models that attempted to forecast the trade-off between inflation and employment. The problem again was that short-term expectations may have been correct, but what about long-term assumptions based on these models, and how can adjustments be made to an economy if the interest rate action taken was wrong?
Here monetary policy was based more on discretion than concrete rules. What we found was that we can’t imply monetary expectations based on rational expectations theories any longer, particularly when an economy didn’t grow or stagflation was the result of a recent interest rate change. So enter the Taylor Rule.
   The formula looks like this: i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*). Here i = the nominal fed funds rate, r* = the real federal funds rate (usually 2%), pi = the rate of inflation, pi* = the target inflation rate, y = the logarithm of real output and y* = the logarithm of potential output.
What this equation says is this: the difference between a nominal and a real interest rate is inflation. Real interest rates are adjusted for inflation, while nominal rates are not. Here we are looking at possible targets for interest rates, yet that can’t be accomplished in isolation without looking at inflation.
To compare rates of inflation, one must look at the total picture of an economy in terms of prices. Prices and inflation are driven by three factors: the Consumer Price Index, producer prices and the employment index.
Most nations in the modern day look at the Consumer Price Index as a whole rather than at core CPI. Taylor recommends this method because core CPI excludes food and energy prices; the headline measure allows an observer to look at the total picture of an economy in terms of prices and inflation.
Rising prices mean higher inflation, so Taylor recommends factoring the rate of inflation over one year, or four quarters, for a comprehensive picture. Taylor recommends the real interest rate should be 1.5 times the inflation rate. This is based on the assumption of an equilibrium rate that factors the real inflation rate against the expected inflation rate.
Taylor calls this the equilibrium: a 2% steady-state inflation rate equated to a real rate of about 2%. Another way to look at this is through the coefficients on the deviation of real GDP from trend GDP and on the inflation rate. Both methods are about the same for forecasting purposes. But that’s only half of the equation; output must be factored.
   The total output picture of an economy is determined by productivity, labor force participation and changes in employment.
For the equation we look at real output against potential output. Logarithms is the term used. What are logarithms? Exponents. Logarithms are one means to factor this aspect of the equation. We must look at GDP in terms of real and nominal GDP, or, to use the words of John Taylor, actual versus trend GDP. To do this, we must factor the GDP deflator, which measures the prices of all goods produced domestically.
Divide nominal GDP by real GDP and multiply by 100; the answer is the GDP deflator. Dividing nominal GDP by that deflator, times 100, deflates nominal GDP into a true number that fully measures the total output of an economy.
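The deflator arithmetic can be sketched with assumed figures:

```python
# GDP deflator: nominal GDP divided by real GDP, times 100. Deflating
# nominal GDP by that index recovers real GDP (illustrative figures).

def gdp_deflator(nominal_gdp: float, real_gdp: float) -> float:
    return nominal_gdp / real_gdp * 100

def deflate(nominal_gdp: float, deflator: float) -> float:
    return nominal_gdp / deflator * 100

deflator = gdp_deflator(nominal_gdp=15_000, real_gdp=14_000)  # e.g. billions
print(f"deflator: {deflator:.2f}")
print(f"real GDP recovered: {deflate(15_000, deflator):,.0f}")
```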
  The product of the Taylor Rule is three numbers, an interest rate, an inflation rate, a GDP rate with all based on an equilibrium rate to gauge exactly the proper balance for an interest rate forecast by monetary authorities.
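The full rule can be sketched directly from the formula. All inputs here are illustrative; r* and the inflation target are both set to the usual 2%:

```python
import math

# Taylor Rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(y - y*), where y and y*
# are logarithms of real and potential output (illustrative inputs throughout).

def taylor_rate(inflation: float, output: float, potential: float,
                real_rate: float = 0.02, target: float = 0.02) -> float:
    """Nominal policy rate prescribed by the rule."""
    output_gap = math.log(output) - math.log(potential)
    return real_rate + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# Inflation at 3% with output 1% above potential: the rule prescribes a hike.
print(f"{taylor_rate(0.03, output=1.01, potential=1.00) * 100:.2f}%")

# Inflation on target and output at potential: the neutral rate, r* + pi = 4%.
print(f"{taylor_rate(0.02, output=1.00, potential=1.00) * 100:.2f}%")
```

The two cases mirror the policy rule stated below: above-target inflation or above-potential output pushes the prescribed rate up; on-target inflation with output at potential gives the neutral rate.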
   The rule for policymakers is this: the Federal Reserve should raise rates when inflation is above target or when GDP growth is too high and above potential. The Fed should lower rates when inflation is below the target level or when GDP growth is too slow and below potential.
When inflation is on target and GDP is growing at potential, rates are said to be neutral. This model has as its short term goal to stabilize the economy and a long term goal to stabilize inflation.
To properly gauge inflation and price levels, apply a moving average of the various price levels to determine a trend and to smooth out fluctuations. Perform the same functions on a monthly interest rate chart. Follow the fed funds rate to determine trends.
    The Taylor Rule has held many central banks around the world in good stead since its inception in 1993. It has served not only as a gauge of interest rates, inflation and output levels; it can equally serve as a guide to gauge proper levels of the money supply, since money supply levels and inflation meld together in a balanced economy. It allows us to understand money versus prices and to gauge a proper balance, because inflation can erode the purchasing power of the dollar if it’s not leveled properly.
 While the Taylor Rule has served economies in good economic times, it can also serve as a gauge for bad economic times.
Suppose a central bank held interest rates too low for too long. This prescription is what causes asset bubbles, so interest rates must eventually be raised to balance inflation and output levels. A further problem with asset bubbles is that money supply levels rise far higher than is needed to balance an economy suffering from inflation and output imbalances. Since 1993, the Taylor Rule has been the order of the day; it has not only lived up to expectations, but criticisms of it have been muted responses without real bases of argument.
January 2010 Brian Twomey
   Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

McGinley Dynamic and Moving Averages

   McGinley Dynamic
 The McGinley Dynamic is a market tool invented and perfected over many years by John R. McGinley, a 40-year trader and market technician who can add to his lengthy credits a long tenure as an officer of the Market Technicians Association and former editor of its Journal of Technical Analysis.
This article will introduce traders to his little-known tool, first published in the Journal of Technical Analysis as an outline in 1991 and later published as a full-blown study in 1997. My hope is to capture the mind of a master technician, who has advanced the study of technical analysis more than most would know, as he worked through the process of invention to perfection of the McGinley Dynamic. We begin with moving averages.
   The history of moving averages is the history of time series analysis. Early practitioners used various algorithms to smooth data and to flatten varied-shaped curves, yet this early work was quite primitive. Various graduation methods were used later, like fitting a line using a least squares rule for plotting and construction purposes. Fitting lines using the least squares rule was later adopted in technical analysis in the family of moving averages. This began the process of interpolating data using probability theories and analysis.
In the Journal of the Royal Statistical Society in 1909, G.U. Yule described the instantaneous averages that R.H. Hooker calculated in 1901 as moving averages. Yule identified properties of the variate difference correlation method. The term moving averages entered the lexicon, it is said, shortly after, in 1912, through W.I. King’s publication of Elements of Statistical Method.
Herman Wold later adopted Yule’s studies and described moving averages in a 1938 study on the analysis of stationary time series.
Others attribute exponential smoothing to Brown’s and Holt’s late-1950s work on inventory control. Brown used exponential smoothing for Naval inventory processes; Holt was the first to use linear and seasonal trends for inventory control.
 Pete Haurlan was the first to use exponential smoothing to track stock prices, and he advanced the study for modern-day technicians. Haurlan called exponentially smoothed values trend values; a 19-day EMA he called a 10 percent trend. His earlier work as a designer of tracking systems for rockets helped him design steering mechanisms; if a steering mechanism was off, it needed further inputs. Haurlan called this proportional control and used the method in his groundbreaking studies.
  For Haurlan and others, the EMA was the moving average method of choice because of its focus on two inputs, as opposed to the simple moving average, which needed many past data points. These early technicians used pure mathematical calculations graphed on chart paper.
For example, Haurlan needed a conversion factor, a smoothing constant. His smoothing constant = 2/(n+1), where n is the number of days.
So a 19-day EMA equates to a 10 percent trend by 2/(19+1) = 2/20 = 0.10, or a 10 percent smoothing constant. Proportional control equates to how far price has moved from the trend value and adjusts by using trend value curves. This he charted in waves of 1%, 2%, 5%, 10%, 20% and 50%.
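Haurlan's constant and the recursion it drives can be sketched as follows; the prices are illustrative:

```python
# EMA with Haurlan's smoothing constant 2/(n+1): each step blends a small
# share of the new price with the prior average (illustrative prices).

def smoothing_constant(n: int) -> float:
    return 2 / (n + 1)

def ema(prices: list, n: int) -> list:
    k = smoothing_constant(n)        # n=19 -> 0.10, Haurlan's "10 percent trend"
    avg = prices[0]                  # seeding with the first price is a common choice
    out = [avg]
    for p in prices[1:]:
        avg = k * p + (1 - k) * avg  # B * new data + A * previous average, A + B = 1
        out.append(avg)
    return out

print(smoothing_constant(19))        # 0.1
print([round(v, 3) for v in ema([100, 102, 101, 105], 19)])
```

Only two inputs, the current price and the prior average, are needed at each step, which is exactly why early technicians working on chart paper preferred the EMA to the data-hungry simple moving average.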
Haurlan developed tracking rates based on trends. These tracking rates were measured against a stabilization period. For example, a 50 % tracking rate has a 5 day stabilization period.
  Sherman and Marian McClellan added two different EMAs of daily breadth figures, the 10% and 5% trends. This gave the first alerts to crossovers, when the 10% trend moved above the 5% trend, and detected market reversals as well as overbought and oversold markets.
The McClellans would later invent the McClellan Oscillator and the Summation Index, based on their calculations and charting methods during this period and published in their 1970 book, Patterns for Profit. The McClellan Oscillator measures the acceleration of daily advance-decline statistics by smoothing with two different EMAs and finding the difference between the two.
   Both Haurlan and Loyd Humphries after him, with his groundbreaking book The Moving Balance System and his invention of the Moving Balance Indicator, benefited from the easier coding of EMAs, which needed only two inputs: the current price and the prior value.
Back then, computer sophistication wasn’t available; hence the preference for EMAs over simple moving averages, which needed many data points. What separated McGinley from these earlier technicians was his groundbreaking work in moving averages, following where others left off, which led to the McGinley Dynamic. What did he see? I paraphrase.
  McGinley says the problem with moving averages is twofold: they are inappropriately applied and overused.
They should only be used as smoothing mechanisms rather than as a trading system and signal generator. Consider, as he said, that moving averages range in their uses from fast to slow markets. How can one know which to use and how to apply it appropriately? How can one know when to use a 10-day average versus a 100-day? Further, moving averages are fixed in length without the ability to change, a restriction because they can’t adjust to changing data during trading days. We know lengths today as slopes.
The hope is that a smoother can filter whipsaws, but outliers exist in the averages. What should you do with a 10-day moving average on the 9th or 10th day? It doesn’t work, because much of the trend has been lost.
 Next, says McGinley, simple moving averages are always out of date. A 10-day average is off by 5 days, or half its length, and graphed wrong. Chances are big price moves already occurred within those 5 days, so a graph set at 10 periods must also be off.
A further problem is the drop-off: the difference between price and the line. What if data from x days ago is dropped and the dropped data is larger than present values? The moving average must also drop, generating false signals.
 Next, exponential moving averages, where much is directly quoted so I can replicate modern-day examples.
The exponential moving average improves on the simple moving average because its calculation allows the average to hug prices more smoothly and respond faster to market data. Yet it underperforms in consolidations, just as the simple moving average does, generating line breaks and sheer trading indecision.
Exponentials require two inputs, the previous average and the current price. The classic calculation is A × the previous moving average + B × the new data, where A + B = 1.0.
Usually a small part of the new data is added to a large piece of the old. To build on the earlier works of Haurlan, for example, an 18% exponential, where A = 0.82 and B = 0.18, can be compared to a normal moving average where B = 2/(x + 1).
So an 18% exponential hugs prices about as closely as a 10-day moving average, since 2/(10 + 1) ≈ 0.18, or 18%. The shape of the exponential may differ due to the calculations. The exponential's B can be adjusted to fit market data and prices, where the simple moving average is fixed and much more rigid due to its calculation.
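As a quick illustration, the smoothing-constant comparison above can be checked in a few lines of Python; the price series and function name here are purely illustrative.

```python
def ema(prices, b):
    """Exponential average: new = (1 - b) * previous average + b * new price."""
    avg = prices[0]
    for price in prices[1:]:
        avg = (1 - b) * avg + b * price
    return avg

b_10_day = 2 / (10 + 1)             # 0.1818..., close to Haurlan's 18% exponential
prices = [100, 101, 103, 102, 104]  # invented price series
print(round(b_10_day, 4))           # 0.1818
print(round(ema(prices, 0.18), 2))  # 101.48
```

Note that only two values are carried between steps, the prior average and the new price, which is exactly why these averages were so cheap to compute on early hardware.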
   The exponential moving average therefore follows prices and market changes better than a fixed simple moving average; it smooths the data better. Yet the exponential moving average is not perfect: adjustments are always needed, and it can still pull away from prices, rising while prices fall or falling while prices rise. So what's the answer? Enter the McGinley Dynamic.
  Building on years of moving average research, the McGinley Dynamic was invented as a market tool designed to generate fewer whipsaws, hug prices more closely, offer adjustable calculations to fit the user's needs and follow fast and slow markets automatically.
Think of the Dynamic Line of the McGinley Dynamic as Haurlan's steering mechanism, a proportionate control tool that steers the Dynamic Line along with prices. The question is whether the McGinley Dynamic lives up to its reputation; the answer is unquestionably yes. Does it perform the above functions? Absolutely. Here's how. Again I quote.
  Building on Dr. Humphrey Lloyd's work on moving averages in his groundbreaking 1976 book The Moving Balance System, where the previous Dynamic Line was modified, here is the new formula.
New Dynamic = Previous Dynamic + (Index − Previous Dynamic) / (N × (Index / Previous Dynamic)^4). The index may be the Dow, S&P or a stock.
Mr. McGinley divides the difference between the Dynamic and the index by N times the fourth power of the ratio of the two. The numerator gives the up or down sign, and the denominator keeps the adjustment within the percentage bounds defined by N.
McGinley further states the 4th power gives the calculation an adjustment factor that increases more sharply the greater the difference between the Dynamic Line and the current data. Quoting further, the size of the adjustment changes not linearly but logarithmically. This feature allows the Dynamic to hug prices.
 Mr. McGinley recommends that N be 60% of the moving average one wishes to emulate. His example is a 20-day moving average, which uses an N of 12. Herein the Dynamic Line adjusts itself, speeding up or slowing down as markets dictate. The second term of the equation, McGinley states, is not a factor unless the difference between the index and the Dynamic Line is large. This aspect of the equation deals with lengths, or slopes.
The second term is an important factor, however. McGinley says that in fast up markets, the Dynamic Line slows down less than in down markets. It is the factor of the 4th power that speeds up the Dynamic Line in down markets.
From McGinley's example, insert 10 for the old Dynamic, 5 for the close and N = 7, which produces -6.67. Further, make the close 14 and you get 0.15.
So 14 is as far above the old Dynamic as the first close is below it. Not a problem, says McGinley, as the object is to let profits run and bail out when the market drops. So upside profits run without whipsaws, while the downside adjusts quickly to a drop, allowing opportunity to cut losses. So whether you avoid a loss or grab a gain must be the question here, and the decision for intended users of the Dynamic.
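For readers who want to replicate the arithmetic, here is a minimal Python sketch of the Dynamic Line update using the formula as quoted above; the function name and values are illustrative, not McGinley's own code. With the fourth power, the upward step for a close of 14 matches the quoted 0.15, while the downward step for a close of 5 is far larger in magnitude, illustrating the faster response in down markets.

```python
def mcginley_update(prev_dynamic, index, n):
    """New Dynamic = Dynamic + (Index - Dynamic) / (N * (Index / Dynamic)^4)."""
    return prev_dynamic + (index - prev_dynamic) / (n * (index / prev_dynamic) ** 4)

# McGinley's illustration: old Dynamic = 10, N = 7
up_step = mcginley_update(10, 14, 7) - 10    # small upward adjustment
down_step = mcginley_update(10, 5, 7) - 10   # much larger downward adjustment
print(round(up_step, 2))                     # 0.15
print(down_step < up_step)                   # True: the line drops faster than it rises
```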
 Exactly what does the McGinley Dynamic do?
Mr. McGinley set out to avoid the whipsaws moving averages are prone to and to find a tool that won't separate prices from the average.
In this instance, it avoids large drop-offs. The Dynamic Line does not rise against falling data. Only one piece of back data is needed.
In any trending or trading market, the Dynamic doesn't need back testing or adjustments. In extreme whipsaw markets, it still sells high and buys low. The main point is that moving averages get separated from prices. What happens when a crossover occurs? One may have a loss. The Dynamic avoids these dilemmas.
 Notice the term market tool used throughout this article. Mr. McGinley says the Dynamic Line is not an indicator and shouldn't be used as such.
He abhors the idea of using the Dynamic as a trading vehicle; this was his purpose in commenting on the problems of the ratios of up and down markets. Rather, I believe, the intention may be that it should serve as a market tool to gauge where the market may be in relation to other market tools used by traders. Just speculation.
It should also be noted that the McGinley Dynamic is not only a remarkable tool but the product of many years of intense research and insight by a master technician.
 The author would like to thank John McGinley, a good man, for his decency, patience and understanding in allowing the time to get it all right and bring forth the McGinley Dynamic.
 The author would also like to offer many thanks and a debt of gratitude to Tom McClellan, Editor of the McClellan Market Report and McClellan Financial Publications, for access to loads of research.
 Suggested reading:
   Colby and Meyers, The Encyclopedia of Technical Market Indicators, 1988.
   Dr. Humphrey Lloyd, The Moving Balance System, Windsor, 1976.
   Pete Haurlan, Measuring Trend Values, 1968. Details can be read at mcoscillator.com.
   Sherman and Marian McClellan, Patterns for Profit, 1970. Details can be read at mcoscillator.com.
November 2009 Brian Twomey

Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

Gauss and the Bell Curve

 Carl Friedrich Gauss was a brilliant mathematician who lived in the early 1800s and gave the world, among other contributions, the method of least squares and the normal distribution. Gauss defined the normal distribution in terms of the mean error, while the mathematician Karl Pearson defined it in terms of standard deviation in the early 1900s.
Modern-day terminology defines the normal distribution as the bell curve. Ironically, Gauss in 1809 intended to answer an astronomy question, not to find or understand normal distributions. Another mathematician of the era, Pierre-Simon Laplace, was actually the founder of the normal distribution, working from the paper Gauss published regarding his astronomy question in 1809.
The normal distribution was founded by sheer accident yet credited to Gauss because it appeared in print by him, and it has been the subject of much study by mathematicians for 200 years. Much of the study of statistics originated from Gauss, and thankfully so, because it allowed us to understand markets, prices and probabilities, among other applications. The only way to understand Gauss and the bell curve is to understand statistics. So I will build a bell curve in this article, beginning with means, and apply it to a trading example.
     Three measures exist to describe the center of a distribution: mean, median and mode.
The mean is found by adding all scores and dividing by the number of scores. The median is the middle value of the ordered sample; with an even number of scores, add the two middle numbers and divide by two. The mode is the most frequent number in a distribution.
The best method is the mean, because it averages all numbers and is less subject to sample fluctuations. This was the Gaussian approach and his preferred method. What we are measuring here are parameters of central tendency, answering where our sample scores are headed. To understand this, we must plot our scores beginning with 0 in the middle and plot +1, +2 and +3 standard deviations on the right and -1, -2 and -3 on the left.
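The three measures can be checked with Python's standard statistics module; the sample scores are invented for the example.

```python
import statistics

# Illustrative sample of scores
scores = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(scores))    # 5
print(statistics.median(scores))  # 4.5
print(statistics.mode(scores))    # 4
```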
    So on a chart, plot the scores. What we will find is that 68% of all scores fall within -1 and +1 standard deviations, 95% within 2 standard deviations and 99% within 3 standard deviations of the mean. But this is not enough to tell us about the curve. We need to factor variances.
 Variance answers the question of how spread out our distribution is.
It factors in why outliers may exist in our sample and helps us understand those outliers and where they are plotted. So find the mean, subtract the mean from each score for a deviation score, square each deviation score and add them all. Divide the sum by the number of scores. This is the variance, which explains variability and may help test a hypothesis regarding the outliers.
   For standard deviation, we want to measure our spread more closely, so take the square root of the variance. Here we will know exactly where our standard deviations fall in relation to our total distribution. Modern-day terms call this dispersion. In a Gaussian distribution, if we know the mean and the standard deviation, we know the percentages of scores that fall within plus or minus 1, 2 or 3 standard deviations of the mean. This is called the confidence interval. This is how we know 68% of a distribution falls within plus or minus 1 standard deviation, 95% within plus or minus 2 standard deviations and 99% within plus or minus 3 standard deviations. Gauss called these probability functions.
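The deviation-score recipe above translates directly into code; the sample is again illustrative, and the population form of the variance is used.

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]       # same illustrative sample
mean = statistics.mean(scores)
# Population variance: sum of squared deviation scores over the number of scores
variance = sum((s - mean) ** 2 for s in scores) / len(scores)
std_dev = variance ** 0.5               # the square root of the variance
print(variance, std_dev)                # 4.0 2.0
```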
     Notice our whole discussion so far is about explaining the mean and the various computations that help us explain it more closely. Once we plotted our distribution scores, we basically drew our bell curve above all the scores. Yet we can't assume that all distributions will be perfectly normal, where the mean always equals 0 and the tails are of equal length. So this is still not enough, because the tails of our curve need explanation to better understand the whole curve. For that we go to the third and fourth statistical moments of the distribution, called skew and kurtosis.
  Skewness of the tails measures the asymmetry of the distribution. A positive skew has variance from the mean skewed right, while a negative skew has variance from the mean skewed left. A symmetrical distribution has 0 skew and forms a perfect normal distribution. Visually, a long tail to the right of the bell is a positive skew, while a long tail to the left is negative. If a distribution is symmetric, the sum of cubed deviations above the mean will balance the cubed deviations below the mean. A skewed-right distribution has a skew greater than 0, while a skewed-left distribution has a skew less than 0.
  Kurtosis explains the peakedness of the distribution. High kurtosis means more peak and less flatness.
A perfectly normal distribution, called mesokurtic, has a kurtosis of 3, or an excess kurtosis of 0. A peaked distribution with a high bell, called leptokurtic, has a kurtosis greater than 3, while a flat, platykurtic distribution has a kurtosis less than 3.
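The third and fourth moments can be sketched as follows, using the population form of the standardized moments (conventions vary between texts; some report excess kurtosis, which subtracts 3).

```python
def moments(xs):
    """Skewness (3rd standardized moment) and kurtosis (4th), population form."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n  # equals 3 for a normal curve
    return skew, kurt

# A symmetric sample has zero skew; this flat sample is platykurtic (kurtosis < 3)
skew, kurt = moments([1, 2, 3, 4, 5])
print(round(abs(skew), 10), round(kurt, 2))   # 0.0 1.7
```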
 Skew is more important than kurtosis for measuring trades. Both are used to measure Treasury auctions, comparing the amount of bills or bonds sold to the skew to determine whether the auction was successful. A successful auction would show a big bell curve with a short skew and positive kurtosis.
Treasury bills and bonds are a benchmark for interest rates and determine prices for many other financial instruments such as stocks, options and currency pairs. Skews are used to measure option prices by charting implied volatilities against strike prices, among other uses.
     Standard deviation measures volatility and asks whether past returns can predict future returns. A smaller standard deviation may mean less risk for a stock, while higher volatility means a higher standard deviation.
Traders can measure how closing prices disperse from the mean. Dispersion measures the difference between actual value and average value; a larger difference between the two means a higher standard deviation and volatility. Prices that deviate far from the mean tend to revert back to the mean, so traders can take advantage of these situations, while prices that trade in a small range may be ready for a breakout.
The best-known technical indicator for standard deviation trades is Bollinger Bands, a measure of volatility with upper and lower bands set at two standard deviations around a 20-day moving average. Double Bands, with standard deviations set at 3, are also recommended. The Gaussian distribution was just the beginning of the understanding of markets. It later led to price time series and GARCH models, as well as more applications of skew such as the volatility smile and other volatility skews.
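A minimal sketch of the standard Bollinger Band calculation, a 20-period simple moving average with bands at plus or minus two standard deviations; the price series is invented for illustration.

```python
def bollinger(prices, period=20, width=2.0):
    """Return (lower band, middle SMA, upper band) for the most recent window."""
    window = prices[-period:]
    mid = sum(window) / len(window)
    sd = (sum((p - mid) ** 2 for p in window) / len(window)) ** 0.5
    return mid - width * sd, mid, mid + width * sd

prices = [100 + (i % 5) for i in range(40)]   # invented, range-bound price series
lower, mid, upper = bollinger(prices)
print(round(lower, 2), round(mid, 2), round(upper, 2))   # 99.17 102.0 104.83
```

Setting width to 3.0 gives the wider outer band of the Double Band variation mentioned above.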
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

New Zealand Central Bank History

     Since the unanimous passage in New Zealand's Parliament of the Reserve Bank of New Zealand Act in 1989, prompted by the dire economic conditions of the 1980s and interest rates in the 18 percent range, the central bank has had greater independence from central-government control. Both of New Zealand's major parties, Labour and National, voted overwhelmingly to support a first-ever arrangement among industrialized nations, the Policy Targets Agreement, to achieve price stability by explicitly targeting inflation. Since New Zealand's passage, the Bank of Canada in 1991, the Reserve Bank of Australia in 1992, the Bank of England in 1992,
Sweden's Riksbank in 1993 and a vast majority of nations have adopted a form of inflation-targeting monetary policy that gave central bankers greater independence and restrained disastrous fiscal policies by central governments.
    New Zealand's economy throughout the 1980s suffered from low productive output, exchange rate risk not aligned with the economy and its exports, and elevated interest rates in the 18 to 20 percent range. From 1990 to the present, interest rates have averaged about four percent in New Zealand, remained below worldwide averages by 2.7 percent from 1988 to 2000 and remain low and competitive today. Most credit the Policy Targets Agreement, with its main focus on price stability, and the central bank's focus on transparency and accountability for keeping inflation low.
   Eight agreements have been signed since 1989 between the minister of finance and the Reserve Bank governor, as defined by section 9 of the law, and each agreement defines monetary targets for inflation and price stability in broad ranges with a two or three point spread. The wide spread allows for possible economic price shocks, because the minister of finance can dismiss the governor if prices fall outside the intended target, an unlikely event.
Another unlikely event is that agreements are renegotiated and introduced to coincide with election cycles, allowing monetary policy manipulation. The government can direct a different monetary target, and even a different agreement, at any time, but an Order-in-Council would have to be submitted to the New Zealand Parliament along with a public statement of explanation, which is highly unlikely. Monetary policy statements, such as quarterly interest rate decisions, are reviewed for accuracy by members of both political parties in the Finance and Expenditure Committee of New Zealand's Parliament.
  Statistics New Zealand surveys, collects, issues and monitors New Zealand's All Groups Consumer Price Index on a quarterly basis. It reflects the prices of 9 groups and 21 subgroups with 73 sections. The 9 groups include food, housing, household operations, apparel, transportation, tobacco and alcohol, personal and healthcare, recreation and education, and credit services. 700 items are price-surveyed every quarter to gauge prices within the economy, to gauge prices against the Policy Targets Agreement objectives of price stability and inflation, and to gauge the level of interest rates. To accomplish this, the price index was set in June of 1999 with a base of 1000.
     The All Groups Consumer Price Index comes with the caveat to ask not only what the present level of prices is but what prices will be, in percentage terms, in the future. For example, September 2000 had a consumer price index of 1034 and September 2001 of 1051. To factor the increase in prices, take 1051 minus 1034, divide by 1034 and multiply by 100, roughly a 1.6 percent increase in prices over one year. Chances are good that the 1.6 percent year-over-year increase in prices fell right in line with the Policy Targets Agreement and inflation expectations. Statistics New Zealand is charged with various statistical monitoring and price projections. The focus for CPI numbers and agreement objectives is the overall headline number rather than the core number, which subtracts food and energy prices.
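A quick check of the year-over-year arithmetic in code; note the increase from 1034 to 1051 works out to roughly 1.6 percent.

```python
def pct_change(old_index, new_index):
    """Percentage change between two CPI index readings."""
    return (new_index - old_index) / old_index * 100

# September 2000 index of 1034 vs September 2001 index of 1051
print(round(pct_change(1034, 1051), 1))   # 1.6
```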
  New Zealand's central bank, the RBNZ or Reserve Bank of New Zealand, is further charged by the Policy Targets Agreement with conducting open market operations to target the settlement cash balances. Such transactions include interbank operations. Holding negative balances is illegal under the Reserve Bank Act.
A recent implementation is to target a band of interest rates for open market operations rather than set targets. Section 13 of the Policy Targets Agreement further charges the central bank with the conduct of interest rates and exchange rates and with avoiding unnecessary instability in output.
   To understand New Zealand's economy and stay within inflation targets, the central bank from 1997 employed a macroeconomic model called the Forecasting and Policy System. While this system served its purpose, KITT, the KIWI Inflation Targeting Technology system, replaced it in June 2009. Not only does this appear to be a better system, but the central bank now focuses more closely on the factors of its economy. For example, if inflation is a function of supply and demand, as they suspect, what are the factors of supply and demand in the economy?
    The structure of the economy can be viewed in four sectors that tie into GDP: producers of non-tradable goods, producers of tradable goods, producers of residential investment and exporters. Understanding the correlation of these factors allows closer monitoring of policy targets using instruments such as fan charts to plot the present economy and prepare for future economic events. Fan charts were originally adopted by the Bank of England.
Understanding household patterns of consumption based on tradables, non-tradables, housing services, fuel and house prices allows a better understanding of CPI and inflation and, further, a more accurate forecasting method. What are the marginal costs to firms doing business, and what about construction costs? Currently, construction makes up 5.5 percent of New Zealand's economy. With this information, what would housing cost, and can the KITT system forecast costs better? Absolutely.
   While microeconomic variables are important factors for economic and inflation purposes, KITT also addresses macroeconomic variables as forecast tools, such as inflation, output, interest rates and exchange rates.
Exchange rates are highlighted in Clause 4B of the PTA. All are factors due to possible shocks to the system. What would occur in the New Zealand economy if the New Zealand dollar rose or fell by extreme proportions? The primary focus of KITT is the domestic economy, with faster responses to economic conditions as a better forecasting tool using its 27 data sources. Clearly this is a central bank that has stayed on the leading edge of technology and intelligence for over 20 years.
December 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

International Fisher Effect

 The International Fisher Effect is an exchange rate model designed by the economist Irving Fisher in the 1930s. It is based on present and future risk-free nominal interest rates, rather than pure inflation, to predict and understand present and future spot currency price movements. For this model to work in its purest form, it must be assumed that the risk-free aspects of capital are allowed to flow freely between the nations that comprise a particular currency pair.
The decision to use a pure interest rate model, rather than an inflation model or some combination, stems from Fisher's assumption in the 1930s that real interest rates are not affected by changes in expected inflation rates, because both become equalized over time through market arbitrage.
Inflation is embedded within the nominal interest rate and factored into market projections for a currency price. It is assumed that spot currency prices will naturally achieve parity with perfectly ordered markets. This is known as the Fisher Effect, not to be confused with the International Fisher Effect. Fisher believed the pure interest rate model worked as a leading indicator for predicting spot currency prices 12 months in the future.
The minor problem with this assumption is that we can never know with certainty the future spot price or the exact interest rate. This is known as uncovered interest parity. The question for modern studies is whether the International Fisher Effect works now that currencies are allowed to float freely. From the 1930s to the 1970s, we didn't have an answer, because nations controlled their currency prices for economic and trade purposes. So only in the modern day has credence been given to a model that hasn't really been fully tested. Yet the vast majority of studies concentrated on one nation and compared that nation's currency to the United States dollar.
  International Fisher Effect calculations 12 months into the future work like this: multiply the current spot exchange rate by one plus the nominal annual US interest rate and divide by one plus the annual rate of the other nation. For example, suppose the GBP/USD spot exchange rate is 1.5339, the current interest rate is 5 percent in the US and 7 percent in Great Britain.
What is expected 12 months in the future? Calculate (1.5339 × 1.05) / 1.07 ≈ 1.5052.
The model expects the pound, the higher-rate currency, to depreciate, so investors would sell the GBP against the USD as capital flows freely between the two nations. What if we looked at this interest rate model in terms of inflation and the Fisher Effect to account for the 2 percent difference in yield?
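Following the prose recipe, multiply by the US rate and divide by the other nation's rate, the 12-month projection can be sketched in a few lines; the function name is illustrative.

```python
def ife_expected_spot(spot, rate_quote, rate_base):
    """Expected spot = spot * (1 + quote-currency rate) / (1 + base-currency rate)."""
    return spot * (1 + rate_quote) / (1 + rate_base)

# GBP/USD at 1.5339 with US rates at 5% and UK rates at 7%
print(round(ife_expected_spot(1.5339, 0.05, 0.07), 4))   # 1.5052
```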
  The Fisher Effect model says nominal interest rates reflect the real rate of return plus the expected rate of inflation. So the difference between real and nominal rates of interest is determined by expected rates of inflation.
The relation is (1 + nominal rate) = (1 + real rate) × (1 + expected inflation), so the nominal rate = real rate + expected inflation + (real rate × expected inflation).
For example, if the real rate of return is 3.5% and expected inflation is 5.4%, then the nominal rate of return is 0.035 + 0.054 + (0.035 × 0.054) = 0.091, or 9.1 percent. The International Fisher Effect takes this example one step further, assuming appreciation or depreciation of currency prices is proportionally related to differences in nominal rates of interest.
Nominal interest rates would automatically reflect differences in inflation through a purchasing power parity, or arbitrage, system. Suppose inflation is 10 percent in the UK and 3 percent in the US, and the spot rate is GBP/USD 1.4. The expected GBP/USD rate is then 1.4 × (1 + 0.03) / (1 + 0.10) ≈ 1.31, a depreciation of the pound reflecting its higher inflation.
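Both relations can be sketched together; the function names are illustrative, and the quote/base convention follows the interest rate example above.

```python
def nominal_rate(real, inflation):
    """Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + real) * (1 + inflation) - 1

def expected_spot(spot, inf_quote, inf_base):
    """Expected spot = spot * (1 + quote-currency inflation) / (1 + base inflation)."""
    return spot * (1 + inf_quote) / (1 + inf_base)

print(round(nominal_rate(0.035, 0.054), 3))      # 0.091
print(round(expected_spot(1.4, 0.03, 0.10), 2))  # 1.31
```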
 A number of things could occur with these models, however. What would happen if nominal interest rates were the same within a currency pair? The twofold answer is to either stay invested in the home nation, because expected returns are unknown, or focus on inflation in either of the two nations for possible investment opportunities.
Yet this goes against the grain of the model and is not a good predictor of currency movements. The Fisher Effect has shown that dramatic effects can occur within currency pairs through changes in interest rates and inflation, if investors are on the right side of the market. The above GBP/USD example proved correct, but what if the trade was USD/GBP?
That trade would have seen dramatic losses. For the shorter term, the Fisher Effect has proven to be a disaster because of the short-term unpredictability of nominal rates and inflation. Even with perfect market information, investors buying shorter-term T-bills would have fared much better than investing in currency pairs.
  Longer-term International Fisher Effect results have proven better, but not by much. Interest rates eventually offset exchange rates, but prediction errors are known to occur. Remember, we are trying to predict 12 months into the future. The IFE fails particularly when the cost of borrowing or expected returns differ, or when purchasing power parity fails, that is, when the cost of goods can't be exchanged between nations on a one-for-one basis after adjusting for exchange rate changes and inflation.
 The interesting failure of these models is the focus on nominal interest rates and inflation. The modern day doesn't see the big interest rate changes that happened just 20 years ago; one-point or even half-point nominal interest rate changes rarely occur anymore.
Instead, the focus for central bankers in the modern day is not an interest rate target but an inflation target, where interest rates are determined by the expected rate of inflation. Central bankers focus on their nation's Consumer Price Index to measure prices and adjust interest rates according to prices in the economy. To do otherwise may cause an economy to fall into deflation or stop a growing economy from further growth. So a 12-month interest rate target and a 12-month exchange rate target can be measured only in quarter points, at best, in the modern day. Does this leave these models in the back seat in the modern day? The answer is probably yes, until a new model is developed, with the thought that all models, including these, served an effective purpose.
December 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

History T-Bill Auction

 To understand the formal context that led to the first T-bill auction in 1929, it must be viewed as a series of 1920s events beginning with the end of World War I.
At the end of the war, the United States carried a war debt of approximately $25 billion, accumulated between 1917 and 1919. To put this number in perspective, the debt in 1914 was $968 million. Combine that debt with a war surtax placed on American incomes under President Woodrow Wilson and a 73 percent top personal income tax rate, and the outlook for a 1920 economic recovery in the US was bleak.
How was the United States to pay down a debt financed strictly by Americans through sales of Liberty and Victory bonds and short-term debt instruments called Certificates of Indebtedness? Further, how was the Treasury to avoid paying out more in Treasury interest than it received through income taxes, especially when income taxes were the only revenue for repayment and a public outcry existed to reduce those rates?
Lastly, how was an economic recovery to be sustained? President Harding signed the Revenue Act of 1921, reducing the top income tax rate from 73 to 58 percent, coupled with a small reduction of the surtax on incomes, and raising the capital gains tax from 10 to 12.5 percent. With reduced revenue, the Treasury was forced into serious debt-management mode, especially in the short term.
 During the war years, the government issued short-term, monthly and biweekly subscriptions of Certificates of Indebtedness with maturities of one year or less. By war's end in 1919, $3.4 billion of Certificates of Indebtedness were outstanding. The Treasury set the coupon rate at a fixed price and sold the Certificates at par value. The coupon rates were set in increments of 1/8 percent, just above money market rates. In instances of oversubscription, which occurred often, the Treasury gave preference to small orders and small distributors so the market wasn't dominated by single entities, particularly banks, and so a secondary market could be established. Sales were so good that the Treasury opened War Loan Deposit Accounts at banks, paying 2 percent interest, to transfer monies more easily. The problem with this system came after the war.
 The government held subscription offerings four times a year on the 15th of every third month, in line with tax receipts so payouts could be arranged. Problems occurred when the government paid out monies in surpluses when it never knew what the surplus would be or whether a surplus would even exist. Plus, banks became such steady customers for themselves and their own customers that they oversubscribed in many instances and credited the War Loan account without paying out actual cash. Despite moving to a cash refinancing system, with payouts in new Certificates and cash repurchases at or near maturities, the Treasury reduced its debt burden to $22 billion by 1923. Yet an answer was needed, because of the creative finance structure of the market and because the government was never sure of its ability to refinance.
 Formal legislation was signed by President Hoover to incorporate a new security with new market arrangements, because the Treasury didn't have the authority to change the existing finance structures. Zero-coupon bonds with maturities of up to one year, issued at a discount to face value, were proposed. These zero-coupon bonds would shortly come to be known as Treasury bills due to their short-term nature. The legislation changed the Treasury's fixed-price subscription offerings to an auction system based on competitive bids to obtain the lowest market rates. After much public debate, the public won the right to decide rates through the competitive bid system. All deals would be settled in cash, and the government would be allowed to sell T-bills when funds were needed, not necessarily on tax dates.
 At the first offering, the Treasury offered $100 million of 90-day bills, with payment due seven days later on settlement day. The auction saw investors bid for $224 million in bills at an average price of 99.181. Quoting bills to three decimal places was part of the passed legislation. The government now borrowed cheap money to finance its operations.
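As a back-of-the-envelope check, assuming the standard bank-discount convention of a 360-day year (a convention of the modern bill market, not something stated in the legislation), the 99.181 average price implies a discount rate of roughly 3.28 percent.

```python
def discount_rate(price, face=100.0, days=90):
    """Bank-discount yield: (face - price) / face * 360 / days (assumed convention)."""
    return (face - price) / face * 360 / days

print(round(discount_rate(99.181) * 100, 2))   # 3.28
```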
 By 1930, the government sold bills at auction in the second month of every quarter to limit borrowings and reduce interest costs. All four auctions in 1930 saw buyers refinance with newer bills. By 1934, owing to the success of past bill auctions, Certificates of Indebtedness were eliminated, and by the end of that year T-bills were the government's only short-term financing mechanism. In 1935, President Franklin Delano Roosevelt signed the Baby Bonds bill, which would later allow the government to issue Series HH, EE and I bonds as another mechanism to finance its operations.
 Today, the US government holds market auctions every Monday or as scheduled. Four-week, 28-day T-bills are auctioned every month; 13-week, 91-day T-bills are auctioned every three months; and 26-week, 182-day T-bills are auctioned every six months.
  What started as a question of whether debt could be transferred to future generations turned out to be a misnomer in the 1920s, as the government, through skilled debt management, produced a surplus every year of that decade. Despite early and continuous problems of oversubscription and underpricing of fixed-price offerings, the government still financed its needs. It helped that investors were willing to pay par value for an issue and wait the scheduled length of time to receive their coupon payments. It was a tricky problem then, because the government never knew whether it was paying out too much, too little or just enough. Proceeds were paid out of surplus tax revenues, yet who could know whether those receipts would come in as scheduled or whether the economy would hold up in uncertain times? These problems were eliminated when the T-bill system came into effect. That market today is unquestionably one of the largest traded in the world.
December 2009 Brian Twomey
  Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

Commodity Cycles

 Predicting the beginning and end of commodity cycles since the modern economies emerged from World War Two has been more an art than a science. Many economic variables have been tested for their correlative and predictive powers without a consensus emerging over the years. Yet modern studies remain the best guide, because time determines which variables hold up to academic rigor. Any study of commodity cycles can't find its foundation solely in a commodity nation such as Canada, New Zealand, South Africa or Australia, because commodities are priced in US dollars. So an understanding of commodity boom-and-bust cycles must be grounded in the US economy as a predictor of expanding or declining economic activity.
    The study of cycles is not a new phenomenon. Joseph Schumpeter spent his life studying business cycles and published his classic work, The Theory of Economic Development, in the 1930s. Scholars, economists, market watchers and traders have since spent their time studying the factors and variables that make cycles work: the booms and busts, the tops and bottoms. We know cycles exist, but how do they work? What we have learned economically since World War Two is that no single factor or variable holds up to absolute rigor as a mechanism for predicting booms and busts. So we will focus on various economic factors in the hope that two or three variables will correlate, helping us understand booms and busts and possibly predict the future.
     The last significant commodity cycle occurred in the 1970s, lasting from the early part of the decade to about 1980. Most would agree Nixon's policy of taking the United States off the gold standard was a major contributing factor that allowed such a long cycle to perpetuate. Economies had never experienced so long a cycle since World War Two; historically, commodity cycles have run about 10 years. Gold, oil and physical commodities such as wheat, rice, corn and soybeans saw significant and sustained price increases during the 1970s cycle. What we learned from this experience, and what we couldn't know before free-floating exchange rates, was the US dollar factor.
    During periods of US dollar decline, commodity prices and commodity currencies rise as investors seek higher yields, which they do by purchasing commodity futures. These factors can be attributed to interest rates: US dollar declines are usually associated with interest rate decreases that presage a declining economy. What often occurs during these periods is that governments increase borrowing, which extends the downward cycle. This allows the commodity cycle to continue unfettered while governments contemplate exit strategies from recession.
   Boom cycles are quite different. They see credit expansion, rising interest rates and rising asset prices. But boom periods are followed by reversals that tend to occur rapidly. Predicting booms and busts can be complicated; one approach is to observe terms of trade.
  Looking at terms of trade for commodity nations such as Australia, New Zealand, South Africa, Canada and Brazil may serve as a predictor, since these nations depend on exports for foreign exchange revenues. If exports to the United States are increasing, a cycle beginning may be occurring.
 Yield curves have always served as valuable modern-day predictors of boom-and-bust economic activity, especially the 10 year Treasury Bond and the shorter 3 month T-Bill. If the 10 year yield falls below the 3 month yield, or if the two are converging, recession is looming; when a formal cross occurs, recession is imminent. This would confirm the need for investors to seek higher yields by purchasing commodity futures.
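The rule of thumb above can be sketched in yields, the usual convention; the 50 basis point "looming" threshold here is an illustrative choice, not from the original.

```python
def curve_signal(y10: float, y3m: float) -> str:
    """Classify the 10-year minus 3-month yield spread (in percent)
    as a simple recession signal, per the rule of thumb above."""
    spread = y10 - y3m
    if spread < 0:
        return "inverted: recession imminent"
    if spread < 0.5:  # narrowing threshold is an illustrative assumption
        return "flattening: recession looming"
    return "normal"

print(curve_signal(3.8, 1.2))  # healthy upward slope
print(curve_signal(2.1, 2.4))  # 10-year below 3-month: a formal cross
```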
  Because commodity currencies have floating exchange rates, another predictor is to correlate exchange rates to commodity indices such as the Reuters/Jefferies CRB Index. The reason to use Reuters/Jefferies is that it is not only the oldest of the four major indices, dating back to 1947, but also heavily weighted towards physical commodities rather than metals. Physical commodities will always react faster in any boom or bust cycle than metals such as gold, silver, platinum or palladium.
Metals are lagging indicators. Yet correlations of the S&P/Goldman Sachs Index, which began in 1970, the Dow Jones/AIG Commodity Index, which began in 1991, and the IMF's 1980 Non-Fuel Commodity Prices Index may also serve as predictors when measured against commodity currency exchange rates. A true correlation is needed. This model has been a predictor of future economic activity one quarter ahead.
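A minimal sketch of such a correlation test, using a hand-rolled Pearson coefficient over hypothetical quarterly series (illustrative numbers, not real data):

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarterly levels: a commodity currency's exchange rate
# against the US dollar, and a commodity index (illustrative only).
aud_usd = [0.65, 0.68, 0.71, 0.74, 0.72, 0.76]
crb     = [230,  245,  260,  275,  268,  284]
print(round(pearson(aud_usd, crb), 3))
```

A coefficient near +1 across several quarters is the kind of "true correlation" the model requires before the index can be read as a one-quarter-ahead predictor.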
 Smarter market watchers will look at the Baltic Dry Index, which is a traded instrument in its own right. The Baltic Dry Index not only reflects how many ships leave ports loaded with commodities but also determines shipping rates. Lower shipping rates and fewer ships leaving ports with exports are a valuable indicator and early warning sign of boom-and-bust cycles.
 Except for the yield curve example, all of these predictors focus on short term cycles. What drives short term demand for currencies and futures prices can't be explained by larger macroeconomic models, which predict long term rather than short term movements. Scholars, economists, traders and market watchers can't agree on when cycles begin or end; they only know when we are in one or the other, a determination that can only be made by looking at past economic data. But one variable can't determine which cycle exists. Past research looked at employment as a predictor, until employment was found to be a lagging indicator. Others looked at factors such as the National Purchasing Managers Index and even compared that data to prices and economic activity within the 12 Federal Reserve Districts. This proved faulty. So inflation studies began. When those proved faulty as well, researchers looked at core inflation, which excludes food and energy, to predict cycles. None proved absolute.
 Modern day studies focus on the markets and market indices, comparing technical analysis to fundamental analysis to determine whether a valid prediction may exist. Much of the research is good and getting better, but we still don't have a definitive answer as to when cycles begin or end. And no study of micro- or macroeconomic models can serve us properly unless we also look at commodity supply and crop reports. Until an answer emerges, investors and traders would be best served watching the markets for direction.
November 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University.

 

Treasury International Capital: Tic Data History

Treasury International Capital, known in the modern day as TIC, was implemented in 1934 under the Presidency of Franklin Delano Roosevelt to provide data on United States international portfolio investment and capital movements. The data was collected and reported by various agencies over the years, such as the Bureau of Economic Analysis and the Census Bureau, but because of a lack of interaction among the agencies that collected, reported and analyzed the data, problems existed.
Problems existed in the type and quality of information in the early years, so the Office of Federal Statistical Policy and Standards coordinated statistical efforts across agencies, which accounted for the system's smooth operation for many decades. When the Office of Federal Statistical Policy and Standards lost its role in the 1980s, Treasury took over the collecting and reporting functions.
In 1983, after many years of negotiation, Treasury agreed that the Fed was entitled to access bank TIC data. Thanks to better coordination between the two agencies, TIC data is now collected and, only recently, reported quite accurately on a monthly and quarterly basis. The impetus came from the world crisis triggered by the Asian currency devaluation, which caught the world off guard and alerted all nations that a better reporting system was needed. From 1974, and after the TIC system redesign in 1978, TIC data was collected and reported every five years and covered only certain types of securities transactions. Only in the past 10 years has this data been reported quarterly, then monthly. But the collection and types of data didn't come without problems.
  Currency transactions were never a factor in the early reporting. Traditionally, when $10,000 entered or left the United States, a Currency and Monetary Instruments Report was filed with Customs, but that reporting was never reflected in TIC data. Today, dollar values as well as the currency claims and liabilities of transactions are reported accurately, thanks to easy computer transmission and accountability; this began in March 2003. For example, how does a trading firm incorporated in the United States handle transactions to and from its London office? What if that transaction resulted in a gain or loss? What if those monies sat idle in an overseas bank account? What if a bank wrote off a bad loan? All of this is now fully reflected in the TIC reports.
 TIC data is the collection and reporting of purchases and sales of U.S. securities and financial instruments by institutions, governments, central banks, corporations and many other entities. In earlier days, the US was concerned only with the reporting and collection of long term Government securities. Now the focus covers all transactions, short and long term: stocks, derivatives, currencies, options, forwards and swaps, as well as bank transactions and any cross-border transactions. The purpose is twofold: first, to report the cross-border portfolio positions of nations, central banks, corporations and other entities; second, to determine the dollar values that enter and exit the US. This is conducted for purposes of accountability and is important for monetary policy. The data is used to determine the balance of payments, to inform international policy and to track international financial markets. The balance of payments is published quarterly by the Bureau of Economic Analysis in three sections: the current account, the capital account and the financial account. It records the debits and credits of the flow of funds into and out of the US. All of this information offers distinct analytical insight.
  What if governments purchased short term T-Bills rather than long term bonds? What if corporate bonds were purchased over agency securities? What if central banks were selling government securities or selling dollar assets? What does this say for monetary policy? Should deficits be financed by governments or by private markets?
Should interest rates rise or fall based on inflows and outflows of dollars? Should the US government buy what foreigners sell, or sell what foreigners buy? And what types of instruments? For example, overall foreign ownership of securities was 4.8% in 1974 and 13.5% by 2003, while foreign ownership of US Treasuries was 14.7% in 1974 and 45.5% by 2003. These numbers reflect dispersion rather than concentration in any one nation. Yet they have increased dramatically since 2003, with nation-specific concentrations, since part of modern day reporting is nation specific. As of September 2009, China held $798.9 billion in US Government debt, up from $618.2 billion in September 2008. The next largest holder of US debt is Japan, which held $617.2 billion in September 2008 and $751 billion in September 2009. Great Britain is third but doesn't come near these totals.
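The year-over-year changes in the holdings cited above are easy to compute; the figures are the ones from the text, in billions of dollars.

```python
def yoy_change(new: float, old: float):
    """Dollar and percentage year-over-year change in holdings."""
    return new - old, (new - old) / old * 100

# Treasury holdings cited above, September 2008 -> September 2009 ($B).
for nation, old, new in [("China", 618.2, 798.9), ("Japan", 617.2, 751.0)]:
    dollars, pct = yoy_change(new, old)
    print(f"{nation}: +${dollars:.1f}B ({pct:.1f}%)")
```

China's holdings grew by roughly $180.7 billion, about 29%, in a single year, which is the kind of nation-specific concentration modern TIC reporting is designed to surface.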
  The monthly TIC data is divided between Treasury's TIC B reports and the Federal Reserve's S reports. BQ reports are Treasury's quarterly reports, while FR and FF reports belong to the Federal Reserve for quarterly reporting purposes. Each will be handled and explained separately, with explanation of changes along the historic journey.
  Treasury’s BL1 reports dollar denominated liabilities to foreigners.Excludes short term instruments. BL2 reports dollar denominated liabilities from foreigners.Includes longer term instruments. Institutions refer to depository, bank holding companies, financial holding companies, brokers and dealers. Foreign institutions refers to central banks, Ministries of Finance, Treasuries, Diplomatic Establishments, International and regional organizations. FR 2050 is a weekly report of EURODOLLAR liabilities held in foreign offices of US banks. FFIEC 002 that stands for Federal Financial Institutions Examinations Council Agency reports assets and liabilities of US branches and agencies of foreign banks. They collect balance and off balance sheet information. FFIEC 019- county exposure report for US branches and agencies of foreign banks. This information is collected nation by nation. FR 2069 now FR 2644, a weekly report that collects information on borrowings, loans, deposits and selected balance sheet items.
  FR Y-7N covers US non-bank subsidiaries held by foreign banking organizations; FR Y-7Q is the capital and asset report for foreign banking organizations. These reports are compiled by the International Reports Division of the Federal Reserve Bank of New York and reported on TIC D forms, which also cover derivatives. The derivatives market grew from a notional value of $87 trillion in June 1998 to $454 trillion in June 2006 measured in payments; measured by market value, it grew from $3 trillion in June 1998 to $10 trillion in June 2006. Since 2007, derivatives data has been reported in TIC data on TIC form D. The Federal Reserve's S forms capture monthly data on US entities that buy or sell long term securities directly from or to foreigners.
 Quarterly reports are represented by forms BQ2, foreign currency liabilities and claims of depository institutions, with part 2 covering customers' foreign currency liabilities to foreigners; BQ3, maturities of selected liabilities of depository institutions and bank holding companies to foreigners; and FR 2502a, assets and liabilities of large foreign offices of US banks. Monthly TIC reports can be viewed as rollovers leading into the quarterly and semiannual reports. Monthly and quarterly reports are released by the Treasury and found in detail on the Treasury's web site.
 While the International Portfolio Investment and Capital Movements program began in 1934, it was suspended until 1943. The monthly and quarterly reports began in 1994, with many additions over the years as new financial products were introduced and new laws reflected expanded banking opportunities. Computers also sped the free flow of capital across borders. While the monthly and quarterly releases may draw criticism and praise from commentators, those employed in the TIC department of the Treasury are not only experts but quite dedicated professionals.
November 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

International Monetary Market

The introduction of the International Monetary Market in December 1971, and its formal implementation in May 1972, can be traced to the end of Bretton Woods through the 1971 Smithsonian Agreement and Nixon's suspension of the United States dollar's convertibility to gold. The increase in international business and trade, currency and interest rate volatility due to floating exchange rates, the lockout of corporations and speculators from the interbank market, and world trade imbalances resulted in the need for the IMM. The IMM Exchange was formed as a separate division of the Chicago Mercantile Exchange, whose sole purpose until then was the trade of agricultural futures. With 500 chartered members, increased to 750 by 1976, and a $10,000 membership fee, increased to $325,000 by 1987, the purpose of the IMM was the trade of currency futures, a new product previously studied by academics, in a freely traded exchange market that would facilitate trade among nations.

The first experimental futures contracts traded against the US Dollar included the British Pound, Swiss Franc, German Deutsche Mark, Canadian Dollar, Japanese Yen and, in September 1974, the French Franc. The list would later expand to include the Australian Dollar, the Euro, and emerging market currencies such as the Russian Ruble, Brazilian Real, Turkish Lira, Hungarian Forint, Polish Zloty, Mexican Peso and South African Rand. In 1992, the German Deutsche Mark/Japanese Yen pair was introduced as the first futures cross-rate currency. These early successes didn't come without a price.

The challenging aspects were how to connect the values of IMM foreign exchange contracts to the interbank market, the dominant means of currency trading in the 1970s, and how to allow the IMM to be the free-floating exchange envisioned by academics. Clearing member firms were incorporated to act as arbitrageurs of a sort between banks and the IMM, facilitating orderly markets between bid and ask spreads. Continental Bank of Chicago was later hired as a delivery agent for contracts. These successes bred competition for new futures products never envisioned in so short a time.

The Chicago Board Options Exchange competed for and received the right to trade US 30 year Bond futures, while the IMM secured the right to trade Eurodollar contracts, a 90 day interest rate contract settled in cash rather than by physical delivery. US dollar deposits held in banks in Europe and on other continents came to be known as Eurodollars, and this market came to be known as the Eurocurrency market, used mainly by the Organization of Petroleum Exporting Countries because OPEC always required payment for oil in US dollars. The cash settlement aspect would later pave the way for index futures such as world stock market indices and the IMM Index. Cash settlement would also allow the IMM to be later known as the cash market because of its trade in short term, interest-rate-sensitive instruments such as 30 day Fed Funds futures, 13 week T-Bills, 2 and 10 year Notes, Libor, Euroyen Tibor, and 3 month OIS futures, a swap that allows spread trades between a 3 month money market asset and the overnight cost of financing that asset over the 3 month period.

With new competition, a transaction system was desperately needed. The CME and Reuters Holdings created the PMT, Post Market Trade, a global electronic automated transaction system that acted as a single clearing entity and linked the world's financial centers such as Tokyo and London. PMT is known today as Globex, which facilitates not only clearing but electronic trading for traders around the world. Authorized in 1975, US T-Bill futures began trading on the IMM in January 1976, with options on T-Bill futures following in April 1986 with the approval of the Commodity Futures Trading Commission.

The real success came in the mid-1980s, when options began trading on currency futures: the Deutsche Mark in January 1984, the British Pound and Swiss Franc in February 1985, the Japanese Yen in March 1986, the French Franc in 1984, the Canadian Dollar in June 1986, the European Currency Unit in January 1986 and the Australian Dollar in 1987. By 2003, foreign exchange trading had a notional value of $347.5 billion.

The 1990s saw explosive growth for the IMM due to three world events. The first was Basel I in July 1988, when the central bank governors of 12 nations agreed to standardize guidelines for banks: bank capital had to equal 4% of assets. The second was the 1992 Single European Act, which allowed not only capital to flow freely across national borders but also banks to incorporate in any EU nation. The third, Basel II, is geared to controlling risk by preventing losses, and remains a work in progress.

A bank's role is to channel funds from depositors to borrowers. With these new acts, depositors could be governments, governmental agencies and multinational corporations. The role of banks in this new international arena exploded; to meet the demands of financing capital requirements, new loan structures and new interest rate structures such as overnight lending rates, banks increasingly used the IMM for all their financing needs. A whole host of new trading instruments was also introduced, such as money market swaps to lock in or reduce borrowing costs, and swaps for arbitrage against futures or for hedging risk, though swaps would not be introduced until the 2000s. Types of trades changed as well: calendar spreads, overnight trades and spread trades. Further, banks' relationships with central bankers solidified completely under these new arrangements. There is no better example than a crisis.

In financial crises, central bankers must provide liquidity to stabilize markets, because risk may trade at premiums to a bank's target rates, called money rates, that central bankers can't control. Central bankers then provide liquidity to the banks that trade and control rates. These are called repo rates, and they are traded through the IMM. Repo markets allow participants to undertake rapid refinancing in the interbank market, independent of credit limits, to stabilize the system. A borrower pledges securitized assets, such as stocks, in exchange for cash to allow its operations to continue.

Asian money markets linked to the IMM because Asian governments, banks and businesses needed a faster way to facilitate business and trade than borrowing US dollar deposits from European banks. Asian banks, like European banks, were saddled with dollar-denominated deposits, because all trades were dollar denominated due to the US dollar's dominance. Extra trades were needed to facilitate trade in other currencies, particularly euros, which took more time than necessary. The two continents would share not only an explosion of trade but two of the most widely traded currencies on the IMM. For this reason, the Japanese Yen is quoted in US cents, while Eurodollar futures are quoted based on the IMM Index, a function of the 3 month Libor rate.

The 3 month Libor rate is subtracted from the IMM Index base of 100 to ensure that bid prices stay below ask prices. These are the normal procedures used in other widely traded instruments on the IMM to ensure market stabilization and orderly trading. For example, price quotes for T-Bill futures contracts are based on the IMM Index: subtract the discount yield of the T-Bill from the IMM's base of 100, so a 9.75 yield equals a 90.25 IMM Index price. Index values move in the same direction as futures prices, and the same convention applies to the Euro contracts. Widely traded instruments are tracked by the IMM Index.
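The quoting convention is just a subtraction from the base of 100, and it runs in both directions; the 9.75 figure is the T-Bill example from the text, while the 94.50 quote is an illustrative assumption.

```python
def imm_index(yield_pct: float) -> float:
    """IMM Index price: the quoted rate subtracted from a base of 100."""
    return 100.0 - yield_pct

def implied_yield(index_price: float) -> float:
    """Recover the implied rate from an IMM Index quote."""
    return 100.0 - index_price

print(imm_index(9.75))       # the T-Bill example in the text -> 90.25
print(implied_yield(94.50))  # a hypothetical Eurodollar quote -> 5.50
```

Because the price is 100 minus the rate, index values rise when rates fall, which is why index values move in the same direction as futures prices.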

In June 2000, the IMM switched from a not-for-profit membership organization to a for-profit, shareholder-owned entity. It opens for trading at 8:20 Eastern time to reflect the major US economic releases reported at 8:30. The IMM is one of the largest financial markets in the world. Banks, central bankers, multinational corporations, traders, speculators and other institutions all use its various products to borrow, lend, trade, profit, finance, speculate and hedge risks.

November 2009 Brian Twomey


Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

Debt Monetization

The public debate over debt and debt monetization is as old as the Republic. James Madison called debt a curse on the public; first Treasury Secretary Alexander Hamilton called it a blessing, provided the debt wasn't large. The modern term debt monetization arose from the Treasury's cost of financing World War Two's debt, as the Federal Reserve's holdings of government debt tripled from 1943 to 1946.

The public was fearful of buying debt during this period. Historically, the Treasury Department, then and now, determines the amount of debt and the maturities issued; in that capacity it once held full control over monetary policy, defined as the supply of money and credit. The Federal Reserve was the distributor of all debt to the public and supported debt prices through sales of bonds, notes and bills. A collision occurred between the two agencies over their roles due to the failure to finance the war debt in a timely fashion. The 1951 Treasury-Fed Accord settled the question of who controls the Fed's balance sheet by reversing the roles: the Fed would control monetary policy by supporting debt prices, without control over the debt it holds, buying what the public doesn't want, while the Treasury would focus on the amount of issuance and the categories of maturities.

Monetary policy since 1951 has been controlled through the Fed's open market operations under a Treasuries-only policy. This separated the Fed from fiscal policy and credit allocation and allowed for true independence. It freed the Fed from monetizing debt for fiscal policy purposes and prevented collusion, such as agreements to peg interest rates directly to Treasury issues. Credit policy was also separated and limited to the Treasury, defined as bailing out institutions, sterilizing foreign exchange operations and transferring Fed assets to the Treasury for deficit reduction. The Treasury Secretary and the Comptroller of the Currency were removed from the Federal Reserve Board, so policy decisions were separated from fiscal policy. Today, the seven members of the Board of Governors, including the Fed Chairman, and five of the twelve Federal Reserve Bank presidents make up the Federal Open Market Committee, which sets interest rate and money supply policies.

Monetizing the debt can be defined as money growth in relation to interest rates, not money growth in relation to government purchases or open market operations. Monetizing the debt occurs when changes in debt produce changes in interest rates. Yet money growth alone is not monetization, since money growth ebbs and flows through contraction and expansion cycles over the years without a change in interest rates. Suppose a wash sale occurred where all debt issued was sold: no monetization. That is fiscal policy objectives completed; fiscal policy is the tax and spending policy of the current Presidential Administration. What if money growth was equal to debt? Again, no monetization. Money growth is found in M1, M2 and M3: M1 is money in circulation, M2 is M1 plus savings and time deposits under $100,000, and M3 is M2 plus large time deposits over $100,000. Open market operations, then, are the issuance of debt replaced with money.
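The aggregates nest exactly as defined above. A sketch with hypothetical component values in $ billions (real figures come from the Fed's statistical releases):

```python
# Hypothetical component values, $ billions (illustrative only).
currency_and_checking  = 1_400  # M1: money in circulation
small_savings_and_time = 5_600  # savings and time deposits under $100,000
large_time_deposits    = 1_100  # time deposits over $100,000

m1 = currency_and_checking                 # narrowest aggregate
m2 = m1 + small_savings_and_time           # M1 plus small deposits
m3 = m2 + large_time_deposits              # M2 plus large deposits
print(m1, m2, m3)
```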

Monetizing the debt can also be characterized as money growth in excess of the federal debt, or as no money growth in relation to debt. This last case is called the liquidity effect, where low money growth leads to low interest rates. Either will change the velocity of money, defined as how fast money circulates. Usually the target is debt growth equal to velocity, which keeps the system in sync.

A better way to understand this relationship is to ask exactly what the Fed targets. Does it target growth to velocity, money growth to employment as was once the case, money growth equal to the present supply of money, interest rates, or even inflation? Targeting inflation has proven not only disastrous, but studies show negative statistical relationships forcing an out-of-sync growth-to-debt relationship. Many avenues have been tried since the 1913 Federal Reserve Act created the Federal Reserve System.

The question of monetization and growth to debt must be understood in terms of the multiplier effect: how much the money supply increases in response to changes in the monetary base. This is a better method for understanding Fed holdings. Suppose the Fed changed banks' reserve requirements, the cash ratio banks must hold against customer deposits. This would change the rate of money growth through the multiplier and the monetary base, and possibly cause an interest rate change. As long as debt is in sync with this money growth, no monetization occurs, because all that increased was the monetary base, the supply of money and credit. Studies over the years show without question a statistical impact between money growth and changes in debt.
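A sketch of the multiplier arithmetic, assuming the textbook simple multiplier of one over the reserve ratio (real-world multipliers are smaller because of currency drain and excess reserves):

```python
def money_multiplier(reserve_ratio: float) -> float:
    """Simple money multiplier: 1 / required reserve ratio."""
    return 1.0 / reserve_ratio

def money_supply_change(base_change: float, reserve_ratio: float) -> float:
    """Maximum change in the money supply from a change in the base."""
    return base_change * money_multiplier(reserve_ratio)

print(money_multiplier(0.10))         # 10% reserve requirement
print(money_supply_change(50, 0.10))  # $50B of new base, $B of new money
print(money_multiplier(0.125))        # raising the requirement shrinks it
```

Raising the reserve requirement from 10% to 12.5% cuts the multiplier from 10 to 8, which is exactly the channel described above: a change in the requirement changes the rate of money growth without any change in debt issuance.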

Monetization occurs in other ways, such as when money growth targets higher interest rates: money growth with desired growth targets. The only way this can occur is to reduce maturity levels to increase liquidity. An increase in liquidity with corresponding reductions in debt issuance would cause a higher money supply and disequilibrium in growth to debt, and interest rates would have to rise to bring the system back to equilibrium. The problem is that when interest rates rise, the value of outstanding debt falls, and longer term debt falls more than short term debt, so deficits ensue from a slowdown in economic activity and an increase in debt-to-income ratios. This method boosts GDP growth in the short term but slows an economy in the longer term.
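The claim that longer term debt falls more than short term debt when rates rise can be illustrated by repricing two annual-pay par bonds after a one-point rise in yields; the coupon and yield figures are illustrative assumptions, not from the text.

```python
def bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> float:
    """Present value of an annual-pay coupon bond."""
    coupon = face * coupon_rate
    pv = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv + face / (1 + yield_rate) ** years

# A rate rise from 4% to 5% hits the 30-year far harder than the 2-year.
for years in (2, 30):
    before = bond_price(100, 0.04, 0.04, years)  # priced at par
    after = bond_price(100, 0.04, 0.05, years)   # after the rate rise
    print(years, round(after - before, 2))
```

The 2-year bond loses a little under $2 per $100 of face value, while the 30-year loses over $15, the duration effect behind the deficits described above.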

During contractionary cycles and low interest rate environments, money growth and debt usually decrease simultaneously. This means governments must pay out the yields that bonds, notes and bills command in the marketplace, and new debt and taxes are needed to retire old debt and service the new. If bond prices are not rising and governments are only paying yields, further contraction and a lengthening of the cycle are ensured. The debt tripled between 1943 and 1946 because investors didn't want to buy bonds whose prices were decreasing; investors can't earn money on yields alone. Yet as long as money growth equals debt, no monetization occurs.

It's important to watch the amount of debt and the length of maturities offered by the Treasury. For the most part, equal maturities have traditionally been offered in the 2, 10 and 30 year bonds and the 13 week T-Bills. Watch for any changes to this dynamic, as money growth to debt will change; also watch the prices of these various instruments. You don't want short term debt to pay more than long term debt, which presages a wholesale change in the growth-to-debt ratio. Be especially careful of big demand for shorter term debt, because it may crowd out longer term capital. This is called debt neutrality, or Ricardian Equivalence, named after David Ricardo, the famous 19th century economist. Debt neutrality can be observed when the Treasury issues more shorter term debt than longer term maturities. The purpose is twofold: to hide deficits or, as in years past, to keep inflation and employment low. While the net debt issued may be equal, the long term effects can be devastating to an economy.

Lastly, be aware of Fed statements, as monetary policy can only target the money supply or interest rates. Understanding money growth relative to debt will help in understanding the Fed's direction.

November 2009 Brian Twomey


Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

International Monetary Market

 The introduction of the International Monetary Market in December 1971 and formal implementation in May 1972 can be traced to the end of Bretton Woods through the 1971 Smithsonian Agreement and Nixon’s suspension of United States dollar convertability to gold. The increase in international business and trade, currency and interest rate volatility due to floating exchange rates, corporations and speculators lock out in the interbank market and world trade imbalances resulted in the need for the IMM. The IMM Exchange was formed as a separate division of the Chicago Mercantile Exchange whose sole purpose was trade of agricultural futures. With IMM’s 500 chartered members, increased to 750 by 1976 and a $10,000 membership fee increased to $325,000 by 1987, the purpose of the IMM was trade of currency futures, a new product previously studied by academics to open a freely traded exchange market to facilitate trade among nations.
 The first futures experimental contracts included trade against the US Dollar such as the British Pound, Swiss Franc, German Deutsch Mark, Canadian Dollar, Japanese Yen and September 1974, the French Franc. This list would later expand to include the Australian Dollar, the Euro, emerging market currencies such as the Russian Ruble, Brazilian Real, Turkish Lira, Hungarian Forint, Polish Zloty, Mexican Peso and South African Rand.. In 1992, the German Deutsche Mark/Japanese Yen was introduced as the first futures cross rate currency.These early successes didn’t come without a price.
The challenging aspects were how to connect the values of IMM foreign exchange contracts to the interbank market, the dominant means of currency trading in the 1970s, and how to allow the IMM to become the free-floating exchange envisioned by academics. Clearing member firms were incorporated to act as arbitrageurs between banks and the IMM to facilitate orderly markets between bid and ask spreads. Continental Bank of Chicago was later hired as a delivery agent for contracts. These successes bred competition for new futures products never envisioned in that short span.
The Chicago Board of Trade competed and received the right to trade US 30-year bond futures, while the IMM secured the right to trade Eurodollar contracts, a 90-day interest rate contract settled in cash rather than physical delivery. US dollar deposits held in banks in Europe and on other continents came to be known as Eurodollars, and the resulting Eurocurrency market was used mainly by the Organization of the Petroleum Exporting Countries because OPEC always required payment for oil in US dollars. This cash settlement aspect would later pave the way for index futures, such as world stock market indices and the IMM Index. Cash settlement would also allow the IMM to become known as a cash market because of its trade in short-term, interest-rate-sensitive instruments such as 30-day Fed Funds futures, 13-week T-Bills, 2- and 10-year Notes, Libor, Euro/Yen Tibor, and 3-month OIS futures, a swap that allows spread trades between a 3-month money market asset and the overnight cost of financing that asset over the 3-month period.
With new competition, a transaction system was desperately needed. The CME and Reuters Holdings created Post Market Trade (PMT), a global electronic automated transaction system designed to act as a single clearing entity and link the world's financial centers such as Tokyo and London. PMT is today known as Globex, which facilitates not only clearing but electronic trading for traders around the world. T-Bill futures began trading on the IMM in January 1976, with options on T-Bill futures following in April 1986, with approval from the Commodity Futures Trading Commission.
The real success came in the mid-1980s, when options began trading on currency futures: the Deutsche Mark in January 1984, the French Franc later in 1984, the British Pound and Swiss Franc in February 1985, the European Currency Unit in January 1986, the Japanese Yen in March 1986, the Canadian Dollar in June 1986, and the Australian Dollar in 1987. By 2003, foreign exchange trading had a notional value of $347.5 billion.
The 1990s saw explosive growth for the IMM due to three world events. The first was Basel I in July 1988, when central bank governors of the Group of Ten nations agreed to standardize capital guidelines for banks; tier-one bank capital had to equal at least 4 percent of risk-weighted assets. The second was the Single European Act, whose single-market program was completed in 1992, which allowed not only capital to flow freely across national borders but also any bank to incorporate in any EU nation. The third, Basel II, is geared to control risk by preventing losses and remains a work in progress.
A bank's role is to channel funds from depositors to borrowers. With these new acts, depositors could be governments, governmental agencies and multinational corporations. The role for banks in this new international arena exploded. To meet the demands of financing capital requirements, new loan structures and new interest rate structures such as overnight lending rates, banks increasingly used the IMM for all their finance needs. A whole host of new trading instruments was also introduced, such as money market swaps to lock in or reduce borrowing costs and swaps for arbitrage against futures or to hedge risk; swaps would not be introduced until the 2000s, however. Types of trades changed as well, with calendar spreads, overnight trades and spread trades. Further, bank relationships with central bankers solidified completely under these new arrangements. There is no better example than a crisis.
In financial crisis situations, central bankers must provide liquidity to stabilize markets, because risk may trade at premiums to a bank's target rates, called money rates, which central bankers can't control. Central bankers then provide liquidity to the banks that trade and control rates. These are called repo rates, and they are traded through the IMM. Repo markets allow participants to undertake rapid refinancing in the interbank market, independent of credit limits, to stabilize the system. A borrower pledges securitized assets, such as stocks, in exchange for cash that allows its operations to continue.
Asian money markets linked to the IMM because Asian governments, banks and businesses needed to facilitate business and trade faster than by borrowing US dollar deposits from European banks. Asian banks, like European banks, were saddled with dollar-denominated deposits because all trades were dollar denominated due to the US dollar's dominance; extra trades were needed to facilitate trade in any other currency, particularly euros, taking more time than necessary. These two continents would share not only an explosion of trade but also two of the most widely traded currencies on the IMM. For this reason, the Japanese Yen is quoted in US cents, while Eurodollar futures are quoted based on the IMM Index, a function of the 3-month Libor rate.
The IMM Index is calculated by subtracting the 3-month Libor rate from a base of 100, which ensures that bid prices sit below asked prices. These are normal market procedures used in other widely traded instruments on the IMM to ensure market stabilization and orderly trading. For example, price quotes for T-Bill futures contracts are also based on the IMM Index: subtract the discount yield of the T-Bill from the base of 100, so a 9.75 yield would equal a 90.25 IMM Index. Index values move in the same direction as futures prices. The same applies to the Euro contract. Widely traded instruments are tracked by the IMM Index.
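The arithmetic above is simple enough to sketch directly; the rates below are illustrative:

```python
# Sketch of the IMM Index convention described above: the instrument's
# rate or discount yield is subtracted from a base of 100, so quoted
# prices and rates move inversely.

def imm_index(rate_percent):
    """IMM Index price for a given rate (e.g. 3-month Libor or T-Bill discount yield)."""
    return 100.0 - rate_percent

def implied_rate(index_price):
    """Recover the rate implied by a quoted IMM Index price."""
    return 100.0 - index_price

print(imm_index(9.75))      # 90.25, the T-Bill example from the text
print(implied_rate(90.25))  # 9.75
```

Because the index is 100 minus the rate, a falling rate means a rising quoted price, which is why index values track futures prices directly.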
As of June 2000, the IMM switched from a not-for-profit to a for-profit, membership- and shareholder-owned entity. It opens for trading at 8:20 a.m. Eastern time to reflect major US economic releases reported at 8:30. The IMM is among the largest financial markets in the world: banks, central bankers, multinational corporations, traders, speculators and other institutions all use its various products to borrow, lend, trade, profit, finance, speculate and hedge risks.
November 2009 Brian Twomey
   Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

 

Japanese Keiretsu

The Japanese corporate system of governance known as Keiretsu dates back to the Meiji Restoration of 1868 and the world's introduction to the industrial revolution. Because Japan has always been a small, very educated and very advanced society, the only way to compete among its larger Asian neighbors and ensure perpetuity was to group its companies into tightly knit relationships, a cultural trait some would argue. The English translation of Keiretsu denotes lineage, while its forerunner, Zaibatsu, means monopoly or financial clique.

Some would argue whether Keiretsu, or its older rival, even exists in the group form suspected. Some find the modern-day basis for its existence in a 1952 Japanese law that mentions the word Keiretsu; others assume the Zaibatsu existed because of the perpetuation of major Japanese companies that formed long before the Meiji Restoration and are still powerful and profitable today. One example is Mitsui.

Mitsui began as a dry goods shop in 1673; ten years later it opened money-changing shops for the Japanese government, the Tokugawa Shogunate, in the capital city of Edo. With the introduction of a Japanese monetary system, Mitsui later became a bank, today a leading bank of Japan. Other examples include Sumitomo, which originated as a mining and smelting company, later expanded into copper, and is today also a leading bank of Japan. Mitsubishi was founded as a shipping firm in Tosa, today's Kochi Prefecture, a region of Japan much like a county in the United States. Yasuda formed as a bank in Toyama Prefecture and Okura formed as a bank in Niigata Prefecture. Later these banks and other businesses formed holding companies, all family owned and managed.

These so-called Zaibatsu holding companies were eliminated after World War II by the United States, and the prohibition was written into the new Japanese constitution because of their undemocratic nature and the governmental policies that perpetuated their existence. With Japan devastated after the war, it was time for Japanese companies to reinvent themselves. In their place came the Keiretsu.

A Keiretsu is a corporate governance system that has a bank as its first line of formation. The major banks established in the Zaibatsu period each formed a leading Keiretsu based in the region where they began as Zaibatsu corporations. The next line of a Keiretsu is a major corporate conglomerate such as Toyota, Nissan, Matsushita Electric or Nippon Steel. Groups formed for a specific business purpose are called vertical Keiretsu, while the horizontal Keiretsu center on the six largest banks of Japan. The remaining companies of a Keiretsu are ancillary companies that perpetuate the conglomerate by supplying parts, distribution and trading for exports. All corporate needs are met within the Keiretsu, so different Keiretsu rarely conduct business with each other. The all-important companies in a Keiretsu are the bank and the conglomerate, which some argue are the controllers of the Keiretsu, whose goals are profits and long-term existence achieved by restricting competitors and hostile takeovers.

Features of a Keiretsu include established long-term relationships, a vast supply of workers, permanent employment, a steady supply of capital from the bank, information shared with suppliers, and inventory managed to reduce costs, increase efficiency and improve supply chain management. Some point to the just-in-time inventory system devised by the automobile Keiretsu as evidence of the success of Keiretsu formations in meeting foreign demand.

Financing begins with Keiretsu companies owning shares of stock in other member companies, especially cross-holdings between the banks and the major conglomerate. The major conglomerates are also said to own majority stakes in smaller Keiretsu companies for control purposes, as well as to supply members to sit on their corporate boards. Control means conglomerates consult with smaller companies on investment decisions, with the ability to take those companies over.

The costs of a Keiretsu include inefficiency, since a large supply of bank capital leaves little reason to worry about survival; too much debt and a proneness to bankruptcy; risk aversion, since there is little incentive to take chances; and less profitable firms that grew slowly without innovation or structural change.

The term Keiretsu first appeared in July 1952 when the Small and Medium Enterprises Planning Bureau issued guidelines for a program to target general machinery for productivity improvement. This program was called Keiretsu Shindan, Keiretsu diagnosis. This led scholars and the popular press on a course to prove the existence of Keiretsu, diagnose its operations and cry foul when outside nations couldn’t establish operations in Japan.

Factors to consider regarding Keiretsu existence include the 2002 merger of Sumitomo and Mitsubishi banks, as well as the second historic merger, of Fuji and Dai-Ichi Kangyo banks. Lunch clubs for major company executives have met monthly in Japan since 1967. Not only is it hard to prove from these factors that Keiretsu exist, it has never been proven despite the many studies published over many years.

Due to the economic crisis that hit Japan in the late 1990s and the major conglomerates' loss of profits, Japanese companies have opened to competition. Firms now compete on price and quality using market-based systems instead of what are termed Keiretsu relational arrangements. Globalization and technology are also said to be opening Japanese companies, because of the need to identify new customers and increase the efficiency of orders and research, so Japanese companies are leaving their Keiretsu ways and going it alone.

The existence of Keiretsu has never been definitively proven. Some say Marxist economists identified Keiretsu because it satisfied their ideology; others say the idea came from attacks by unsatisfied companies. Either way, Japanese companies are opening more and more as economic crises hit them harder and harder.

November 2009 Brian Twomey

 

Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

 

McGinley Dynamic Part 2

The McGinley Dynamic is a little-known yet highly reliable indicator invented by John R. McGinley, a Certified Market Technician and former editor of the Market Technicians Association’s Journal of Technical Analysis. Working within the context of moving averages throughout the 1990s, McGinley sought to invent a responsive indicator that would automatically adjust itself in relation to the speed of the market. His eponymous Dynamic, first published in the Journal of Technical Analysis in 1997, is a 10-day simple and exponential moving average with a filter that smooths the data to avoid whipsaws.

Simple Moving Averages vs. Exponential Moving Averages

A simple moving average (SMA) smooths out price action by summing past closing prices and dividing by the number of periods. To calculate a 10-day simple moving average, add the closing prices of the last 10 days and divide by 10. The smoother the moving average, the slower it reacts to prices: a 50-day moving average moves slower than a 10-day moving average. A 10- or 20-day moving average can at times experience price volatility that makes price action harder to interpret. False signals may occur during these periods, creating losses, because prices may get too far ahead of the market.

An exponential moving average (EMA) responds to prices much more quickly than a simple moving average. This is because the EMA gives more weight to the latest data rather than older data. It’s a good indicator for the short term and a great method to catch short term trends, which is why traders use both simple and exponential moving averages simultaneously for entry and exits. Nevertheless, it too can leave data behind.
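The contrast between the two averages can be sketched in Python on made-up closing prices; the smoothing factor 2 / (N + 1) is the standard EMA weighting, and seeding the EMA with the first close is a common convention, not the only one:

```python
# Sketch comparing an SMA and an EMA on illustrative data. The EMA weights
# recent closes more heavily, so it reacts to the price jump faster.

def sma(prices, n):
    """Simple moving average of the last n closes."""
    return sum(prices[-n:]) / n

def ema(prices, n):
    """Exponential moving average over the series, seeded with the first close."""
    alpha = 2 / (n + 1)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

closes = [10, 10, 10, 10, 10, 10, 10, 10, 10, 14]  # sudden jump on day 10
print(sma(closes, 10))  # 10.4
print(ema(closes, 10))  # higher than the SMA, i.e. closer to the new price
```

After the jump, the EMA sits closer to the latest price than the SMA, which is exactly the responsiveness (and the whipsaw risk) described above.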

The Problem With Moving Averages

In his research, McGinley found moving averages had many problems. In the first place, they were inappropriately applied. Moving averages in different periods operate with varying degrees in different markets. For example, how can one know when to use a 10-day, 20-day, or a 50-day moving average in a fast or slow market? In order to solve the problem of choosing the right length of the moving average, the McGinley Dynamic was built to automatically adjust to the current speed of the market.

McGinley believes moving averages should only be used as a smoothing mechanism rather than a trading system or signal generator. It is a monitor of trends. Further, McGinley found moving averages failed to follow prices since large separations frequently exist between prices and moving average lines. He sought to eliminate these problems by inventing an indicator that would hug prices more closely, avoid price separation and whipsaws, and follow prices automatically in fast or slow markets.

McGinley Dynamic Formula

This he did with the invention of the McGinley Dynamic. The formula is:

\begin{aligned} &\text{MD}_i = \text{MD}_{i-1} + \frac{ \text{Close} - \text{MD}_{i-1} }{ k \times N \times \left ( \frac{ \text{Close} }{ \text{MD}_{i-1} } \right )^4 } \\ &\textbf{where:}\\ &\text{MD}_i = \text{Current McGinley Dynamic} \\ &\text{MD}_{i-1} = \text{Previous McGinley Dynamic} \\ &\text{Close} = \text{Closing price} \\ &k = 0.6\ \text{(constant, 60\% of selected period } N\text{)} \\ &N = \text{Moving average period} \\ \end{aligned}

The McGinley Dynamic looks like a moving average line, yet it is actually a smoothing mechanism for prices that turns out to track far better than any moving average. It minimizes price separation, price whipsaws, and hugs prices much more closely. And it does this automatically as a factor of its formula.

Because of the calculation, the Dynamic Line speeds up in down markets as it follows prices yet moves more slowly in up markets. One wants to be quick to sell in a down market, yet ride an up market as long as possible. The constant N determines how closely the Dynamic tracks the index or stock. If one is emulating a 20-day moving average, for instance, use an N value half that of the moving average, or in this case 10.
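A minimal sketch of the formula in Python, with k = 0.6 as given above. The price series is illustrative, and seeding the first value with the first close is an assumption; the published formula does not mandate a particular seed:

```python
# Minimal McGinley Dynamic: each new value adjusts the previous one by the
# price gap, divided by k * N * (Close / previous MD)^4. When prices fall,
# the ratio term shrinks the denominator, so the line speeds up downward.

def mcginley_dynamic(closes, n, k=0.6):
    """Return the series of McGinley Dynamic values for the closing prices."""
    md = [closes[0]]  # seed with the first close (a common convention)
    for close in closes[1:]:
        prev = md[-1]
        md.append(prev + (close - prev) / (k * n * (close / prev) ** 4))
    return md

closes = [100, 102, 101, 105, 107, 106, 110]
for c, m in zip(closes, mcginley_dynamic(closes, n=10)):
    print(f"close={c:>3}  MD={m:.2f}")
```

Note the asymmetry the text describes: the fourth-power ratio makes the denominator small when prices drop below the line, so the Dynamic chases falling prices quickly, yet trails rising prices slowly.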

It greatly avoids whipsaws because the Dynamic Line automatically follows and stays aligned to prices in any market—fast or slow—like a steering mechanism of a car that can adjust to the changing conditions of the road. Traders can rely on it to make decisions and time entrances and exits.

The Bottom Line

McGinley invented the Dynamic to act as a market tool rather than as a trading indicator. But whatever it's used for, whether it is called a tool or an indicator, the McGinley Dynamic is quite a fascinating instrument invented by a market technician who has followed and studied markets and indicators for nearly 40 years. In creating the Dynamic, McGinley sought to create a technical aid that would be more responsive to the raw data than simple or exponential moving averages.

 

2009 Brian Twomey

Australia Vs United States Tax Treaty

Australia's 1953 tax treaty with the United States was superseded when both nations ratified a new treaty in 1983, updated again in 2006 to reflect modern-day developments. One purpose of a treaty is to prevent individuals and companies of third nations from inappropriately obtaining treaty benefits when they are not residents of either state. The second purpose is to define modern-day provisions clearly enough that the force of law of each nation and the treaty obligations can be enforced by both parties. With 21 million residents and an export-dependent economy that distributes the nation's abundant natural resources such as coal, zinc, copper, gold, aluminum and iron, Australia protected this status within this highly technical treaty. Many of these protections are addressed below.
To begin: when is an incorporated United States company considered an Australian company? When that company is managed and controlled in Australia, conducts business in Australia, and its voting power is controlled by Australian resident shareholders. If a company declared dual residency status, residency would fail and treaty benefits would be voided by both states.
The areas defined for treaty purposes include the continental shelf, to protect the exploitation and exploration of natural resources; this is defined further in Section 638 of the US Internal Revenue Code. For the United States, Puerto Rico, Guam and the Virgin Islands are not included. Australia covered the Norfolk Island territories, Christmas Island, the Cocos Islands, the Ashmore and Cartier Islands and the Coral Sea Islands.
For treaty purposes, a state can't tax higher or lower than its law allows; domestic law overrides any treaty obligation. An example can be found in Article 4 and in Article 1, Paragraph 3. If a US citizen relinquishes citizenship for tax avoidance, Section 877 of the IRS code says that person will be taxed for 10 years following the loss of citizenship. Secondly, Article 18, Paragraphs 2 and 6, says child support, social security and alimony are taxed by the respective state if domestic law taxes such revenue.
Companies incorporated in Australia are Australian for residency purposes; the same applies to partnerships, estates and trusts. A trust is exempt from taxes if it is formed for charitable or scientific research. Residency is defined as the place where the home is located or where major economic relations are conducted. Disputes over this Article 4, Paragraph 1 provision are handled under the Mutual Agreement clause in Article 24.
A company is considered a permanent establishment in Australia if management is conducted in Australia, or if it maintains a branch or office, a building site or factory, or an establishment for the extraction of natural resources.
For treaty purposes, Australia's corporate tax was 46 percent, since lowered to a flat rate of 30 percent, while permanent establishments of non-residents pay a 51 percent corporate tax. United States corporate tax rates vary depending on the type of corporate formation.
Dividends paid to non-residents can't be taxed at more than 15 percent of the gross amount. The prior rate was 30 percent for both states, a leftover from the 1953 treaty. Undistributed profits are taxed at 15 percent for non-resident companies under Article 10. Suppose a non-resident company has undistributed profits liable to tax: the 15 percent must be paid on the undistributed profits, in addition to foreign corporation taxes.
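The effect of the withholding cap is simple arithmetic; the amounts below are hypothetical and for illustration only, not tax advice:

```python
# Hypothetical sketch of the treaty's dividend withholding cap described
# above: a 15 percent ceiling on the gross dividend versus the older
# 30 percent rate carried over from the 1953 treaty.

def withholding(gross_dividend, rate):
    """Tax withheld on a gross dividend at the given rate."""
    return gross_dividend * rate

gross = 1000.0
print(withholding(gross, 0.30))  # under the old 1953-treaty rate
print(withholding(gross, 0.15))  # under the current treaty cap
```

On a hypothetical $1,000 gross dividend, the cap halves the withholding from $300 to $150.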
If interest is derived from a contracting state, no tax is paid if the interest has a source in either state, the owner is a resident of either state, or the monies are derived from a permanent establishment. The US can tax interest paid by an Australian company if the interest has a source in the US. Both Australia and the US tax interest paid to non-residents at 10 percent. Interest derived from the respective governments is tax exempt.
Gains connected with a permanent establishment are taxable where the permanent establishment is located. Other gains may be taxed by the state of the source of the gains and the state of residence of the owner, with relief to avoid double taxation.
If a citizen resides in either state for more than 183 days, that person may be taxed by that state.
  If a person resides in a third country but incorporates in Australia or the US, that person is granted treaty benefits.
Tax obligations can't be skirted: for example, an Australian company that establishes a trust in the US to collect dividends from an Australian company in order to avoid taxes is denied treaty benefits.
Under the double taxation clauses in Article 22, the US will give a foreign tax credit for income taxes paid to Australia, subject to the US Code. Australia agreed to allow Australian residents a credit against Australian income tax for tax paid in the US, other than tax paid solely by reason of US citizenship. If a US citizen is resident in Australia, both states tax worldwide income, meaning income generated from outside treaty jurisdictions, a common denominator for all parties to modern treaties. Yet if this resident paid Australian taxes, he or she will receive a credit from the US, minus Australia's foreign tax credit. The United States will not lower its normal taxable limits.
Further double taxation provisions state that Australia imposes a 5 percent additional corporate tax on the profits of Australian branches of foreign corporations, in lieu of a withholding tax on profit remittances. Income sourced in Australia by a US resident is taxed by Australia; income sourced in the US by a resident of Australia is taxed by the US. An entertainer doing even one show in Australia pays Australian taxes on that show.
The differences between the two states lie in the forms of taxation and the recognition of various corporate formations. Australia doesn't appear to recognize LLCs; the United States does. The United States has a progressive tax policy; Australia does not. The United States has established tax codes; Australia is constantly updating its own.
If any problems arise with treaty provisions, citizens can go to the state of residence or the state of citizenship. Dispute provisions run three years, which doesn't necessarily mean settlement within three years.
Treaty provisions are supposed to be updated every year to reflect changes in domestic laws, yet either party can terminate the treaty after five years with six months' notice.
October 2009 Brian Twomey
   Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University