Taylor Rule

The Taylor Rule is an interest rate forecast model developed by economist John Taylor in 1992 and outlined in his landmark 1993 study, Discretion Versus Policy Rules in Practice. Taylor worked in the early 1990's under the then credible assumption that the Federal Reserve gauged future interest rates based on, among other major components, the Rational Expectations Theory of macroeconomics.
That was a backward looking approach: it assumed that if workers, consumers and firms believed future expectations for the economy were good, interest rates needed no adjustment. The model was not only backward looking but also failed to account for long term economic prospects.
The Phillips Curve was the last of the discredited Rational Expectations models, one that attempted to forecast the trade off between inflation and employment. The problem, again, was that short term expectations might be correct, but what about the long term assumptions built on these models, and how could adjustments be made to an economy if the interest rate action taken proved wrong?
Here monetary policy was based more on discretion than on concrete rules. What we found was that we could no longer infer monetary expectations from Rational Expectations theories, particularly when an economy failed to grow or stagflation followed a recent interest rate change. So enter the Taylor Rule.
The formula looks like this: i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*), where i = the nominal fed funds rate, r* = the real federal funds rate (usually 2 percent), pi = the rate of inflation, pi* = the target inflation rate, y = the logarithm of real output and y* = the logarithm of potential output.
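To make the arithmetic concrete, here is a minimal sketch of the rule in Python; the function name and the sample inputs are illustrative assumptions, not part of Taylor's paper, and all rates are expressed as decimals.

# Minimal sketch of the Taylor Rule: i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*)
def taylor_rate(inflation, output_gap, real_rate=0.02, target_inflation=0.02):
    # output_gap is y - y*, the log (roughly percent) deviation of output
    return real_rate + inflation + 0.5 * (inflation - target_inflation) + 0.5 * output_gap

# Hypothetical example: 3 percent inflation, output 1 percent above potential
print(taylor_rate(0.03, 0.01))  # about 0.06, a 6 percent nominal fed funds rate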
What this equation says is this: the difference between a nominal and a real interest rate is inflation. Real interest rates are adjusted for inflation while nominal rates are not. Here we are looking at possible targets for interest rates, yet that can't be accomplished in isolation without looking at inflation.
To compare rates of inflation, one must look at the total picture of an economy in terms of prices. Prices and inflation are driven by three factors: the Consumer Price Index, producer prices and the employment index.
Most nations in the modern day look at the Consumer Price Index as a whole rather than at core CPI. Taylor recommends this method because core CPI excludes food and energy prices; the whole index allows an observer to see the total picture of an economy in terms of prices and inflation.
Rising prices mean higher inflation, so Taylor recommends factoring the rate of inflation over one year, or four quarters, for a comprehensive picture. He also recommends that the interest rate respond by one and a half times to changes in the inflation rate. This is based on the assumption of an equilibrium rate that weighs the real inflation rate against the expected inflation rate.
Taylor calls this the equilibrium: a 2 percent steady state inflation rate matched with a real rate of about 2 percent. Another way to look at this is through the coefficients on the deviation of real GDP from trend GDP and on the inflation rate. Both views are about the same for forecasting purposes. But that is only half of the equation; output must be factored in.
   The total output picture of an economy is determined by productivity, labor force participation and changes in employment.
For the equation we look at real output against potential output, expressed in logarithms. What is a logarithm? An exponent. Taking logs is one way to express this part of the equation, because the gap between log real output and log potential output approximates a percentage deviation. We must look at GDP in terms of real and nominal GDP or, to use the words of John Taylor, actual versus trend GDP. To do this we must factor in the GDP deflator, which measures the prices of all goods produced domestically.
The deflator equals nominal GDP divided by real GDP times 100; turned around, real GDP equals nominal GDP divided by the deflator times 100. We are deflating nominal GDP into a true number to fully measure the total output of an economy.
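In code, with made up figures purely for illustration, the deflator arithmetic looks like this:

# Hypothetical figures: nominal GDP and the GDP deflator recover real GDP
nominal_gdp = 14.5   # trillions, assumed for illustration
real_gdp = 13.0      # trillions, assumed for illustration
deflator = nominal_gdp / real_gdp * 100   # price index, about 111.5
print(nominal_gdp / deflator * 100)       # recovers real GDP, 13.0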
The product of the Taylor Rule is three numbers, an interest rate, an inflation rate and a GDP rate, all based on an equilibrium rate, to gauge the proper balance for an interest rate forecast by monetary authorities.
The rule for policymakers is this: the Federal Reserve should raise rates when inflation is above target or when GDP growth is too high and above potential. The Fed should lower rates when inflation is below target or when GDP growth is too slow and below potential.
When inflation is on target and GDP is growing at potential, rates are said to be neutral. The model's short term goal is to stabilize the economy and its long term goal is to stabilize inflation.
To properly gauge inflation and price levels, apply a moving average of the various price levels to determine a trend and to smooth out fluctuations. Perform the same functions on a monthly interest rate chart. Follow the fed funds rate to determine trends.
The Taylor Rule has stood many central banks around the world in good stead since its inception in 1993. It has served not only as a gauge of interest rates, inflation and output levels; it can equally serve as a guide to gauge proper levels of the money supply, since money supply levels and inflation must balance against each other. It allows us to weigh money against prices to gauge a proper balance, because inflation can erode the purchasing power of the dollar if it is not kept level.
 While the Taylor Rule has served economies in good economic times, it can also serve as a gauge for bad economic times.
Suppose a central bank held interest rates too low for too long. This prescription is what causes asset bubbles, so interest rates must eventually be raised to balance inflation and output levels. A further problem of asset bubbles is that money supply levels rise far higher than is needed to balance an economy suffering from inflation and output imbalances. Since 1993 the Taylor Rule has largely lived up to expectations, and criticisms of it have been muted responses without a real basis of argument.
January 2010 Brian Twomey
   Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

McGinley Dynamic and Moving Averages

   McGinley Dynamic
The McGinley Dynamic is a market tool invented and perfected over many years by John McGinley, a 40 year trader and market technician who can add to his lengthy credits a long time officership of the Market Technicians Association and a former editorship of their Journal of Technical Analysis.
This article will introduce traders to his little known tool, first published in outline in the Journal of Technical Analysis in 1991 and later published as a full blown study in 1997. My hope is to capture the mind of a master technician, one who has advanced the study of technical analysis more than most would know, as he worked through the process of invention to the perfection of the McGinley Dynamic. We begin with moving averages.
The history of moving averages is the history of time series analysis. Early practitioners used various algorithms to smooth data and to flatten variously shaped curves, yet this early work was quite primitive. Various graduation methods were used later, such as fitting a line using a least squares rule for plotting and construction purposes. Fitting lines by least squares was later adopted in technical analysis in the family of moving averages. This began the process of interpolating data using probability theories and analysis.
In the Journal of the Royal Statistical Society in 1909, G.U. Yule described as moving averages the instantaneous averages that R.H. Hooker had calculated in 1901. Yule identified properties of the variate difference correlation method. The term moving average is said to have entered the lexicon shortly after, in 1912, through W.I. King's publication of Elements of Statistical Method.
Herman Wold later adopted Yule's studies and described moving averages in a 1938 study on the analysis of time series.
Others attribute exponential smoothing to Brown's and Holt's work on inventory control in the late 1950's. Brown used exponential smoothing for Naval inventory processes, while Holt was the first to use linear and seasonal trends for inventory control.
Pete Haurlan was the first to use exponential smoothing to track stock prices, and he advanced the study for technicians in the modern day. Haurlan called exponentially smoothed values trend values; a 19 day EMA he called a 10 percent trend. His earlier work designing tracking systems for rockets had him building steering mechanisms: if a steering mechanism was off course, it needed further inputs. Haurlan called this proportional control and used the method in his groundbreaking studies.
For Haurlan and others, EMAs were the moving average method of choice because of their focus on two inputs, as opposed to the simple moving average, which needed many past data points. These early technicians worked with pure mathematical calculations graphed on chart paper.
For example, Haurlan needed a conversion factor, a smoothing constant. His smoothing constant = 2/(n + 1), where n is the number of days.
So a 19 day EMA equates to a 10 percent trend: 2/(19 + 1) = 2/20 = 0.10, a 10 percent smoothing constant. Proportional control equates to how far price has moved from the trend value and adjusts by using trend value curves. These he charted in waves of 1%, 2%, 5%, 10%, 20% and 50%.
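A one line sketch of Haurlan's conversion, assuming nothing beyond the formula above:

# Haurlan's smoothing constant: 2 / (n + 1), with n the EMA length in days
def smoothing_constant(n_days):
    return 2.0 / (n_days + 1)

print(smoothing_constant(19))  # 0.10, so a 19 day EMA is a 10 percent trend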
Haurlan developed tracking rates based on trends. These tracking rates were measured against a stabilization period. For example, a 50 % tracking rate has a 5 day stabilization period.
Sherman and Marian McClellan added two different EMAs of daily breadth figures, the 10% and 5% trends. This gave the first alerts to crossovers, when the 10% trend moved above the 5% trend, detecting market reversals as well as overbought and oversold markets.
The McClellans would later invent the McClellan Oscillator and the Summation Index, based on the calculations and charting methods of this period and published in their 1970 book, Patterns for Profit. The McClellan Oscillator measures the acceleration of daily advance decline statistics by smoothing with two different EMAs and finding the difference between the two.
Haurlan, and Dr. Lloyd Humphrey after him with his groundbreaking book The Moving Balance System and his invention of the Moving Balance Indicator, both benefited from the ease of coding EMAs, which needed only two inputs, the current price and the prior value, to give the line its angle and position.
Back then, computer sophistication wasn't available, hence the preference for EMAs over simple moving averages that needed many data points. What separated McGinley from earlier technicians was his groundbreaking work in moving averages, following where others left off, that led to the McGinley Dynamic. What did he see? I paraphrase.
McGinley says the problem with moving averages is twofold: they are inappropriately applied and overused.
They should be used only as smoothing mechanisms rather than as a trading system and signal generator. Consider, as he said, that moving averages range in their uses from fast to slow markets. How can one know which to use and how to apply it appropriately? How can one know when to use a 10 day average rather than a 100 day? Further, moving averages are fixed in length, a restriction in use because they can't adjust to changing data during trading days. We know lengths today as slopes.
The hope is that a smoother can filter whipsaws, but outliers exist in the averages. What should you do with a 10 day moving average on the 9th or 10th day? It doesn't work, because much of the trend has been lost.
Next, says McGinley, simple moving averages are always out of date. A 10 day average is off by 5 days, or half its length, and graphed wrong. Chances are big price moves already occurred within those 5 days, so a graph set at 10 periods must also be off.
A further problem is the drop off, the difference between price and the line. What if data from x days ago drops out of the average and that dropped data is larger than present values? The moving average must also drop, generating false signals.
Next come exponential moving averages, where much is directly quoted so modern day examples can be replicated.
The exponential moving average improves on the simple moving average because its calculation allows the average to hug prices more smoothly and respond faster to market data. Yet it underperforms in consolidations, just as the simple moving average does, generating line breaks and sheer trading indecision.
Exponentials require two inputs, the previous average and the current price. The classic calculation is A × the previous moving average + B × the new data, where A + B = 1.0.
Usually a small part of the new data is added to a large piece of the old. To build on the earlier work of Haurlan, for example, an 18% exponential, where A = 0.82 and B = 0.18, can be compared to a normal moving average where B = 2/(x + 1).
So an 18% exponential (x = 10) hugs prices about as closely as a 10 day moving average, since 2/(10 + 1) = 0.18, or 18 percent. The shape of the exponential may differ due to the calculation. The exponential's B can be adjusted to fit market data and prices, where the simple moving average is fixed and much more rigid due to its calculation.
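The classic recursion sketched in Python; the price series is invented for illustration and the seeding choice is one common convention, not a prescription from the text:

# Classic EMA: new average = A * previous average + B * new price, A + B = 1
def ema(prices, b):
    average = prices[0]   # seed with the first price (one common convention)
    for price in prices[1:]:
        average = (1.0 - b) * average + b * price
    return average

closes = [100.0, 101.0, 103.0, 102.0, 105.0]  # hypothetical closes
print(ema(closes, b=0.18))  # an 18 percent exponential, roughly a 10 day average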
Exponential moving averages therefore follow prices and market changes better than a fixed simple moving average; they smooth the data better. Yet the exponential moving average is not perfect: adjustments are always needed, and it can't rise with falling prices or fall with rising prices. So what's the answer? Enter the McGinley Dynamic.
Building on years of moving average research, the McGinley Dynamic was invented as a market tool designed to generate fewer whipsaws, hug prices more closely, allow calculations adjustable to the user's needs and follow fast and slow markets automatically.
Think of the Dynamic Line of the McGinley Dynamic as Haurlan's steering mechanism, a proportional control tool that steers the Dynamic Line along with prices. To the question of whether the McGinley Dynamic lives up to its reputation, the answer is unquestionably yes. Does it perform the above functions? Absolutely. Here's how. Again I quote.
Building on Dr. Lloyd Humphrey's work on moving averages in his groundbreaking 1976 book The Moving Balance System, where the previous Dynamic Line was modified, here is the new formula.
New Dynamic = prior Dynamic + (Index - prior Dynamic) / (N × (Index / prior Dynamic)^4). The index may be the Dow, the S&P or a stock.
Mr. McGinley divides the difference between the Dynamic and the index by N times the ratio of the two. The numerator gives the up or down sign, and the denominator keeps the adjustment within the percentage bounds defined by N.
McGinley further states the 4th power gives the calculation an adjustment factor that increases more sharply the greater the difference between the Dynamic Line and the current data. Quoting further, the size of the adjustment changes not linearly but logarithmically. This feature allows the Dynamic to hug prices.
Mr. McGinley recommends that N be 60 percent of the moving average one wishes to emulate; his example is a 20 day moving average that uses an N of 12. Herein the Dynamic Line adjusts itself by speeding up or slowing down as markets dictate. The second term of the equation, McGinley states, is not a factor unless the difference between the index and the Dynamic Line is large. This aspect of the equation deals with lengths, or slopes.
The second term is an important factor, however. McGinley says that in fast up markets the Dynamic Line slows down, less so in down markets; it is the factor of the 4th power that speeds the Dynamic Line up in down markets.
From McGinley's example, insert 10 for the old Dynamic, 5 for the close and N = 7, and the adjustment works out to -6.67. Make the close 14 instead and you get 0.15.
So 14 is about as far above the old Dynamic as 6 is below it, yet the adjustments differ sharply. Not a problem, says McGinley, as the object is to let profits run and bail out when the market drops. So upside profits run without whipsaws while the downside adjusts quickly to a drop, allowing the opportunity to cut losses. Do you avoid a loss or grab a gain? That must be the question here, and the decision, for intended users of the Dynamic.
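A minimal sketch of one update of the Dynamic Line, using the 4th power formula as reconstructed above; the function name is mine, not McGinley's, and it reproduces the 0.15 adjustment of the up market example:

# One McGinley Dynamic update: prior Dynamic, current index value and N
def mcginley_update(prior_dynamic, index, n):
    adjustment = (index - prior_dynamic) / (n * (index / prior_dynamic) ** 4)
    return prior_dynamic + adjustment

# Up market example from the text: old Dynamic 10, close 14, N = 7
print(mcginley_update(10.0, 14.0, 7) - 10.0)  # about 0.15
# A close below the Dynamic yields a far larger negative adjustment, which is
# how the 4th power speeds the line up in down markets.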
Exactly what does the McGinley Dynamic do?
Mr. McGinley set out to avoid the whipsaws moving averages are prone to and to find a tool that won't let prices separate from the average.
In this instance, it avoids large drop offs: the Dynamic Line rises with falling data, and only one piece of back data is needed.
In any trending or trading market the Dynamic needs no back testing or adjustment; in extreme whipsaw markets it still sells high and buys low. The main point is that moving averages get separated from prices. What happens when a crossover occurs? One may take a loss. The Dynamic avoids these dilemmas.
Notice the term market tool used throughout this article. Mr. McGinley says the Dynamic Line is not an indicator and shouldn't be used as one.
He abhors the idea of using the Dynamic as a trading vehicle; this was his purpose in commenting on the problems of the ratios of up and down markets. Rather, I believe, the intention may be that it serve as a market tool to gauge where the market is in relation to the other market tools traders use. Just speculation.
It should also be noted that the McGinley Dynamic is not only a remarkable tool but also the product of many years of intense research and insight by a master technician.
The author would like to thank John McGinley, a good man, for the decency, patience and understanding that allowed the time to get it all right and bring forth the McGinley Dynamic.
The author would also like to offer many thanks and a debt of gratitude to Tom McClellan, Editor of the McClellan Market Report, and McClellan Financial Publications for access to loads of research.
Suggested reading:
Colby and Meyers, The Encyclopedia of Technical Market Indicators, 1988.
Dr. Lloyd Humphrey, The Moving Balance System, Windsor, 1976.
Pete Haurlan, Measuring Trend Values, 1968. Details can be read at mcoscillator.com.
Sherman and Marian McClellan, Patterns for Profit, 1970. Details can be read at mcoscillator.com.
November 2009 Brian Twomey

Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

Gauss and the Bell Curve

Carl Friedrich Gauss was a brilliant mathematician who lived from 1777 to 1855 and gave the world, among much else, the method of least squares fitting and the normal distribution. Gauss defined the normal distribution in terms of the mean error, while mathematician Karl Pearson defined it in terms of the standard deviation in the early 1900's.
Modern day terminology calls the normal distribution the bell curve. Ironically, Gauss in 1809 intended to answer an astronomy question, not to find or understand normal distributions. Another mathematician of the era, Pierre-Simon Laplace, actually founded the normal distribution as such, working from the paper Gauss published on his astronomy question in 1809.
The normal distribution was founded by sheer accident, yet it is credited to Gauss because it appeared in print under his name, and it has been the subject of study by mathematicians for 200 years. Much of the modern study of statistics originated with Gauss, and thankfully so, because it allows us to understand markets, prices and probabilities, among other applications. The only way to understand Gauss and the bell curve is to understand statistics. So I will build a bell curve in this article, beginning with means, and apply it to a trading example.
Three measures exist to describe the center of a distribution: mean, median and mode.
The mean is factored by adding all scores and dividing by the number of scores. The median is the middle score of an ordered sample; with an even number of scores, add the two middle numbers and divide by two. The mode is the most frequent number in a distribution.
The best method is the mean, because it averages all numbers and is less subject to sample fluctuations. This was the Gaussian approach and his preferred method. What we are measuring here are parameters of central tendency, answering where our sample scores are headed. To see this, we must plot our scores beginning with 0 in the middle and mark +1, +2 and +3 standard deviations on the right and -1, -2 and -3 on the left.
So on a chart, plot the scores. What we will find is that 68 percent of all scores fall within -1 and +1 standard deviations, 95 percent within 2 standard deviations and 99 percent within 3 standard deviations of the mean. But this is not enough to tell us about the curve; we need to factor in variance.
Variance answers the question of how spread out our distribution is.
It accounts for why outliers may exist in our sample and helps us understand those outliers and where they plot. So find the mean, subtract the mean from each score for a deviation score, square each deviation score and add them all, then divide the sum by the number of scores. This is the variance, which explains variability and may help support a hypothesis regarding the outliers.
For standard deviation, we want to measure our spread more closely, so take the square root of the variance. Here we will know exactly where our standard deviations fall in relation to our total distribution. Modern day terms call this dispersion. In a Gaussian distribution, if we know the mean and the standard deviation, we know the percentage of scores that fall within plus or minus 1, 2 or 3 standard deviations of the mean. This is called the confidence interval, and it is how we know that 68 percent of scores fall within plus or minus 1 standard deviation, 95 percent within 2 and 99 percent within 3. Gauss called these probability functions.
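These steps translate directly to code; the sample scores are hypothetical:

# Mean, variance and standard deviation exactly as described above
scores = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical sample
mean = sum(scores) / len(scores)
variance = sum((s - mean) ** 2 for s in scores) / len(scores)
std_dev = variance ** 0.5
print(mean, variance, std_dev)  # 5.0, 4.0, 2.0
# In a normal sample about 68 percent of scores fall within mean +/- std_dev.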
Notice our whole discussion so far has been an explanation of the mean and the various computations that describe it more closely. Once we plotted our distribution scores, we essentially drew our bell curve above them. Yet we can't assume all distributions will be perfectly normal, with the mean always at 0 and tails of equal length. This is still not enough, because the tails of the curve need explanation if we are to understand the whole curve. For that we go to the third and fourth statistical moments of the distribution, skew and kurtosis.
Skewness measures the asymmetry of the distribution's tails. A positive skew means the variation from the mean is weighted to the right, while a negative skew is weighted to the left; a symmetrical distribution has 0 skew and forms a perfect normal distribution. Visually, a bell curve followed by a long tail is positively skewed, while a long tail before the bell is negatively skewed. If a distribution is symmetric, the sum of cubed deviations above the mean will balance the cubed deviations below it. A skewed right distribution has a skew greater than 0, while a skewed left distribution has a skew less than 0.
Kurtosis explains the peakedness of the distribution; high kurtosis means more peak and less flatness.
A perfectly normal distribution, called mesokurtic, has a kurtosis of 3, or an excess kurtosis of 0. A leptokurtic distribution with a high bell has a kurtosis greater than 3, while a flatter, platykurtic distribution has a kurtosis less than 3.
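A sketch of the third and fourth moments using the population definitions, under which a normal distribution's kurtosis comes out near 3; the sample is the same hypothetical one as above:

# Skew and kurtosis as standardized third and fourth moments
def skew_and_kurtosis(scores):
    n = len(scores)
    mean = sum(scores) / n
    std = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    skew = sum(((s - mean) / std) ** 3 for s in scores) / n      # 0 if symmetric
    kurtosis = sum(((s - mean) / std) ** 4 for s in scores) / n  # 3 if normal
    return skew, kurtosis

print(skew_and_kurtosis([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))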
Skew is more important than kurtosis for measuring trades. Both are used to measure Treasury auctions, comparing the amount of bills or bonds sold to the skew to determine whether an auction was successful. A successful auction shows a big bell curve with a short skew and positive kurtosis.
Treasury bills and bonds are the measure of interest rates and determine prices for many other financial instruments such as stocks, options and currency pairs. Skews are also used to measure option prices, by plotting implied volatilities against strike prices on an L shaped graph, among other uses.
Standard deviation measures volatility and asks whether past returns can equal future returns. A smaller standard deviation may mean less risk for a stock, while higher volatility means a higher standard deviation.
Traders can measure how closing prices disperse from the mean: dispersion measures the difference between actual value and average value. A larger difference between the two means a higher standard deviation and higher volatility. Prices that deviate far from the mean tend to revert back to it, so traders can look to take advantage of these situations, while prices that trade in a small range may be ready for a breakout.
The best technical indicator for standard deviation trades is Bollinger Bands, a measure of volatility with upper and lower bands set at two standard deviations around a 20 day moving average. A double band setup, with the outer bands set at three standard deviations, is also recommended. The Gaussian distribution was just the beginning of the understanding of markets. It later led to time series and GARCH models, as well as further applications of skew such as the volatility smile and other volatility skews.
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

New Zealand Central Bank History

  New Zealand  Central Bank
Since the unanimous passage in New Zealand's Parliament of the Reserve Bank of New Zealand Act in 1989, prompted by the dire economic conditions of the 1980's and interest rates in the 18 percent range, the central bank has enjoyed greater independence from central government control. Both of New Zealand's major parties, Labour and National, voted overwhelmingly to support a first ever arrangement among industrialized nations, the Policy Targets Agreement, to achieve price stability by explicitly targeting inflation. Following New Zealand's passage, the Bank of Canada in 1991, the Reserve Bank of Australia in 1992, the Bank of England in 1992, Sweden's Riksbank in 1993 and a vast majority of nations adopted a form of inflation targeting monetary policy that gave central bankers greater independence and restrained disastrous fiscal policies by central governments.
New Zealand's economy throughout the 1980's suffered from low productive output, exchange rate risk not aligned with the economy and its exports, and elevated interest rates in the 18 to 20 percent range. From 1990 to the present, interest rates in New Zealand have averaged about four percent, ran 2.7 percent below worldwide averages from 1988 to 2000 and remain low and competitive today. Most credit the Policy Targets Agreement, with its main focus on price stability, and the central bank's focus on transparency and accountability with keeping inflation low.
Eight agreements have been signed since 1989 between the minister of finance and the Reserve Bank governor, as defined by section 9 of the law, and each agreement defines monetary targets for inflation and price stability in broad ranges with a two or three point spread. The wide spread allows for possible economic price shocks, because the minister of finance can dismiss the Reserve Bank governor if prices fall outside the intended target, an unlikely event.
Another unlikely event would be agreements renegotiated and introduced to coincide with election cycles, which would allow monetary policy manipulation. The government can direct a different monetary target, or even a different agreement, at any time, but an Order in Council would have to be submitted to the New Zealand Parliament along with a public statement of explanation, which is highly unlikely. Monetary policy statements, such as quarterly interest rate decisions, are reviewed for accuracy by members of both political parties in the Finance and Expenditure Committee of New Zealand's Parliament.
Statistics New Zealand surveys, collects, issues and monitors New Zealand's All Groups Consumer Price Index on a quarterly basis. It reflects prices of 9 groups with 21 subgroups and 73 sections. The 9 groups are food, housing, household operations, apparel, transportation, tobacco and alcohol, personal and healthcare, recreation and education, and credit services. 700 items are price surveyed every quarter to gauge prices within the economy, to gauge prices against the Policy Targets Agreement objectives of price stability and inflation, and to gauge the level of interest rates. To accomplish this, the price index was set in June of 1999 with a base of 1000.
The All Groups Consumer Price Index comes with the caveat to ask not only what the present level of prices is but what prices will be in percentage terms in the future. For example, the index read 1034 in September 2000 and 1051 in September 2001. To factor the increase in prices: 1051 minus 1034, divided by 1034 and multiplied by 100, equals roughly a 1.6 percent increase in prices over one year. Chances are good that a 1.6 percent year over year increase in prices fell right in line with the Policy Targets Agreement and inflation expectations. Statistics New Zealand is charged with various statistical monitoring and price projections. The focus for CPI numbers and agreement objectives is the overall headline number rather than the core number that subtracts food and energy prices.
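The year over year arithmetic in code, using the index readings quoted above:

# Year over year CPI change from the September index levels
cpi_2000, cpi_2001 = 1034.0, 1051.0
print(round((cpi_2001 - cpi_2000) / cpi_2000 * 100, 1))  # about 1.6 percent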
New Zealand's central bank, the RBNZ or Reserve Bank of New Zealand, is further charged by the Policy Targets Agreement with conducting open market operations to target the settlement of cash balances; such transactions include interbank operations. Holding negative balances is illegal under the Reserve Bank Act.
A recent implementation is to target a band of interest rates for open market operations rather than set targets. Section 13 of the Policy Targets Agreement further charges the central bank with the conduct of interest rates and exchange rates and with avoiding unnecessary instability in output.
To understand New Zealand's economy and stay within inflation targets, the central bank from 1997 employed a macroeconomic model called the Forecasting and Policy System. While this system served its purpose, KITT, the Kiwi Inflation Targeting Technology system, replaced it in June 2009. Not only does this appear to be a better system, but the central bank now focuses more closely on the factors of its economy. For example, if inflation is a function of supply and demand, as the bank suspects, what are the factors of supply and demand in the economy?
The structure of the economy can be viewed in four sectors that tie into GDP: non-tradable goods producers, tradable goods producers, producers of residential investment and exporters. Understanding the correlation of these factors allows closer monitoring of policy targets, using such instruments as fan charts to plot the present economy and prepare for future economic events. Fan charts were originally adopted by the Bank of England.
Understanding household patterns of consumption across tradables, non-tradables, housing services, fuel and house prices allows a better understanding of CPI and inflation, and in turn a more accurate forecasting method. What are the marginal costs to firms doing business, and what about construction costs? Currently, construction makes up 5.5 percent of New Zealand's economy. With this information, what would housing cost, and can the KITT system forecast costs better? Absolutely.
While microeconomic variables are important factors for economic and inflation purposes, KITT also addresses macroeconomic variables as forecast tools, such as inflation, output, interest rates and exchange rates.
Exchange rates are highlighted in Clause 4B of the PTA. All are factors due to possible shocks to the system: what would occur in the New Zealand economy if the New Zealand dollar rose or fell by extreme proportions? The primary focus of KITT is the domestic economy, with faster responses to economic conditions making for a better forecasting tool drawing on its 27 data sources. Clearly this is a central bank that has stayed on the leading edge of technology and intelligence for over 20 years.
December 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

International Fisher Effect

  International Fisher Effect
The International Fisher Effect is an exchange rate model designed by economist Irving Fisher in the 1930's. It uses present and future risk free nominal interest rates, rather than pure inflation, to predict and understand present and future spot currency price movements. For the model to work in its purest form, the risk free portion of capital must be assumed to flow freely between the nations that comprise a particular currency pair.
The justification for using a pure interest rate model, rather than an inflation model or some combination, stems from Fisher's assumption in the 1930's that real interest rates are not affected by changes in expected inflation rates, because the two become equalized over time through market arbitrage.
Inflation is embedded within the nominal interest rate and factored into market projections for a currency price. It is then assumed that spot currency prices will naturally achieve parity with perfectly ordered markets. This is known as the Fisher Effect, not to be confused with the International Fisher Effect. Fisher believed the pure interest rate model acted more as a leading indicator, predicting spot currency prices 12 months in the future.
The minor problem with this assumption is that we can never know with certainty, over time, the spot price or the exact interest rate; this is known as uncovered interest parity. The question for modern studies is whether the International Fisher Effect works now that currencies are allowed to float freely. From the 1930's to the 1970's we had no answer, because nations controlled their exchange rates for economic and trade purposes. So only in the modern day has credence been given to a model that had never really been fully tested, and even then the vast majority of studies concentrated on a single nation compared against the United States currency.
An International Fisher Effect calculation 12 months into the future works like this: multiply the current spot exchange rate by one plus the nominal annual US interest rate, then divide by one plus the annual rate of the other nation. For example, suppose the GBP/USD spot rate is 1.5339 and the current interest rate is 5 percent in the US and 7 percent in Great Britain.
What is expected 12 months in the future? Calculate (1.5339 × 1.05) / 1.07 = 1.5052.
The model says the pound, the higher yielding currency, should depreciate by roughly the two point interest differential, so that capital flowing freely between the two nations earns the same return in either. What if we looked at this interest rate model in terms of inflation and the Fisher Effect to account for the 2 percent difference in yield?
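A minimal sketch of that calculation; the function name and the quote convention comment are mine, with GBP/USD read as dollars per pound:

# Expected spot 12 months out under the International Fisher Effect
def expected_spot(spot, rate_quote_currency, rate_base_currency):
    # For GBP/USD (dollars per pound) the US dollar is the quote currency
    return spot * (1 + rate_quote_currency) / (1 + rate_base_currency)

print(expected_spot(1.5339, 0.05, 0.07))  # about 1.5052, the pound depreciating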
  The Fisher Effect model says nominal interest rates reflect the real rate of return and expected rate of inflation. So the difference between real and nominal rates of interest is determined by expected rates of inflation.
The nominal rate of return = the real rate of return + expected inflation + (real rate × expected inflation); equivalently, (1 + nominal) = (1 + real) × (1 + expected inflation).
For example, if the real rate of return is 3.5% and expected inflation is 5.4%, then the nominal rate of return is 0.035 + 0.054 + (0.035 × 0.054) = 0.091, or 9.1 percent. The International Fisher Effect takes this one step further, assuming that the appreciation or depreciation of currency prices is proportionally related to differences in nominal interest rates.
Nominal interest rates automatically reflect differences in inflation through purchasing power parity, or an arbitrage system. Suppose inflation is 10 percent in the UK and 3 percent in the US, and the spot rate is 1.4 pounds per dollar. The expected rate is 1.4 × (1 + 0.10) / (1 + 0.03), roughly 1.5, as the higher inflation pound loses ground.
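Both relationships in code, using the figures from the two examples above:

# Fisher Effect: (1 + nominal) = (1 + real) * (1 + expected inflation)
real_rate, expected_inflation = 0.035, 0.054
nominal = (1 + real_rate) * (1 + expected_inflation) - 1
print(round(nominal, 3))  # 0.091, the 9.1 percent of the example

# Relative purchasing power parity applied to the spot rate above
spot = 1.4  # pounds per dollar in this example
print(round(spot * (1 + 0.10) / (1 + 0.03), 2))  # about 1.5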
A number of complications can occur with these models, however. What happens if nominal interest rates are the same within a currency pair? The twofold answer is either to stay invested in the home nation, because expected excess returns are not available, or to focus on inflation in either of the two nations for possible investment opportunities.
Yet this goes against the grain of the model, and it is not a good predictor of currency movements. The Fisher Effect has shown that dramatic effects can occur within currency pairs from changes in interest rates and inflation, if investors are on the right side of the market. The GBP/USD example above proved correct, but what if the trade had been the reverse, USD/GBP?
That trade would have had dramatic losses. For the shorter term, the Fisher Effect has proven a disaster because of its short term predictions of nominal rates and inflation. Even with perfect market information, investors buying shorter term T-Bills would have fared much better than investing in currency pairs.
Longer term, the International Fisher Effect has proven better, but not by much. Interest rates eventually offset exchange rates, yet prediction errors are known to occur; remember, we are trying to predict 12 months into the future. The IFE fails particularly when the cost of borrowing or expected returns differ, or when purchasing power parity fails, defined as the condition where goods can't be exchanged between nations on a one for one basis after adjusting for exchange rate changes and inflation.
The interesting failure of these models is their focus on nominal interest rates and inflation. The modern day doesn't see the big interest rate changes that happened just 20 years ago; one point, or even half point, nominal interest rate changes rarely occur anymore.
Instead, the focus for modern central bankers is not an interest rate target but an inflation target, where interest rates are determined by the expected rate of inflation. Central bankers focus on their nation's Consumer Price Index to measure prices and adjust interest rates according to prices in the economy; to do otherwise may tip an economy into deflation or stop a growing economy from further growth. So a 12 month interest rate target and a 12 month exchange rate target can be measured only in quarter points at best in the modern day. Does this leave these models in the back seat for the modern day? The answer is probably yes, until a new model is developed, with the thought that all models, including these, served an effective purpose.
December 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

History T-Bill Auction

To understand the formal context that led to the first T-Bill auction in 1929, it must be viewed as a series of 1920's events beginning with the end of World War I.
At the end of the war, the United States carried a war debt of approximately $25 billion incurred between 1917 and 1919; to put this number in perspective, the debt in 1914 was $968 million. Factor that debt together with the war surtax placed on American incomes under President Woodrow Wilson and a 73 percent top personal income tax rate, and the economic recovery outlook for the US in 1920 was bleak.
How was the United States to pay down a debt financed strictly by Americans through sales of Liberty and Victory bonds and short term debt instruments called Certificates of Indebtedness? Further, how was the Treasury to avoid paying out more in Treasury interest than it received through income taxes, especially when income taxes were the only revenue for repayment and a public outcry existed to reduce those rates?
Lastly, how was an economic recovery to be sustained? President Harding signed the Revenue Act of 1921, which reduced the top income tax rate from 73 to 58 percent, coupled with a small reduction of the surtax on incomes, and raised capital gains taxes from 10 to 12.5 percent. With reduced revenue, the Treasury was forced into serious debt management mode, especially in the short term.
During the war years, the government issued short term, monthly and biweekly subscriptions of Certificates of Indebtedness with maturities of one year or less. By war's end in 1919, $3.4 billion of Certificates of Indebtedness were outstanding. The Treasury set the coupon rate at a fixed price and sold the Certificates at par value, with coupon rates set in increments of 1/8 percent, just above money market rates. In instances of oversubscription, and this occurred often, the Treasury gave preference to small orders and small distributors so the market wasn't dominated by single entities, particularly banks, and so a secondary market could be established. Sales were so good that the Treasury opened War Loan Deposit Accounts at banks, paying 2 percent interest, to transfer monies more easily. The problem with this system came after the war.
The government held subscription offerings four times a year, on the 15th of every third month, in line with tax receipts so payouts could be arranged. Problems occurred when the government paid out monies from surpluses when it never knew what the surplus would be, or whether a surplus would even exist. Banks, meanwhile, became such steady customers for themselves and their own clients that they oversubscribed in many instances and credited the War Loan account without paying out actual cash. Despite moving to a cash refinancing system, with payouts in new Certificates and cash repurchases at or near maturities, the Treasury reduced its debt burden to $22 billion by 1923. Yet an answer was still needed, because of the creative finance structure of the market and because the government was never sure of its ability to refinance.
Formal legislation was signed by President Hoover to incorporate a new security with new market arrangements, because the Treasury didn't have the authority to change the existing finance structures. Zero coupon securities with maturities of up to one year, issued at a discount to face value, were proposed; these would shortly come to be known as Treasury Bills due to their short term nature. The legislation changed the Treasury's fixed price subscription offerings to an auction system based on competitive bids to obtain the lowest market rates. After much public debate, the public won the right to set rates through the competitive bid system. All deals would be settled in cash, and the government would be allowed to sell T-Bills when funds were needed, not only on tax dates.
In the first offering, the Treasury offered $100 million of 90 day bills, with payment due seven days later on settlement day. Investors actually bid for $224 million in bills at an average price of 99.181; quoting bills to three decimal places was part of the passed legislation. The government could now raise cheap money to finance its operations.
By 1930 the government sold bills at auction in the second month of every quarter to limit borrowings and reduce interest costs. All four auctions in 1930 saw buyers refinance with newer bills. By 1934, owing to the success of past bill auctions, Certificates of Indebtedness were eliminated, and by the end of 1934 T-Bills were the government's only short term financing mechanism. In 1935 President Franklin Delano Roosevelt signed the Baby Bonds Bill, which would later allow the government to issue Series HH, EE and I bonds as another mechanism to finance its operations.
Today, the US government holds market auctions every Monday or as scheduled, selling 4 week (28 day), 13 week (91 day) and 26 week (182 day) T-Bills on regular cycles.
What started as a question of whether debt could be transferred to future generations proved misplaced in the 1920's, as the government, through skilled debt management, produced a surplus every year of the decade. Despite early and continuous problems of oversubscription and underpricing of fixed price offerings, the government still financed its needs. It helped that investors were willing to pay par value for an issue and wait the scheduled length of time to receive their coupon payment. It was a tricky problem, though, because the government never knew whether it was paying out too much, too little or just enough; proceeds were paid out of surplus tax revenues, yet who could know whether those receipts would come in as scheduled or whether the economy would hold up in uncertain times. These problems were eliminated when the T-Bill system came into effect. That market today is unquestionably one of the largest traded in the world.
December 2009 Brian Twomey
  Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

Commodity Cycles

Predicting the beginning and end of commodity cycles since the modern economies emerged from World War II has been more art than science. Many economic variables have been tested for their correlative and predictive powers without a consensus over the years. Yet modern studies remain the best, because time determines which variables hold up to academic rigor. Any study of commodity cycles can't be grounded solely in a commodity nation such as Canada, New Zealand, South Africa or Australia, because commodities are priced in US dollars. So understanding boom and bust commodity cycles must be grounded in the US economy as a predictor of rising or declining economic activity.
The study of cycles is not a new phenomenon. Joseph Schumpeter spent his life studying business cycles and published his classic work, The Theory of Economic Development, in the 1930's. Scholars, economists, market watchers and traders have since spent their time studying the factors and variables that make cycles work: the booms and busts, the tops and bottoms. We know cycles exist, but how do they work? What we have learned since World War II is that no one factor or variable holds up to absolute rigor as a mechanism for predicting boom and bust. So we will focus on various economic factors in the hope that two or three variables correlate well enough to explain booms and busts and possibly predict the future.
The last significant commodity cycle ran from the early 1970's to about 1980. Most would agree Nixon's policy of taking the United States off the gold standard was a major contributing factor that allowed such a long cycle to perpetuate; since World War II, economies had never experienced one so long. Historically, commodity cycles normally last about 10 years. Gold, oil and physical commodities such as wheat, rice, corn and soybeans saw significant and sustained price increases during the 1970's cycle. What we learned from this experience, and what we couldn't know before free floating exchange rates, was the US dollar factor.
During periods of US dollar decline, commodity prices and commodity currencies rise. Investors must seek higher yields, and they do so by purchasing commodity futures. These factors can be attributed to interest rates: US dollar declines are usually associated with interest rate decreases, which presage a declining economy. What often occurs during these periods is that governments increase borrowing, which extends the downward cycle. This allows the commodity cycle to continue unfettered while governments contemplate exit strategies from recessions.
Boom cycles are quite different: they see credit expansion, rising interest rates and rising asset prices. But boom periods are followed by reversals that tend to unfold rapidly. Predicting boom and bust can be complicated; one way may be to observe the terms of trade.
Looking at the terms of trade of commodity nations such as Australia, New Zealand, South Africa, Canada and Brazil may serve as a predictor, since these nations depend on exports for foreign exchange revenues. If their exports to the United States are increasing, a cycle may be beginning.
Yield curves have always served as valuable modern day predictors of boom and bust economic activity, especially the 10 year Treasury bond against the shorter 3 month T-Bill. If the 10 year yield falls toward the 3 month T-Bill yield, recession is looming; when a formal cross occurs, recession is imminent. This would confirm the need for investors to seek higher yields by purchasing commodity futures.
Because commodity currencies have floating exchange rates, another predictor is to correlate exchange rates with commodity indices such as the Reuters/Jefferies CRB Index. The reason to use Reuters/Jefferies is that it is not only the oldest of these indices, dating back to 1947, but also heavily weighted toward physical commodities rather than metals. Physical commodities always react faster in a boom or bust cycle than metals such as gold, silver, platinum or palladium; metals are laggard indicators. Yet correlations of the S&P/Goldman Sachs Commodity Index, which began in 1970, the Dow Jones/AIG Commodity Index, which began in 1991, and the 1980 IMF Non-Fuel Commodity Prices Index may also serve as predictors when measured against commodity currency exchange rates. A true correlation is needed. This model has predicted future economic activity one quarter ahead.
Smarter market watchers will look at the Baltic Dry Index, a commodity in itself that trades on an exchange. The Baltic Dry Index not only tracks how many ships leave ports loaded with commodities, it reflects shipping rates. Lower shipping rates and fewer ships leaving ports for export are a valuable indicator and early warning sign in boom and bust cycles.
Except for the yield curve example, all these predictors focus on short term cycles. What drives short term demand for currencies and futures prices can't be explained by larger macroeconomic models, which predict long term rather than short term movements. Scholars, economists, traders and market watchers can't agree when cycles begin or end; they only know when we are in one or the other, and that determination can only be made by looking at past economic data. No single variable can determine which cycle exists. Past research looked at employment as a predictor, until employment was found to be a lagging indicator. Others looked at factors such as the National Purchasing Managers Index and even compared that data to prices and economic activity within the 12 Federal Reserve districts; this proved faulty. So inflation studies began. These again proved faulty, so researchers looked at core inflation, subtracting food and energy, to predict cycles. None proved absolute.
Modern studies focus on markets and market indices and compare technical analysis to fundamental analysis to determine whether a valid prediction may exist. Much of the research is good and getting better, but we still don't have a definitive answer to when cycles begin or end. Yet no study of micro or macroeconomic models can serve us properly unless we also look at commodity supply and crop reports. Until an answer arrives, investors and traders are best served watching the markets for direction.
November 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University.

 

Treasury International Capital: Tic Data History

Treasury International Capital, known in the modern day as TIC, was implemented in 1934 under the presidency of Franklin Delano Roosevelt to provide data on United States international portfolio investment and capital movements. The data was once collected and reported by various agencies over the years, such as the Bureau of Economic Analysis and the Census Bureau, but due to a lack of interaction among the agencies that collected, reported and analyzed the data, problems existed.
Problems existed in the type and quality of information in the early years, so the Office of Federal Statistical Policy and Standards coordinated statistical efforts across agencies and accounted for the system's smooth operation for many decades. When the Office of Federal Statistical Policy and Standards lost its role in the 1980's, Treasury took over the functions of collecting and reporting.
In 1983, after many years of negotiation, Treasury agreed that the Fed was entitled to access bank TIC data. Thanks to better coordination between the two agencies, TIC data is now collected, and only recently reported, on a monthly and quarterly basis quite accurately. The impetus stems from the world crisis triggered by the Asian currency devaluations.
That crisis caught the world off guard and alerted all nations that a better reporting system was needed. From 1974, and after a TIC system redesign in 1978, TIC data was collected and reported every five years and covered only certain types of securities transactions. Only in the past 10 years has this data been reported on a quarterly, then monthly, basis. But the collection and types of data didn't come without problems.
Currency transactions were never a factor in the early reporting. Traditionally, when $10,000 entered or left the United States, a Currency and Monetary Instruments Report was filed with Customs, but this reporting was never reflected in TIC data. Today, dollar values as well as currency claims and liabilities are accurately reported, thanks to easy computer transmission and accountability; this began in March 2003. For example, how does a trading firm incorporated in the United States handle transactions to and from its London office? What if that transaction resulted in a gain or loss? What if those monies sat idle in an overseas bank account? What if a bank wrote off a bad loan? All of this is now fully reflected in the TIC reports.
TIC data is the collection and reporting of purchases and sales of U.S. securities and financial instruments by institutions, governments, central banks, corporations and many other entities. In earlier days the US was concerned only with the reporting and collection of long term government securities. Now the focus covers all transactions, short and long term, in stocks, derivatives, currencies, options, forwards and swaps, as well as bank transactions and any cross border transactions. The purpose is twofold: to report the cross border portfolio positions of nations, central banks, corporations and other entities, and to determine the dollar values that enter and exit the US. This is conducted for accountability and is important for monetary policy purposes. The data is used to determine the balance of payments, to inform international policy and to track international financial markets. The balance of payments is published quarterly by the Bureau of Economic Analysis in three sections, the current account, the capital account and the financial account; it is the debits and credits of the flow of funds into and out of the US. All of this information carries distinct analytical insight.
What if governments purchased short term T-Bills rather than long term bonds? What if corporate bonds were purchased over agency securities? What if central banks were selling government securities or selling dollar assets? What does this say for monetary policy? Should deficits be financed by governments or private markets?
Should interest rates rise or fall based on inflows and outflows of dollars? Should the US government buy what foreigners sell, or sell what foreigners buy, and in what types of instruments? For example, in 1974 overall foreign ownership of securities was 4.8 percent, rising to 13.5 percent by 2003, while foreign ownership of US Treasuries went from 14.7 percent in 1974 to 45.5 percent by 2003. Those numbers reflected dispersion rather than concentration by any one nation, yet they have increased dramatically since 2003, with nation specific concentrations, since part of modern day reporting is nation specific. As of September 2009, China held $798.9 billion in US government debt, up from $618.2 billion in September 2008. The next largest holder of US debt is Japan, which held $617.2 billion in September 2008 and $751 billion in September 2009. Great Britain is third but doesn't come near these totals.
The monthly TIC data is distinguished by Treasury's TIC B reports and the Federal Reserve's S reports. BQ reports are Treasury's quarterly reports, and FR or FF reports belong to the Federal Reserve for quarterly reporting purposes. Each is handled and explained separately, with explanations of changes along the historic journey.
Treasury's BL1 reports dollar denominated liabilities to foreigners, excluding short term instruments. BL2 reports dollar denominated liabilities to foreigners including longer term instruments. Institutions here refers to depository institutions, bank holding companies, financial holding companies, brokers and dealers; foreign institutions refers to central banks, ministries of finance, treasuries, diplomatic establishments, and international and regional organizations. FR 2050 is a weekly report of Eurodollar liabilities held in foreign offices of US banks. FFIEC 002, from the Federal Financial Institutions Examination Council, reports the assets and liabilities of US branches and agencies of foreign banks, collecting balance sheet and off balance sheet information. FFIEC 019 is the country exposure report for US branches and agencies of foreign banks, collected nation by nation. FR 2069, now FR 2644, is a weekly report that collects information on borrowings, loans, deposits and selected balance sheet items.
  FR Y-7N reports US non bank subsidiaries held by foreign banking organizations, and FR Y-7Q is the capital and asset report for foreign banking organizations. These reports are compiled by the International Reports Division of the Federal Reserve Bank of New York and reported on TIC D forms, which also cover derivatives. The derivatives market grew from a notional value of $87 billion in 1998 to $454 trillion in June 2006 measured in payments; measured by market value, it grew from $3 trillion in June 1998 to $10 trillion in June 2006. Since 2007, derivatives data has been reported in the TIC data on TIC form D. The Federal Reserve's S forms report monthly data on US entities that buy or sell long term securities directly from or to foreigners.
 Quarterly reports are represented by form BQ2, foreign currency liabilities and claims of depository institutions, with part 2 covering customers' foreign currency liabilities to foreigners; form BQ3, maturities of selected liabilities of depository institutions and bank holding companies to foreigners; and form FR 2502a, assets and liabilities of large foreign offices of US banks. Monthly TIC data reports can be looked upon as rollovers leading into the quarterly and semi annual reports. Monthly and quarterly reports are released by the Treasury and found in detail on the Treasury's web site.
 While the International Portfolio Investment of Capital Movements program began in 1934, it was suspended until 1943. The monthly and quarterly reports began in 1994, with many additions over the years as new financial products were introduced and new laws reflected expanded banking opportunities. Computers also helped the free flow and speed of capital across borders. While the monthly and quarterly releases may draw criticism and praise from commentators, those employed in the TIC department of the Treasury are not only experts but quite dedicated professionals.
November 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

International Monetary Market

The introduction of the International Monetary Market in December 1971 and its formal implementation in May 1972 can be traced to the end of Bretton Woods through the 1971 Smithsonian Agreement and Nixon's suspension of United States dollar convertibility to gold. The increase in international business and trade, currency and interest rate volatility due to floating exchange rates, the lock out of corporations and speculators from the interbank market and world trade imbalances resulted in the need for the IMM. The IMM Exchange was formed as a separate division of the Chicago Mercantile Exchange, whose sole purpose until then was the trade of agricultural futures. With 500 chartered members, increased to 750 by 1976, and a $10,000 membership fee, increased to $325,000 by 1987, the purpose of the IMM was the trade of currency futures, a new product previously studied by academics, on a freely traded exchange that would facilitate trade among nations.

The first experimental futures contracts traded against the US Dollar included the British Pound, Swiss Franc, German Deutsche Mark, Canadian Dollar, Japanese Yen and, in September 1974, the French Franc. This list would later expand to include the Australian Dollar, the Euro and emerging market currencies such as the Russian Ruble, Brazilian Real, Turkish Lira, Hungarian Forint, Polish Zloty, Mexican Peso and South African Rand. In 1992, the German Deutsche Mark/Japanese Yen pair was introduced as the first futures cross rate currency. These early successes didn't come without a price.

The challenging aspects were how to connect the values of IMM foreign exchange contracts to the interbank market, the dominant means of currency trading in the 1970's, and how to allow the IMM to be the free floating exchange envisioned by academics. Clearing member firms were incorporated to act as arbitrageurs of sorts between banks and the IMM and to facilitate orderly markets between bid and ask spreads. Continental Bank of Chicago was later hired as a delivery agent for contracts. These successes bred competition for new futures products never envisioned in so short a time.

The Chicago Board Options Exchange competed and received the right to trade US 30 year bond futures, while the IMM secured the right to trade Eurodollar contracts, a 90 day interest rate contract settled in cash rather than physical delivery. US dollar deposits in European banks and on other continents came to be known as Eurodollars, and the Eurodollar market came to be known as the Eurocurrency market, used mainly by the Organization of the Petroleum Exporting Countries because OPEC always required payment for oil in US dollars. This cash settlement aspect would later pave the way for index futures such as world stock market indices and the IMM Index. Cash settlement would also allow the IMM to later be known as a cash market because of its trade in short term, interest rate sensitive instruments such as 30 day Fed Funds futures, 13 week T-Bills, 2 and 10 year Notes, Libor, Euroyen Tibor and 3 month OIS futures, a swap that allows spread trades between a 3 month money market asset and the overnight cost of financing that asset over the 3 month period.

With new competition, a transaction system was desperately needed. The CME and Reuters Holdings created the PMT, or Post Market Trade, to allow a global electronic automated transaction system to act as a single clearing entity and link the world's financial centers such as Tokyo and London. PMT is today known as Globex, which facilitates not only clearing but electronic trading for traders around the world. In 1975, US T-Bills were born and began trading on the IMM in January 1976, with options on T-Bill futures trading in April 1986 with approval from the Commodity Futures Trading Commission.

The real success came in the mid 1980's when options on currency futures began trading. The Deutsche Mark began in January 1984, the French Franc later in 1984, the British Pound and Swiss Franc in February 1985, the European Currency Unit in January 1986, the Japanese Yen in March 1986, the Canadian Dollar in June 1986 and the Australian Dollar in 1987. By 2003, foreign exchange trading had a notional value of $347.5 billion.

The 1990's saw explosive growth for the IMM due to three world events. The first was Basel 1 in July 1988, where the central bank governors of 12 European nations agreed to standardize guidelines for banks: bank capital had to be equal to 4% of assets. The second was the 1992 Single European Act, which not only allowed capital to flow freely across national borders but allowed banks to incorporate in any EU nation. The third, Basel 2, is geared to control risk by preventing losses and is a current work in progress.

A bank's role is to channel funds from depositors to borrowers. With these new acts, depositors could be governments, governmental agencies and multinational corporations. The role of banks in this new international arena exploded, and to meet the demands of financing capital requirements, new loan structures and new interest rate structures such as overnight lending rates, banks increasingly used the IMM for all finance needs. Plus, a whole host of new trading instruments was introduced, such as money market swaps to lock in or reduce borrowing costs and swaps for arbitrage against futures or to hedge risk. Swaps would not be introduced until the 2000's, however. Types of trades changed as well, such as calendar spreads, overnight trades and spread trades. Further, bank relationships with central bankers solidified completely with these new arrangements. There is no better example than a crisis.

In financial crisis situations, central bankers must provide liquidity to stabilize markets, because risk may trade at premiums to a bank's target rates, called money rates, that central bankers can't control. Central bankers then provide liquidity to the banks that trade and control those rates. These are called repo rates, and they are traded through the IMM. Repo markets allow participants to undertake rapid refinancing in the interbank market, independent of credit limits, to stabilize the system. A borrower pledges securitized assets such as stocks in exchange for cash to allow its operations to continue.

Asian money markets linked to the IMM because Asian governments, banks and businesses needed a faster way to facilitate business and trade than borrowing US dollar deposits from European banks. Asian banks, like European banks, were saddled with dollar denominated deposits because all trades were dollar denominated due to the US dollar's dominance, and extra trades were needed to facilitate trade in any other currency, particularly Euros, taking more time than necessary. These two continents would share not only an explosion of trade, but theirs are two of the most widely traded world currencies on the IMM. For this reason, the Japanese Yen is quoted in US cents, while Eurodollar futures are quoted based on the IMM Index, a function of the 3 month Libor rate.

For Eurodollar futures, the 3 month Libor rate is subtracted from the IMM Index base of 100 to ensure that bid prices stay below asked prices. This is the normal procedure for other widely traded instruments on the IMM, used to ensure market stabilization and orderly trading. For example, price quotes for T-Bill futures contracts are based on the IMM Index: subtract the discount yield of the T-Bill from the IMM's base of 100, so a 9.75 yield equals a 90.25 IMM Index. Index values move in the same direction as futures prices. The same holds for the Euro index. Widely traded instruments are tracked by the IMM Index.
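
In code form, the quoting arithmetic is just a subtraction. Here is a minimal Python sketch using the 9.75 example from the text; the function name is ours, not an exchange convention:

def imm_index(rate_percent):
    # T-Bill discount yield or 3 month Libor subtracted from the base of 100
    return 100.0 - rate_percent

print(imm_index(9.75))  # 90.25, matching the example above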

As of June 2000, the IMM switched from a not for profit to a for profit, membership and shareholder owned entity. It opens for trading at 8:20 Eastern time to reflect major US economic releases reported at 8:30. The IMM is the largest financial market in the world. Banks, central bankers, multinational corporations, traders, speculators and other institutions all use its various products to borrow, lend, trade, profit, finance, speculate and hedge risks.

November 2009 Brian Twomey


Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

Debt Monetization

The public debate regarding the debt and debt monetization is as old as the Republic. James Madison called debt a curse on the public, while first Treasury Secretary Alexander Hamilton called it a blessing provided the debt wasn't large. The modern day term debt monetization emanated from the Treasury's cost of financing World War 2's debt, as the Federal Reserve's holdings of government debt tripled from 1943 to 1946.

The public was fearful of buying any debt during this period. Historically, the Treasury Department, then and now, determines the amount of debt and the maturities issued. In this capacity, it had full control over monetary policy, defined as the supply of money and credit. The Federal Reserve was the distributor of all debt to the public and supported debt prices through sales of bonds, notes and bills. A collision occurred between the two agencies over their roles due to the failure to finance the war debt in a timely manner. The 1951 Treasury-Fed Accord settled the question of who controls the Fed's balance sheet by reversing the roles: the Fed would control monetary policy by supporting debt prices, without control over any debt it holds, and buy what the public doesn't want, while the Treasury would focus on the amount of issuance and the categorical maturities.

Monetary policy since 1951 has been controlled through the Fed's Open Market Operations with a Treasuries only policy. This separated the Fed from fiscal policy and credit allocation and allowed for true independence. It freed the Fed from monetizing debt for fiscal policy purposes and prevented collusion, such as agreements to peg interest rates directly to Treasury issues. Credit policy was also separated and limited to Treasury, defined as bailing out institutions, sterilizing foreign exchange operations and transferring Fed assets to Treasury for deficit reduction. The Treasury Secretary and the Comptroller of the Currency were removed from the Federal Reserve Board so policy decisions would be separate from fiscal policy. Today, the seven members of the Board of Governors and five of the twelve Federal Reserve Bank presidents make up the Federal Open Market Committee, which sets interest rate and money supply policies.

Monetizing the debt can be defined as money growth in relation to interest rates, but not money growth in relation to government purchases or open market operations. Monetizing the debt occurs when changes in debt produce changes in interest rates. Yet money growth alone is not a monetization of the debt, since money growth ebbs and flows through contraction and expansion cycles over the years without a change in interest rates. Suppose a wash sale occurred where all debt issued was sold: no monetization. That is simply fiscal policy objectives completed, fiscal policy being the tax and spending policy of the current presidential administration. What if money growth was equal to debt? Again, no monetization. Money growth is found in M1, M2 and M3. M1 is money in circulation, M2 is M1 plus savings and time deposits under $100,000 and M3 is M2 plus large time deposits over $100,000. Open Market Operations is then the issuance of debt replaced with money.

Monetizing the debt can also be characterized as money growth in excess of the federal debt, or no money growth in relation to debt. This last example is called the liquidity effect, where low money growth leads to low interest rates. Either will change the velocity of money, defined as how fast money circulates. Usually the target is debt growth equal to velocity, which allows the system to be in sync.
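
As a quick illustration of velocity, here is a sketch in Python using the standard quantity equation, MV = PQ; the figures are hypothetical, not data from the text:

nominal_gdp = 14_000.0   # $ billions, hypothetical
money_stock = 8_500.0    # $ billions, hypothetical money supply measure
velocity = nominal_gdp / money_stock   # about 1.65 turnovers per year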

A better way to understand this relationship is to ask exactly what the Fed's targets are. Does it target money growth to velocity, money growth to employment as was once the case, money growth equal to the present supply of money, interest rate targets or even inflation? Inflation targeting has proven not only to be disastrous, but studies show negative statistical relationships forcing an out of sync growth to debt relationship. Many avenues have been tried since the 1913 Federal Reserve Act was passed and created the Federal Reserve System.

The question of monetization and growth to debt must be understood in terms of the multiplier effect: how much the money supply increases in response to changes in the monetary base. This is a better method to understand Fed holdings. Suppose the Fed changed banks' reserve requirements, the cash ratio banks must hold against customer deposits. This would change the rate of money growth in the multiplier and the monetary base, and possibly cause an interest rate change. As long as debt is in sync with this money growth, no monetization occurs, because all that was increased was the monetary base, the supply of money and credit. Studies over the years show without question a statistical impact between money growth and changes in debt.
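
A stylized sketch of that multiplier in Python, using the textbook simplification rather than the Fed's actual model; the 10 percent reserve ratio is hypothetical:

reserve_requirement = 0.10             # hypothetical cash ratio against deposits
multiplier = 1 / reserve_requirement   # simple money multiplier = 10
base_increase = 50.0                   # $ billions added to the monetary base, hypothetical
money_supply_increase = base_increase * multiplier   # up to 500 in new money and credit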

Monetization occurs in other ways, such as when money growth targets higher interest rates: money growth with desired growth targets. The only way this can occur is to reduce maturity levels to increase liquidity. An increase in liquidity with a corresponding reduction in debt issuance would cause a higher money supply and disequilibrium in growth to debt, and interest rates would have to rise to bring equilibrium back to the system. The problem is that when interest rates rise, the value of outstanding debt falls. Longer term debt falls more than short term debt, so deficits ensue due to a slowdown in economic activity and an increase in debt to income ratios. This method would boost GDP growth in the short term but slow an economy in the longer term.

During contractionary cycles and low interest rate environments, money growth and debt usually decrease simultaneously. This means governments must pay out the yields that bonds, notes and bills command in the marketplace. New debt and taxes are needed to retire old debt and service the new debt. If bond prices are not rising and governments are only paying yields, this ensures further contractions and a lengthening of the cycle. The debt tripled between 1943 and 1946 because investors didn't want to buy bonds whose prices were decreasing; investors can't earn money on yields alone. Yet as long as money growth equals debt, no monetization occurs.

It's important to watch the amount of debt and the length of maturities offered by the Treasury. For the most part, equal maturities were traditionally offered in the 2, 10 and 30 year bonds and 13 week T-Bills. Watch for any changes to this dynamic, as money growth to debt will change. Also watch the prices of these various instruments: you don't want short term debt to pay more than long term debt, as this presages a wholesale change in the growth to debt ratio. Be especially careful of big demand for shorter term debt, because this may crowd out longer term capital. This relates to debt neutrality, or the Ricardian Equivalence, named after David Ricardo, the famous early 19th century economist. Debt neutrality can be viewed as the Treasury issuing more shorter term debt than longer term maturities. The purpose is twofold: to hide deficits or, as in years past, to keep inflation and employment low. While the net debt issued may be equal, the long term effects can be devastating to an economy.

Lastly, be aware of Fed statements, as monetary policy can only target the money supply or interest rates. Understanding money growth relative to debt issues will help one understand the Fed's direction.

November 2009 Brian Twomey


Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

Japanese Keiretsu

The Japanese corporate system of governance known as Keiretsu dates back to the Meiji Restoration of 1868 and the world's introduction to the industrial revolution. Because Japan has always been a small, highly educated and very advanced society, the only way to compete among its larger Asian neighbors and ensure perpetuity was to group its companies into tightly knit relationships, a cultural trait some would argue. The English translation of Keiretsu denotes lineage, while its forerunner Zaibatsu means monopoly or financial clique.

Some would argue whether Keiretsu, or its older rival, even exists in the group form suspected. Some find the modern day basis for its existence in a 1952 Japanese law that mentions the word Keiretsu; others assume Zaibatsu existed because of the perpetuation of major Japanese companies that formed long before the Meiji Restoration and are still powerful and profitable today. One example is Mitsui.

Mitsui began as a dry goods shop in 1673; ten years later it opened money changing shops for the Japanese government, the Tokugawa Shogunate, in the capital city of Edo. With the introduction of a Japanese monetary system, Mitsui later became a bank, today a leading bank of Japan. Other examples include Sumitomo, which originated as a mining and smelting company and later expanded into copper, today also a leading bank of Japan. Mitsubishi later formed as a bank in Kochi Prefecture, a region of Japan much like a county in the United States. Yasuda formed as a bank in Toyama Prefecture, and Okura formed as a bank in Niigata Prefecture. Later these banks and other businesses formed holding companies, all family owned and managed.

These so called Zaibatsu holding companies were eliminated after World War 2 by the United States, and the ban was written into the new Japanese constitution, because of their undemocratic nature and the governmental policies that perpetuated their existence. With Japan devastated after the war, it was time for Japanese companies to reinvent themselves. In their place came the Keiretsu.

A Keiretsu is a corporate governance system that has a bank as its first line of formation. The major banks established in the Zaibatsu period each formed a leading Keiretsu based in the region where they began as Zaibatsu corporations. The next Keiretsu line is a major corporate conglomerate such as Toyota, Nissan, Matsushita Electric or Nippon Steel. Groups formed for a specific business purpose are called vertical Keiretsu, while horizontal Keiretsu are centered on the six largest banks of Japan. The remaining companies of a Keiretsu are ancillary companies that perpetuate the conglomerate by supplying parts, distribution and trading for exports. All corporate needs are met within the Keiretsu, so Keiretsu don't conduct business with each other. The all important companies in the Keiretsu are the bank and the conglomerate, which some argue are the controllers of the Keiretsu, whose goals are profits and long term existence by restricting competitors and hostile takeovers.

Features of a Keiretsu are established long term relationships, a vast supply of workers, permanent employment, a steady supply of capital from the bank, information sharing with suppliers, and inventory management to reduce costs, increase efficiency and improve supply chain management. Some point to the just in time inventory system devised by the automobile Keiretsu as evidence of the success of Keiretsu formations in increasing foreign demand.

Financing begins with Keiretsu companies owning shares of stock in other companies, especially between the banks and the major conglomerate. Yet major conglomerates are said to own majority stakes in smaller Keiretsu companies for control purposes, as well as to supply members to sit on their corporate boards. Control means conglomerates consult with smaller companies regarding investment decisions, with the ability to take over smaller companies.

The costs of a Keiretsu include inefficiency, since there is no reason to worry about existence when a large supply of capital from banks exists; too much debt and proneness to bankruptcy; and risk aversion, since there is no reason to take chances. Less profitable firms grew slowly, without innovation or structural changes to their formation.

The term Keiretsu first appeared in July 1952, when the Small and Medium Enterprises Planning Bureau issued guidelines for a program to target general machinery for productivity improvement. This program was called Keiretsu Shindan, or Keiretsu diagnosis. It led scholars and the popular press on a course to prove the existence of Keiretsu, diagnose its operations and cry foul when outside nations couldn't establish operations in Japan.

Factors to consider regarding Keiretsu existence include the 2002 merger of the Sumitomo and Mitsui banks as well as the second historic merger of the Fuji and Daiichi banks. Lunch clubs for major company executives have met monthly in Japan since 1967. Yet not only is it hard to prove from these factors that Keiretsu exists, it has never been proven, despite the many studies published over many years.

Due to the economic crisis that hit Japan in the late 1990's and major conglomerates' loss of profits, all Japanese companies have opened to competition. Firms now compete on price and quality, using market based systems instead of what is termed Keiretsu relational arrangements. Globalization and technology are also said to have opened Japanese companies, because of the need to identify new customers and increase the efficiency of orders and research, so Japanese companies are leaving their Keiretsu ways and going it alone.

Never has the existence of Keiretsu been definitively proven. Some say Marxist economists identified Keiretsu because it satisfied their ideology. Others say it came from attacks from unsatisfied companies. Either way, Japanese companies are opening more and more as economic crisis hits them harder and harder.

November 2009 Brian Twomey


Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University


McGinley Dynamic Part 2

The McGinley Dynamic is a little-known yet highly reliable indicator invented by John R. McGinley, a Certified Market Technician and former editor of the Market Technicians Association’s Journal of Technical Analysis. Working within the context of moving averages throughout the 1990s, McGinley sought to invent a responsive indicator that would automatically adjust itself in relation to the speed of the market. His eponymous Dynamic, first published in the Journal of Technical Analysis in 1997, is a 10-day simple and exponential moving average with a filter that smooths the data to avoid whipsaws.

Simple Moving Averages vs. Exponential Moving Averages

A simple moving average (SMA) smooths out price action by summing past closing prices and dividing by the number of periods. To calculate a 10-day simple moving average, add the closing prices of the last 10 days and divide by 10. The smoother the moving average, the slower it reacts to prices. A 50-day moving average moves slower than a 10-day moving average. A 10- or 20-day moving average can at times experience price volatility that makes it harder to interpret price action. False signals may occur during these periods, creating losses, because prices may get too far ahead of the market.

An exponential moving average (EMA) responds to prices much more quickly than a simple moving average. This is because the EMA gives more weight to the latest data rather than older data. It’s a good indicator for the short term and a great method to catch short term trends, which is why traders use both simple and exponential moving averages simultaneously for entry and exits. Nevertheless, it too can leave data behind.
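
As a quick illustration of the difference, here is a short Python sketch using pandas; the price series is hypothetical:

import pandas as pd

closes = pd.Series([44.0, 44.5, 43.8, 44.2, 45.0,
                    45.3, 45.1, 44.7, 45.6, 46.0])

sma10 = closes.rolling(window=10).mean()          # equal weight on all 10 closes
ema10 = closes.ewm(span=10, adjust=False).mean()  # more weight on the latest closes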

The Problem With Moving Averages

In his research, McGinley found moving averages had many problems. In the first place, they were inappropriately applied. Moving averages in different periods operate with varying degrees in different markets. For example, how can one know when to use a 10-day, 20-day, or a 50-day moving average in a fast or slow market? In order to solve the problem of choosing the right length of the moving average, the McGinley Dynamic was built to automatically adjust to the current speed of the market.

McGinley believes moving averages should only be used as a smoothing mechanism rather than a trading system or signal generator. It is a monitor of trends. Further, McGinley found moving averages failed to follow prices since large separations frequently exist between prices and moving average lines. He sought to eliminate these problems by inventing an indicator that would hug prices more closely, avoid price separation and whipsaws, and follow prices automatically in fast or slow markets.

McGinley Dynamic Formula

This he did with the invention of the McGinley Dynamic. The formula is:

\begin{aligned} &\text{MD}_i = \text{MD}_{i-1} + \frac{ \text{Close} - \text{MD}_{i-1} }{ k \times N \times \left ( \frac{ \text{Close} }{ \text{MD}_{i-1} } \right )^4 } \\ &\textbf{where:}\\ &\text{MD}_i = \text{Current McGinley Dynamic} \\ &\text{MD}_{i-1} = \text{Previous McGinley Dynamic} \\ &\text{Close} = \text{Closing price} \\ &k = 0.6\ \text{(constant, 60\% of the selected period } N \text{)} \\ &N = \text{Moving average period} \end{aligned}

The McGinley Dynamic looks like a moving average line, yet it is actually a smoothing mechanism for prices that turns out to track far better than any moving average. It minimizes price separation, price whipsaws, and hugs prices much more closely. And it does this automatically as a factor of its formula.

Because of the calculation, the Dynamic Line speeds up in down markets as it follows prices yet moves more slowly in up markets. One wants to be quick to sell in a down market, yet ride an up market as long as possible. The constant N determines how closely the Dynamic tracks the index or stock. If one is emulating a 20-day moving average, for instance, use an N value half that of the moving average, or in this case 10.
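
For readers who want to experiment, here is a minimal Python sketch of the formula above; the function name and the seeding of the first value are our own choices, since the formula does not specify a starting point:

import numpy as np

def mcginley_dynamic(closes, n=10, k=0.6):
    # closes: sequence of closing prices
    # n: period being emulated (half the moving average period, per the halving convention above)
    # k: the 0.6 constant from the formula
    md = [float(closes[0])]  # seed with the first close (an assumption, not McGinley's specification)
    for close in closes[1:]:
        prev = md[-1]
        md.append(prev + (close - prev) / (k * n * (close / prev) ** 4))
    return np.array(md)

Calling mcginley_dynamic(prices, n=10) would then emulate a 20-day moving average under the halving convention described above.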

It greatly avoids whipsaws because the Dynamic Line automatically follows and stays aligned to prices in any market—fast or slow—like a steering mechanism of a car that can adjust to the changing conditions of the road. Traders can rely on it to make decisions and time entrances and exits.

The Bottom Line

McGinley invented the Dynamic to act as a market tool rather than as a trading indicator. But whatever it's used for, whether called a tool or an indicator, the McGinley Dynamic is quite a fascinating instrument invented by a market technician who has followed and studied markets and indicators for nearly 40 years. In creating the Dynamic, McGinley sought to create a technical aid that would be more responsive to the raw data than simple or exponential moving averages.

2009 Brian Twomey

Australia vs. United States Tax Treaty

 Australia's 1953 tax treaty with the United States was voided when both nations ratified a new treaty in 1983, updated in 2006 to reflect modern day developments. The purpose of a treaty is to prevent individuals and companies of third nations from inappropriately obtaining treaty benefits when they are not residents of either state. The second purpose is to allow modern day provisions to be defined and understood so that the force of law of each nation and the treaty obligations can be enforced by both parties. With 21 million residents and an export dependent economy that distributes the nation's abundant natural resources such as coal, zinc, copper, gold, aluminum and iron, Australia protected this status within this highly technical treaty. Many of these protections are addressed below.
  To begin: when is a company incorporated in the United States considered an Australian company? When that company is managed and controlled in Australia, conducts business in Australia and its voting power is controlled by Australian resident shareholders. If a company declared dual residency status, residency status would fail and treaty benefits would be voided by both states.
 Areas defined for treaty purposes include the continental shelf, to protect the exploitation and exploration of natural resources. This is defined further in section 638 of the US Internal Revenue Code. For the United States, Puerto Rico, Guam and the Virgin Islands are not included. Australia covered the Norfolk Island territories, Christmas Island, the Cocos Islands, the Ashmore and Cartier Islands and the Coral Sea Islands.
 For treaty purposes, a state can't tax higher or lower than its law allows; domestic law overrides any treaty obligation. An example can be found in Articles 4 and 1, Paragraph 3: if a US citizen relinquished citizenship for tax avoidance, section 877 of the IRS code says that person will be taxed for 10 years following the loss of citizenship. Secondly, Article 18, Paragraphs 2 and 6, says child support, social security and alimony are taxed by the respective state if domestic law taxes such revenue.
 Companies incorporated in Australia are Australian for residency purposes. These include partnerships, estates and trusts. A trust is exempt from taxes if it is formed for charitable or scientific research. Residency is defined as the place where the home is located or where major economic relations are conducted. Disputes over this Article 4, Paragraph 1 provision fall under the Mutual Agreement clause in Article 24.
 A company is considered to have a permanent establishment in Australia if management is conducted in Australia or it maintains a branch or office, a building site or factory, or an establishment for the extraction of natural resources.
 For treaty purposes, Australia's corporate tax, once 46 percent, has since been lowered to a flat rate of 30 percent, while permanent establishments of non residents pay a 51 percent corporate tax. United States corporate tax rates vary depending on the type of corporate formation.
 Dividends paid to non residents can't be taxed higher than 15 percent of the gross amount. The prior rate was 30 percent for both states, a leftover from the 1953 treaty. Undistributed profits are taxed at 15 percent for non resident companies based on the Article 10 clauses. Suppose a non resident company has undistributed profits liable to tax: the 15 percent must be taxed on undistributed profits as well as on payment of foreign corporation taxes.
 If interest is derived from a contracting state, no tax is paid if the interest has a source in either state, the owner is a resident of either state or the monies are derived from a permanent establishment. The US can tax interest paid by an Australian company if the interest has a source in the US. Australia and the US tax interest to non residents at 10 percent. If interest is derived from the respective governments, it is tax exempt.
 Gains connected with a permanent establishment are taxable where the permanent establishment is located. Other gains may be taxed by the state of the source of the gains and the state of residence of the owner, to avoid double taxation.
 If a citizen resides in either state for more than 183 days, that person may be taxed by that state.
  If a person resides in a third country but incorporates in Australia or the US, that person is granted treaty benefits.
 Tax obligations can't be skirted, for example by an Australian company establishing a trust in the US to collect dividends from an Australian company in order to avoid taxes.
 Under the double taxation clauses in Article 22, the US will give a foreign tax credit for income taxes paid to Australia, subject to the US Code. Australia agreed to allow Australian residents a credit against Australian income tax for taxes paid in the US, other than taxes paid solely by reason of US citizenship. If a US citizen is resident in Australia, both states tax worldwide income, which refers to income generated from outside treaty jurisdictions, a common denominator for all parties to treaties in the modern day. Yet if this resident paid Australian taxes, he will receive a credit from the US minus Australia's foreign tax credit. The United States will not lower its normal taxable limits.
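
A stylized sketch of the credit arithmetic in Python, with hypothetical figures; the actual limitation rules under the US Code are more involved than this simple cap:

foreign_income = 100_000.0   # hypothetical income earned in Australia
au_tax_paid = 30_000.0       # Australian tax at the flat 30 percent corporate rate
us_tax_rate = 0.35           # assumed US rate on the same income

us_tax_before_credit = foreign_income * us_tax_rate
credit = min(au_tax_paid, us_tax_before_credit)  # credit capped at the US tax on that income
us_tax_owed = us_tax_before_credit - credit      # 5,000: only the excess over Australia's tax
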
 Further double taxation provisions state that Australia imposes a 5 percent additional corporate tax on the profits of Australian branches of foreign corporations in lieu of a withholding tax on profit remittances. Income sourced in Australia by a US resident is taxed by Australia; income sourced in the US by a resident of Australia is taxed by the US. Entertainers doing even one show in Australia pay Australian taxes on that one show.
 The difference between the two states lies in the forms of taxation and the recognition of various corporate formations. Australia doesn't appear to recognize LLC's; the United States does. The United States has a progressive tax policy; Australia does not. The United States has established tax codes; Australia is constantly updating its own.
 If any problems arise with treaty provisions, citizens can go to their state of residence or state of citizenship. Dispute provisions run three years, which doesn't necessarily mean settlement within three years.
 Treaty provisions are supposed to be updated every year to reflect changes in domestic laws, yet either party can terminate the treaty after five years with six months' notice.
October 2009 Brian Twomey
   Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University

 

Kairi Relative Index

 The Kairi Relative Index is an old Japanese indicator with an unknown founder, an unknown date of inception and a waning popularity in the modern day due to more popular indicators such as Welles Wilder's Relative Strength Index. The generation of traders since the late 70's has grown accustomed to newer, more modern indicators whose popularity increased with time and practice. Because Kairi has an unknown derivation and is used much less even in zones loyal to Japanese indicators, such as Russia and Asia, its continued use is quite questionable.
Add the fact that literally no prior writings can be found in the modern day regarding Kairi. The word itself translates as separation or dissociation. We don't want deviation in our indicators or price separation; we want perfect market timing indicators that follow market trends and turns. Yet the difference between the two indicators is both slight and varied. The only way to understand the Kairi Relative Index is to compare it with the Relative Strength Index.
 To begin, both are considered oscillators. Oscillator indicators move with a chart line up or down as markets fluctuate. Calculations vary among oscillators, so each serves a different market function. RSI and Kairi serve as momentum oscillators and are considered leading indicators. Momentum oscillators measure the rate of change of market prices: as prices rise, momentum increases, and as prices fall, momentum decreases. Momentum is reflected both in the manner RSI and Kairi operate and in their calculations.
  Kairi calculates the deviation of the current price from its simple moving average as a percent of the moving average. If the percent is high and positive, sell; if the percent is large and negative, buy. To calculate a simple moving average, take X closing prices over Y periods and divide by the number of periods. Kairi's formula is the price minus the SMA over X periods, divided by the SMA over X periods, multiplied by 100. Based on its assumptions, 10 and 20 day moving averages should be employed to determine price divergences or separations; these are early hints toward entries and exits. Kairi's formula thus indicates a constantly moving market, a known trait of all Japanese indicators.
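
A minimal Python sketch of that formula; the function name is ours:

import numpy as np

def kairi(closes, n=14):
    closes = np.asarray(closes, dtype=float)
    out = np.full(len(closes), np.nan)
    for i in range(n - 1, len(closes)):
        sma = closes[i - n + 1 : i + 1].mean()    # n-period simple moving average
        out[i] = (closes[i] - sma) / sma * 100.0  # percent deviation from the SMA
    return out
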
    RSI calculates based on up and down closes: RSI = 100 - 100 / (1 + RS), where RS = average gain divided by average loss. This is what allows RSI to be classified as an oscillator. Next, average gain = ((previous average gain x 13) + current gain) / 14, with the first average gain being the total of gains during the past 14 periods divided by 14. Average loss = ((previous average loss x 13) + current loss) / 14. RSI is a comparison of up and down closes, gains compared to losses. Its formula asks where the market has been and whether the future holds the same promise, while Kairi is more of a moving target indicator, so entries and exits are easier to hit.
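
And a comparable sketch of the RSI calculation with Wilder's smoothing as described above; again a sketch, with names of our choosing:

import numpy as np

def rsi(closes, n=14):
    closes = np.asarray(closes, dtype=float)
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = gains[:n].mean()   # first averages: simple means of the first n changes
    avg_loss = losses[:n].mean()
    out = np.full(len(closes), np.nan)
    rs = avg_gain / avg_loss if avg_loss else float("inf")
    out[n] = 100.0 - 100.0 / (1.0 + rs)
    for i in range(n, len(deltas)):  # Wilder's smoothing thereafter
        avg_gain = (avg_gain * (n - 1) + gains[i]) / n
        avg_loss = (avg_loss * (n - 1) + losses[i]) / n
        rs = avg_gain / avg_loss if avg_loss else float("inf")
        out[i + 1] = 100.0 - 100.0 / (1.0 + rs)
    return out
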
 Both Kairi and RSI are set at a standard 14 periods. For faster market responses, set the period lower; higher periods indicate a slower but sometimes more accurate market. The recommended 14 periods works as intended for RSI and for Kairi alike. To understand higher periods of RSI, simply insert a higher number in the above formula. It's Kairi, however, that sometimes diverges from its RSI counterpart, which stems from the intended effects of their divergent formulas. What is important in this phenomenon is the center line of both indicators.
  Both indicators are also known as center line oscillators. This is the all important line in the middle that determines entries and exits, longs or shorts, trends and ranges. When the lines are at the bottom, this normally indicates an oversold market, so it's a matter of time before the market bounces. The recommended methodology for RSI is to go long below 30 and short at 70. For the most part this works, because RSI is an accurate indicator. Yet a drawback to RSI is that markets can remain in oversold and overbought territory for extended periods. This doesn't represent a losing position if the market doesn't bounce immediately; eventually it will, the timing of the buy was just too early. Kairi, however, is more of an early warning indicator of market turns, yet prices can diverge from its indication.
The center line simply represents entries and exits for both indicators: 50 for RSI and 0 for Kairi. When the line crosses above the center line, go long; below, go short. From the center line to the top represents approximately 500 currency pips, while top to bottom entries and exits represent 1000 pips using Kairi and 1200 pips using RSI.
  So if you are long when the line hovers at the bottom, be careful when prices and the line hit resistance at the center line. Same for a short when prices and the line are above the center line. Markets have a tendency to bounce before prices hit the center line in trending markets for both indicators. Shorts may not hit the center line but instead turn down while longs will hit the center line and bounce. Both indicators can be used in any market on any time frame but because of divergent tendencies, monitoring may be required.
 As forecasters of trends, both indicators work well, although some price divergences may occur along the way. This fact of life assumes traders will not rely on one indicator alone. It's never recommended to use two indicators of the same type; try a trend indicator rather than another oscillator. For range trades, both are not the best: gains will be quick and short term until a trend develops. Yet RSI will forecast and earn more points than Kairi in trends.
  Price divergence occurs in two ways for both indicators: trends and center line positions. Both indicators may break the center line on the way down, forcing short trades, yet markets can easily turn back up and cross the center line, leading to losses from these false breaks. What happens when RSI and Kairi approach the higher levels, far from the center line? How do you know where prices will go? You don't, unless another indicator is used in conjunction. Both can stay at overbought or oversold levels for long periods in trends, so entries and exits are best at the center line, with monitoring.
  Charting packages use two types of Kairi indicators. One type looks and acts like RSI, while the other looks like a stock volume indicator or a bar chart. It's recommended to follow the bars up or down: when bars reach the top, sell, and buy when they are at the bottom. Here lies the greatest opportunity for price divergence regarding Kairi, which can lead to false breaks. In this instance, follow candles or use another indicator along with Kairi. In the end, both indicators are accurate, but both have divergences.
October 2009 Brian Twomey
  Brian Twomey is a currency trader and Adjunct Professor of Political Science at Gardner-Webb University.

McGinley Dynamic

 The McGinley Dynamic is a little known yet highly reliable indicator invented by John McGinley somewhere in the late 80's or early 90's. Almost nothing has been published regarding the McGinley Dynamic since its inception, either by Mr. McGinley or fellow traders. We may not learn the calculations behind the indicator, but we can learn its value through its characteristics. I base my assumptions of the McGinley Dynamic first on a one page journal article published almost 20 years ago and on the fact that I have used this indicator and find great value in its use.
  The McGinley Dynamic can be easily described as a 10 day simple and exponential moving average with a smoother, a filter that smooths the data to avoid whipsaws. Yet it is much more reliable as an indicator than a moving average, since moving averages tend to give false signals, especially during periods of whipsaw price action such as an out of sync economic release or the stops and starts of trends. The McGinley Dynamic may look and act like a moving average, but the filter is what gives this indicator its profound reliability.
   To further understand the McGinley Dynamic, a quick lesson in moving averages may help, since we lack the calculations of this indicator. A simple moving average smooths out price action by summing past closing prices and dividing by the number of periods. To calculate a 10 day moving average, add the closing prices of the last 10 days and divide by 10. The hope is to forecast future prices based on past price action. The smoother the moving average, the slower it reacts to prices. A 50 day moving average moves slower than a 10 day moving average. A 10 or 20 day moving average can at times experience price volatility that cannot always gauge future price action, and false signals may occur during these periods, catching traders with losses because prices may get far ahead of the market.
    An exponential moving average responds to prices much quicker than a simple moving average but may cause false breaks due to volatility or price spikes. It's a good indicator for the short term and great for catching short term trends, which is why traders use both simple and exponential moving averages simultaneously for entries and exits. Yet as stand alone indicators, traders could get caught with losses without a steadfast eye on the screen, hence the reason the McGinley Dynamic was anchored as a moving average.
    What separates the McGinley Dynamic from its moving average counterparts is that it tracks like a moving average in trending markets yet is more of a constant indicator, holding its consistency in both the long and the short term due to the mysterious filter. The drawback is simple.
The McGinley Dynamic lags prices and candles. This can scare traders and force them into erroneous decisions regarding future price movements. Know that as long as the McGinley Dynamic line is pointing up or down, traders can feel confident regarding direction. Don't be fooled by the color of a candle along the trend. Certain volatility periods will exist with this indicator because it forecasts trends, not short term volatility. As a trend indicator for the long term, it is a good and reliable signal, and the longer the time frames used, the better the forecast of a trend. For time frames shorter than 60 minutes this indicator works, but like any indicator on shorter time spans, it's buyer beware.
   If markets gain momentum, simple and exponential moving averages can lag while the McGinley Dynamic moves with prices. This is why the term dynamic is used: the line moves with prices up or down unless prices experience drastic spikes. In that instance, look at the dynamic line as the mean in a standard deviation equation; prices will always revert back to the mean. So extreme price spikes should be sold in uptrends and bought in downtrends until prices again approach the dynamic line. The dynamic line always speeds up or slows down dynamically with prices, a constant line that moves, but not as fast as moving average lines.
    In instances of extreme volatility, the McGinley Dynamic can't react fast enough to market changes. The best decision is to use a volatility indicator such as Bollinger Bands or a stochastic oscillator with the McGinley Dynamic. This method will serve traders well.
   Because the Dynamic Line is a forecaster of trends, periods of range bound markets can be complicated. In this case, shorter time frame charts may be the answer to determine future direction. This would serve scalpers well and discourage swing traders. For periods of market uncertainty, the McGinley Dynamic tracks almost the same as Parabolic SAR (stop and reverse) and the Tenkan line of the Ichimoku indicator. No indicator should ever be deployed alone, because we can never be absolutely positive regarding market tendencies; confirmation of trend direction is always useful.
   Entries and exits are tricky with a stand alone indicator. In uptrends, enter when the Dynamic Line turns up and exit when the Dynamic Line flattens or when prices breach the line. Always be careful of false breaks, especially in fast markets, as this can be a tendency even on longer term charts with this indicator, and be mindful that prices revert back to the mean. Enter downtrends when prices breach the line downward and exit when prices either breach the line to the upside or the line flattens.
 The Dynamic Line is set at a standard 14 periods, yet these periods can be adjusted. For a faster response to prices, set the period lower, but beware of false breaks and whipsaws; the indicator will then act more like an oscillator than a moving average. For slower response periods, set the number higher. This will give a truer reading of the market but may take longer to achieve price objectives. Yet why change the status quo when 14 periods correctly forecasts trends?
 The true way to learn the McGinley Dynamic is to watch and analyze prices as the indicator moves. Use charts of various time frames and, more importantly, learn the entry and exit points. It's an 18 year old indicator that lives up to its reputation for reliability.
October 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University


Plaza Accords

The historic 1985 Plaza Accords, signed at the Plaza Hotel in New York City, was a pro growth agreement among what were then known as the G-5 nations, West Germany, France, the United States, Japan and the United Kingdom, to force the United States to devalue its currency due to a current account deficit approaching an estimated 3 percent of GDP (Paragraph 6, Plaza Accords).
More importantly, the European nations and Japan were running enormous current account surpluses alongside negative GDP growth, which threatened external trade and GDP growth in their home nations, while protectionist measures to protect those gains loomed, especially in the United States. Add the fact that developing nations were in debt and unable to participate in positive trade or growth at home, and the United States was compelled to realign the exchange rate system to correct the imbalances and to promote growth around the world at its own expense.
The Plaza Accords were a growth-transfer policy for Europe and Japan, wholly detrimental to the United States.
The United States experienced 3 percent GDP growth during 1983 and 1984, with a current account deficit approaching an estimated 3 to 3.5 percent of GDP, while European nations saw negative GDP growth of -0.7 percent alongside huge trade surpluses; the same held for Japan. Trade deficits in general require foreign financing, and during the early-to-mid 1980s Japan and West Germany were buying United States bonds, notes, and bills out of their surpluses to finance the U.S. deficits, at the expense of their own economies.
It was only a matter of time before protectionist policies entered the equation, policies that would not only hurt United States growth at home but also ignite trade wars that would derail the entire system of trade for all nations.
During this period, not only was inflation the lowest it had been in 20 years for all nations, but European nations and Japan were investing in their own economies to promote growth. With low inflation and low interest rates, the repayment of debt could be accomplished quite easily. The only aspect missing from these equations was an adjustment of exchange rates rather than an overhaul of the present system.
So the world cooperated for the first time, agreeing to revalue the exchange rate system over a two-year period through each nation's central bank intervening in the currency markets. Target rates were agreed to. The United States saw roughly a 50 percent decline in its currency, while West Germany, France, the U.K., and Japan saw corresponding appreciations. The Japanese yen went from 242 yen to the dollar (USD/JPY) in September 1985 to 153 in 1986, an appreciation of roughly 58 percent.
By 1988, the USD/JPY exchange rate stood at 120, and the German deutsche mark, French franc, and British pound followed similar paths. These revaluations naturally benefited developing nations such as Korea and Thailand, and leading South American nations like Brazil, because trade would again flow.
What gave the Plaza Accords their historic importance was a multitude of firsts: the first time central bankers agreed to intervene in the currency markets, the first time the world set target rates, the first move toward globalization of economies, and the first time each nation agreed to adjust its own economy; sovereignty was exchanged for globalization. For example, Germany agreed to tax cuts, the U.K. agreed to reduce public expenditure and transfer monies to the private sector, and Japan agreed to open its markets to trade, liberalize its internal markets, and manage its economy through a true yen exchange rate. All agreed to increase employment. The United States, bearing the brunt of the growth transfer, agreed only to devalue its currency. The cooperative aspect of the Plaza Accords was the most important first of all.
What the Plaza Accords meant for the United States was a devalued currency, which meant United States manufacturers would again become profitable thanks to favorable exchange rates abroad, an export regimen that became quite profitable.
A high U.S. dollar means American producers can't compete at home with cheap imports from Japan and European nations, because those imports sell for far less than American manufacturers can profitably match. An undervalued dollar means those same imports carry higher prices in the United States due to unfavorable exchange rates.
What a high dollar means for the United States is low inflation and low interest rates that benefit consumers, because their dollars stretch further against the prices paid for goods. What the United States agreed to was a transfer of part of its GDP to Europe and Japan so their economies would grow again, all accomplished without fiscal stimulus, only an adjustment of exchange rates. What is much better understood in the modern day are the harsh effects such devaluations may have on an economy.
The Japanese felt the worst long-run effects of signing the Plaza Accords. Cheaper money meant easier access to credit for Japan, along with the Bank of Japan's adoption of cheap-money policies such as lower interest rates and credit expansion, while Japanese companies moved offshore. Japan would later become the world's leading creditor nation. But cheap-money policies would also slow consumption at home, inflate land prices, and create an asset bubble that burst years later, leading to the period known as Japan's lost decade. Japan's recovery from its lost decade remains questionable today because of the price of its currency. This may be why currency policies today target inflation as a gauge for growth rather than some arbitrary exchange rate target like the one set by the Plaza Accords.
October 2009 Brian Twomey
 Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

Smithsonian Agreement

Although the historic, 10-nation 1971 Smithsonian Agreement can be credited with ending fixed exchange rates, ending the gold standard, and realigning the par value system with 4.5 percent trading bands, the agreement was disastrous for the United States and benefited European and Japanese economies because of the agreed-upon stipulation that the United States would devalue its currency. While the Smithsonian Agreement may not draw memorable historic attention, the fact that a nation would willfully sign an agreement to devalue its own currency has lasting ramifications for an economy, because a devaluation debases the currency and invites enormous budget and trade deficits. The United States dollar declined approximately 8 percent over the ensuing years; the gold price topped out around $800 an ounce by the end of the 1970s, driven by gold's decoupling from the dollar, alongside a commodity boom that lasted well into the late 1970s. Both are modern-day ramifications of a declining dollar. To fully understand the Smithsonian Agreement and its implications, a brief walk through Bretton Woods may help.

The 1930s saw a laissez-faire, free-floating currency market that threatened destabilization and economic warfare for smaller nations, with exchange rates so unfair they discouraged trade and investment. Along came Bretton Woods in 1944, which stabilized the system through a new monetary order that pegged exchange rates at a par value convertible into gold. Government intervention was allowed if a nation's balance of payments fell into disequilibrium by more than 1 percent. Convertible currencies were pegged to gold at $35 an ounce, with each nation pledging a set amount of its own currency, all managed by the International Monetary Fund, a post-war organization that became the regulator, enforcer, and funder of this new monetary order.

Since the U.S. dollar was the only stable currency, the United States managed the system through the IMF and became its major financier. This led to major outflows of dollars to finance world economies, causing massive deficits in the United States. Why? Only the United States held gold in the post-war world. So how much could a dollar be worth with massive deficits, backed by gold, and a world dependent on the United States for its growth? What a predicament: fixing the deficits would limit dollars, while growing deficits would erode the dollar, both highly detrimental to European and Japanese growth. Dollar confidence waned, prompting 1930s-style currency speculation against every currency except the U.S. dollar, which was fixed by gold. Adjustments were needed, because the United States couldn't stop the deficits while European and Japanese economies were threatened by massive surpluses. The answer was the Smithsonian Agreement.

Nations again realigned the currency system, agreeing to a devalued dollar, a new par value, and trading bands of 4.5 percent, with 2.25 percent on each side of parity. One year after signing, Nixon removed the gold standard completely because of further dollar depreciation and erosion of the balance of payments. Interventions then began through the swap market, first by the United States and then by Europe, to support their currencies; such interventions were a first after the Smithsonian Agreement's breakdown. Almost two years after the Smithsonian Agreement, currencies floated freely because the United States refused to enforce the agreement after raising the fixed gold price twice within that two-year period.

Free floating is a misnomer, because the trading bands existed precisely so that nations' exchange rates would not fall outside the agreed-upon band. And how could they float freely? Nations didn't have the gold or the currency reserves to pledge to the IMF on their own; the United States' gold and dollar supply had to finance the system. This made the U.S. dollar the world's reserve currency, a permanent financing currency. But the United States had only so much gold and so many dollars, so with economic growth on the horizon after World War II, it was inevitable that Bretton Woods would break down. Otherwise, the United States would have destroyed its own economy for the sake of growth in Europe and Japan.

Bretton Woods and the Smithsonian Agreement were not monetary systems that allowed currencies to trade like fiat currencies, based on supply and demand in an open market. Instead, they formed a monetary system designed for trade and investment, managed by the IMF but financed by the United States. As the United States pledged its gold and dollars, it gained Special Drawing Rights trade credits and used those credits against other nations' currencies to finance trade. In this respect, the United States had to fix its currency price so other nations would have a peg to the dollar and access to credits. For larger, growing states this was perfect, yet it was detrimental for smaller states that didn't have enough gold or dollars to gain trade credits.

So a currency-pricing imbalance would persist through the years of economic growth after World War II. The era of real, tradable, market-driven exchange rates for retail traders was still many years away. What would come later to assist poorer nations lacking access to the world's trading system was the use of trade-weighted dollars, but that would take many more agreements before actual implementation.

The need for the IMF in this equation was substantial. The IMF ensured that the world's central banks did not dominate the exchange rate market on their own or in conjunction with other nations, a prevention against economic warfare. The par value system allowed trade to equalize through the use of trade credits, which meant basing the price of a currency on its balance of payments. If the balance of payments fell into disequilibrium, the IMF allowed a nation's currency price to be adjusted up or down.

While the Smithsonian Agreement was not perfect and actually hurt the United States in the short term, it was an instrument needed at the time to further the path toward real, market-driven exchange rates.

September 2009

Brian Twomey is a currency trader and Adjunct Political Science Professor at Gardner-Webb University

 

History of Coinage in the United States

Before the first Coinage Act, citizens of the United States exchanged goods and services through barter because no domestic coins were available, only various foreign coins such as the widely traded and trusted Spanish real. With the signing of the Constitution, which in Article 1, Section 8 empowered Congress to coin money, the first Coinage Act was proposed and passed under President George Washington. This article covers a brief history of United States coins and the events surrounding changes made from 1792 through 2005.
The first Coinage Act was passed April 2, 1792 (1 Stat. 246), establishing the first mint in Philadelphia, with the Treasury overseeing all mint operations and managing the mint's first employees, such as an engraver, an assayer, and a chief coiner. By law, these employees had to post a $10,000 bond to be considered for their positions. The first United States coins were minted from gold, silver, or copper, engraved with words and inscriptions of liberty.
The first coins minted with their year of mintage were the $10 Gold Eagle with 270 grains of standard gold, the $5 Half Eagle with 135 grains of standard gold, the $2.50 Quarter Eagle with 67 1/2 grains of standard gold, the silver dollar with 416 grains of standard silver, the Half Dollar with 208 grains of standard silver, the Quarter Dollar with 104 grains of standard silver, the Dime (spelled Disme until the 1800s) with 41 3/5 grains of standard silver, the Half Dime with 20 4/5 grains of standard silver, the One Cent with 11 pennyweights of copper, and the Half Cent with 5 1/2 pennyweights of copper.
To offer an idea of what these weights meant in the 1792 marketplace: one gram = 15.4323584 grains, so 18 grams = 277.7824512 grains, and one pennyweight (24 grains) = 1.55517384 grams. The gold/silver ratio was fixed at 15:1, so one troy ounce of gold would buy 15 ounces of silver; the short sketch below checks these conversions.
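As a quick arithmetic check, here is a minimal sketch; the constants are the standard grain, pennyweight, and troy-ounce definitions, and the silver-dollar line assumes the 416-grain standard weight given above.

```python
# Quick check of the 1792-era weight conversions cited above.
GRAINS_PER_GRAM = 15.4323584   # standard definition: 1 gram = 15.4323584 grains
GRAINS_PER_PENNYWEIGHT = 24    # 1 pennyweight = 24 grains
GRAINS_PER_TROY_OUNCE = 480    # 1 troy ounce = 480 grains

print(18 * GRAINS_PER_GRAM)                      # 277.7824512 grains in 18 grams
print(GRAINS_PER_PENNYWEIGHT / GRAINS_PER_GRAM)  # ~1.5552 grams in one pennyweight
print(416 / GRAINS_PER_TROY_OUNCE)               # 416-grain silver dollar ~0.867 troy oz
# At the statutory 15:1 ratio, one troy ounce of gold bought 15 troy ounces of silver.
```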
Section 19 of the act addressed debasement of the currency: violators were charged with a felony and would suffer death. Dollars were minted in the tradition of the Spanish Milled Dollar, which English speakers referred to as the Spanish 8 Real.
The word milled meant that coin blanks, called planchets, were made on a milling machine to keep weights and sizes consistent and prevent counterfeiting. Speculation exists that the Latin motto E Pluribus Unum ("out of many, one") was placed on gold coins in 1795 and silver coins in 1798 at the urging of Colonel Reed of Uxbridge, Massachusetts.
The price of gold remained consistent at $19.39 an ounce from 1792 until a small spike to $21.79 in 1814 and $22.16 in 1815, then back to $19.39.
By 1833, $19.39 gold would never be seen again, so Congress reconciled the new value of gold with the passage of the 1834 Coinage Act under President Andrew Jackson. A new regulation of the weight and value of gold was adopted to bring gold in sync with the marketplace and its relative value to silver.
Six percent was taken from the gold weight of each dollar, and creditors were compensated at the new weight, less 6 percent. Constitutional questions under the Fifth Amendment, such as taking private property for public use without just compensation, were never raised in a challenge. The act reduced the weight of gold coins so newly minted coins wouldn't be melted down and could circulate in commerce.
The Half Eagle had suffered the worst effects before the change: 744 were minted between 1792 and 1834, rising to 2.1 million struck between 1834 and 1838, most in Philadelphia. E Pluribus Unum was removed from newly minted coins under the 1834 act. By 1836, the silver dollar was worth 1.02 gold dollars, so the 1837 Coinage Act was passed.
The 1837 Coinage Act fixed the weight of the silver dollar at 412 1/2 grains; at 480 grains to the troy ounce, that is 412.5 / 480 = 0.859 troy ounces, or about 26.73 grams. Undervalued dollars went out of circulation. The later Coinage Act of 1873, passed under President Grant, was called the Crime Act (the "Crime of '73") by western silver interests, both because a silver boom had been enriching western state economies and because the act dropped silver in favor of the gold standard that governments around the world would later adopt.
A powerful force called the Free Silver Movement arose and was instrumental in the passage of the 1878 Bland-Allison Act, which allowed the Treasury Department to purchase $2 million to $4 million of domestic silver a month to be coined into the newly designed Morgan silver dollar; some 10 million coins were minted.
The act passed Congress over the veto of President Rutherford B. Hayes, yet it was not fully embraced until 1900 under President William McKinley. The Sherman Silver Purchase Act, passed in 1890, increased the purchases to 4.5 million ounces of silver bullion a month.
President Cleveland repealed the Sherman act because the Treasury had been issuing notes for its silver purchases that were later exchanged for gold dollars as investors profited, draining the Treasury's gold reserves. Silver dollar production stopped by 1935, and dollar coins did not resume until 1971 with the Eisenhower dollar.
Ministers urged Treasury Secretary Salmon P. Chase in 1861 to inscribe In God We Trust on coins, and Congress approved the two-cent coin bearing the motto in 1864.
In God We Trust was extended to gold and silver coins with the passage of the 1865 act and approved for the three-cent coin in 1866 by the act of that year. By 1873, the motto was approved for all coins without further congressional approval.
Under President Johnson, the Coinage Act of 1965 was passed, eliminating silver from circulating coins because of a silver and coin shortage.
Coins were engineered to have a 25-year life span. Silver was completely eliminated from quarters and dimes by 1966, while the half dollar's silver content was reduced to 40 percent. Silver was replaced with alloys of copper, zinc, manganese, and nickel, bringing the cost of minting a quarter to about 2.5 cents.
Silver dollars ended for the first time since 1792. To prevent hoarding, a date freeze was also passed: all newly minted coins bore a 1964 date for a period of time, and mint marks were eliminated for five years. Mint marks, mandated since the 1835 Coinage Act, are the letters identifying the mint that struck a coin, established for accountability purposes.
The Coin Act of 2005 authorized commemorative $1 coins recognizing all prior presidents, beginning in 2007. Prior commemorative $1 coins such as the Sacagawea dollar would continue, and by law must make up no less than one-third of all $1 coins minted.
January 2010
      Brian Twomey is a currency trader and adjunct professor of Political Science at Gardner-Webb University

 

A Primer on Cross Currency Triangulation

The major significance of cross-currency triangulation, in which foreign exchange transactions do not involve the U.S. dollar, stems from the fact that many currencies are not typically traded against each other in the interbank market. Major companies, importers and exporters, governments, investors, and tourists all needed a method to transact business in euros while allowing money and profits to repatriate back to their home currencies. With the realignment of the currency markets after the adoption of the euro, cross-currency pairs such as EUR/JPY, GBP/CHF, GBP/JPY, and EUR/GBP, as well as many others, developed over time, for many reasons.

Notice that, apart from the euro itself, none of the currencies in the pairs listed above belongs to a nation that adopted the euro; the United Kingdom, for example, negotiated a Maastricht Treaty opt-out and rejected the single currency. With the European Union's implementation of Regulation 1103/97 on Sept. 11, 1997, formal legality existed for calculating conversions into euros. The regulation also established conversion rates to six significant figures (rather than just three) and adopted triangulation as the legal norm for transacting business in the eurozone. This legality gave investors, traders, and bankers a new means to trade currencies, with a whole host of new profit opportunities. This article will focus on triangulation as a means to trade and profit.

KEY TAKEAWAYS

  • Cross-currency triangulation takes advantage of the discrepancies in the bid-ask spread between non-U.S. dollar exchange rates in order to turn a profit.
  • The most popular triangular opportunities are usually found with the CHF, EUR, GBP, JPY, and U.S. dollars in order to convert from euros to home currencies.
  • The basic cross exchange rate formula is A/B x B/C = A/C.

How Triangulation Changes the Process

Before direct cross pairs existed, a company in the U.K. selling products in Switzerland and receiving Swiss francs had to sell those francs for U.S. dollars and then sell the dollars for British pounds; all repatriation was triangulated through the U.S. dollar. The arrival of tradable crosses gave the market the means to take advantage of bid-ask spreads in the interbank market by comparing a cross against its two dollar legs.

On a daily basis, well-capitalized investors and traders can find discrepancies between bid-ask spreads across the many cross pairs that exist today, thanks to the inclusion of the euro. Although these arbitrage opportunities may last as little as 10 seconds, many traders capitalize on the differences to turn a profit. Fortunately, computers linked directly to the interbank market can easily meet this challenge and profit from the bid-ask spreads quoted by market-making banks around the world.

Cross Exchange Rate Formula

The basic formula always works like this: A/B x B/C = A/C. The cross rate should equal the ratio of the two corresponding dollar pairs; for example, EUR/GBP = EUR/USD divided by GBP/USD, just as GBP/CHF = GBP/USD x USD/CHF.
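As a minimal sketch of this identity in Python, using hypothetical mid-market quotes (none of these numbers comes from the article):

```python
# Minimal sketch of the cross-rate identity A/B x B/C = A/C.
# The quotes below are hypothetical mid-market rates, not market data.
eur_usd = 1.1000   # EUR/USD: dollars per euro
gbp_usd = 1.2500   # GBP/USD: dollars per pound
usd_chf = 0.9000   # USD/CHF: francs per dollar

eur_gbp = eur_usd / gbp_usd   # EUR/GBP = EUR/USD divided by GBP/USD
gbp_chf = gbp_usd * usd_chf   # GBP/CHF = GBP/USD x USD/CHF
print(round(eur_gbp, 6), round(gbp_chf, 6))   # 0.88 and 1.125
```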

Cross Exchange Rate Formula Example

For example, suppose we know the bid and offer of AUD/USD and NZD/USD, and we want to profit from AUD/NZD.

AUD/NZD bid = AUD/USD bid divided by NZD/USD offer
AUD/NZD offer = AUD/USD offer divided by NZD/USD bid

Comparing this synthetic bid-offer band with the market's quoted AUD/NZD spread determines whether a profit opportunity exists, as the sketch below shows.
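Here is a minimal sketch of the synthetic bid and offer; the four quotes are hypothetical, chosen only to illustrate the mechanics:

```python
# Synthetic AUD/NZD bid and offer derived from two dollar pairs.
# All quotes are hypothetical illustrations, not market data.
aud_usd_bid, aud_usd_offer = 0.6500, 0.6502
nzd_usd_bid, nzd_usd_offer = 0.6000, 0.6002

aud_nzd_bid = aud_usd_bid / nzd_usd_offer     # 0.6500 / 0.6002 ~ 1.0830
aud_nzd_offer = aud_usd_offer / nzd_usd_bid   # 0.6502 / 0.6000 ~ 1.0837

print(round(aud_nzd_bid, 4), round(aud_nzd_offer, 4))
# An arbitrage exists only if the market quotes AUD/NZD outside this band,
# e.g. a market bid above 1.0837 lets us buy synthetically and sell directly.
```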

Three-Pair Triangulation Example

Suppose that we have a three-pair triangulation opportunity among GBP/CHF, EUR/GBP, and EUR/CHF, in which GBP/CHF is derived from EUR/GBP and EUR/CHF. The euro is the base currency of both EUR/GBP and EUR/CHF; dividing one by the other cancels the euro and leaves GBP/CHF, so the euro conversions must be made to achieve our objective.

GBP/CHF bid = EUR/CHF bid divided by EUR/GBP offer
GBP/CHF offer = EUR/CHF offer divided by EUR/GBP bid

Whether this example earns a profit depends on the prevailing exchange rates. Notice how the pounds and francs are derived by way of the euro legs; triangulating currencies usually involves either euro or U.S. dollar conversions.

Triangulation Example With U.S. Dollar

Suppose we triangulate a U.S. dollar conversion for CHF/JPY; CHF/JPY is simply derived from USD/CHF and USD/JPY. The cross bid equals the bid of the terms-currency pair (USD/JPY, the top) divided by the offer of the base-currency pair (USD/CHF, the bottom); the cross offer equals the offer of the terms pair divided by the bid of the base pair.

If USD/CHF is quoted 1.5000-10 and USD/JPY is quoted 100.00-10, the CHF/JPY bid would be 100.00 divided by 1.5010, or 66.6223 yen per franc; the offer would be 100.10 divided by 1.5000, or 66.7333 yen per franc.
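The same arithmetic in a short, runnable form, using the quotes given above:

```python
# Verify the CHF/JPY cross derived from the USD/CHF and USD/JPY quotes above.
usd_chf_bid, usd_chf_offer = 1.5000, 1.5010
usd_jpy_bid, usd_jpy_offer = 100.00, 100.10

chf_jpy_bid = usd_jpy_bid / usd_chf_offer     # 100.00 / 1.5010 ~ 66.6223
chf_jpy_offer = usd_jpy_offer / usd_chf_bid   # 100.10 / 1.5000 ~ 66.7333

print(round(chf_jpy_bid, 4), round(chf_jpy_offer, 4))
```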

Why Triangulate?

In most instances, triangulation involves profiting from exchange rate disparities, which can be accomplished in many ways: for example, two buys on certain pairs against one sell, or two sells against one buy. Triangulation opportunities arise every day from banks in Tokyo, London, New York, Singapore, Sydney, and everywhere in between, sometimes in the exact same pairs. The most popular triangular opportunities are usually found among the CHF, EUR, GBP, JPY, and the U.S. dollar, used to convert euro amounts back into home currencies.

What is noticeable, more and more, is that many brokers, including retail currency brokers, now include cross-currency pairs in the dealing-rates section of their trading stations, so a cross such as EUR/GBP can be traded as easily as EUR/USD. The difference between the interbank market and the retail side of trading is the spot market: many prefer to transact through the spot market, where they know their trade will be executed, because prices in the interbank market are so ephemeral.

Traders can easily transact triangular arbitrage opportunities involving two or three currency pairs across many nations, as well as take advantage of other bid-ask spread opportunities. For the small retail trader with limited funds, this can work. For the well-capitalized trader it may not, because the spot market doesn't always reflect exact interbank exchange rates; larger traders may have to wait for certain spot prices before transacting, a wait they may not be willing to risk when profits are at stake.

The Bottom Line

Many opportunities exist for arbitrage and triangular traders beyond exchange rate disparities alone. Traders may also capitalize on merger and acquisition flows through the currency markets, swap trades, forward trades, yield-curve trades, and options trades; similar opportunities exist in each of these markets.

September 2009

 

Brian Twomey