Thursday, December 19, 2013

Data Visualization Saves Time




This is Part 2 of a series on analytics which will appear each Wednesday on The TIBCO blog. You can read Part 1, Isolate Problems and Use Analytics to Solve Them, here.
No matter your role within an organization, whether you are an analyst, a product manager, or a leader, let’s face it: Excel-based bar charts aren’t terribly compelling.
If you rely on Excel reports to evaluate current customer trends, operating conditions, or business performance, they simply don’t provide much insight into why something happened (e.g., customer satisfaction dropped five points in the Pacific Northwest last month).

Don’t Let Reports Become One-Dimensional

While boring Excel spreadsheets do little to stoke the imagination, there are other shortcomings to using stale BI reports. Namely, they’re a drain on productivity.
Let’s say a team of airline executives is trying to identify more effective ways to improve fleet utilization. An Excel report might inform the executives of the airline’s current utilization rates. It might even demonstrate how the utilization rates have changed month to month. However, it’s unlikely to provide them with any meaningful insights as to why utilization rates have changed or the opportunities to improve upon the deployment of aircraft between different cities and hubs. This is one of the reasons why more companies are turning to the use of cloud-based analytics, which offer rich data visualization capabilities. Humans are visual creatures—research has shown that people think more efficiently when data is presented visually.
For instance, a study conducted by Mindlab International at The Sussex Innovation Center evaluated how office workers manage and analyze data using traditional software and the resulting effectiveness. The research reveals that when data is displayed more visually—such as through the use of visual maps and other illustrative techniques—employees are 17% more productive.

Visualization Tools Empower Everyone

One of the strengths of cloud-based visualization tools is that they enable users to conjure fresh ways of looking at real-time data that can trigger new approaches to problem solving. For example, one of the benefits of data visualization techniques is how it promotes pattern matching. A retail store manager can use data visualization methods to identify not only how a growing percentage of customers are defecting to other stores, but the reasons why they’re jumping ship.
In today’s fast-paced business environment, knowledge workers don’t have the time to wait for IT to generate BI reports that might be outdated by the time they hit their desks. Self-service data visualization tools can benefit both decision makers and analysts by enabling them to visualize information, and spot emerging business and operational trends faster. For instance, real-time data visualization capabilities can aid a marketing analyst for a beverage company who is under pressure from his boss to identify the company’s window of opportunity for launching a new product.
Visual analytics can enable marketing analysts to see how much time the company has to enjoy time-to-market advantages before one or more competitors are likely to enter the market. Meanwhile, a cloud-based offering facilitates collaboration between work teams, enabling multiple stakeholders to view, interact with, and drill down on the same data together, and collaborate on insights and actions that can be taken.
Try Spotfire Cloud now for 30 days and see how this impacts your business.

Sunday, September 1, 2013

Man does not live by calculations alone: Art in Excel

Excel spreadsheets can also be used to create works of art

A Japanese artist creates images using Microsoft's popular software. Take a look at his creations.

Not everything is about numbers when using Excel. It can also be used to create works of art. At least, that is the angle that Japanese artist Tatsuo Horiuchi found when creating pictures from spreadsheets in Microsoft's popular software.



According to the 73-year-old artist, he began experimenting with the computer and Excel 13 years ago. "Graphics programs are expensive, but Excel comes pre-installed on most computers. It also has more functions and is easier than Paint," he said, according to the site Spoon Tamago.





Horiuchi began receiving attention in 2006 when he took part in the "Excel Autoshape Art Contest". His works stood out above the rest and he won first prize.

Some of his works can be downloaded free of charge, so users can verify that they really are pictures made in Excel spreadsheets. Among the available works are "Cherry Blossoms at Jogo Castle" and "The Kego Falls".








Wednesday, July 31, 2013

An Excel trick

This Excel Trick Lets You Work Across Sheets Without Switching Between Tabs


Microsoft Excel offers many ways for you to manipulate and organize data, including using multiple "sheets" or tabs within the same spreadsheet file.
Occasionally, you may find it useful to be able to quickly add sums across tabs. Luckily, Excel has a function for this too.


Produced by Daniel Goodman



Tuesday, July 16, 2013

Wilder (1999): Investigating Parameter Time Invariance or Stationarity with Excel Graphics

Volume 13, Issue 1, 1999

Investigating Parameter Time Invariance or Stationarity with Excel Graphics

Lisa A Wilder
Bowling Green State University
Articles in recent issues of CHEER have shown how spreadsheets can aid in teaching economics. Some papers have shown how spreadsheet applications have been particularly useful in macroeconomics (Paetow, 1998; Thomas, 1996; and Goddard, Romilly and Tavakoli, 1995).1 In addition, Judge (1990) includes many spreadsheet applications suitable for introductory economics. This paper shows how we can use a spreadsheet to explore the assumption of stationarity in econometrics, macroeconomics or more specialized courses.
A fundamental assumption in model building is that the behavior of the phenomena to be studied is stationary or invariant over time. This stationarity is also a crucial assumption in empirical studies so that the collected data represent repeated observations of a particular population. In order for reliable model solutions or estimates of population parameters to be found, a series must continue to retain the same basic underlying data structure.
Today, many studies of economic phenomena include the ideas of non-stationarity and the use of cointegration. However, introducing this material to students at either the undergraduate or graduate level is a difficult task or, at best, a time-consuming one. This paper will demonstrate how spreadsheet graphs can be used to show students the implications of non-stationarity for model parameters.
Section 1 of this paper discusses the benefits of using these techniques. Section 2 discusses the estimation method and shows how to construct a spreadsheet to explore the time invariance properties. Section 3 demonstrates the estimation techniques with a common example, the consumption function.

1. Examining Stationarity in Economics Courses

The Benefits of Introducing Stationarity through Spreadsheets

Including spreadsheets in economics courses and utilizing the spreadsheet to explore estimator properties over time can serve several purposes. These include:
  • The use of spreadsheets satisfies the increased need for curriculum integration. One way integration can be accomplished is through the use of statistics and computers in the economics classroom. Often students in advanced (and sometimes introductory) economics classes have been exposed to statistical techniques such as regression. I find that many students do not retain enough statistical knowledge to successfully implement regression analysis nor do they see the connection between statistics and other courses. Using estimation in economics can demonstrate the importance of statistics and can provide additional practical experience.

  • Students learn actively in an interactive environment.

  • Students gain increased capability and confidence with spreadsheet software.

  • Using the computer in economics classes may enhance student motivation and morale. Often students recognize the importance of having good computer skills upon graduation. They may not have the same confidence in the importance of economics. As instructors, we can use this motivation to our advantage.

  • Students may mistakenly believe that only statistical software such as Minitab or SAS can produce statistical estimates. Many instructors of statistics courses rely on statistical software packages. As a result, introducing statistical functions in spreadsheets provides another benefit. Since students may not find specific statistical software on the job, using spreadsheets to produce regression provides them with valuable experience. Microsoft Excel is used in this paper, but the methods can be easily replicated in other packages.

  • Students gain a better understanding of stationarity assumptions by experimentation. I consider the greatest benefit to students to be an increased understanding of the assumption of stationarity. While important in many areas of economics, the role of stationarity is critical to models in econometrics and theoretical economics courses. Without this assumption, our process of estimation makes little sense and models fall apart. Recent trends in economic research stress the importance of stationarity in econometric work. To adequately prepare advanced economics students so that they may read and participate in economic research, we expose them to important issues including non-stationarity, unit roots, spurious regression and cointegration. The proposed graphical analysis not only exposes students to the meaning of non-stationarity but enables them to actively participate and discover the consequences of a non-stationary data series and the benefits of differencing.

The Need for Stationarity or Time Invariance

In Theory
We define stationarity as a condition in which the fundamental characteristics of a series do not change over periods. These periods could refer to a series over time or across cross-sectional units. Where Z denotes the entire population of study and Di is the distribution function describing this population in period i, strict stationarity implies

D1( Z ; θ ) = D2( Z ; θ ) = ... = DT( Z ; θ )
Stationarity is a common assumption found in theoretical economic models. A classic example of non-stationarity looks at a shock to income (Hall, 1978). Assume first that income is a stationary series. Figure 1 displays a stationary response to winning a lottery. Income will vary from the norm for a period of time. However, the response to this shock must die away in order for us to return to the same distribution. Eventually the average of income will become arbitrarily close to its original mean value.
As an alternative, consider that income is non-stationary. In this case, winning the lottery in a given period will cause a permanent change in average income. For example, if we could perpetuate the change in income then average income would remain higher throughout a lifetime. The shock does not dissipate over time and the mean of income will change. Figure 1 displays one possible non-stationary series.
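As a rough numerical illustration of this contrast, the following Python sketch simulates a one-off income shock under a stationary response and under a non-stationary (unit-root) response. The persistence value and the data are purely illustrative and are not taken from the paper.

import numpy as np

T = 60
shock = np.zeros(T)
shock[10] = 5.0                        # a one-off "lottery win" in period 10

rho = 0.8                              # illustrative persistence below one: the shock dies away
stationary = np.zeros(T)
nonstationary = np.zeros(T)            # unit-root case: the shock is permanent
for t in range(1, T):
    stationary[t] = rho * stationary[t - 1] + shock[t]
    nonstationary[t] = nonstationary[t - 1] + shock[t]

print(stationary[-1])                  # close to zero: income returns to its original mean
print(nonstationary[-1])               # equal to 5: average income has shifted permanently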
The Real Business Cycle literature has dealt to a great extent with income shocks (Kydland and Prescott, 1980). For a dynamic equilibrium to exist, we require stationarity. Otherwise, any shock to a variable will substantially change the underlying characteristics which in part determine the equilibrium value or path. We see, therefore, that our theoretical requirement for stationarity is very closely linked to the descriptive statistics and probabilistic characteristics of our data.

In Statistical Analysis

Stationarity also plays an important role in the statistical analysis of economic relationships. In order to get an accurate estimate of the coefficients determining the relationship between two or more variables, the underlying characteristics of the data series must remain constant across the sample period. Very commonly in statistical analysis, we require a series to be second-order or weakly stationary. Under second-order stationarity, the mean, variance and contemporaneous covariance of the series must be constant (Spanos, 1986).
If Y and X are characterized by a normal distribution, then it can be shown that the OLS estimates b and s2 in the simple linear regression model

Y = b0 + b1 X + e

are simply combinations of the means and variances of X and Y and the covariance of X and Y (Spanos, 1994).2 Specifically, writing μy and μx for the means, σ2x for the variance of X and σxy for the covariance of X and Y, we see that

b0 = μy - b1 μx and

b1 = σxy / σ2x

Therefore, the means and variances of X and Y must be constant if we are to place any confidence in our estimates of b and s2.
This probabilistic interpretation of stationarity and its relationship to parameter estimation will be vital in our method of assessing parameter stability. In this paper, we will examine the assumption of stationarity by viewing the characteristics of estimated parameters, which are both theoretically interesting and functions of the joint distribution moments.
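To make the link between the distribution moments and the OLS coefficients concrete, the short Python check below (synthetic data and library calls of my choosing, not part of the original paper) computes the slope as the covariance of X and Y divided by the variance of X, the intercept from the means, and compares the result with a standard least-squares fit.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=200)              # synthetic regressor
y = 3.0 + 0.9 * x + rng.normal(0.0, 1.0, 200)    # synthetic dependent variable

# OLS coefficients written directly as functions of the sample moments
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # covariance / variance of X
b0 = y.mean() - b1 * x.mean()                         # intercept from the means

# The same numbers from a standard least-squares fit
slope, intercept = np.polyfit(x, y, 1)
print(b0, b1)
print(intercept, slope)    # identical up to floating-point error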

2. Graphically Examining the Stability of b Estimates

As mentioned in Section 1, the coefficients in linear regression models are simply arithmetic combinations of the moments of the joint distribution of Y and X. By varying the sample from which these coefficients are calculated, we can examine parameter properties and better understand the time invariance, or homogeneity, of Y and X.
We will consider two graphical methods to examine the properties of our Linear Regression Model (LRM) estimates. Both graphical methods can be easily implemented in a spreadsheet with limited use of specialized functions. A simple modification of formulas allows us to introduce two different methods of estimation on the same spreadsheet.

ROLS

The first graphical technique involves Recursive Ordinary Least Squares (ROLS) Regression (Spanos, 1986). We begin by selecting a sub-sample of our data, with T sufficiently large, and calculating the parameters of the LRM. We find an estimate of each coefficient and the estimate’s standard error using these sub-sample observations. This process is then repeated adding one more observation to the sub-sample until the sub-sample size increases to include all the observations. If stationarity exists, we should see convergence to a stable value of the coefficient and a decreasing variance. This would indicate that additional observations bring us closer and closer to the stated value of the parameter. An example time plot of a stationary parameter is shown in Figure 2a.
Estimation is accomplished in a spreadsheet package such as Microsoft Excel through the use of simple functions. For example, to find the estimates of the coefficients of a LRM, we type the commands:
INTERCEPT(b$1:b20, c$1:c20) and SLOPE(b$1:b20, c$1:c20)
where column b contains observations of the variable to be estimated, and column c contains observations of the regressor. The use of the dollar sign before cell 1 in each formula is crucial to the functioning of ROLS. This will enable us to copy the formulas to subsequent cells and gradually increase the number of observations used in estimation. When we copy these formulae, we have:
1     INTERCEPT(b$1:b20, c$1:c20)     SLOPE(b$1:b20, c$1:c20)
2     INTERCEPT(b$1:b21, c$1:c21)     SLOPE(b$1:b21, c$1:c21)
3     INTERCEPT(b$1:b22, c$1:c22)     SLOPE(b$1:b22, c$1:c22)
...   ...                             ...
In this way, we have created a large number of parameter estimates, each with an increasing sample size.
An important decision regards the number of observations (20 observations here) included in the first sub-sample. There is no magic number for how many observations to include in the first sub-sample. This is only a starting point for analysis. The larger the starting sample, the more likely one is to find stationary behavior.
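For readers who want to see the same construction outside the spreadsheet, a minimal Python sketch of the expanding-window (ROLS) estimates follows. The series are synthetic; the starting sub-sample of 20 observations simply mirrors the example above.

import numpy as np

rng = np.random.default_rng(2)
n = 60
x = np.cumsum(rng.normal(1.0, 0.5, n))           # synthetic regressor series
y = 2.0 + 0.75 * x + rng.normal(0.0, 1.0, n)     # synthetic dependent series

start = 20                                       # size of the first sub-sample, as in the text
rols = []
for t in range(start, n + 1):                    # add one observation at a time
    slope, intercept = np.polyfit(x[:t], y[:t], 1)
    rols.append((t, intercept, slope))           # one row per sub-sample, like the copied formulae

for t, b0, b1 in rols[:3]:
    print(t, round(b0, 3), round(b1, 3))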
A similar method is used to create standard error and confidence intervals for the estimates in each sub-sample regression. The formulas in this case are more complex, but again a simple copy command with the careful use of a dollar sign enables us to form numerous ROLS estimates for the standard error and intervals. To find the standard error for the intercept term of the LRM, we must first create a column of the regressor variable squared using a simple algebra formula (=c1^2). We then find that the standard error of the intercept estimate is
SQRT(SUM(d$1:d20)/(COUNT(c$1:c20)*DEVSQ(c$1:c20)))*STEYX(b$1:b20,c$1:c20)
where column d contains the squares of the regressors. The standard error for the slope estimate is
STEYX(b$1:b20,c$1:c20)/SQRT(DEVSQ(c$1:c20))
Copying such formulae will result in multiple estimates of the standard errors, one for each sub-sample. An Excel algebra function can then be used to construct intervals around the estimates of each parameter. Specifically,
e1-2*f1 and e1+2*f1
where column e contains the estimates of the intercept and column f contains the estimates of the standard error of the intercept estimates. These commands are copied down the column to find intervals around the point estimate for all the sub-samples.
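The standard-error formulas above translate directly into code: STEYX is the standard error of the regression and DEVSQ is the sum of squared deviations of the regressor about its mean. A Python sketch of the same calculations, again on synthetic data of my own, is:

import numpy as np

def steyx(y, x):
    # Excel's STEYX: standard error of the regression, sqrt(SSE / (n - 2))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return np.sqrt(np.sum(resid ** 2) / (len(x) - 2))

def rols_bands(y, x, start=20):
    rows = []
    for t in range(start, len(x) + 1):
        xs, ys = x[:t], y[:t]
        slope, intercept = np.polyfit(xs, ys, 1)
        devsq = np.sum((xs - xs.mean()) ** 2)               # Excel's DEVSQ
        s = steyx(ys, xs)
        se_b0 = np.sqrt(np.sum(xs ** 2) / (t * devsq)) * s  # standard error of the intercept
        se_b1 = s / np.sqrt(devsq)                          # standard error of the slope
        rows.append((t, intercept - 2 * se_b0, intercept + 2 * se_b0,
                     slope - 2 * se_b1, slope + 2 * se_b1))
    return rows

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(1.0, 0.5, 60))
y = 2.0 + 0.75 * x + rng.normal(0.0, 1.0, 60)
print(rols_bands(y, x)[0])     # first sub-sample: intercept band, then slope band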
We can then plot the estimate of each parameter (with the Insert Chart command) and its 2 standard deviation interval to examine its stationarity properties. If stationarity is an acceptable assumption, then we will find the mean of our coefficient estimate to remain nearly constant and the variance of the coefficient estimate to be shrinking. In Figure 2a, we see a plot of a parameter that remains relatively stable and has a shrinking variance - both traits indicate homogeneity.

WOLS

Similar evidence can be displayed through Ordinary Least Squares based upon a window of observations (WOLS). In this case also, we begin with a sub-sample of our total data set. We calculate OLS estimates given this sub-sample. In order to examine the stability of these estimates, we slide this window across the entire sample, adding one observation and taking one away (Spanos, 1986). In this case, the number of observations utilized to calculate the parameters remains constant, as opposed to ROLS where the number of observations increased in each successive iteration.
Due to this constant number of observations in the sample, we are not looking for convergence. That is, we do not expect to see a reduction in the variance. Instead, stationarity will be displayed when the estimates of the coefficients remain relatively fixed. An ideal Window Ordinary Least Squares (WOLS) diagram would show a near constant value of the coefficient and a near constant variance (see Figure 2b).
WOLS computation can be performed with a simple modification of the ROLS estimation spreadsheet. By removing the dollar signs in each formula and then copying the revised formulae to cells, we will create a sliding window. The formulae for the parameters INTERCEPT(b1:b20, c1:c20) and SLOPE(b1:b20, c1:c20) will become
1     INTERCEPT(b1:b20, c1:c20)     SLOPE(b1:b20, c1:c20)
2     INTERCEPT(b2:b21, c2:c21)     SLOPE(b2:b21, c2:c21)
3     INTERCEPT(b3:b22, c3:c22)     SLOPE(b3:b22, c3:c22)
...   ...                           ...
Similar modifications of the standard error formulae allow us to construct intervals around the estimates from WOLS. In Figure 2b, we see the behavior of a stationary parameter since the estimator’s mean and variance are relatively fixed.
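A corresponding rolling-window (WOLS) sketch in Python, with a fixed window of 20 observations and synthetic data, might look as follows.

import numpy as np

def wols(y, x, window=20):
    # Window OLS: slide a fixed-length window across the sample,
    # adding one observation and dropping one at each step.
    rows = []
    for start in range(0, len(x) - window + 1):
        xs = x[start:start + window]
        ys = y[start:start + window]
        slope, intercept = np.polyfit(xs, ys, 1)
        rows.append((start + 1, intercept, slope))   # window index, as in the spreadsheet rows
    return rows

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(1.0, 0.5, 60))
y = 2.0 + 0.75 * x + rng.normal(0.0, 1.0, 60)
for k, b0, b1 in wols(y, x)[:3]:
    print(k, round(b0, 3), round(b1, 3))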
Fig 2b.

3. Interpreting ROLS and WOLS Plots: An Application from Macroeconomics

As an example of studying stationarity properties in macroeconomics, we look at the relationship between consumption and income. This relationship has been explored to a great extent in the literature (see Hall, 1989, for a survey of the historical debate). It plays a prominent role in both introductory and more advanced macroeconomics. The consumption function is also often an important application in econometrics textbooks (Gujarati, 1995; Lardaro, 1993; Ramanathan, 1998). As discussed in Section 1, non-stationarity in income would imply that a temporary change in income would lead to a persistent change in the mean and variance of that variable. To estimate and make inferences about the consumption function and to be able to speak meaningfully about the marginal propensity to consume, we must see stationarity in a statistical sense. That is, the joint distribution, D(C,Y), must display at least second-order stationarity for OLS estimation to be meaningful.
We will estimate a simple linear regression model where annual consumption expenditures are a function of personal disposable income (measured in constant dollars).3
Estimation yields:
C = -422.402 + 0.936 Y,   R2 = .9957
    (152.40)    (0.0104)


where C is per capita consumption and Y is real per capita disposable income in the U.S. from 1959 to 1994. Next, we will examine the stability of the slope using ROLS and WOLS. In addition to providing information about the statistical reliability of our estimates, this will also display graphically patterns in the marginal propensity to consume (MPC) over time.

The Stationarity of the Marginal Propensity to Consume

As we see from Figure 3, we have cause to doubt the stability of the parameter estimates. The initial sub-sample size in this case was selected to be 25 observations.4 The ROLS plot in Figure 3a shows that the MPC seems to be increasing over time. The variance in the MPC is not converging and may even be increasing as the sample size increases. Both are consistent with a non-stationary series. In the WOLS plot (Figure 3b), we see a pronounced upward trend in the MPC with a nearly constant variance.
Fig 3a. The ROLS plot
Fig 3b. The WOLS plot
As a result, meaningful estimates of the MPC are not possible using this model. The value obtained for the MPC depends critically on the sample period selected since D(C,Y) is changing. To create a reliable estimate of MPC, we first must account for the non-stationarity.

Sources of Non-Stationarity in Theory and Application

Given our findings in the previous section, we reject the hypothesis that our parameters are time invariant or stationary.5 However, we do not as yet know how to characterize the non-stationarity of our time series. We can identify particular types of non-stationarity with important economic interpretations. Identifying the source of non-stationarity takes us to two literature branches. First, we look at empirical models of structural change and methods to assess these drastic changes in the economy. Secondly, we consider the implications of a unit root autoregressive dynamic structure.

Structural Change

It is possible that the MPC non-stationarity results from a change in the relationship between consumption and income. For example, preferences may have changed so that a greater proportion of income is spent or saved. If this change in tastes occurs suddenly at a particular point in time, we say that a structural break has occurred.
A method of coping with non-stationarity resulting from structural change is to decrease the length of the sample in the hope that the underlying relationship is stationary over a shorter period of time (Chow, 1960). By restricting the period of time, we may limit the amount of change in the relationship between income and consumption and therefore create a better estimate of the MPC.
To examine the stationarity of our series, we will compute ordinary least squares estimates of two sub-samples. If these coefficients vary greatly while displaying stationarity within the sub-sample, then we have shown a case for structural change. In this case an improvement in the stationarity of our series can be accomplished by viewing various sub samples. If, however, ROLS and WOLS estimates display the same non-stationarity even in the smaller samples, then we have not adequately solved this problem.
To divide our data into sub samples, however, requires the choice of a breaking point. The choice of breaking point depends upon the model in question and usually has some link to policy changes or other exogenous events. We may also use WOLS as a guide in selecting breaking points. In Figure 3, we do not notice a distinct jump in value. However, the estimates of our parameters tend to be fairly stable from the start of our sample until the mid-1980’s and then steadily increase after this time.
Estimating our Consumption function for two distinct intervals produces:
1959-1986: C = 61.639 + 0.8940 Y,   R2 = .9965
           (133.108)   (.011)

1987-1995: C = 3153.99 + 1.098 Y,   R2 = .9288
           (2239.496)   (.1089)
We see significantly different MPCs in these two intervals and that the MPC has increased in the second time period. The breaking point of 1987 is consistent with evidence from WOLS and also suggests the possible importance of tax changes in 1986. We can follow up with a standard Chow test to determine if this hypothesis of a structural break in 1986 seems valid (Chow, 1960).
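For completeness, the Chow statistic referred to above compares the residual sum of squares from the pooled regression with the sum from the two sub-sample regressions. The Python sketch below uses synthetic data; the break after observation 28 is only meant to mimic a 1959-1986 versus later split and is not the paper's data.

import numpy as np

def ssr(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return np.sum((y - (intercept + slope * x)) ** 2)

def chow_stat(y, x, break_idx, k=2):
    # F-statistic for a single structural break after observation break_idx
    restricted = ssr(y, x)                            # one regression over the whole sample
    unrestricted = ssr(y[:break_idx], x[:break_idx]) + ssr(y[break_idx:], x[break_idx:])
    return ((restricted - unrestricted) / k) / (unrestricted / (len(y) - 2 * k))

rng = np.random.default_rng(5)
x = np.linspace(1.0, 36.0, 36)                        # 36 "years" of a regressor
y = 60 + 0.89 * x + np.where(x > 28, 0.2 * (x - 28), 0.0) + rng.normal(0, 0.5, 36)
print(round(chow_stat(y, x, break_idx=28), 2))        # compare with an F(2, 32) critical value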
However, graphical evidence below (figure 4) shows that reducing the sample length does not eliminate the non-stationary behavior in MPC. While time invariance has improved, we still see an increasing MPC. Therefore, we must look for another explanation of the non-stationarity.
Fig 4.

Unit Roots

An alternative source of non-stationary behavior is a unit root autoregressive process. In this case, the value of current output depends on average upon output in the previous period. If this dependence over time does not decay (i.e., b1 is equal to 1), then we see persistence of shocks and non-stationarity. This corresponds closely with the theoretical non-stationarity discussed in Section 1.
Many economists have noted the presence of numerous unit roots in economic time series including output, capital, prices and trade behavior. For a summary of these results, see Nelson and Plosser, 1982.
If a series is unit-root non-stationary, then the estimate of b1 will be statistically indistinguishable from the value 1 in the model6

Yt = b0 + b1Yt-1 + et 

By definition, a true unit root series can be made stationary by differencing. In some cases, multiple differences may be required. This technique has been well explored in the literature on integration and cointegration (see Dolado, Jenkinson and Sosvilla-Rivero, 1990, for an excellent educational resource and survey).
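As a quick numerical illustration (simulated data, not the consumption series), the Python sketch below generates a random walk, regresses the series on its own lag, and then repeats the regression on the first difference.

import numpy as np

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(0.0, 1.0, 500))    # random walk: a unit-root series

# Regressing y_t on y_(t-1) gives a coefficient close to 1 ...
b1_level = np.polyfit(y[:-1], y[1:], 1)[0]

# ... while the first difference shows no such persistence.
dy = np.diff(y)
b1_diff = np.polyfit(dy[:-1], dy[1:], 1)[0]

print(round(b1_level, 3))    # near 1: shocks persist
print(round(b1_diff, 3))     # near 0: differencing removes the unit root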
Next, we can examine the effect of first differences on the stationarity of the marginal propensity to consume. Forming first differences in a spreadsheet involves a simple algebra command. The resulting columns are then used to form estimates of the LRM. With the consumption model, we find:

1987-1995: ΔC = 60.370 + 0.7237 ΔY,   R2 = .626
           (25.430)    (.096)

Notice the change in value of the slope term (which has changed its meaning in theory, but still captures the relationship between consumption and income) and in R2. Since this method attempts to eliminate spurious regression, we would expect to see this decrease in R2 (Spanos, 1986).
We next plot the ROLS and WOLS estimates of the slope parameter in the differenced model. As we can see in Figure 5, stationarity has been improved through differencing. We see relatively stable estimates of the relationship between consumption and income, regardless of the sample length chosen. We see a decreasing variance in ROLS indicating that additional observations improve the reliability of our estimates of the slope term. In WOLS, we find the variance remains relatively constant supporting the idea that the variance also is time-invariant. Looking at the level of integration is an important technique in construction of macroeconomic empirical models and may greatly enhance stationarity properties.
Fig. 5a
Fig. 5b

Conclusion

Stationarity of economic time series is a vital assumption in both theoretical and empirical terms. Without stationarity assumptions imposed on the first two moments of the joint distribution, we cannot justify the value of our OLS estimates. Our ordinary least squares estimates will only represent an average of various different parameter values.
This paper has demonstrated a useful approach for assessing the stationarity of an economic process and for determining if and when structural change has occurred. Dividing a sample into various sections may solve the problem of non-stationarity if the changes in the parameters result from structural change. If we see distinct changes in the parameter values, we may find we have abandoned one model for an alternative. In this case, to sample from one population only, we need to reduce the length of the sample period. However, if observed non-stationarity results from a unit root relationship, then shortening the sample period will not remove the heterogeneity. In such a case, any shock to a series will move us away from a constant mean and therefore from the assumption of stationarity. Differencing and an exploration of cointegrating vectors are necessary steps in dealing with unit roots.
Using Excel graphics enables students to see what happens to parameter estimates when a data series is non-stationary. Through exploration of data series, students can see improvements in models reconstructed to account for unit roots or structural breaks.

Notes

1. Other CHEER articles have shown spreadsheet applications useful for microeconomics (Paetow, 1994), and even specialized areas including mathematical economics (MacDonald, 1995; Houston, 1997), game theory (Rosser, 1995) and environmental economics (Whitmarsh, 1991).
2. Similar reasoning applies to estimation by Method of Moments and Maximum Likelihood. Since we substitute the sample moments for population moments in Method of Moments estimation, stationarity is necessary. In Maximum Likelihood Estimation, we search for values of the parameters of the conditional distribution that will lead to the highest likelihood value. Since the likelihood function is based upon the assumed distribution, we cannot trust our estimates when the characteristics of our data change greatly throughout the sample.
3. For ease of implementation in Excel, we limit ourselves to a model with a single regressor. While it is possible to complete estimation in Excel for a case with numerous regressors (for example, using the Analysis ToolPak), this is not necessary to display both the use of statistics in macroeconomic analysis and the test for stationarity. Also, the necessary add-in may or may not be available on the systems used by students. There are a large number of alternative models of consumption (including dynamic models with many lags, error correction models, etc.) which could be examined instead. The estimation methods for OLS, ROLS and WOLS remain very similar. However, easy spreadsheet implementation of a multivariate LRM would rely on the use of statistical functions. I often incorporate multivariate estimation of ROLS and WOLS in my econometrics course using a matrix language like Gauss or Matlab.
4. While there is no specific rule for how large the initial sample needs to be, this size enabled us to construct 16 different estimates.
5. While this graphical analysis is a useful diagnostic tool, to reject or fail to reject a hypothesis we would need a specific test. Individuals may interpret graphs differently, and therefore we need to have some standard by which we can say that this series is “stationary enough” to call the same. Such tests have been explored in the literature on Structural change and Unit roots.
6. Since the presence of a unit root under the null will change the distribution of our estimator b1, we cannot utilize standard tests. Instead, we look to the bootstrapped estimates of the Dickey-Fuller test (Fuller, 1976).
The author may be contacted at

Lisa A. Wilder,
Department of Economics,
Bowling Green State University,
Bowling Green
OH 43403
419-372-8397 (office);
419-353-1730 (home office);

References

Dolado, J.J., T. Jenkinson and S. Sosvilla-Rivero. 1990. “Cointegration and Unit Roots” Journal of Economic Surveys 4: 249-273.
Fuller, W. 1976. Introduction to Statistical Time Series. Wiley: New York.
Goddard, J.A., P.J. Romilly and M. Tavakoli. 1995. “Macroeconomics and Information Technology Applications” Computers in Higher Education Economics Review. v.9, no. 1.
Gujarati, D. 1995. Basic Econometrics. New York: McGraw-Hill Inc.
Hall, R. 1989. “Consumption” in Modern Business Cycle Theory, R. Barro ed. Cambridge MA: Harvard University Press.
Hall, R. 1978. “Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence” Journal of Political Economy 10(2): 972-987.
Hamilton, J. 1994. Time Series Analysis. Princeton NJ: Princeton University Press.
Judge, G. 1990. Quantitative Analysis for Economics and Business Using Lotus 1-2-3. Hemel-Hempstead: Harvester-Wheatsheaf.
Kydland, F. and E. Prescott. 1980. “A Competitive Theory of Fluctuations and the Feasibility and Desirability of Stabilization Policy” in Rational Expectations and Economic Policy, S. Fisher ed. Chicago: University of Chicago Press.
Lardaro, L. 1993. Applied Econometrics. New York: Harper Collins Publishers.
MacDonald, Z. 1995. “Teaching Linear Programming using Microsoft Excel Solver” Computers in Higher Education Economics Review 9 (3).
Nelson, C.R. and C.I. Plosser. 1982. "Trends and Random Walks in Macroeconomic Time Series" Journal of Monetary Economics 10: 139-162.
Paetow, H. 1998. “Long Run Dynamic Equilibrium Simulation through the Use of Spreadsheets” Computers in Higher Education Economics Review 12 (1).
Paetow, H. 1994. “Illustrating Microeconomic Theory by using Spreadsheets” Computers in Higher Education Economics Review 22.
Ramanathan, R. 1998. Introductory Econometrics with Applications. Fort Worth, TX: The Dryden Press.
Rosser, M. 1995. “Modelling Game Theory with Spreadsheets” Computers in Higher Education Economics Review 9 (2).
Spanos, A. 1994. An Introduction to Modern Econometrics. Cambridge: Cambridge University Press.
Spanos, A. 1986. Statistical Foundations of Econometric Modelling. Cambridge: Cambridge University Press.
United States Department of Labor, Bureau of Labor Statistics, 1998. Consumption and Disposable Personal Income, 1959-1994.

Whitmarsh, D. 1991. “A Spreadsheet Model of Renewable Resource Exploitation” Computers in Higher Education Economics Review 13.

Tuesday, July 9, 2013

Calculating real interest rates assuming an inflationary situation


Calculating Real Rates of Interest Assuming an Inflationary Situation:

An Excel Application

Ray Thomas
The Open University

Introduction

Actual or nominal rates of interest are sometimes contrasted with real interest rates which take into account inflation. Estimates of real rates of interest have usually been made in the context of the construction of models which explain interactions in money markets in terms of expectations about future interest rates. This note describes a method of measuring real interest rates empirically and historically.

The method

The method was developed to help measure the impact of inflation on new town development corporations. The corporations borrowed money for a fixed period of fifty years at fixed rates of interest. Rates of interest varied from 3% in 1947 at the beginning of the programme up to 18% charged in the mid-1970s. Typically the annual borrowings of an individual corporation increased every year for a period of five to ten years as construction got under way. Then, after a peak, there were continued annual borrowings for up to thirty or more years at a lower level as the new town continued to grow. The period of new town construction in Britain was one of continued inflation. After the first decade of the life of the corporation most of the advances on which the corporation was paying interest were made at a time when the price level was substantially lower. The real rate of interest, taking into account inflation, is lower than the rate of interest actually paid.
The real rate of interest does not take into account the full impact of inflation on the finances of the development corporations, but is limited to indicating the impact of inflation on the General Revenue or current account. Inflation also had an impact on capital account as expressed in the Balance Sheet. This note does not attempt any estimation of the influence of inflation on capital account because of the practical and conceptual problems involved, but does later discuss the influence of inflation on the relationship between the pictures given by the current and capital accounts.
The calculation of the real rate for any particular year is based upon all advances made earlier than that year, the rate of interest which was fixed at the time each advance was made, and the inflation which has taken place since the advance was made:
Let Ln= advances made in year n, and Rn= the rate of interest charged for advances made in year n (which are fixed for the life of the loan). Then It, the nominal interest paid in year t for a series of advances beginning in year 1, is given by:


     It = Σ Ln·Rn                         (summing from n=1 to n=t).

Then ARt, the average nominal rate of interest paid in year t, for a series of advances made in year 1 or in later years, is given by:


 ARt = Σ Ln·Rn / Σ Ln         (each sum from n=1 to n=t)

Let Pn= the price level in year n. Then the basic assumption in the estimation of real interest rates is that RRtm, the real interest rate paid in year t for an advance given in year m, is given by:

  RRtm = Rm*Pm/Pt  
This assumption defines the real rate of interest as something measured historically and not something which can be measured at the time an advance is made. The definition implies that the real rate of interest continues to fall for as long as inflation continues (and that the real rate would increase if there were a fall in the general price level). With hyperinflation or inflation over a very long period of time the real rate may approach zero, but it cannot be negative as it can be in some models which have used the concept of real rates of interest in association with expectations about future levels of inflation.
It follows from this definition that ARRt, the average real rate of interest paid in year t for a series of advances, is given by:


 ARRt = (Σ Ln·Rn·Pn/Pt) / Σ Ln       (each sum from n=1 to n=t)

The method of calculation of real interest rates actually used is slightly different. The accounts of the development corporations give total interest payments made each year, ie It = Σ Ln·Rn (from n=1 to n=t). These figures were used as the preferred source for interest payments because the annual reports give no details of the method of calculation behind the "average" rate of interest paid which is also included in most of the annual reports.
Another factor is that the accounts of the development corporations include figures for the sum of advances, ie Σ Ln, as the sum at the end of the financial year. In the formulae given above it has been implicit that the sum of advances refers in some way to the year as a whole. In the calculations made the sum of advances has been measured as the mean of the cumulative advances at the beginning and end of the year. The average nominal rate of interest in year t can then be written as:


   ARt = 200·It/(ΣLt + ΣLt-1)       (where ΣLt is the sum of advances at the end of year t)

The method used for the calculation of the real rates of interest was designed to take advantage of the relative addressing facilities of a spreadsheet. RI2, the real level of interest payments made in year 2, for example, was calculated as the sum of interest paid for advances made in year 2 plus the interest paid in year 1 deflated by the increase in the price level in year 2 relative to year 1:


              RI2 = I2 - I1(1-P1/P2)

Generalising this, and using the relative addressing facilities of the spreadsheet, the level of real interest for all years was calculated as the sum of interest paid for advances made in the current year (measured in terms of It-It-1) plus the real interest paid in the previous year (ie RIt-1) deflated by the increase in the price level since the previous year:


   RIt = It - It-1 + RIt-1*Pt-1/Pt   

The Retail Price Index was used as a measure of the price level. The calculations were made using the Excel 5 spreadsheet. Columns A to D gave the input for the calculations. Column A was used for the financial years. Column B for advances made (Ln), column C for the interest payments made (In), and column D for Retail Price Index expressed with a common base year.
Columns E and F were used for calculation of the nominal and real rates of interest. The expression copied for the average nominal rates given in column E was ARt = 200*Ct/(Bt+Bt-1), and the figures obtained by this method approximated to the figures given in many of the annual accounts for the average interest rate paid. Column F gives the average real rate of interest. The cell for year 1 for column F was entered manually as ARR1=F1=C1. The expression for the average real rate of interest copied for years 2 to t is:


 ARRt = 200*(Ct - Ct-1 + Ft-1*Dt-1/Dt)/(Bt+Bt-1)
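Read literally, the recursion above can be reproduced outside the spreadsheet. The Python sketch below uses entirely hypothetical advances, interest payments and price-index values, and follows one reading of the method: real interest is accumulated with the recursion RIt = It - It-1 + RIt-1*Pt-1/Pt, and both rates are expressed against the mean of the opening and closing sums of advances.

# Hypothetical inputs, one entry per financial year (not taken from the note)
advances = [100.0, 120.0, 150.0, 80.0, 60.0]     # new advances each year (column B)
interest = [3.0, 9.0, 21.0, 29.0, 36.2]          # total interest paid each year (column C)
rpi      = [100.0, 108.0, 118.0, 130.0, 145.0]   # Retail Price Index on a common base (column D)

cum = []                                         # cumulative advances at the end of each year
running = 0.0
for a in advances:
    running += a
    cum.append(running)

real_interest = [interest[0]]                    # year 1: real interest equals interest paid
for t in range(1, len(interest)):
    real_interest.append(interest[t] - interest[t - 1]
                         + real_interest[t - 1] * rpi[t - 1] / rpi[t])

for t in range(1, len(interest)):
    denom = cum[t] + cum[t - 1]                  # twice the mean of opening and closing advances
    ar = 200.0 * interest[t] / denom             # average nominal rate (column E)
    arr = 200.0 * real_interest[t] / denom       # average real rate (column F), on this reading
    print(t + 1, round(ar, 2), round(arr, 2))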

Results for Harlow and Milton Keynes

Some practical results using this calculation are illustrated in the Chart which shows the rate of inflation as measured by year to year changes in the Retail Price Index and both real and nominal rates of interest paid by Harlow and Milton Keynes Development Corporations. Milton Keynes was designated twenty years later than Harlow and faced a very different economic environment in terms of higher levels of both interest rates and inflation. The estimation of real rates of interest makes it possible to make direct comparisons between the impact of interest payments on the two corporations.
Chart: Inflation, Nominal and Real Rates of Interest
At the end of its main nineteen year development period in 1986 Milton Keynes Development Corporation was paying an average rate of interest of 12.5%. But the real rate is estimated at 8.5%. The difference of 4% is a measure of the extent to which inflation reduced the real impact of Milton Keynes interest payments. At the end of nineteen years of Harlow's development in 1966 the Corporation was paying an average rate of interest of 4.9%. But the real rate was 3.9%. Harlow at that stage benefited from inflation by only 1% - a quarter of the reduction enjoyed by Milton Keynes.
Milton Keynes could be said to have gained more from inflation than Harlow, but this gain did relatively little to reduce the impact of the higher interest rates which Milton Keynes had to pay. The nominal interest rate of 12.5% paid by Milton Keynes at the end of its development stage in 1986 was a little more than two and a half times the 4.9% paid by Harlow in its nineteenth year. But the difference in real terms was not very much smaller. The real rate of interest of 8.5% paid by Milton Keynes was a little more than twice the 3.9% paid by Harlow.
The main difference in the impact of inflation on the financial situation of the two development corporations lies in the length of the development periods. Harlow Development Corporation had a longer life than Milton Keynes and continued to benefit from inflation for another fifteen years. Harlow had time to demonstrate profitability, and by the 1970s had begun to accumulate substantial financial surpluses. After 1974 Harlow was able to finance capital expenditures from its own resources and did not need to borrow money. The effect of rapid inflation in the 1970s was to reduce the real rate of interest paid by Harlow Development Corporation to less than 2% by 1976.
The reasons why both the real and nominal rates of interest paid by Harlow increased after 1976 provide an interesting vignette of the financial relations between central government and the development corporations. By the 1970s Harlow Development Corporation was able not only to finance its own investment but also to lend money. In the early 70s Harlow was paying 6% on the money it had borrowed from the Treasury but was earning something in the range of 10-15% in interest on the money it had on loan to other bodies. Harlow Development Corporation actually earned 2.2 million in interest on its loans in 1976. It appears that the Treasury was not happy to see Harlow Development Corporation develop as a semi-autonomous lending institution, and the government appropriated a 9m surplus from the Corporation in 1976. The nominal and real rates of interest paid by Harlow Development Corporation rose after 1976 because the Corporation was obliged to borrow more money at the then prevailing interest rates of 10% or much more.
At the end of its life in 1980 the real rate of interest paid by Harlow Development Corporation was a little over 3% as compared with the nominal rate of a little over 7%. By the end of the life of Harlow Development Corporation, inflation had reduced the real interest burden by 4%.
The influence of inflation, through interest payments, on the financial performance of the corporations is indirect rather than direct. Inflation does not influence the amount of interest actually paid. The concept of real interest rates as defined here implicitly assumes that money borrowed is invested in assets whose value increases with inflation. Assets belong to the capital account and are recorded in the corporations' Balance Sheets at historic cost. But rent income derived from these assets is classified to the General Revenue Account where it is set against interest payments. The rate of return on assets is a measure of financial performance which is independent of interest rates paid by the corporation but is expressed in the same units as the rate of interest, and is a measure which will increase with inflation if the development corporation increases its levels of rents in accordance with the general level of prices.
The rate of return on the assets transferred to the Commission for New Towns by Harlow Development Corporation in 1980 at the end of its life was 12% - as compared with the nominal rate of interest paid of 7%. But the real rate of interest was only 3%. It seems reasonable to suggest that 4% of this 12% should be regarded as gain solely attributable to inflation. The other 8% is an indicator of the rate of return at that stage on Harlow's planning activities. The qualification "at that stage" is appropriate because it can be expected that rent income will usually lag inflation. The crucial component of the development corporations' income comes from industrial and commercial property let on leases which may be subject to review only every five years or longer.
At the end of its development stage in 1986 the rate of return on capital expenditure by Milton Keynes Development Corporation was only 2%. The difference of 4% in that year between the nominal and real rates of interest paid suggests that the rate of return of 2% can be wholly attributed to inflation, and that at that stage of development the benefits from the creation of new urban values in Milton Keynes had not begun to manifest themselves in levels of rent. In the case of Milton Keynes the value of the assets created became manifest only when they were revalued or sold in 1987 or later years.

Concluding remarks

The estimation of real interest rates seems particularly useful in the context of new town development because of the long period over which expenditures are made and over which financial performance might be evaluated. But the method of measurement described could be applied to investigate the influence of inflation on any organisation which finances its investment from fixed interest loans. The method of calculation is of interest in demonstrating the extraordinary usefulness of the relative addressing facility of the spreadsheet.
CHEER 1996

Sunday, July 7, 2013

Whigham and Whyte (1999): Explaining Input-Output and Equilibrium Relationships Using Excel Display Facilities

Volume 13, Issue 1, 1999

Explaining Input-Output and Equilibrium Relationships Using Excel Display Facilities



David Whigham and Jeanie Whyte
Glasgow Caledonian University

Introduction

The background to this work was the need to provide an efficient visual representation of inter- and intra-industry economic flows for a research student who was to be employed in a funded project to develop a set of National Accounts for Scotland - the Caledonian Blue Book 1997. Since one of the eventual objectives of the project would involve the generation and use of input-output tables, it was decided to develop a simple model of an economic system that would thereby display the relationships more overtly than would conventional matrix methods. It was intended that this visual model would then lay the foundations for a more intuitive understanding of the matrix procedures that would inevitably have to be employed in more realistic applications1. Thereafter, it became clear that the models developed could be of considerable use for teaching purposes.
In short, the generated models provide an introductory visual guide to the intricacies of input - output analysis and allow experimentation and investigation of the models’ respective properties to be carried out by the user in an overtly interactive way. Full versions of the models are obtainable free of charge by contacting the authors.

The simple model

In the first model, the system of economic relations is very simple. Two primary products, Coal (C) and Iron (I), are used to produce a final consumption good, Steel (S), that provides the means of survival for all employed economic agents. The latter include Coal, Iron and Steel workers (CW, IW and SW) who require to be provided with a proportion of the final output of Steel (S to CW, S to IW and S to SW) in order to supply their labour. It is also assumed that the production of Coal and Iron requires Steel (S to C and S to I) and that production of Steel itself also requires some Steel (S to S).
The ‘hub’ of the system, however, is the total initial demand for Steel that is exercised by the owners of the property from which Coal and Iron are derived, i.e. Coal mine owners (CMO) and Iron mine owners (IMO). These rental demands (S to CMO and S to IMO) must be satisfied if any economic activity is to take place.
The structure of this basic model is contained in Model 1 and shown in Figure 1.


Figure 1
It is suggested that users should employ the model in the following way.
The output of Steel required by the two groups of owners (20 in the illustration) should be entered as a starting value to the cell labelled S. Once this has been done the implied resource requirements of the system are transmitted as shown. Given the parameter values in the un-shaded cells it is now an easy matter to trace through the system and observe the effects of any initial value for the amount of Steel required by the owners. Thus, since each unit of Steel requires 2 units of Coal (c/s), 20 units gives a required Coal output of 40. Each unit of Coal, however, requires 0.05 Coal workers (cw/c) so a total of 2 Coal workers are needed. In turn, each of these Coal workers needs 0.1 units of Steel (s/cw) for final consumption purposes and so an additional demand for 0.2 units of Steel is generated as indicated in the top left hand shaded cell in the diagram.
However, since the ultimate task is to determine the total amount of Steel that must be produced, the cell labelled S NEEDED has been used to contain the sum of all the requirements that have been generated by the initial attempt to supply the 20 units demanded by the mine owners. This is obtained from:
S to CMO + S to IMO + S to CW + S to IW + S to SW + S to C + S to I + S to S
As is clear from the illustration the total system requirements (36) exceed the 20 units produced, and so the system is not in balance.
These real flows have been translated into corresponding financial flows in the lower left hand section of Figure 1, where the price of Steel (PS) has been set at an arbitrary value.
Formulae were written to compute the rents, wages and prices required to supply each group of agents with exactly sufficient income to purchase the amounts of Steel that they require. For example, in order for the Iron mine owners to purchase their required 10 units of Steel at a steel price (PS) = 10, the payment to the owners (RIMO) must equal 100 currency units. Thus, the appropriate entry would be PS times (S to IMO). Similar logic applies to the rents, wages and prices obtained by the other groups - except for the price of Steel. This is because the system has one degree of freedom, and so the price of Steel in this model is an arbitrary numeraire (set to a value of 10 in the illustration). This is easily confirmed by allowing the price of Steel to change, whereupon all the other prices, wages and rents will adjust proportionally after calculation.
Clearly since the real flows of the system are not in balance, neither are the financial flows and this is indicated by the fact that the real deficit of Steel (-16 units) has been translated to a financial deficit for Steel producers of -160 (i.e. -16 times the (arbitrary) price of Steel).
Finally, the right hand section of Figure 1 shows the complete set of disaggregated financial flows associated with the current disequilibrium situation, where any individual entry is the financial expenditure paid by the row element to the associated column element. Thus, producers of Coal spend 80 currency units on Steel, 2 currency units on Coal workers and 100 currency units on rent to Coal mine owners.
This model can now be used to solve for the equilibrium amount of Steel to be produced.
This is easily done by replacing the value of 20 that is currently being used for the cell labelled S with the formula: =SREQU, where the cell, B23, containing the total required amount of Steel (currently 36 units) has been named as SREQU. Entering this formula ensures that the output of Steel must equal the sum of all demands for Steel generated by the initial amount of Steel that has to be produced.
When this entry is made, with the default settings, Excel will usually report that it cannot resolve the circular references, and a value of zero will be returned. This clearly shows the circular nature of the flows that have been generated. By this it is meant that since production of Steel requires some Steel itself, the eventual output of Steel including the amount for its own needs cannot be computed until the amount of Steel required is itself known.
However, it is an easy matter to force Excel to resolve this circular argument. To do this, select: Tools, Options, Calculation and then check the iteration box. When OK is clicked, the model will iterate a default 100 times and will eventually settle down at the equilibrium value of S. The flashing values that appear in all calculated cells after Calc (F9) has been pressed give a dynamic demonstration of the iterative procedure that is taking place.
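The iteration Excel performs here is an ordinary fixed-point calculation, and it can be sketched outside the spreadsheet. In the Python sketch below only the Coal coefficients (c/s = 2, cw/c = 0.05, s/cw = 0.1) and the owners' demand of 20 units are taken from the text; every other coefficient is a placeholder, so the numbers do not reproduce those in Model 1.

# Hypothetical technical coefficients; only the first line comes from the worked example in the text.
c_per_s, cw_per_c, s_per_cw = 2.0, 0.05, 0.1    # Coal per Steel, workers per Coal, Steel per Coal worker
i_per_s, iw_per_i, s_per_iw = 1.5, 0.05, 0.1    # Iron chain (placeholder values)
sw_per_s, s_per_sw = 0.04, 0.1                  # Steel workers per Steel, Steel per Steel worker
s_per_c, s_per_i, s_per_s = 0.05, 0.05, 0.1     # Steel used to produce Coal, Iron and Steel

s_to_owners = 10.0 + 10.0                       # S to CMO + S to IMO, as in the illustration

def steel_needed(s):
    # Mirrors the S NEEDED cell: all demands generated by producing s units of Steel
    return (s_to_owners
            + s_per_cw * cw_per_c * c_per_s * s   # Steel consumed by Coal workers
            + s_per_iw * iw_per_i * i_per_s * s   # Steel consumed by Iron workers
            + s_per_sw * sw_per_s * s             # Steel consumed by Steel workers
            + s_per_c * c_per_s * s               # Steel used up in Coal production
            + s_per_i * i_per_s * s               # Steel used up in Iron production
            + s_per_s * s)                        # Steel used up in Steel production

s = 20.0                                        # starting value entered in the cell labelled S
for _ in range(100):                            # Excel's default of 100 iterations
    s = steel_needed(s)                         # the circular reference, resolved by iteration
print(round(s, 4))                              # settles at the equilibrium output of Steel

When the Steel requirements generated per unit of Steel sum to one or more, the loop diverges instead of settling, which corresponds to the 'broken' model described in suggestion (e) below.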
The eventual equilibrium value for S with the given parameters is contained in Model 2 and shown in Figure 2, where it is also clear that the financial flows for each of the groups are now in balance. Clearly the model has not only determined the required output of Steel, and the associated outputs and employment of the primary inputs and workers, but also the prices, wages and rents necessary to allow the required amounts of Steel to be purchased. The value added at each stage of the production process has also been added to the generated accounts.

Figure 2

Investigating Change

At this stage it is suggested that time is spent investigating the effects of changes in the parameter values of the system. This is because a number of important implications can easily be derived. The following are some suggestions.
(a) Increase the amount of Coal needed for each unit of Steel (c/s) from 2 to 2.5. The amount of Steel produced will increase but the absolute prices of both Coal and Iron will decline. This, however, happens in such a way that the price of Coal relative to the price of Iron increases.
Now restore the amount of Coal needed for each unit of Steel to its original value of 2.
(b) Increase the number of Coal workers required per unit of Coal (cw/c) from 0.05 to 0.1. The amount of Steel produced increases, the absolute price of Coal increases and the absolute price of Iron decreases. Consequently, the price of Coal relative to Iron must increase. It should also be noted that the wages of Coal workers are unchanged although the income that they receive increases by dint of the need to employ more of them.
Now restore the number of Coal workers needed for each unit of Coal to its original value of 0.05.
(c) Suppose that the amount of Steel required by each Coal worker (s/cw) increases from 0.1 to 0.15. The amount of Steel required will increase, the absolute price of Coal will increase and the absolute price of Iron will decrease. Consequently, the price of Coal relative to the price of Iron will increase. Importantly, however, not only are there more Coal workers employed, but their wage rate has also increased, while the wage rates of the other groups of workers remain unchanged.
Now restore the amount of Steel required by each Coal worker to its original value of 0.1.
(d) Suppose that the amount of Steel required by the owners of Coal (S to CMO) falls from 10 to 5, while the amount of Steel required by the owners of Iron (S to IMO) increases from 10 to 15. Now, although the total amount of Steel required by both groups of owners has not changed, the relative distribution of this total has moved in favour of the owners of Iron.
As would be expected, the total amount of Steel produced by the system remains unchanged in such circumstances, but this is not true for the prices of Coal and Iron, nor for the rents paid to the owners. To be exact, the absolute price of Coal decreases and the absolute price of Iron increases, with the result that the price of Coal relative to the price of Iron decreases.
Also, the rents payable to the owners of Coal and Iron decrease and increase respectively. These price and rent changes are entirely understandable when it is noted that by requiring less Steel the owners of Coal reduce the costs that the producers of Coal must endure, thereby allowing an absolute reduction in the price of Coal.
Now restore the amounts of Steel required by the owners of Coal and Iron to their original values of 10 and 10 respectively.
(e) It is an easy matter to ‘break’ the model in the sense that the generated demands of the system are impossible to satisfy. For example, increase the amount of Coal required by each unit of Steel (c/s) from 2 to 3, and it will be found that the iteration process cannot find a finite solution. This is because the necessary amount of Steel now required to supply the Coal industry cannot be generated by the Steel industry at its current level of technical efficiency. A sketch of this convergence condition is given after this list.
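The breakdown in suggestion (e) has a simple interpretation: the iteration converges only while the Steel generated per unit of Steel by the rest of the system remains below one. The Python sketch below uses the same hypothetical stand-in coefficients as the earlier sketch to show how raising c/s eventually pushes this induced requirement to one and beyond, at which point no finite output can satisfy the demands.

STEEL_PER_COAL = 0.3     # hypothetical, as in the earlier sketch
OTHER_INDUCED = 0.2      # hypothetical, as in the earlier sketch
FINAL_DEMAND = 20.0      # Steel required by the two groups of owners

def equilibrium_steel(coal_per_steel):
    induced = coal_per_steel * STEEL_PER_COAL + OTHER_INDUCED
    if induced >= 1:
        return None      # the demands can never be satisfied: no finite solution
    return FINAL_DEMAND / (1 - induced)   # the value the iteration converges to

for c_per_s in (2.0, 2.5, 3.0):
    print(c_per_s, equilibrium_steel(c_per_s))

The price, wage and rent effects noted in suggestions (a) to (d) are, of course, outside the scope of this purely quantity-side sketch.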

Extending the Model

It is, of course, the case that a realistic model would require the inclusion of many more primary, intermediate and final products, and it is simply not practical to adopt the current approach in such circumstances - a conventional matrix approach would be required.
Nevertheless, it should be emphasised that the purpose of the outlined model is to provide a firm, easily observed platform from which realistic extensions could be considered by matrix techniques.
This said, there are further extensions of the current model that can prove instructive without making the models too cumbersome.

Model Extension 1

The first of these is to suppose that Coal, Iron and Steel each require distribution to the ultimate users, i.e. Coal distribution (CD), Iron distribution (ID), and Steel distribution (SD). Also assume that this will require Steel itself (S to CD, S to ID and S to SD), as well as Coal, Iron and Steel distribution workers (CDW, IDW and SDW) who in turn require Steel as payment for their services (S to CDW, S to IDW and S to SDW).
This extension of the model is contained in Model 3 and shown in Figure 3, where it is assumed that the end user pays for the distribution of the inputs and the final products.


Figure 3
Once again the generated flows are easily observed and the required equilibrium prices and wage rates now include the prices of distributing Coal, Iron and Steel as well as the wage rates that are to be paid for the services of Coal, Iron and Steel distribution workers. The value added at each stage is also computed.
As with the simple model, the extended version is easily subjected to investigation of the effects of parameter variations, although in this case, because of the more complex nature of the circular relations generated, it would be necessary to change the Excel calculation mode from automatic to manual.
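To see how the distribution layer enlarges the circular demand for Steel, the following sketch adds a distribution requirement to the earlier Python example. The distribution coefficient is hypothetical and the Iron chain is omitted for brevity, so the numbers illustrate only the mechanism rather than Model 3 itself.

COAL_PER_STEEL = 2.0
STEEL_PER_COAL = 0.3          # hypothetical, as before
OTHER_INDUCED = 0.2           # hypothetical, as before
DIST_STEEL_PER_UNIT = 0.02    # hypothetical: Steel absorbed, by distributors and their
                              # workers, in delivering one unit of Coal or Steel
FINAL_DEMAND = 20.0

def required_steel(s):
    coal = COAL_PER_STEEL * s
    production = STEEL_PER_COAL * coal + OTHER_INDUCED * s
    distribution = DIST_STEEL_PER_UNIT * (coal + s)   # Iron omitted for brevity
    return production + distribution + FINAL_DEMAND

s = 20.0
for _ in range(500):          # iterate until the value settles
    s = required_steel(s)
print(round(s, 1))            # equilibrium output is higher than without distribution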

Model Extension 2

The second extension of the model is to introduce a third essential input (Chemicals - N) to the Steel production process. This is done in such a way, however, that Chemicals can only be produced by a second country, B, which cannot produce Iron. Thus country A produces Coal, Iron and Steel and imports Chemicals from country B, while country B produces Coal, Chemicals and Steel and must import its Iron requirements from country A.
To investigate the properties of this two-country model, it is assumed, in the first instance, that the conditions of production are ‘symmetrical’ for both Iron and Chemicals. In other words, each unit of Steel production in A uses the same amount of Chemicals as each unit of Steel production in B needs of Iron, and vice versa.
These symmetry assumptions ensure that the economies of the two countries are effectively mirror images of each other, with the major effect being that Iron production in A must now not only satisfy the Iron requirements of its own Steel industry, but also those of the Steel industry in country B. Conversely, the Chemicals industry in B must now satisfy the requirements of country A's Steel industry, as well as those of its own.
When these additional circular relations, together with the initial set of assumed productive relations, are incorporated into the model, the system will settle down to an equilibrium in which the production of Steel is the same in each country. Furthermore, as Model 4 can be used to demonstrate, the numeraires are seen to be equal in this special case.
However, if the amount of Chemicals required per unit of Steel in country A were to exceed the amount of Iron required per unit of Steel in country B, then a reallocation would be required. This is easily confirmed by making the appropriate change to Model 4, whereupon it will be found that Steel production in B must increase relative to Steel production in A. For example, with each unit of Steel in A requiring 0.2 units of Chemicals, and each unit of Steel in B requiring only 0.1 units of Iron, the equilibrium will require that B produces more Steel than A.
More importantly, however, it will also be found that with equal numeraires in each country, the financial balances will no longer all be zero. To be exact, country B will now run a financial surplus while country A will run a financial deficit.
This stems entirely from the fact that, even with the same numeraire, the internal prices of Iron in A and Chemicals in B are no longer equal (as a result of each unit of Steel production in A now requiring more Chemicals from B than Steel production in B requires from A's Iron industry). The net effect is clearly to generate a trade imbalance between the two countries as long as the numeraires remain identical.
However, it is an easy matter to get the model to recompute one or other of the numeraires so that the trade imbalance is eliminated. This can be done by employing the Solver to find the value of the numeraire in one of the countries that eliminates the surpluses and deficits from the financial balances of both partners. The Solver settings are included in Model 4 and need never be changed.
When this is done, it should be clear that what has really been computed is the exchange rate between the numeraires of the two countries. Thus, for example, if country A's numeraire is set at 1 and each unit of its Steel output still requires more Chemicals than each unit of Steel in B requires of Iron, the numeraire in country B will be computed at a value of less than 1 - implying a devaluation of country A's currency vis-à-vis country B, and having the effect of removing the trade imbalance. When used with the values suggested above (Chemicals per unit of Steel in A = 0.2; Iron per unit of Steel in B = 0.1), the effects are calculated by Model 4 and indicated in Figure 4.

Figure 4. Model Extension 2
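The Solver step can be mimicked outside Excel by a simple one-dimensional search: scale one country's numeraire until the trade between the two countries balances. In the Python sketch below the traded quantities and internal prices are hypothetical and, unlike in Model 4, they are held fixed while the search runs, so it illustrates only the nature of the calculation.

IRON_EXPORTED_BY_A = 12.0        # hypothetical physical flow from A to B
CHEMICALS_EXPORTED_BY_B = 20.0   # hypothetical physical flow from B to A
PRICE_OF_IRON_IN_A = 8.0         # hypothetical internal price, in A's currency
PRICE_OF_CHEMICALS_IN_B = 8.0    # hypothetical internal price, in B's currency

def balance_of_a(scale_b):
    """Country A's trade surplus, in A's currency, when B's numeraire is scaled by scale_b."""
    exports = PRICE_OF_IRON_IN_A * IRON_EXPORTED_BY_A
    imports = scale_b * PRICE_OF_CHEMICALS_IN_B * CHEMICALS_EXPORTED_BY_B
    return exports - imports

# A simple bisection search standing in for Excel's Solver.
low, high = 0.0, 2.0
for _ in range(60):
    mid = (low + high) / 2
    if balance_of_a(mid) > 0:
        low = mid                 # A still in surplus: B's numeraire can be higher
    else:
        high = mid                # A in deficit: B's numeraire must be lower
print(round((low + high) / 2, 4)) # with these hypothetical figures, about 0.6 - below 1

In the full model the internal prices themselves respond to the numeraires, which is why the Solver is needed rather than a single closed-form expression.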

Model Extension 3

The final extension of interest is to assume that there is a work force of given size. Suppose, for example, that the equilibrium requirement for all types of workers is less than the available work force. Also suppose that the state decides to give each of these unemployed workers ub units of Steel, where ub is normally less than the amount paid to employed workers. These benefits to the unemployed are financed by a uniform rate of taxation on the output of the Steel industry.
The effects of such a regime are contained in Model 5 and shown in Figure 5.

Figure 5. Model Extension 3
This model can be used to show that if ub = 0, then (with the given parameters) the equilibrium production level of Steel is 100 units, and 10 workers out of the work force of 45 are unemployed. If, however, ub is increased to 0.05 units of Steel then the need to supply this extra Steel causes the number of unemployed workers to decline to 9.2, and the output of Steel to increase to 102.3 units.
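The quantity side of this result can be reproduced with a small Python sketch. The coefficients below are stand-ins, consistent with the earlier sketches and chosen so that the output matches the figures quoted above; they are not necessarily the exact Model 5 parameters, and the tax that finances the benefit, which works through prices, is not modelled here.

WORK_FORCE = 45.0
INDUCED_STEEL = 0.8        # hypothetical: Steel needed per unit of Steel by the rest of the system
FINAL_DEMAND = 20.0        # Steel required by the owners
WORKERS_PER_STEEL = 0.35   # hypothetical: workers of all types employed per unit of Steel

def equilibrium(ub):
    """Iterate, as Excel would, to the Steel output and unemployment implied by benefit ub."""
    s = 100.0
    for _ in range(500):
        unemployed = WORK_FORCE - WORKERS_PER_STEEL * s
        s = INDUCED_STEEL * s + FINAL_DEMAND + ub * unemployed
    return round(s, 1), round(WORK_FORCE - WORKERS_PER_STEEL * s, 1)

print(equilibrium(0.0))    # about (100.0, 10.0)
print(equilibrium(0.05))   # about (102.3, 9.2)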

A comparative model

The relations considered in Model 2 have been reconstituted into a version that allows any two ‘before and after’ sets of production parameters to be compared directly in both numerical and graphical form. This comparative model is contained in Model 6 and allows the effects of changes in the conditions of production to be investigated endlessly.

Conclusions

This discussion and the models presented have attempted to provide a clear visual display of what are by no means trivial inter-relationships in a simple economic system. It is to be hoped that the key feature of these interactions - their circularity - is readily appreciated as a result of Excel’s ability to display the constituent parts of the model in considerable detail and to give clear indications of the iterative procedures that have to be employed in order to resolve the circularities.
Note: The authors may be contacted by e-mail to d.whigham@gcal.ac.uk

References

Gibson, H., Riddington, G., Whigham, D. and Whyte, J. Caledonian Blue Book - National Accounts for Scotland 1951-96. Glasgow Caledonian University, 1997.
Judge, G. Quantitative Analysis for Economics and Business. Harvester Wheatsheaf, 1990.
Sraffa, P. Production of Commodities by Means of Commodities. Cambridge: Cambridge University Press, 1960.
Whigham, D. Quantitative Business Methods Using Excel 5.0. Oxford University Press, 1998.

Notes


(1) For an excellent introduction to input-output analysis using matrix techniques on a spreadsheet see Judge, 1990.