Saturday, June 24, 2017

Bifurcation Analysis in a Model of Oligopoly

Figure 1: Bifurcation Diagram

I have presented a model of prices of production in which the rate of profits differs among industries. Such persistent differential rates of profits may be maintained because of perceptions by investors of different levels of risk among industries. Or they may reflect the ability of firms to maintain barriers to entry in different industries. In the latter case, the model is one of oligopoly.

This post is based on a specific numeric example for technology, namely, this one, in which labor and two commodities are used in the production of those same two commodities. I am not going to set out the model again here. But I want to be able to refer to some notation. Managers know of two processes for producing iron and one process for producing corn. Each process is specified by three coefficients of production. Hence, nine parameters specify the technology, and there is a choice between two techniques. In the model:

  • The rate of profits in the iron industry is r s1.
  • The rate of profits in the corn industry is r s2.

I call r the scale factor for the rates of profits. s1 is the markup for the rate of profits in the iron industry. And s2 is the markup for the rate of profits in the corn industry. So, with the two markups for the rates of profits, 11 parameters specify the model.
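For reference, the price equations for a given technique take something like the following form. This is only a sketch, under the usual assumptions of the linked model; the coefficients aij (commodity inputs) and a0j (labor inputs), the prices p1 and p2, and the wage w are my notation for illustration, not necessarily that of the original post:

$$
\begin{aligned}
(p_1 a_{1,1} + p_2 a_{2,1})(1 + r s_1) + w\, a_{0,1} &= p_1 \\
(p_1 a_{1,2} + p_2 a_{2,2})(1 + r s_2) + w\, a_{0,2} &= p_2
\end{aligned}
$$

Notice that the scale factor and the markups enter these equations only through the products r s1 and r s2, a fact that matters below.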

I suppose one could look at work by Edith Penrose, Michal Kalecki, Josef Steindl, Paolo Sylos Labini, Alfred Eichner, or Robin Marris for a more concrete understanding of markups.

Anyways, a wage curve is associated with each technique. And that wage curve results in the wage being specified, in the system of equations for prices of production, given an exogenous specification of the scale factor for the rates of profits. Alternatively, the scale factor can be found, given the wage. Points in common (intersections) on the wage curves for the two techniques are switch points.

Depending on parameter values for the markups on the rates of profits, the example can have no, one, or two switch points. In the last case, the model is one of the reswitching of techniques.

A bifurcation diagram partitions the parameter space into regions where the model solutions, throughout a region, are topologically equivalent, in some sense. Theoretically, a bifurcation diagram for the example should be drawn in an eleven-dimensional space. I, however, take the technology as given and only vary the markups. Figure 1 is the resulting bifurcation diagram.

The model exhibits a certain invariance, manifested in the bifurcation diagram by the straight lines through the origin. Suppose each markup for the rates of profits were, say, doubled. Then, if the scale factor for the rates of profits were halved, the rates of profits in each industry would be unchanged. The wage and prices of production would also be unchanged.

So only the ratio between the markups matters for the model solution. In some sense, the two parameters for the markups can be reduced to one, the ratio between the rates of profits in the two industries. And this ratio is constant for each straight line in the bifurcation diagram. The reciprocals of the slopes of the lines labeled 2 and 4 in Figure 1 are approximately 0.392 and 0.938, respectively. These values are marked along the abscissa in the figure at the top of this post.

In the bifurcation diagram in Figure 1, I have numbered the regions and the loci constituting the boundaries between them. In a bifurcation diagram, one would like to know what a typical solution looks like in each region and how bifurcations occur. The point in this example is to understand changes in the relationships between the wage curves for the two techniques. And the wage curves for the techniques, for the numbered regions and lines in Figure 1, look like (are topologically equivalent to) the corresponding numbered graphs in Figure 2 in this post.

The model of oligopoly being analyzed here is open, insofar as the determinants of the functional distribution of income, of stable relative rates of profits among industries, and of the long run rate of growth have not been specified. Only comparisons of long run positions are referred to in talking about variations, in the solution to a model of prices of production, with variations in model parameters. That is, no claims are being made about transitions to long period equilibria. Nevertheless, the implications of the results in this paper for short period models, whether ones of classical gravitational processes, cross dual dynamics, intertemporal equilibria, or temporary equilibria, are well worth thinking about.

Mainstream economists frequently produce more complicated models, with conjectural variations, or game theory, or whatever, of firms operating in non-competitive markets. And they seem to think that models of competitive markets are more intuitive, with simple supply and demand relations and certain desirable properties. I think the Cambridge Capital Controversy raised fatal objections to this view long ago. Reswitching and capital reversing show that equilibrium prices are not scarcity indices, and the logic of comparisons of equilibrium positions, in competitive conditions, does not conform to the principle of substitution. In the model of prices of production discussed here, there is a certain continuity between imperfections in competition and the case of free competition. The kind of dichotomy that I understand to exist in mainstream microeconomics just doesn't exist here.

Tuesday, June 20, 2017

Continued Bifurcation Analysis of a Reswitching Example

Figure 1: Bifurcation Diagram

This post is a continuation of the analysis in this reswitching example. That post presents an example of reswitching in a model of the production of commodities by means of commodities. The example is one of an economy in which two commodities, iron and corn, are produced. Managers of firms know of two processes for producing iron and one process for producing corn. The definition of technology results in a choice between two techniques of production.

The two-commodity model analyzed here is specified by nine parameters. Theoretically, a bifurcation diagram should be drawn in nine dimensions. But, being limited by the dimensions of the screen, I select two parameters. I take the inputs per unit output in the two processes for producing iron as given constants. I also take as given the amount of (seed) corn needed to produce a unit output of corn, in the one process known for producing corn. So the dimensions of my bifurcation diagram are the amount of labor required to produce a bushel of corn and the amount of iron input required to produce a bushel of corn. Both of these parameters must be non-negative.

I am interested in wage curves and, in particular, how many intersections they have. Figure 1, above, partitions the parameter space based on this rationale. I had to think for some time about what this diagram implies for wage curves. In generating the points to interpolate, my Matlab/Octave code generated many graphs analogous to those in the linked post. I also generated Figure 2, which illustrates configurations of wage curves and switch points, for the numbered regions and loci in Figure 1. So I had some visualization help, from my code, in thinking about these implications. Anyways, I hope you can see that, from perturbations of one example, one can generate an infinite number of reswitching examples.

Figure 2: Some Wage Curves

One can think of prices of production as (not necessarily stable) fixed points of short period dynamic processes. Economists have developed a number of dynamic processes with such fixed points. But I leave my analysis open to a choice of whatever dynamic process you like. In some sense, I am applying bifurcation analysis to the solution(s) of a system of algebraic equations. The closest analogue I know of in the literature is Rosser (1983), which is, more or less, a chapter in his well-known book.

Update (22 Jun 2017): Added Figure 2, associated changes to Figure 1, and text.

References
  • J. Barkley Rosser (1983). Reswitching as a Cusp Catastrophe. Journal of Economic Theory V. 31: pp. 182-193.

Thursday, June 15, 2017

Perfect Competition With An Uncountable Infinity Of Firms

1.0 Introduction

Consider a partial equilibrium model in which:

  • Consumers demand to buy a certain quantity of a commodity, given its price.
  • Firms produce (supply) a certain quantity of that commodity, given its price.

This is a model of perfect competition, since the consumers and producers take the price as given. In this post, I try to present a model of the supply curve in which the managers of firms do not make systematic mistakes.

This post is almost purely exposition. The exposition is concrete, in the sense that it is specialized for the economic model. I expect that many will read this as still plenty abstract. (I wish I had a better understanding of mathematical notation in HTML.) Maybe I will update this post with illustrations of approximations to integrals.

2.0 Firms Indexed on the Unit Interval

Suppose each firm is named (indexed) by a real number on the (open) unit interval. That is, the set of firms, X, producing the given commodity is:

X = (0, 1) = {x | x is real and 0 < x < 1}

Each firm produces a certain quantity, q, of the given commodity. I let the function, f, specify the quantity of the commodity that each firm produces. Formally, f is a function that maps the unit interval to the set of non-negative real numbers. So q is the quantity produced by the firm x, where:

q = f(x)
2.1 The Number of Firms

How many firms are there? An infinite number of decimal numbers exist between zero and unity. So, obviously, an infinite number of firms exist in this model.

But this is not sufficient to specify the number of firms. Mathematicians have defined an infinite number of different size infinities. The smallest infinity is called countable infinity. The set of natural numbers, {0, 1, 2, ...}; the set of integers, {..., -2, -1, 0, 1, 2, ...}; and the set of rational numbers can all be put into a one-to-one correspondence. Each of these sets contains a countable infinity of elements.

But the number of firms in the above model is more than that. The firms can be put into a one-to-one correspondence with the set of real numbers. So there exists, in the model, an uncountable infinity of firms.

2.2 To Know

Cantor's diagonalization argument, power sets, cardinal numbers.

3.0 The Quantity Supplied

Consider a set of firms, E, producing the specified commodity, not necessarily all of the firms. Given the amount produced by each firm, one would like to be able to say what is the total quantity supplied by these firms. So I introduce a notation to designate this quantity. Suppose m(E, f) is the quantity supplied by the firms in E, given that each firm in (0, 1) produces the quantity defined by the function f.

So, given the quantity supplied by each firm (as specified by the function f) and a set of firms E, the aggregate quantity supplied by those firms is given by the function m. And, if that set of firms is all firms, as indexed by the interval (0, 1), the function m yields the total quantity supplied on the market.

Below, I consider for which sets of firms m is defined, conditions that might be reasonable to impose on m, a condition that is necessary for perfect competition, and two realizations of m, only one of which is correct.

You might think that m should obviously be:

m(E, f) = ∫E f(x) dx

and that the total quantity supplied by all firms is:

Q = m((0,1), f) = ∫(0, 1) f(x) dx

Whether or not this answer is correct depends on what you mean by an integral. Most introductory calculus classes, I gather, teach the Riemann integral. And, with that definition, the answer is wrong. But it takes quite a while to explain why.

3.1 A Sigma Algebra

One would like the function m to be defined for all subsets of (0, 1) and for all functions mapping the unit interval to the set of non-negative real numbers. Consider a "nice" function f, in some hand-waving sense. Let m be defined for a set of subsets of (0, 1) in which the following conditions are met:

  • The empty set is among the subsets of (0, 1) for which m is defined.
  • m is defined for the interval (0, 1).
  • Suppose m is defined for E, where E is a subset of (0, 1). Let Ec be those elements of (0, 1) which are not in E. Then m is defined for Ec.
  • Suppose m is defined for E1 and E2, both being subsets of (0, 1). Then m is defined for the union of E1 and E2.
  • Suppose m is defined for E1 and E2, both being subsets of (0, 1). Then m is defined for the intersection of E1 and E2.

One might extend the last two conditions to a countable infinity of subsets of (0, 1). As I understand it, any set of subsets of (0, 1) that satisfies these conditions is a σ-algebra. A mathematical question arises: can one define the function m for the set of all subsets of (0, 1)? At any rate, one would like to define m for a maximal set of subsets of (0, 1), in some sense. I think this idea has something to do with Borel sets.

3.2 A Measure

I now present some conditions on this function, m, that specifies the quantity supplied to the market by aggregating over sets of firms:

  • No output is produced by the empty set of firms: m(∅, f) = 0.
  • For any set of firms E in the sigma algebra, market output is non-negative: m(E, f) ≥ 0.
  • For disjoint sets of firms in the sigma algebra, the market output of the union of firms is the sum of market outputs: if E1 ∩ E2 = ∅, then m(E1 ∪ E2, f) = m(E1, f) + m(E2, f).

The last condition can be extended to a countable set of disjoint sets in the sigma algebra. With this extension, the function m is a measure. In other words, given firms indexed by the unit interval and a function specifying the quantity supplied by each firm, a function mapping from (certain) sets of firms to the total quantity supplied to a market by a set of firms is a measure, in this mathematical model.

One can specify a couple other conditions that seem reasonable to impose on this model of market supply. A set of firms indexed by an interval is a particularly simple set. And the aggregate quantity supplied to the market, when each of these firms produces the same amount, is specified by the following condition:

Let I = (a, b) be an interval in (0, 1). Suppose for all x in I:

f(x) = c

Then the quantity supplied to the market by the firms in this interval, m(I, f), is (b - a)c.

3.3 Perfect Competition

Consider the following condition:

Let G be a set of firms in the sigma algebra. Define the function fG(x) to be f(x) when x is not an element of G and to be 1 + f(x) when x is in G. Suppose G has either a finite number of elements or a countably infinite number of elements. Then:

m((0,1), f) = m((0,1), fG)

One case of this condition would be when G is a singleton. The above condition implies that when the single firm increases its output by a single unit, the total market supply is unchanged.

Another case would be when G is the set of firms indexed by the rational numbers in the interval (0, 1). If all these firms increased their individual supplies, the total market supply would still be unchanged.

Suppose the demand price for a commodity depends on the total quantity supplied to the market. Then the demand price would be unaffected by both one firm changing its output and up to a countably infinite number of firms changing their output. In other words, the above condition is a formalization of perfect competition in this model.

4.0 The Riemann Integral: An Incorrect Answer

I now try to describe why the usual introductory presentation of an integral cannot be used for this model of perfect competition.

Consider a special case of the model above. Suppose f(x) is zero for all x. And suppose that G is the set of rational numbers in (0, 1). So fG is unity for all rational numbers in (0, 1) and zero otherwise. How could one define ∫(0, 1) fG(x) dx from a definition of the integral?

Define a partition, P, of (0, 1) to be a set {x0, x1, x2, ..., xn}, where:

0 = x0 < x1 < x2 < ... < xn = 1

The rational numbers are dense in the reals. This implies that, for any partition, each subinterval [xi-1, xi] contains a rational number. Likewise, each subinterval contains an irrational real number.

Define, for i = 1, 2, ..., n the two following quantities:

ui = supremum over [xi-1, xi] of fG(x)

li = infimum over [xi-1, xi] of fG(x)

For the function fG defined above, ui is always one, for all partitions and all subintervals. For this function, li is always zero.

A partition can be pictured as defining the bases of successive rectangles along the X axis. Each ui specifies the height of a rectangle that just includes the function whose integral is being sought. For a smooth function (not our example), a nice picture could be drawn. The sum of the areas of these rectangles is an upper bound on the desired integral. Each partition yields a possibly different upper bound. The Riemann upper sum is the sum of the rectangles, for a given partition:

U(fG, P) = (x1 - x0) u1 + ... + (xn - xn-1) un

For the example, with a function that takes on unity for rational numbers, the Riemann upper sum is one for all partitions. The Riemann lower sum is the sum of another set of rectangles.

L(fG, P) = (x1 - x0) l1 + ... + (xn - xn-1) ln

For the example, the Riemann lower sum is zero, whatever partition is taken.
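Spelling out the computation for this example: since ui = 1 and li = 0 in every subinterval,

$$
U(f_G, P) = \sum_{i=1}^{n} (x_i - x_{i-1}) \cdot 1 = x_n - x_0 = 1,
\qquad
L(f_G, P) = \sum_{i=1}^{n} (x_i - x_{i-1}) \cdot 0 = 0,
$$

for every partition P.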

The Riemann integral is defined in terms of the least upper bound and greatest lower bound on the integral, where the upper and lower bounds are given by Riemann upper and lower sums:

Definition: Suppose the infimum, over all partitions of (0, 1), of the set of Riemann upper sums is equal to the supremum, also over all partitions, of the set of Riemann lower sums. Let Q designate this common value. Then Q is the value of the Riemann integral:

Q = ∫(0, 1)fG(x) dx

If the infimum of Riemann upper sums is not equal to (exceeds) the supremum of the Riemann lower sums, then the Riemann integral of fG is not defined.

In the case of the example, the Riemann integral is not defined. One cannot use the Riemann integral to calculate the changed market supply from a countable infinity of firms each increasing their output by one unit.

5.0 Lebesgue Integration

The Riemann integral is based on partitioning the X axis. The Lebesgue integral, on the other hand, is based on partitioning the Y axis, in some sense. Suppose one has some measure of the size of the set in the domain of a function where the function takes on some designated value. Then the contribution to the integral for that designated value can be seen as the product of that value and that size. The integral of a function can then be defined as the sum, over all possible values of the function, of such products.

5.1 Lebesgue Outer Measure

Consider an interval, I = (a, b), in the real numbers. The (Lebesgue) outer measure of that set is simply the length of the interval:

m*(I) = b - a

Let E be a set of real numbers. Let {In} be an at most countably infinite set of open intervals such that:

E is a subset of ∪ In

In other words, {In} is an open cover of E. The (Lebesgue) outer measure of E is defined to be:

m*(E) = inf [m*(I1) + m*(I2) + ...]

where the infimum is taken over the set of such at most countably infinite covers of E.

The Lebesgue outer measure of any set that is at most countably infinite is zero. So the set of rational numbers has Lebesgue measure zero. So does a singleton set.
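One way to see this: enumerate an at most countably infinite set E as {q1, q2, q3, ...} and, given any ε > 0, cover the n-th element with an open interval of length ε/2^n centered on it. Then

$$
m^*(E) \leq \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} = \varepsilon
$$

and, since ε was arbitrary, m*(E) = 0.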

A measurable set E can be used to decompose any other set A into those elements of A that are also in E and those elements that are not. And, for E to be measurable, the outer measure of A must be the sum of the outer measures of those two sets.

If a set is not measurable, there exists some set A for which that sum does not hold. Given the axiom of choice, non-measurable sets exist. As I understand it, the set of all measurable subsets of the real numbers is a sigma algebra.

5.2 Lebesgue Integral for Simple Functions

Let E be a measurable subset of the real numbers. Define the characteristic function, χE(x), for E, to be one, if x is an element of E, and zero, if x is not an element of E.

Suppose the function g takes on a finite number of values {a1, a2, ..., an}. Such a function is called a simple function. Let Ai be the set of real numbers where g(x) = ai. The function g can be represented as:

g(x) = a1 χA1(x) + ... + an χAn(x)

The integral of such a simple function is:

∫ g(x) dx = a1 m*(A1) + ... + an m*(An)

This definition can be extended to non-simple functions by another limiting process.

5.3 Lebesgue Upper and Lower Sums and the Integral

The Lebesgue upper sum of a function f is:

UL(E, f) = inf over simple functions g ≥ f of ∫E g(x) dx

One function is greater than or equal to another function if the value of the first function is greater than or equal to the value of the second function for all points in the common domain of the functions. The Lebesgue lower sum is:

LL(E, f) = sup over simple functions g ≤ f of ∫E g(x) dx

Suppose the Lebesgue upper and lower sums are equal for a function. Denote that common quantity by Q. Then this is the value of the Lebesgue integral of the function:

Q = ∫E f(x) dx

When the Riemann integral exists for a function, the Lebesgue integral takes on the same value. The Lebesgue integral exists for more functions, however. The statement of the fundamental theorem of calculus is more complicated for the Lebesgue integral than it is for the Riemann integral. Royden (1968) introduces the concept of a function of bounded variation in this context.

5.4 The Quantity Supplied to the Market

So the quantity supplied to the market by the firms indexed by the set E, when each firm produces the quantity specified by the function f, is:

m(E, f) = ∫E f(x) dx

where the integral is the Lebesgue integral. In the special case, where the firms indexed by the rational numbers in the interval (0, 1) each supply one more unit of the commodity, the total quantity supplied to the market is unchanged:

Q = ∫(0, 1) fG(x) dx = ∫(0, 1) f(x) dx
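To spell this out: in the special case, f is identically zero, and fG is the characteristic function of the rationals in (0, 1), which is itself a simple function. So

$$
\int_{(0,1)} f_G(x)\, dx = 1 \cdot m^*(\mathbb{Q} \cap (0, 1)) = 0 = \int_{(0,1)} f(x)\, dx
$$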

Here is a model of perfect competition, in which a countable infinity of firms can vary the quantity they produce and, yet, the total market supply is unchanged.

6.0 Conclusion

I am never sure about these sorts of expositions. I suspect that most of those who have the patience to read through this have already seen this sort of thing. I learn something, probably, by setting them out.

I leave many questions above. In particular, I have not specified any process in which the above model of perfect competition is a limit of models with n firms. The above model certainly does not result from taking the limit at infinity of the number of firms in the Cournot model of systematically mistaken firms. That limit contains a countably infinite number of firms, each producing an infinitesimal quantity - a different model entirely.

I gather that economists have gone on from this sort of model. I think there are some models in which firms are indexed by the hyperreals. I do not know what theoretical problem inspired such models and have never studied non-standard analysis.

Another set of questions I have ignored arises in the philosophy of mathematics. I do not know how intuitionists would treat the multiplication of entities required to make sense of the above. Do considerations of computability apply, and, if so, how?

Some may be inclined to say that the above model has no empirical applicability to any possible actually existing market. But the above mathematics is not specific to the economics model. It is very useful in understanding probability. For example, the probability density function for any continuous random variable is only defined up to a set of Lebesgue measure zero. And probability theory is very useful empirically.

Appendix: Supremum and Infimum

I talk about the supremum and the infimum of a set above. These are sort of like the maximum and minimum of the set.

Let S be a subset of the real numbers. The supremum of S, written as sup S, is the least upper bound of S, if an upper bound exists. The infimum of S is written as inf S. It is the greatest lower bound of S, if a lower bound exists.

References
  • Robert Aumann (1964). Markets with a continuum of traders. Econometrica, V. 32, No. 1-2: pp. 39-50.
  • H. L. Royden (1968). Real Analysis, second edition.

Sunday, June 11, 2017

Another Three-Commodity Example Of Price Wicksell Effects

Figure 1: Price Wicksell Effects in Example
1.0 Introduction

This post presents another example from my on-going simulation experiments. I am still focusing on simple models without the choice of technique. The example illustrates an economy in which price Wicksell effects are positive, for some ranges of the rate of profits, and negative for another range.

2.0 Technology

I used my implementation of the Monte-Carlo method to generate 20,000 viable, random economies in which three commodities are produced. For the 316 of these 20,000 economies in which price Wicksell effects are both negative and positive, the maximum vertical distance between the wage curve and an affine function is approximately 15% of the maximum wage. The example presented in this post is for that maximum.

The economy is specified by a process to produce each commodity and a commodity basket specifying the net output of the economy. Since the level of output is specified for each industry, no assumption is needed on returns to scale, I gather. But no harm will come from assuming Constant Returns to Scale (CRS). All capital is circulating capital; no fixed capital exists. All capital goods used as inputs in production are totally used up in producing the gross outputs. The capital goods must be replaced out of the harvest each year to allow production to continue on the same scale. The remaining commodities in the annual harvest constitute the given net national income. I assume the economy is in a stationary state. Workers advance their labor. They are paid wages out of the harvest at the end of the year. Net national income is taken as the numeraire.

Table 1 summarizes the technique in use in this example. The 3x3 matrix formed by the first three rows and columns is the Leontief input-output matrix. Each entry shows the physical quantity of the row commodity needed to produce one unit output of the column commodity. For example, 0.5955541 pigs are used each year to produce one bushel of corn. The last row shows labor coefficients, that is, the physical units of labor needed to produce one unit output of each commodity. The last column is net national income, in physical units of each commodity.

Table 1: The Technology for a Three-Industry Model

Input    Corn Industry    Pigs Industry    Ale Industry    Net Output
Corn     0.0905726        0.0021651        0.0022885       0.274545
Pigs     0.5955541        0.2231379        0.0054569       0.097880
Ale      0.1202180        0.6362278        0.0232452       0.804348
Labor    0.26273          0.18555          0.31306

3.0 The Wage Curve

I now consider stationary prices such that the same rate of profits is made in each industry. The system of equations allows one to solve for the wage, as a function of a given rate of profits. The blue curve in Figure 2 is this wage curve. The maximum rate of profits, achieved when the wage is zero, is approximately 276.5%. The maximum wage, for a rate of profits of zero, is approximately 2.0278 numeraire units per labor unit. As a contrast to the wage curve, I also draw a straight line, in Figure 2, connecting these maxima.

Figure 2: Wage Curve in Example

I do not think it is easy to see in the figure, but the wage curve is not of one convexity. The convexity changes at a rate of profits of approximately 25.35%, and I plot the point at which the convexity changes.
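As an aside, the wage curve is straightforward to compute numerically. The following is a minimal sketch in Java, using the commons-math3 linear algebra classes; the organization and names here are my own illustration, not the code behind this post:

```java
import org.apache.commons.math3.linear.*;

public class WageCurves {
    /**
     * The wage, as a function of the rate of profits r, for a given
     * technique. Prices of production satisfy p = (1 + r) p A + w a0,
     * normalized so that p d = 1, where d is the numeraire (net output).
     * Hence w(r) = 1/(a0 (I - (1 + r) A)^(-1) d).
     */
    public static double wage(RealMatrix a, RealVector a0, RealVector d,
            double r) {
        int n = a.getRowDimension();
        RealMatrix m = MatrixUtils.createRealIdentityMatrix(n)
                .subtract(a.scalarMultiply(1.0 + r));
        // Solve (I - (1 + r) A) q = d, rather than forming the inverse.
        RealVector q = new LUDecomposition(m).getSolver().solve(d);
        return 1.0 / a0.dotProduct(q);
    }
}
```

With the data in Table 1, wage(a, a0, d, 0.0) should reproduce, approximately, the maximum wage of 2.0278 numeraire units per labor unit reported above.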

4.0 The Numeraire Value of Capital Goods

Since I have specified the net national product, the gross national product can be found from the Leontief input-output matrix. The gross national product is the sum of the capital goods, in a stationary state, and the net national product. The employed labor force can be found from labor coefficients and gross national product.

Given the rate of profits, one can find prices, as well as the wage. And one can use these prices to calculate the numeraire value of capital goods. Figure 1, at the top of this post, graphs the ratio of the value of capital goods to the employed labor force, as a function of the rate of profits.
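The curve in Figure 1 can be computed along the same lines. Here is a sketch, reusing the wage(...) method above; again, this is my own illustration:

```java
    /**
     * The ratio of the numeraire value of capital goods to employment,
     * at a given rate of profits r. Gross output is q = (I - A)^(-1) d,
     * employment is a0 q, the capital goods advanced are A q, and the
     * (row vector of) prices solves p (I - (1 + r) A) = w a0.
     */
    public static double capitalPerWorker(RealMatrix a, RealVector a0,
            RealVector d, double r) {
        int n = a.getRowDimension();
        RealMatrix eye = MatrixUtils.createRealIdentityMatrix(n);
        double w = wage(a, a0, d, r);
        // p is a row vector; transpose the system to solve for it.
        RealMatrix m = eye.subtract(a.scalarMultiply(1.0 + r)).transpose();
        RealVector p = new LUDecomposition(m).getSolver()
                .solve(a0.mapMultiply(w));
        RealVector q = new LUDecomposition(eye.subtract(a)).getSolver()
                .solve(d);
        return p.dotProduct(a.operate(q)) / a0.dotProduct(q);
    }
```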

A traditional, incorrect neoclassical idea is that a lower rate of profits incentivizes firms to increase the ratio of capital to labor. And a higher wage also incentivizes firms to increase the ratio of capital to labor. The region, for a low rate of profits, in which price Wicksell effects are positive already poses a problem for this vague neoclassical idea.

5.0 Conclusion

This example makes me feel better about my simulation approach. From some previous results, I was worried that I would have to rethink how I generate random coefficients. But, maybe if I generate enough economies, even with all coefficients, etc. confined to the unit interval, I will be able to find examples that approach visually interesting counter-examples to neoclassical economics.

Thursday, June 08, 2017

Elsewhere

  • Ian Wright has had a blog for about six months.
  • Scott Carter announces that Sraffa's notes are now available online. (The announcement is good for linking to Carter's paper explaining the arrangement of the notes.)
  • David Glasner has been thinking about intertemporal equilibrium.
  • Brian Romanchuk questions the use of models of infinitesimal agents in economics. (Some at ejmr say he is totally wrong, but others cannot make any sense of such models, either. I am not sure if my use of a continuum of techniques here can be justified as a limit.)
  • Miles Kimball argues that there is no such thing as decreasing returns to scale.

Don't the last two bullets imply that the intermediate neoclassical microeconomic textbook treatment of perfect competition is balderdash, as Steve Keen says?

Tuesday, June 06, 2017

Price Wicksell Effects in Random Economies

Figure 1: Blowup of Distribution of Maximum Distance of Frontier from Straight Line
1.0 Introduction

This post is the third in a series. Here is the first, and here is the second.

In this post, I am concerned with the probability that price Wicksell effects for a given technique are negative, positive, or both (for different rates of profits). A price Wicksell effect shows the change in the value of capital goods, for different rates of profits, for a technique. If a (non-zero) price Wicksell effect exists, for some range(s) of the rate of profits in which the technique is cost-minimizing, the rate of profits is unequal to the marginal product of capital, in the most straightforward sense. (This is the general case.) Furthermore, a positive price Wicksell effect shows that firms, in a comparison of stationary states, will want to employ more capital per person-hour at a higher rate of profits. The rate of profits is not a scarcity index, for some commodity called "capital", limited in supply.

My analysis is successful, in that I am able to calculate probabilities for the specific model of random economies. And I see that an appreciable probability exists that price Wicksell effects are positive. However, I wanted to find a visually appealing example of a wage frontier that exhibits both negative and positive Wicksell effects. The curve I end up with is close enough to an affine function that I doubt you can readily see the change in curvature.

Bertram Schefold has an explanation of this, based on the theory of random matrices. If the Leontief input-output matrix is random, in his sense (which matches my approach), the standard commodity will tend to contain all commodities in the same proportion, that is, proportional to a vector containing unity for all elements. And I randomly generate a numeraire (and net output vector) that will tend to be the same. So my random economies tend to deviate only slightly from standard proportions. And this deviation is smaller, the larger the number of commodities produced. So this post is, in some sense, an empirical validation of Schefold's findings.

2.0 Simulating Random Economies

The analysis in this post is based on an analysis, for economies that produce a specified number of commodities, of a specified sample size of random economies (Table 1).

Table 1: Number of Simulated Economies

Seed for Random Generator    Number of Commodities    Number of Economies
66,965                       2                        2,020
775,545                      3                        20,458
586,658                      4                        2,747,934

Each random economy is characterized by a Constant-Returns-to-Scale (CRS) technique, a numeraire basket, and net output. The technique is specified by:

  • A row vector of labor coefficients, where each element is the person-years of labor needed to produce a unit output of the numbered commodity.
  • A square Leontief input-output matrix, where each element is the units of the row commodity needed as input to produce a unit of the column commodity.

The numeraire and net output are column vectors. Net output is set to be the numeraire. The elements of the vector of labor coefficients, the Leontief matrix, and the numeraire are each realizations of independent and identically distributed random variables, uniformly distributed on the unit interval (from zero to one). Non-viable economies are discarded. So, as shown in the table above, more economies are randomly generated than the specified sample size (1,000).
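The generation-and-viability step described above might be coded as in the following sketch (Java, matching the classes named elsewhere in this series of posts; this is my illustration of the described procedure, not the author's actual program):

```java
import java.util.Random;

import org.apache.commons.math3.linear.*;

public class RandomEconomies {
    /**
     * Draws Leontief matrices with entries uniformly distributed on
     * [0, 1) until a viable one appears. Viability requires that the
     * dominant (Perron-Frobenius) eigenvalue of the matrix be less
     * than one, so that a surplus can be produced.
     */
    public static RealMatrix randomViableLeontief(int n, Random rng) {
        while (true) {
            double[][] a = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    a[i][j] = rng.nextDouble();
            RealMatrix matrix = new Array2DRowRealMatrix(a);
            // For a non-negative matrix, the dominant eigenvalue is real,
            // so scanning the real parts of the eigenvalues finds it.
            double lambda = 0.0;
            for (double ev : new EigenDecomposition(matrix).getRealEigenvalues())
                lambda = Math.max(lambda, Math.abs(ev));
            if (lambda < 1.0) return matrix;
        }
    }
}
```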

I am treating both viability and the net output differently from Stefano Zambelli's approach. He bases net output on a given numeraire value of net output. Many vectors can result in the same value of net output in a Sraffa model. He chooses the vector for which the value of capital goods is minimized. This approach fits with Zambelli's concentration on the aggregate production function.

3.0 Price Wicksell Effects

Table 2 shows my results. As I understand it, the probability that a wage curve for a random economy, in which more than one commodity is produced, will be a straight line is zero. And I find no cases of an affine function for the wage curve, in which the maximum wage (for a rate of profits of zero) and the maximum rate of profits (for a wage of zero) are connected by a straight line in the rate of profits-wage space.

Table 2: Price Wicksell Effects

Number of     Number w/ Negative        Number w/ Positive        Number w/ Both
Industries    Price Wicksell Effects    Price Wicksell Effects    Price Wicksell Effects
2             548                       452                       0
3             603                       416                       19
4             679                       334                       13

The wage curve in a two-commodity economy must be of a single curvature. So for a random economy in which two commodities are produced, price Wicksell effects are always negative or always positive, but never both. And that is what I find. I also find a small number of random economies, in which three or four commodities are produced, in which the wage curve has varying curvature through the economically-relevant range in the first quadrant.

4.0 Distribution of Displacement from Affine Frontier

I also measured how far, in some sense, these wage curves for random economies are from a straight line. I took the affine function, described above, connecting the intercepts, of the wage curve, with the rate of profits and the wage axes as a baseline. And I measured the absolute vertical distance between the wage curve and this affine function. (My code actually measures this distance at 600 points.) I scale the maximum of this absolute distance by the maximum wage. Figure 1, above, graphs histograms of this scaled absolute vertical distance, expressed as a percentage. Tables 3 and 4 provide descriptive statistics for the empirical probability distribution.
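In code, the statistic might be computed along the lines of the following sketch, reusing the wage(...) method sketched under an earlier post above; the 600 sample points follow the text, the rest is my own illustration:

```java
    /**
     * The maximum absolute vertical distance between the wage curve and
     * the straight line joining its intercepts, scaled by the maximum
     * wage and expressed as a percentage. rMax is the maximum rate of
     * profits, at which the wage is zero.
     */
    public static double scaledDisplacement(RealMatrix a, RealVector a0,
            RealVector d, double rMax) {
        double wMax = wage(a, a0, d, 0.0);
        double worst = 0.0;
        int points = 600;
        for (int k = 1; k < points; k++) {
            double r = rMax * k / points;
            double affine = wMax * (1.0 - r / rMax);
            worst = Math.max(worst, Math.abs(wage(a, a0, d, r) - affine));
        }
        return 100.0 * worst / wMax;
    }
```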

Table 3: Parametric Statistics

                 Number of Produced Commodities
Statistic        Two       Three     Four
Sample Size      1,000     1,000     1,000
Mean             1.962     1.025     0.498
Std. Dev.        3.428     1.773     0.772
Skewness         5.111     4.837     3.150
Kurtosis         48.467    38.230    12.492
Coef. of Var.    0.5724    0.578     0.645

Table 4: Nonparametric Statistics

                 Number of Produced Commodities
Statistic        Two        Three      Four
Minimum          0.00018    0.000251   0.0000404
1st Quartile     0.179      0.114      0.0608
Median           0.653      0.402      0.203
3rd Quartile     2.211      1.120      0.583
Maximum          50.613     23.374     5.910
IQR/Median       3.110      2.504      2.574

We see that the wage curves for these random economies tend not to deviate much from an affine function. And, as more commodities are produced, this deviation is less.

5.0 An Example

For three-commodity economies, the maximum scaled displacement of the wage curve from a straight line I find is 23.4 percent. But, of those three-commodity economies with both negative and positive price Wicksell effects, the maximum displacement is only 0.736 percent. Table 5 provides the randomly generated parameters for this example.

Table 5: The Technology for a Three-Industry Model

Input    Corn Industry    Pigs Industry    Ale Industry    Net Output
Corn     0.5525152        0.0024860        0.2652761       0.26077
Pigs     0.5164675        0.7469286        0.1128406       0.42705
Ale      0.5636308        0.0368399        0.2110545       0.98691
Labor    0.799364         0.028111         0.012866

Figure 2 shows the wage curve for the example. This curve is not a straight line, no matter how close it may appear so to the eye. Figure 3 shows the distance between the wage curve and a straight line. Notice that the convexity towards the left of the curve in Figure 3 varies slightly from the convexity for the rest of the graph. This is a manifestation of price Wicksell effects in both directions. (I need to perform some more checks on my program.)

Figure 2: A Wage Frontier with Both Negative and Positive Price Wicksell Effects

Figure 3: Vertical Distance of Frontier from Straight Line

6.0 Conclusion

I hope Bertram Schefold and Stefano Zambelli are aware of each other's work.

Postscript: I had almost finished this post before Stefano Zambelli left this comment. I'd like to hear from him at rvien@dreamscape.com.

References
  • Bertram Schefold (2013). Approximate Surrogate Production Functions. Cambridge Journal of Economics.
  • Stefano Zambelli (2004). The 40% neoclassical aggregate theory of production. Cambridge Journal of Economics 28(1): pp. 99-120.

Saturday, May 27, 2017

Some Main Points of the Cambridge Capital Controversy

For the purposes of this very simplified and schematic post, I present the CCC as having two sides.

  • Views and achievements of Cambridge (UK) critics:
    • Joan Robinson's argument for models set in historical time, not logical time.
    • Mathematical results in comparing long-run positions, such as reswitching and capital-reversing.
    • Rediscovery of the logic of the Classical theory of value and distribution.
    • Arguments about the role that a given quantity of capital plays in disaggregated neoclassical economic theory between 1870 and 1930.
    • Arguments that neoclassical models of intertemporal and temporary equilibrium do not escape the capital critique.
    • A critique of Keynes' marginal efficiency of capital and of other aspects of The General Theory.
    • The recognition of precursors in Thorstein Veblen and in earlier capital controversies in neoclassical economics.
  • Views of neoclassical defenders:
    • Paul Samuelson's and Frank Hahn's, for example, acceptance and recognition of logical difficulties in aggregate production functions.
    • Recognition that equilibrium prices in disaggregate models are not scarcity indices; rejection of the principle of substitution.
    • Edwin Burmeister's championing of David Champernowne's chain index measure of aggregate capital, useful for aggregate theory when, by happenstance, no positive real Wicksell effects exist.
    • Adoption of models of intertemporal and temporary general equilibrium.
    • Assertion that such General Equilibrium models are not meant to be descriptive and, besides, have their own problems of stability, uniqueness, and determinateness, with no need for Cambridge critiques.
    • Samuel Hollander's argument for more continuity between classical and neoclassical economics than Sraffians see.

I think I am still ignoring large aspects of the vast literature on the CCC. This post was inspired by Noah Smith's anti-intellectualism. Barkley Rosser brings up the CCC in his response to Smith. I could list references for each point above. I am not sure I could even find a survey article that covered all those points, maybe not even a single book.

So the CCC presents, to me, a convincing counter-example to Smith's argument. In the comments to his post, Robert Waldmann brings up old, paleo-Keynesian economics as an interesting rebuttal to a specific point.

Thursday, May 25, 2017

Some Resources on Neoliberalism

Here are three:

  • Anthony Giddens, in The Third Way: The Renewal of Social Democracy (1999), advocates a renewed social democracy. He contrasts what he is advocating with neoliberalism, which he summarizes as, basically, Margaret Thatcher's approach. Giddens recognizes that more flexible labor markets will not bring full employment and argues that unregulated globalism, including unregulated international financial markets, is a danger that must be addressed. He stresses the importance of environmental issues, on all levels from the personal to international. I wish he had something to say about labor unions, which I thought had an institutionalized role in the Labour Party, before Blair and Brown's "New Labour" movement.
  • Charles Peters published A Neo-Liberal's Manifesto in 1982. (See also a 1983 piece in Washington Monthly.) This was directed to the Democratic Party in the USA. It argues that they should reject the approach of the New Deal and the Great Society. Rather, they should put greater reliance on market solutions for progressive ends. I do not think Peters was aware that the term "neoliberalism" was already taken. Contrasting and comparing other uses with Peters' could occupy much time.
  • I have not got very far in reading Michel Foucault's The Birth of Biopolitics: Lectures at the Collège de France, 1978-1979. Foucault focuses on German ordoliberalism and the Chicago school of economics.

Anyways, neoliberalism is something more specific than any centrist political philosophy, between socialist central planning and reactionary ethnic nationalism. George Monbiot has some short, popular accounts. Read Noah Smith if you want confusion, incoherence, and ignorance, including ignorance of the literature.

Friday, May 19, 2017

Reversing Figure And Ground In Life-Like Cellular Automata

Figure 1: Random Patterns in Life and Flip Life
1.0 Introduction

I have occasionally posted about automata. A discussion with a colleague about Stephen Wolfram's A New Kind of Science reminded me that I had started this post some time last year.

This post has nothing to do with economics, although it does illustrate emergent behavior. And I have figures that are an eye test. I am subjectively original. But I assume somebody else has done this - that I am not objectively original.

This post is an exercise in combinatorics. There are 131,328 life-like Cellular Automata (CA), up to symmetry.

2.0 Conway's Game of Life

John Conway will probably always be most famous for the Game of Life (GoL). I wish I understood monstrous moonshine.

The GoL is "played", if you can call it that, on an infinite plane divided into equally sized squares. The plane looks something like a chess board, extended forever. See the left side of Figure 1, above. Every square, at any moment in time, is in one of two states: alive or dead. Time is discrete. The rules of the game specify the state of each square at any moment in time, given the configuration at the previous instant.

The state of a square does not depend solely on its previous state. It also depends on the states of its neighbors. Two types of neighborhoods have been defined for a CA with a grid of square cells. The Von Neumann neighbors of a cell are the four cells above it, below it, and to the left and right. The Moore neighborhood (Figure 2) consists of the Von Neumann neighbors and the four cells diagonally adjacent to a given cell.

Figure 2: Moore Neighborhood of a Dead Cell

The GoL is defined for Moore neighborhoods. State transition rules can be defined in terms of two cases:

  • Dead cells: By default, a dead cell stays dead. If a cell was dead at the previous moment, it becomes (re)born at the next instant if the number of live cells in its Moore neighborhood at the previous moment was x1 or x2 or ... or xn.
  • Alive Cells: By default, a live cell becomes dead. If a cell was alive at the previous moment, it remains alive if the number of live cells in its Moore neighborhood at the previous moment was y1 or y2 or ... or ym.

The state transition rules for the GoL can be specified by the notation Bx/Sy. Let x be the concatenation of the numbers x1, x2, ..., xn. Let y be the concatenation of y1, y2, ..., ym. The GoL is B3/S23. In other words, if exactly three of the neighbors of a dead cell are alive, it becomes alive for the next time step. If exactly two or three of the neighbors of a live cell are alive, it remains alive at the next time step. Otherwise a dead cell remains dead, and a live cell becomes dead.
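To make the Bx/Sy notation concrete, here is a minimal sketch of one time step, with the infinite plane truncated to a finite grid and cells off the edge treated as dead. Boolean arrays of length nine, indexed by the count of live neighbors, stand in for the digit strings; this is my illustration, not the code of the implementation mentioned in the acknowledgements below:

```java
public class LifeLike {
    /** One time step of a life-like CA under rule Bx/Sy on a finite grid. */
    public static boolean[][] step(boolean[][] grid, boolean[] birth,
            boolean[] survive) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int live = 0;  // live cells in the Moore neighborhood
                for (int di = -1; di <= 1; di++) {
                    for (int dj = -1; dj <= 1; dj++) {
                        if (di == 0 && dj == 0) continue;
                        int ni = i + di, nj = j + dj;
                        if (ni >= 0 && ni < rows && nj >= 0 && nj < cols
                                && grid[ni][nj]) live++;
                    }
                }
                // birth[k] is true just when digit k appears in Bx;
                // survive[k], just when digit k appears in Sy.
                next[i][j] = grid[i][j] ? survive[live] : birth[live];
            }
        }
        return next;
    }
}
```

For the GoL (B3/S23), birth is true only at index 3, and survive only at indices 2 and 3.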

The GoL is an example of recreational mathematics. Starting with random patterns, one can predict, roughly, the distributions of certain patterns when the CA settles down, in some sense. On the other hand, the specific patterns that emerge can only be found by iterating through the GoL, step by step. And one can engineer certain patterns.

3.0 Life-Like Cellular Automata

For the purposes of this post, a life-like CA is a CA defined with:

  • A two-dimensional grid with square cells and discrete time
  • Two states for each cell
  • State transition rules specified for Moore neighborhoods
  • State transition rules that can be specified by the Bx/Sy notation.

How many life-like CA are there? This is the question that this post attempts to answer.

The Moore neighborhood of a cell contains eight cells. Thus, any of the digits 0, 1, 2, 3, 4, 5, 6, 7, and 8 can appear in Bx. For each digit, one has two choices. Either it appears in the birth rule or it does not. Thus, there are 2^9 birth rules.

The same logic applies to survival rules. There are 2^9 survival rules.

Each birth rule can be combined with any survival rule. So there are:

2^9 × 2^9 = 2^18

life-like CA. But this number is too large. I am double counting, in some sense.

4.0 Reversing Figure and Ground

Figure 1 shows, side by side, grids from the GoL and from a CA called Flip Life. Flip Life is specified as B0123478/S01234678. Figure 3 shows a window from a computer program. In the window on the left, the rules for the GoL are specified. The window on the right is used to specify Flip Life.

Figure 3: Rules for Life and Flip Life

Flip Life basically renames the states in the GoL. Cells that are called dead in the GoL are said to be alive in Flip Life. And cells that are alive in the GoL are dead in Flip Life. In counting the number of life-like CA, one should not count Flip Life separately from the GoL. In some sense, they are the same CA.

More generally, suppose Bx/Sy specifies a life-like CA, and let Bu/Sv be the life-like CA in which figure and ground are reversed.

  • For each digit xi in x, 8 - xi is not in v, and vice versa.
  • For each digit yj in y, 8 - yj is not in u, and vice versa.

So for any life-like CA, one can find another symmetrical CA in which dead cells become alive and vice versa.
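In the notation of the sketch above, the reversed rule can be computed mechanically (again, my own illustration):

```java
    /** Figure-ground reversal: returns {birth', survive'} for the reversed CA. */
    public static boolean[][] reverseRule(boolean[] birth, boolean[] survive) {
        boolean[] b = new boolean[9], s = new boolean[9];
        for (int k = 0; k <= 8; k++) {
            b[k] = !survive[8 - k];  // k is a birth digit iff 8 - k does not survive
            s[k] = !birth[8 - k];    // k is a survival digit iff 8 - k is not born
        }
        return new boolean[][] { b, s };
    }
```

Applied to B3/S23, this yields B0123478/S01234678, that is, Flip Life.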

5.0 Self-Symmetrical CAs

One cannot just divide 2^18 by two to find the number of life-like CA, up to symmetry. Some rules define CA that are the same CA, when one reverses figure and ground. As an example, Figure 4 presents a screen snapshot for the CA called Day and Night, specified by the rule B3678/S34678.

Figure 4: Day and Night: An Example of a Self-Symmetrical Cellular Automaton

Given rules for births, one can figure out what the rules for survival must be for the CA to be self-symmetrical: Sy must consist of exactly those digits k for which 8 - k is not in Bx. Thus, there are as many self-symmetrical life-like CAs as there are rules for births.

6.0 Combinatorics

I bring all of the above together in this section. Table 1 shows a tabulation of the number of life-like CAs, up to symmetry.

Table 1: Counting Life-Like Cellular Automata

Quantity                                  Number
Birth Rules                               2^9
Survival Rules                            2^9
Life-Like Rules                           2^9 × 2^9 = 262,144
Self-Symmetric Rules                      2^9
Non-Self-Symmetric Rules                  2^9 (2^9 - 1)
Without Symmetric Rules                   2^8 (2^9 - 1)
With Self-Symmetric Rules Added Back      2^8 (2^9 + 1) = 131,328
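That is, pair off each non-self-symmetric rule with its distinct reversal, count each pair once, and add back the self-symmetric rules:

$$
\frac{2^{18} - 2^{9}}{2} + 2^{9} = 2^{8}(2^{9} - 1) + 2^{9} = 2^{8}(2^{9} + 1) = 131{,}328.
$$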

7.0 Conclusion

How many of these 131,328 life-like CA are interesting? Answering this question requires some definition of what makes a CA interesting. It also requires some means of determining if some CA is in the set so defined. Some CAs are clearly not interesting. For example, consider a CA in which all cells eventually die off, leaving an empty grid. Or consider a CA in which, starting with a random grid, the grid remains random for all time, with no defined patterns ever forming. Somewhat more interesting would be a CA in which patterns grow like a crystal, repeating and duplicating. But perhaps an interesting definition of an interesting CA would be one that can simulate a Turing machine and thus may compute any computable function. The GoL happens to be Turing complete.

Acknowledgements: I started with version 1.5 of Edwin Martin's implementation, in Java, of John Conway's Game of Life. I have modified this implementation in several ways.


Saturday, May 13, 2017

Innovation and Input-Output Matrices

Figure 1: National Income and Product Accounts
1.0 Introduction

This post contains some speculation about technical progress.

2.0 Non-Random Innovations and Almost Straight Wage Curves

The theory of the production of commodities by means of commodities imposes one restriction on wage-rate of profits curves: they should be downward-sloping. They can be of any convexity. They are high-order polynomials, where the order depends on the number of produced commodities. So no reason exists why they should not change convexity many times in the first quadrant, where the rate of profits is positive and below its maximum. The theory of the choice of technique suggests that, if multiple processes are available for producing many commodities, many techniques will contribute to part of the wage-rate of profits frontier.

The empirical research does not show this. When I looked at all countries or regions in the world, I found very little visual deviation from straight lines for most wage curves, for the ruling technique[1]. The exceptions tended to be undeveloped countries. Han and Schefold, in their empirical search for capital-theoretic paradoxes in OECD countries, also found mostly straight curves. And only a few techniques appeared on the frontier.

I have a qualitative explanation of this discrepancy between expectations from theory and empirical results. The theory I draw on above takes technology as given. It is as if economies are analyzed based on an instantaneous snapshot. But technology evolves as a dynamic process. The flows among industries and final demands have been built up over decades, if not centuries.

In advanced economies, technology does not change randomly. Large corporations have Research and Development departments, universities form extensive networks, and the government sponsors efforts to advance Technology Readiness Levels[2]. Sponsored research is not directed randomly. Technical feasibility is an issue, albeit one that changes over time. Another concern is what is costly at the moment, with cost being defined widely. I suggest that a constant effort to lower reliance on high-cost inputs in production processes results, over time, in coefficients of production being lowered such that wage curves become more straight[3].

The above story suggests that one should develop some mathematical theorems. I am aware of two areas of research in Sraffian economics that seem promising for further inquiry along these lines. First, consider Luigi Pasinetti's structural economic dynamics. I have an analysis of hardware and software costs in computer systems, which might be suggestive. Second, Bertram Schefold has been analyzing the relationship between the shape of wage curves; random matrices; and eigenvalues, including eigenvalues other than the Perron-Frobenius root.

3.0 Innovations Dividing Columns in Input-Output Table, Not Adjoining Completely New Ones

I have been moping, during my day job, about how I cannot keep up with some of my fellow software developers. I return to, say, Java programming after a few years, and there is a whole new set of tools. And yet, much of what I have learned did not even exist when I received either of my college degrees. For example, creating an Android app in Android Studio or IntelliJ involves, minimally, XML, Java, and Virtual Machines for testing. Back in the 1980s, I saw some presentations from Marvin Zelkowitz for what might be described as an Integrated Development Environment (IDE). He had an editor that understood Pascal syntax, suggested statement completions, and, if I recall correctly, could be used to set breakpoints and examine states for executing code. I do not know how this work fed into, for example, Eclipse.

Nowadays, you can specialize in developing web apps[4]. Some of my co-workers are Certified Information Systems Security Professionals (CISSPs). They know a lot of concepts that are sort of orthogonal to programming[5]. I also know people that work at Security Operations Centers (SOCs)[6]. And there are many other software specialities.

In short, software should no longer be considered a single industry. Glancing quickly at the web site for the Bureau of Economic Analysis, I note the following industries in the 2007 benchmark input-output tables:

  • Software publishers (511200)
  • Data processing, hosting, and related services (518200)
  • Internet publishing and broadcasting and Web search portals (519130)
  • Custom computer programming services (541511)
  • Computer systems design services (541512)
  • Other computer related services, including facilities management (54151A)

Coders, programmers, and software engineers definitely provide labor inputs in many other industries. Cybersecurity does not even appear above.

What would input-output tables have looked like, for software, in the 1970s? I speculate you might find industries for the manufacture of computers, telecommunication equipment, and satellites & space vehicles. And data processing would probably be an industry.

I am thinking that new industries come about, in modern economies, more by division and greater articulation of existing industries, not by suddenly creating completely new products. And this can be seen in divisions and movements in industries in National Income and Product Accounts (NIPA). One might explore innovation over the last half-century or so by looking at the evolution of industry taxonomies in the NIPA[7].

4.0 Conclusion

This post suggests some research directions[8]. At this point, I do not intend to pursue them.

Footnotes
  1. Reviewers, several years ago, had three major objections to this paper. One was that I had to offer some suggestion why wage curves should be so straight. The other two were that I needed to offer a more comprehensive explanation of how to map from the raw data to the input-output tables I used and that I had to account for fixed capital and depreciation.
  2. John Kenneth Galbraith's The New Industrial State is a somewhat dated analysis of these themes.
  3. They also move outward.
  4. The web is not old. Tools like Glassfish, Tomcat, and JBoss, and their commercial competitors are neat.
  5. Such as Confidentiality, Integrity, and Availability; two-factor identification; Role-Based Access Control; taxonomies for vulnerabilities and intrusions; Public Key Infrastructure; symmetric and non-symmetric encryption; the Risk Management Framework (RMF) for Information Assurance (IA) Certification and Accreditation; and on and on.
  6. A SOC differs from a Network Operations Center. Operators of a SOC have to know about host-based and network-based Intrusion Detection, Security Incident and Event Management (SIEM) systems, Situation Awareness, forensics, and so on.
  7. One should be aware that part of the growth on the tracking of industries might be because computer technology has evolved. Von Neumann worried about numerical methods for calculating matrix inverses. Much bigger matrices are practical now.
  8. I do not think my ideas in Section 3 are expressed well.

Saturday, May 06, 2017

Distribution of Maximum Rate of Profits in Simulation

Figure 1: Blowup of Distribution of Maximum Rate of Profits

This post extends the results from my last post. I think of the results presented here as providing information about the implementation of my simulation. I do not claim any implications about actually existing economies. I did not have any definite anticipations about what I would see. I suppose it could be of interest to regenerate these results where coefficients of production are randomly generated from some non-uniform distribution.

I continue to use a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.

Each random economy is tested for viability. Non-viable economies are discarded. Table 1 shows how many economies needed to be generated, given the number of produced commodities, to end up with a sample size of 300 viable economies. The maximum rate of profits is calculated for each viable economy. The maximum rate of profits occurs when the wage is zero, and the workers live on air. Thus, labor coefficients do not matter for the calculation of the maximum rate of profits.

Table 1: Number of Simulated Economies
Seed for Random Generator | Number of Commodities | Number of Economies
368,424,234               | 2                     | 610
345,657                   | 3                     | 6,124
4,566,843                 | 4                     | 826,471
547,527                   | 5                     | > 2^31 - 1
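
For a viable economy of this kind, the maximum rate of profits is determined by the Perron-Frobenius (dominant) eigenvalue of the Leontief input-output matrix: if that eigenvalue is λ, then 1 + r_max = 1/λ. This is the standard result for circulating-capital models. A minimal sketch of this calculation, using the eigendecomposition in Apache Commons Math, might look like the following; the method name is my own invention, not necessarily how my code is organized:

    import org.apache.commons.math3.linear.Array2DRowRealMatrix;
    import org.apache.commons.math3.linear.EigenDecomposition;

    // A sketch: the wage-zero (maximum) rate of profits, from the dominant
    // eigenvalue of the Leontief input-output matrix.
    public final class MaximumRateOfProfits {
        public static double maximumRateOfProfits(double[][] leontiefMatrix) {
            EigenDecomposition eigen =
                new EigenDecomposition(new Array2DRowRealMatrix(leontiefMatrix));
            double[] realParts = eigen.getRealEigenvalues();
            double[] imagParts = eigen.getImagEigenvalues();
            double spectralRadius = 0.0;
            for (int i = 0; i < realParts.length; i++) {
                // For a nonnegative matrix, the spectral radius is itself an
                // eigenvalue, by the Perron-Frobenius theorem.
                spectralRadius = Math.max(spectralRadius,
                    Math.hypot(realParts[i], imagParts[i]));
            }
            // Returned as a pure number; multiply by 100 for a percentage.
            return (1.0 - spectralRadius) / spectralRadius;
        }
    }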

I looked at the distribution of the maximum rate of profits, calculated as a percentage, in several ways. Figure 2 presents four histograms, superimposed on one another. Figure 1 expands the left tails of these histograms. I suppose Figure 2 is somewhat easier to make sense of than Figure 1, when you click on the image. Maybe the statistics in Tables 2 and 3 are clearer. One can see, for example, that in random economies in which two commodities are produced, the mean of the maximum rate of profits is 43.9%. The minimum, in these 300 random economies, of the maximum rate of profits is about 0.03%, and the maximum is 318%. If I wanted to be more thorough, I would have to review how skewness and kurtosis are calculated by default in the Java class org.apache.commons.math3.stat.descriptive.DescriptiveStatistics. The coefficient of variation is the ratio of the standard deviation to the mean. The nonparametric analog, reported in the last row of Table 3, is the ratio of the Inter-Quartile Range (IQR) to the median. Anyways, the distribution of the maximum rate of profits, in random viable economies generated by the simulation, is non-Gaussian and highly skewed, with a tail extending to the right.

Figure 2: Distribution of Maximum Rate of Profits

Table 2: Parametric Statistics
Statistic     | Number of Produced Commodities
              | Two   | Three | Four  | Five
Sample Size   | 300   | 300   | 300   | 300
Mean          | 43.9  | 15.7  | 8.28  | 4.95
Std. Dev.     | 50.2  | 19.3  | 7.53  | 5.90
Skewness      | 2.10  | 3.89  | 1.22  | 2.63
Kurtosis      | 5.14  | 22.2  | 0.882 | 9.64
Coef. of Var. | 0.875 | 0.811 | 1.10  | 0.839

Table 3: Nonparametric Statistics
Statistic    | Number of Produced Commodities
             | Two    | Three | Four   | Five
Minimum      | 0.0327 | 0.113 | 0.0107 | 0.00405
1st Quartile | 9.35   | 4.51  | 2.52   | 1.17
Median       | 25.3   | 9.72  | 5.70   | 2.99
3rd Quartile | 57.3   | 19.9  | 11.3   | 6.27
Maximum      | 318    | 168   | 36.2   | 44.2
IQR/Median   | 1.90   | 1.58  | 1.54   | 1.70
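
Given an array holding the 300 maximum rates of profits for one sample, the statistics in Tables 2 and 3 can be computed with the DescriptiveStatistics class mentioned above. A minimal sketch, with an invented method name:

    import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;

    // A sketch of how the statistics in Tables 2 and 3 might be computed.
    public final class Statistics {
        public static void reportStatistics(double[] maximumRatesOfProfits) {
            DescriptiveStatistics statistics = new DescriptiveStatistics();
            for (double rate : maximumRatesOfProfits) {
                statistics.addValue(rate);
            }
            double mean = statistics.getMean();
            double firstQuartile = statistics.getPercentile(25.0);
            double median = statistics.getPercentile(50.0);
            double thirdQuartile = statistics.getPercentile(75.0);
            System.out.println("Mean: " + mean);
            System.out.println("Std. Dev.: " + statistics.getStandardDeviation());
            System.out.println("Skewness: " + statistics.getSkewness());
            System.out.println("Kurtosis: " + statistics.getKurtosis());
            System.out.println("Coef. of Var.: "
                + (statistics.getStandardDeviation() / mean));
            System.out.println("Minimum: " + statistics.getMin());
            System.out.println("Maximum: " + statistics.getMax());
            System.out.println("IQR/Median: "
                + ((thirdQuartile - firstQuartile) / median));
        }
    }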

With the simulation, the maximum rate of profits tends to be smaller, the more commodities are produced. I wish I could extend these results to many more produced commodities. National Income and Product Accounts (NIPAs), at the grossest level of aggregation, have on the order of 100 produced commodities. Even if results derived under the assumption of an arbitrary probability distribution for coefficients of production could be directly applied empirically, one would want confirmation that trends seen with a very small number of produced commodities continue.

Wednesday, May 03, 2017

I Just Simulated 6 Billion Random Economies

Figure 1: Probability a Random Economy Will Be Viable

I have begun working towards replicating certain simulation results reported by Stefano Zambelli (2004).

At this point, I have implemented a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.
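
A minimal sketch of such a generator might look like the following. The class name and layout are invented for illustration; they are not necessarily how my code is organized:

    import java.util.Random;

    // A sketch of a random economy: a row vector of labor coefficients
    // and a square Leontief input-output matrix.
    public class RandomEconomy {
        public final double[] laborCoefficients;
        public final double[][] leontiefMatrix;

        public RandomEconomy(int numberOfCommodities, Random random) {
            laborCoefficients = new double[numberOfCommodities];
            leontiefMatrix = new double[numberOfCommodities][numberOfCommodities];
            for (int i = 0; i < numberOfCommodities; i++) {
                // nextDouble() is uniform on [0.0, 1.0); subtracting it from
                // 1.0 yields a draw on (0.0, 1.0], as labor coefficients require.
                laborCoefficients[i] = 1.0 - random.nextDouble();
                for (int j = 0; j < numberOfCommodities; j++) {
                    // Uniform on [0.0, 1.0), as specified above.
                    leontiefMatrix[i][j] = random.nextDouble();
                }
            }
        }
    }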

A Monte Carlo simulation, in the results reported here, tests each random economy for viability, where the technique, for each economy, is used to produce a specified number of commodities. A viable economy can reproduce the inputs used up in producing the outputs. If the economy is just viable, nothing is left over to pay the workers and the capitalists. The Hawkins-Simon condition can be used to check for viability.
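
The Hawkins-Simon condition states that an economy is viable if and only if all leading principal minors of (I - A) are positive, where A is the Leontief input-output matrix. A sketch of such a test, with determinants computed by Gaussian elimination and method names invented for illustration:

    // A sketch of a viability test based on the Hawkins-Simon condition.
    public final class Viability {

        public static boolean isViable(double[][] leontiefMatrix) {
            int n = leontiefMatrix.length;
            for (int k = 1; k <= n; k++) {
                if (leadingPrincipalMinor(leontiefMatrix, k) <= 0.0) {
                    return false;
                }
            }
            return true;
        }

        // The determinant of the k-by-k leading principal submatrix of
        // (I - A), by Gaussian elimination with partial pivoting.
        static double leadingPrincipalMinor(double[][] a, int k) {
            double[][] m = new double[k][k];
            for (int i = 0; i < k; i++) {
                for (int j = 0; j < k; j++) {
                    m[i][j] = (i == j ? 1.0 : 0.0) - a[i][j];
                }
            }
            double determinant = 1.0;
            for (int col = 0; col < k; col++) {
                int pivot = col;
                for (int row = col + 1; row < k; row++) {
                    if (Math.abs(m[row][col]) > Math.abs(m[pivot][col])) {
                        pivot = row;
                    }
                }
                if (m[pivot][col] == 0.0) {
                    return 0.0;
                }
                if (pivot != col) {
                    double[] swap = m[pivot];
                    m[pivot] = m[col];
                    m[col] = swap;
                    determinant = -determinant;
                }
                determinant *= m[col][col];
                for (int row = col + 1; row < k; row++) {
                    double factor = m[row][col] / m[col][col];
                    for (int j = col; j < k; j++) {
                        m[row][j] -= factor * m[col][j];
                    }
                }
            }
            return determinant;
        }
    }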

Table 1 reports the results. The number of Monte Carlo runs, for each row, is 1,000,000,000. The seed is reported so I can replicate my results, if I want. I think I can provide a symmetry argument for why the probability for the first row should be 1/2. I reran the simulation for the last row with 2,000,000,000 runs and the same seed. I still found zero viable economies.

Table 1: Simulation Results
Seed for Random Generator | Number of Commodities | Number of Viable Economies | Probability
46,576,889                | 2                     | 499,967,476                | 49.9967476%
89,058,538                | 3                     | 50,198,690                 | 5.019869%
7,586,338                 | 4                     | 372,339                    | 0.0372339%
784,054                   | 5                     | 99                         | 0.0000099%
568,233,269               | 6                     | 0                          | 0%

Zambelli suggests randomly specifying a rescaled output, in some sense, for the technology so as to ensure viability. I have a rough conceptual understanding of this step, but I need a better understanding to reduce it to source code. I think I'll go on to further analyses before revisiting the issue of viability. The above results certainly suggest that my analyses will be limited, in the meantime, to economies that produce only two, three, or maybe four commodities.

I think that Zambelli's approach is worthwhile for pursuing the results in which he is interested. One limitation arises with applying a probability distribution to one particular description of technology. In practice, coefficients of production evolve in a non-random manner. Pasinetti's structural dynamics is a good way of exploring technical progress in the tradition of Sraffa.

References
  • Stefano Zambelli (2004). The 40% neoclassical aggregate theory of production. Cambridge Journal of Economics 28(1): 99-120.

Thursday, April 20, 2017

Nonstandard Investments as a Challenge for Multiple Interest Rate Analysis?

1.0 Introduction

This post contains some musing on corporate finance and its relation to the theory of production.

2.0 Investments, the NPV, and the IRR

In finance, an investment project or, more briefly, an investment, is a sequence of dated cash flows. Consider an investment in which these cash flows take place at the end of n successive years. Let C_t, for t = 0, 1, ..., n - 1, be the cash flow at the end of a year, with years counted back from the last year in the investment. That is, C_{n-1} is the cash flow at the end of the first year in the investment, and C_0 is the last cash flow.

The Net Present Value (NPV) of an investment is the sum of the discounted cash flows in the investment. Let r be the interest rate used in discounting, and suppose all cash flows are discounted to the end of the first year in the investment. Then the NPV of the illustrative investment is:

NPV_0(r) = C_{n-1} + C_{n-2}/(1 + r) + ... + C_0/(1 + r)^{n-1}

If the above expression is multiplied by (1 + r)^{n-1}, one obtains the NPV of the investment with every cash flow discounted to the last year in the investment:

NPV_1(r) = C_{n-1}(1 + r)^{n-1} + C_{n-2}(1 + r)^{n-2} + ... + C_0
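
Either expression is easy to evaluate numerically. Here is a minimal sketch for NPV_0, with cash flows indexed as above, so that cashFlows[t] holds C_t and cashFlows[n - 1] is the temporally first cash flow; the method name is invented:

    // A sketch: evaluating NPV_0 with cash flows indexed as in the text.
    public final class NetPresentValue {
        public static double npv0(double[] cashFlows, double interestRate) {
            int n = cashFlows.length;
            double npv = 0.0;
            for (int t = 0; t < n; t++) {
                // C_t is discounted over (n - 1 - t) years, back to the
                // end of the first year in the investment.
                npv += cashFlows[t] / Math.pow(1.0 + interestRate, n - 1 - t);
            }
            return npv;
        }
    }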

For the next step, I need some sign conventions. Let a positive cash flow designate revenues, and a negative cash flow designate a cost. Suppose, for now, that the (temporally) first cash flow is a cost, that is, negative. Then (1/C_{n-1}) NPV_1(r) is a polynomial in (1 + r), with unity as the coefficient for the highest-order term. All coefficients of the polynomial are real.

Such a polynomial has n - 1 roots. These roots can be real numbers: negative, zero, or positive. They can be complex. Since all coefficients of the polynomial are real, complex roots enter as conjugate pairs. Roots can be repeated. At any rate, the polynomial can be factored as follows:

NPV_1(r) = C_{n-1}(r - r_0)(r - r_1)...(r - r_{n-2})

where r_0, r_1, ..., r_{n-2} are the n - 1 roots of the polynomial. Note that the interest rate appears only in terms in which the difference between the interest rate and one root is taken. And all of the roots appear on the right-hand side. I am going to call a specification of the NPV with these properties an Osborne expression for the NPV.

Suppose, for now, that at least one root is real and non-negative. The Internal Rate of Return (IRR) is the smallest real, non-negative root. For notational convenience, let r_0 be the IRR.

3.0 Standard Investments in Selected Models of Production

A standard investment is one in which all negative cash flows precede all positive cash flows. Is there a theorem that an IRR exists for each standard investment? Perhaps this can be proven by discounting all cash flows to the end of the year in which the last outgoing cash flow occurs. Maybe one needs a clause that the undiscounted sum of the positive cash flows is at least as large as the magnitude of the undiscounted sum of the negative cash flows.

At any rate, an Osborne expression for NPV has been calculated for standard investments characterizing two models of production. As I recall it, Osborne (2010) illustrates a more abstract discussion with a point-input, flow-output example. Consider a model in which a machine is first constructed, in a single year, from unassisted labor and land. That machine is then used to produce output over multiple years. Given certain assumptions on the pattern of the efficiency of the machine, this example is of a standard investment, with one initial negative cash flow followed by a finite sequence of positive cash flows.

On the other hand, I have presented an example of a flow-input, point-output model. Techniques of production are represented as finite series of dated labor inputs, with output for sale on the market at a single point in time. Each technique is characterized by a finite sequence of negative cash flows, followed by a single positive cash flow.

In each of these two examples, the NPV can be represented by an Osborne expression that combines information about all roots of a polynomial. Thus, basing an investment decision on the NPV uses more information than basing it on the IRR, which is a single root of the relevant polynomial.

4.0 Non-standard Investments and Pitfalls of the IRR

In a non-standard investment, at least one positive cash flow precedes a negative cash flow. Non-standard investments can highlight three pitfalls in basing an investment decision on the IRR:

  • Multiple IRRs: The polynomial defining the IRR may have more than one real, non-negative root. What is the rationale for picking the smallest?
  • Inconsistency in recommendations based on IRR and NPV: The smallest real non-negative root may be positive (suggesting a good investment), with a negative NPV (suggesting a bad investment).
  • No IRR: All roots may be complex.

Berk and DeMarzo (2014) present the example in Table 1 as an illustration of the third pitfall. They imagine an author who receives an advance of $750 thousand, sacrifices an income of $500 thousand in each of the three years spent writing a book, and, finally, receives royalties of one million dollars upon publication. The roots of the polynomial defining the NPV are -1.71196 + 0.78662j, -1.71196 - 0.78662j, 0.04529 + 0.30308j, and 0.04529 - 0.30308j. All of these roots are complex; none satisfy the definition of the IRR.

Table 1: A Non-Standard Investment
Year | Revenue (thousands of dollars)
0    | 750
1    | -500
2    | -500
3    | -500
4    | 1,000
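
The roots reported above can be checked numerically. Multiplying NPV_0(r) by (1 + r)^4 yields the polynomial 750u^4 - 500u^3 - 500u^2 - 500u + 1000, where u = 1 + r. A sketch using the Laguerre solver in Apache Commons Math, with an invented class name:

    import org.apache.commons.math3.analysis.solvers.LaguerreSolver;
    import org.apache.commons.math3.complex.Complex;

    public class AuthorExample {
        public static void main(String[] args) {
            // Coefficients of 1000 - 500u - 500u^2 - 500u^3 + 750u^4, in
            // increasing order of degree, as commons-math expects, u = 1 + r.
            double[] coefficients = {1000.0, -500.0, -500.0, -500.0, 750.0};
            LaguerreSolver solver = new LaguerreSolver();
            Complex[] roots = solver.solveAllComplex(coefficients, 0.0);
            for (Complex u : roots) {
                // Transform each root back to an interest rate: r = u - 1.
                System.out.println("r = " + u.subtract(1.0));
            }
        }
    }

All four printed roots should have nonzero imaginary parts, matching the values quoted from the example.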

5.0 Issues for Multiple Interest Rate Analysis

Osborne, in his 2014 book, extends his 2010 analysis of the NPV to consider the first and second pitfalls above. I know of nowhere that an Osborne expression for the NPV has been derived for an example in which the third pitfall arises.

The idea that the above pitfalls in the use of the IRR might be a problem for multiple interest rate analysis was suggested to me anonymously. On even hours, I do not see this. Why should I care about how many roots there are in an Osborne expression for the NPV, what their signs are, or even whether they are complex?

On the other hand, I wonder about how non-standard investments relate to the theory of production. I know that an example can be constructed, in which the price of a used machine becomes negative before it becomes positive. Can the varying efficiency of the machine result in a non-standard investment? After all, the cash flow, in such an example of joint production, is the sum of the price of the conventional output of the machine and the price of the one-year older machine. Even when the latter is negative, the sum need not be negative. But, perhaps, it can be in some examples.

Not all techniques in models with joint production, of the production of commodities by means of commodities, can be represented as dated labor flows. I guess one can still talk about NPVs. Can one formulate an algorithm, based on NPVs, for the choice of technique? How would certain annoying possibilities, such as cycling, be accounted for? Can one always formulate an Osborne expression for the NPV? Do properties of multiple interest rates have implications for, for example, a truncation rule in a model of fixed capital? Perhaps a non-standard investment, for a fixed capital example and one pitfall noted above, always has a cost-minimizing truncation in which the pitfall does not arise. Or perhaps the opposite is true.

Anyway, I think some issues could support further research relating models of production in economics and finance theory. Maybe one obtains, at least, a translation of terms.

Appendix: Technical Terminology

See body of post for definitions.

  • Flow-Input, Point-Output Model
  • Investment
  • Investment Project
  • Internal Rate of Return (IRR)
  • Net Present Value (NPV)
  • Non-Standard Investment
  • Osborne Expression (for NPV)
  • Point-Input, Flow-Output Model
  • Standard Investment

References
  • Jonathan Berk and Peter DeMarzo (2014). Corporate Finance, 3rd edition. Boston: Pearson Education.
  • Michael Osborne (2010). A resolution to the NPV-IRR debate? Quarterly Review of Economics and Finance 50(2): 234-239.
  • Michael Osborne (2014). Multiple Interest Rate Analysis: Theory and Applications. New York: Palgrave Macmillan.
  • Robert Vienneau (2016). The choice of technique with multiple and complex interest rates, DRAFT.