
From the Received View to the Views of Popper
The standard view of science in the nineteenth century was that scientific investigations begin with unprejudiced observation of the facts, proceed by inductive inference to the formulation of universal laws about these facts, and arrive at statements of wider generality (theories) by further inference; laws and theories are then checked for their truth content by comparing their empirical consequences with all the observed facts. This inductive view of science began to break down under the writings of Mach, Poincaré, and Duhem, who helped bring about the hypothetico-deductive view of scientific philosophy.
This view was given its classic formulation by Hempel and Oppenheim, who argued that all truly scientific explanations have a common logical structure: they involve at least one universal law plus a statement of relevant initial conditions, which together constitute the explanans, or premises, from which an explanandum, a statement about the event whose explanation we are seeking, is deduced by the aid of nothing but the rules of deductive logic. Universal laws are statements of the form "in all cases where event A occurs, event B also occurs." The operations of explanation and prediction involve the same rules of logical inference: to cite a particular cause as an explanation of an event is simply to subsume the event in question under some universal law or set of laws. This is called the covering-law model of explanation.
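The covering-law structure can be set out schematically; this is a restatement of the Hempel-Oppenheim schema, with the symbols chosen here purely for illustration:

```latex
% Deductive-nomological (covering-law) schema
\begin{array}{ll}
L_1, \ldots, L_k & \text{universal laws} \\
C_1, \ldots, C_m & \text{statements of initial conditions} \\
\hline
E & \text{explanandum: the event to be explained}
\end{array}
% The laws plus initial conditions (the explanans) deductively entail E.
```

On this schema, explanation and prediction differ only in direction: given E, we search for laws and conditions that entail it; given the laws and conditions, we deduce E in advance.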
This assumed symmetry between the nature of explanation and prediction is called the symmetry thesis. The methodology of conventionalism asserts that all scientific theories and hypotheses are merely condensed descriptions of natural events, neither true nor false in themselves but simply conventions for storing empirical information. Popper's "demarcation criterion" is his answer to the problem of distinguishing science from nonscience, a line the received view had drawn by appeal to the method of induction.
Hume showed a fundamental asymmetry between induction and deduction with the problem of how to logically infer anything about future experiences on the basis of nothing but past experiences; this is the problem of induction. From this, Popper formulates his demarcation criterion: science is a body of propositions that can be falsified by empirical observation because they rule out certain events from occurring. Science is thus characterized by its method of formulating and testing propositions, not by its subject matter or by any claim to certainty of knowledge.
Popper's argument that there is an asymmetry between verification and falsification rests on the fact that we can never assert that a hypothesis is necessarily true because it agrees with the facts; in reasoning from the truth of the facts to the truth of the hypothesis, we commit the fallacy of "affirming the consequent."
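The asymmetry can be displayed as two inference schemas, where H is a hypothesis and O one of its observable implications (the notation is supplied here for illustration, not taken from the text):

```latex
\begin{array}{ll}
\text{Invalid (affirming the consequent):} & H \rightarrow O,\;\; O \;\;\not\vdash\;\; H \\[4pt]
\text{Valid (modus tollens):}              & H \rightarrow O,\;\; \neg O \;\;\vdash\;\; \neg H
\end{array}
```

A confirmed prediction leaves the hypothesis merely undefeated; a refuted prediction deductively falsifies it. This is why Popper makes falsification, not verification, the engine of science.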
Blaug emphasizes that there are two kinds of induction: a nondemonstrative attempt to confirm some hypothesis, and a demonstrative logical argument. He uses the word "adduction" for the nondemonstrative kind, and believes that science is based on adduction followed by deduction.
According to Popper, all true theories are merely provisionally true, having so far defied falsification. Duhem had argued that no individual hypothesis is conclusively falsifiable, because we always test the entire explanans (the hypothesis in conjunction with auxiliary statements), so we can never be sure that we have confirmed or refuted the hypothesis itself; its acceptance or rejection is thus to some extent conventional. Popper deals with this by holding that we can still demarcate science from nonscience through falsification plus methodological rules that forbid ad hoc auxiliary assumptions, also known as "immunizing stratagems," whose only purpose is to protect a theory from refutation.
According to Popper, the simpler the theory, the stricter its observable implications, and hence the greater its testability.
From Popper to the New Heterodoxy
Popper's methodology is normative, prescribing sound practice in science in light of the best science of the past. Kuhn breaks from this received normative view with his emphasis on positive description. For Kuhn, the history of science is marked by long periods in which the status quo is preserved, interrupted by sudden jumps from one ruling paradigm to another with no conceptual bridge in between. This discussion leads us to the puzzle about the relationship between normative methodology and the positive history of science.
According to Popper, those who seek to tell history "as it is" will find themselves "telling it as it should be": all statements of history are methodology-laden. Lakatos's work is a compromise between the antihistorical, aggressive methodology of Popper and the relativistic, defensive methodology of Kuhn, leaning more toward the Popperian camp. Lakatos believed that clusters of interconnected theories, called scientific research programs (SRPs), need to be appraised instead of individual theories.
An SRP is progressive if a successive formulation of the program contains excess empirical content over its predecessor. Lakatos characterized SRPs as having a "hard core" surrounded by a "protective belt" of auxiliary hypotheses which bear the brunt of tests. According to Lakatos, the history of science can be described as the rational preference of scientists for progressive over degenerating SRPs; testing involves a three-cornered fight between the facts and at least two rival theories. Feyerabend's theoretical anarchism is based on the argument that, if we couple the concept of theory-laden facts with the Kuhnian notion of content loss in successive theories or SRPs, competing theoretical systems become difficult to compare, so there would appear to be no grounds for a rational choice between SRPs. He saw science as irrational and believed no demarcation criterion existed between science and nonscience.
Popper's methodological individualism is the belief that, in the social sciences, we should construct and analyze models in terms of individuals' beliefs, attitudes, and decisions. This is opposed to methodological holism, in which social wholes are postulated to have functions that cannot be reduced to the beliefs, actions, and attitudes of individuals.
Schumpeter was the first to distinguish between methodological and political individualism: the former is a mode of economic analysis, while the latter is a program in which the preservation of individual liberty is made the touchstone of government action.
The Verificationists
To Nassau Senior we owe the first statement of the now familiar distinction between a pure, strictly positive science and an impure, inherently normative "art" of economics, as well as the first explicit formulation of the idea that scientific economics rests essentially on "a very few general propositions, which are the result of observation, or consciousness, and which almost every man, when he hears them, admits, as familiar to his thoughts." From these propositions follow conclusions that hold true only in the absence of particular disturbing causes.
Senior's four general propositions were: 1) every person desires to maximize wealth with as little sacrifice as possible; 2) population tends to increase faster than the means of subsistence; 3) labour working with machines is capable of producing a positive net product; and 4) agriculture is subject to diminishing returns.
Mill and Senior categorized economics as a mental science fundamentally concerned with human motives and modes of conduct in economic life. Senior, Marshall, and most other modern economists hold that we should take the whole man as he is, staking our claim on correctly predicting how he will actually behave in economic affairs. Mill's view of "economic man," by contrast, is that we should abstract certain economic motives, namely maximizing wealth subject to the constraints of a subsistence income and the desire for leisure, while allowing for noneconomic motives (such as habit or custom) even in those spheres of life that fall within the ordinary purview of economics. Mill thus operated with a theory of "fictional man."
This view can also be found in Malthus, whose claim that population presses on subsistence rests essentially on man's irrational passion to reproduce himself.
Tendency laws, according to Mill: man often asserts of an entire class what is only true of a part of it; his error is not in making too wide an assertion, but in making the wrong "kind" of assertion; he predicated an actual result when he should only have predicated a tendency to that result. With regard to exceptions: in advanced science there is no such thing as an exception. What is thought to be an exception to a principle is always some other and distinct principle cutting into the former, some other force that impinges against the first force and deflects it from its direction.
Mill's A System of Logic contains a stout defense of methodological monism and methodological individualism, and an insistence that positive, not normative, analysis is the key to science even in the social field.
In this work, Mill leaves the reader confused as to whether he adheres to inductive or deductive methods of logic.
Mill followed Ricardian economics, doctrines that gave rise to such propositions as a rising price of corn, a rising rental share of national income, a constant level of real wages, and a falling rate of profit on capital, all asserted by Ricardo without ceteris paribus qualifications as pertaining to the British economy of the early 1800s. Time had proved these propositions wrong before Mill's Principles of Political Economy came out. Mill filled the gap between Ricardo's theories and the facts with the use of "immunizing stratagems."
Mill's methodological position was no different from Ricardo's: Mill only formally enunciated the "rules" which Ricardo implicitly adopted. Mill was a verificationist, i.e., for him the test of a theory is not the predictive accuracy it engenders but its explanatory power; he was not a believer in the symmetry thesis. If a theory fails to predict accurately, a search should be made for sufficient supplementary causes to close the gap between the facts and the causal antecedents laid down in the theory, because the theory is true in any case, as far as it goes, by the nature of its true assumptions.
Mill steadily lengthened the time period over which Ricardo's propositions would be proven, and thus can best be classified as at best a lukewarm verificationist. Mill and the other classical economists generally appealed to assumptions in judging validity, whereas modern economists appeal basically to predictions. For the classicals, we test the "applications" of theories to determine whether enough of the disturbing causes have been taken into account to explain what actually happens in the real world after allowing for noneconomic causes; we never test the "validity" of theories, because their conclusions are true as one aspect of human behavior by virtue of the assumptions, which in turn are true by virtue of being based on self-evident facts of human experience.
Cairnes, like Mill, was also convinced of the validity of Ricardian tendencies; the only difference between the two is that Cairnes was more strident and dogmatic in denying that economic theories can ever be refuted by a simple comparison between their implications and the facts: "the doctrines of political economy are to be understood as asserting, not what 'will' take place, but what 'would' take place, and in this sense only are they true."
Blaug sums up Cairnes's abuse of the ceteris paribus clause thus: "Either prove that the assumptions are unrealistic, or else demonstrate a logical inconsistency, but never take a refuted prediction as a reason for abandoning an economic theory, particularly because only qualitative predictions are possible in economics." Cairnes held fast to the Malthusian doctrine even when time disproved it, largely because it was the base on which Ricardian theory stood.
According to Mill, Cairnes, and Jevons, "verification" is not a testing of economic theories to see whether they are true or false, but only a method of establishing the boundaries of application of theories deemed to be obviously true. One verifies in order to discover whether "disturbing causes" can account for discrepancies between stubborn facts and theoretically valid reasons; if so, the theory has been wrongly applied, but the theory itself is still true. The question of whether there is any way of showing a theory to be false is never considered.
The 1880s "decade of the Methodenstreit" between Menger and Schmoller was a result of the influence of the German historical school on British economics. John Neville Keynes's The Scope and Method of Political Economy sought to reconcile the Senior-Mill-Cairnes tradition with the new claims of the historical school. The five theses that made up the Senior-Mill-Cairnes tradition were: 1) it is possible to distinguish between a positive science and a normative art of economics; 2) economic events can be isolated, at least to some extent, from other social phenomena; 3) the direct induction of concrete facts is inappropriate as a starting point in economics; 4) the right procedure is the a priori method of starting from a few indispensable facts of human nature (in connection with the physical properties of the soil and man's physiological constitution); and 5) economic man is an abstraction, and hence economics is a science of tendencies only.
Neville Keynes's summary of the historical school is that it holds an "ethical, realistic, and inductive" view of economics. Neville Keynes (under Marshall's influence) believed that economic theory as such cannot be expected to yield direct predictions, being instead an "engine of analysis" to be used in conjunction with a detailed investigation of the relevant disturbing causes in every case. Neville Keynes's and Marshall's hope for a final reconciliation of all methodological differences was to be short-lived.
The American institutionalists rose to a crescendo in the 1920s with the writings of Veblen, Mitchell, and Commons (heterodox inductivists). At this time Robbins restated the Senior-Mill-Cairnes position in modern language, mixed with an Austrian tradition of economics, including the means-ends definition of economics and the claim that all interpersonal comparisons of utility are unscientific. Robbins's An Essay on the Nature and Significance of Economic Science created a furor because of its insistence on the purely conventional nature of interpersonal comparisons of welfare and its argument that the science of economics is neutral with respect to policy objectives. He is the economist who gave us the well-known definition of economics as the science which studies human behavior as a relationship between ends and scarce means which have alternative uses.
Like Cairnes, Robbins denied that economic effects could ever be predicted in quantitative terms, and he rejected the allegation of the historical school that all economic truths are relative to time and place. In his critique of the use of introspection as an empirical source of economic knowledge, Robbins held that if there are no objective methods for inferring anything about the "welfare" of different economic agents, there are also no objective methods for inferring anything about the "preferences" of different economic agents, an argument with awkward implications for consumer theory.
Robbins's hostility toward quantitative investigations was very common in the 1930s. His essay was the last to defend the verificationist thesis that economic truths require verification only to check that they apply in any particular case. The rise of econometrics and of Keynesian economics encouraged the rise of falsificationism and operationalism, though the modern Austrian economists still adhere to an extreme version of the Senior-Mill-Cairnes tradition. This group was inspired by Hayek's attack on "scientism," or methodological monism, and his emphasis on methodological individualism, and by Mises's statement of praxeology: the general theory of rational action, according to which the assumption of purposeful individual action is an absolute prerequisite for explaining human behavior.
According to Mises, "the ultimate yardstick of an economic theorem's correctness or incorrectness is solely reason unaided by experience" (radical apriorism). He also insisted on methodological dualism: an essential disparity in approach between social and natural science, grounded in the Verstehen doctrine (understanding from within by means of intuition and empathy), and a radical rejection of any kind of quantification of either the premises or the implications of economic theories.
Though in some ways a continuation of the Senior-Mill-Cairnes tradition, this position goes further in holding that even the verification of "assumptions" is unnecessary in economics. The essential ingredients of Austrian economics, whose adherents include Rothbard, Kirzner, and Lachmann, are: 1) absolute insistence on methodological individualism as an a priori heuristic postulate; 2) a deep suspicion of all macroeconomic aggregates such as GNP; 3) a firm disavowal of quantitative testing of economic predictions; and 4) the belief that more is to be learned by studying how market processes converge on equilibrium than by endlessly analyzing the properties of final equilibrium states. The fourth tenet derives from the Hayekian influence. Hutchison's The Significance and Basic Postulates of Economic Theory strongly attacked this apriorism, but pressed empiricism to such an extreme that he spoiled his own case. The center of his argument was that all economic propositions can be exhaustively classified as either tautological or empirical, that is, as logically necessary "analytic" propositions or logically indeterminate "synthetic" ones; he held that most economic propositions are tautologies, when in fact some were simply untestable empirical assertions about the world.
Klappholz and Agassi proposed a three-way classification: 1) analytic-tautological propositions; 2) synthetic-empirical propositions that are nevertheless untestable, even in principle; and 3) synthetic-empirical propositions that are testable at least in principle. Hutchison's conviction that empirical work in economics is just as usefully applied to the assumptions as to the predictions of a theory led Machlup to call him an "ultraempiricist." It is on this issue that Hutchison parts company with Machlup and Friedman. Hutchison wrote in the 1930s, when apriorism, i.e., the view that economics is essentially a system of pure deductions from a series of postulates derived from inner experience, not open to external verification, was strong. The book was accordingly greeted by Knight's essay affirming the Verstehen doctrine in economics.
The late 1930s saw the rise of operationalism, which, according to Bridgman, is fundamentally concerned with the construction of correspondence rules that connect the abstract concepts of a scientific theory to the experimental operations of physical measurement. Samuelson's thesis, "The Operational Significance of Economic Theory," uses the term operationalism quite differently; there it in fact amounts to Popperian falsificationism.
According to Samuelson, the standard assumptions of constrained maximization are not sufficient to derive most economic predictions: the method of comparative statics is empty unless a corresponding dynamic system is specified and shown to be stable (the correspondence principle). He is equally skeptical about the principal tenets of welfare economics, which purport to make meaningful statements about welfare without resorting to comparisons between individuals. Machlup was very critical of operationalism, arguing that it rules out all mental constructs in the formation of economic theories, which is tantamount to eliminating all mathematical formulations of a theory.
In the 1950s, Friedman wrote "The Methodology of Positive Economics." Its central thesis was that economists need not make their assumptions "realistic." The essay starts by setting out the Senior-Cairnes-Keynes distinction between normative and positive economics, after which Friedman asserts an essential methodological unity of all the sciences. According to him, "the only relevant test of the validity of a hypothesis is the comparison of its predictions with experience." The hypothesis is rejected if its predictions are contradicted more often than the predictions of an alternative hypothesis. We can never "prove" a hypothesis; we can only fail to disprove it.
Friedman also attacked the notion that the conformity of a theory's assumptions with reality provides a test of validity distinct from the test of its predictions. He believed it is positively advantageous that assumptions not be realistic. Fully realistic assumptions would take account of all the relevant background variables and refuse to leave any of them out; Friedman shows that no theory lives up to this standard. For example, whether businessmen testify that they strive to maximize returns, or even whether they understand the meaning of the question, is no test of the realism of what he calls "the maximization-of-returns hypothesis," because a Darwinian process of competitive rivalry guarantees that only those who maximize survive. In other words, a hypothesis must be descriptively false in its assumptions, in the sense of imputing as-if motives to economic actors that they could not possibly hold consciously, like assuming that billiard players calculate the angle and momentum of the billiard balls every time they drive a ball into the pocket. What matters is that theories grounded on as-if motives have predictive power.
This methodology is called "instrumentalism," meaning that theories are "only" instruments for making predictions; it refuses to offer any causal mechanism linking business behavior to the maximization of returns. Blaug points out that Friedman did not distinguish between kinds of assumptions, such as auxiliary assumptions (ceteris paribus clauses) and generative assumptions (such as profit maximization). Machlup did distinguish them, holding that assumptions must be supplemented with specified initial conditions.
According to Machlup, a theory is never wholly discredited, even where its assumptions are known to be false, unless a better theory can be offered. For example, the profit-maximizing assumption is in fact contrary to fact for some firms; the problem is that we cannot know how significant the deviations are except in the context of specific predictions, and we should therefore accept maximizing conduct as a heuristic postulate while keeping in mind that its consequences may be well out of line with observed data. Machlup divided the methodological arena with extreme apriorists such as Mises, Knight, and Robbins at one end, ultraempiricists such as Hutchison at the other, and Samuelson, Friedman, Lange, and himself in between: "The big bad wolf is he who insists on the direct verification of fundamental assumptions as the critical test of the validity of a theory independently from a test of its predictions."
All in all, Friedman did avoid an extreme version of the "irrelevance-of-assumptions" thesis, or what Samuelson dubbed the F-twist. According to Blaug, there are no vital methodological issues at stake in the dispute between Samuelson and Friedman: instrumentalism is untenable because the symmetry thesis is false, and descriptivism, while perfectly tenable, is an excessively modest methodology, a poor man's version of instrumentalism.
Friedman, Machlup, and Samuelson adopt a defensive methodology whose main purpose seems to be to protect economics against criticisms of unrealistic assumptions on the one hand, and against strident demands for severely tested predictions on the other. Friedman leans heavily on the Alchian thesis, which holds that all motivational assumptions in microeconomics may be construed as as-if statements, which in effect shifts the locus of rational action from the individual to the social plane.
Thus Friedman is in fact repudiating the methodological individualism that is so embedded in neoclassical economics. To vindicate the Alchian thesis, we must be able to predict behavior in disequilibrium states; as with the Darwinian thesis, to survive it is only necessary to be better adapted to the environment than one's rivals, and we can no more establish from natural selection that surviving species are perfect than we can establish from economic selection that surviving firms are profit maximizers.
The following are Friedman's central tenets: 1) assumptions are "largely" irrelevant to the validation of theories, which should be judged almost entirely on their instrumental value in generating accurate predictions; 2) neoclassical theory has an excellent predictive record; and 3) the dynamics of competition over time account for this excellent record.
In the 1950s, Schoeffler and Wootton denied that economics is a science, because the economic system is open to noneconomic forces. One of the most ardently anti-Popperian methodologies prevailing in economics is that of the classical Marxians.
In A Philosophical Critique of Neo-Classical Economics (1975), Hollis and Nell argue that positivism is a false philosophy because the separability of facts and values, on the one hand, and of facts and theories, on the other, is untenable: all facts are theory-laden and all theories are value-laden. Their methodology is based on "essentialism" and "rationalism" and thus leaves no room for quantitative-empirical research.
Essentialism dates back to Plato and Aristotle, for whom science begins with observations of individual events and proceeds by simple induction until it intuitively grasps their "essence." Popper contrasts this with methodological nominalism, in which the aim of science is to describe how things behave in various circumstances with the aid of universal laws, not to determine what they really are. He believed essentialism encourages an antiempirical tendency to solve problems by means of definitions.
Among the institutionalists, besides the many modes of explanation previously discussed, there is pattern modeling, which seeks to explain events by identifying their place in a pattern of relationships said to characterize the economic system as a whole. Their explanations emphasize understanding rather than prediction (Veblen, Myrdal).
The common thread among all institutionalists, regardless of their method of explanation, is a disbelief in the efficacy of such notions as equilibrium, rational behavior, instantaneous adjustment, and perfect knowledge, and a preference for the idea of group behavior under the influence of custom and habit, seeing the economic system as a biological organism rather than as a machine. Institutionalist methodology can best be described as "storytelling": the binding together of facts, low-level generalizations, high-level theories, and value judgements, held together by an implicit set of beliefs and attitudes.
The Distinction Between Positive and Normative Economics
The 1930s saw the rise of "welfare economics," which provided a normative economics that was allegedly free of value judgements. As a result, traditional positive economics was enlarged to include the whole of pure welfare economics, leaving normative economics to deal with policy issues.
According to Hume, "one cannot deduce ought from is" (Hume's guillotine). But can we say that there is a solid distinction between an is-statement and an ought-statement? Ultimately, a factual, descriptive is-statement is held to be true because we have agreed among ourselves to abide by certain "scientific" rules that instruct us to regard the statement as true, although it may in fact be false. We accept or reject is-statements on grounds that are themselves conventions, and in this sense we can never be totally free of value judgements.
According to Blaug, the acceptance or rejection of is-statements is not a very different cognitive process from the acceptance or rejection of ought-statements; that is, there are no descriptive is-statements regarded as true that do not rely on a definite social consensus that we "ought" to accept that is-statement.
Nagel draws a distinction between two types of value judgements in the social sciences: "characterizing" and "appraising" value judgements. Characterizing value judgements involve the choice of the subject matter to be investigated, the mode of investigation, and the criteria for judging the validity of the findings. Appraising value judgements are evaluative judgements about states of the world, including the desirability of certain kinds of behavior and social outcomes. According to Nagel, science can free itself from appraising value judgements. According to Hume, "ought" can never be logically deduced from "is" and vice versa, but "oughts" are powerfully influenced by "ises," and the values we hold depend on a series of factual beliefs. Thinking along this line, Blaug draws a distinction between "basic" and "nonbasic" value judgements, where a judgement is "basic" when it is supposed to apply under all conceivable circumstances. It is the nonbasic judgements that lend themselves to rational analysis and discussion.
Though the difference between the methods of reaching agreement on methodological versus value judgements is a matter of degree, not of kind, to argue that the difference is so small as to be negligible is to take the radical view that all propositions about social phenomena are value-laden and hence lack "objectivity." As Nagel points out, this assertion is either itself uniquely exempt from the charge, in which case there is at least one objective statement that can be made about social questions, or it is itself value-laden, in which case we are locked into an infinite regress and driven toward an extreme subjectivism in which all opinions count equally.
Max Weber preached the "possibility" of a value-free social science, and believed that discussions of values were not only possible but of the greatest utility. They could take the form of: 1) examining the internal consistency of the value premises from which divergent normative judgements are derived; 2) deducing the implications of those value premises in light of the practical circumstances to which they are applied; and 3) tracing the factual consequences of alternative ways of dealing with normative judgements. Thus the distinction between "basic" and "nonbasic" judgements, and the possibility of rational discourse about such judgements, is Weberian in spirit. To deny any objectivity in social science is inevitably to conclude that we have license to assert whatever we please. This has been rare in economics, but the following is an example: Heilbroner asserts that the difference between the social and the physical sciences is that human actions are subject to both latent willfulness and conscious purposiveness, and that without assumptions as to the meaning of those actions no conclusions can be drawn from social facts. Economists, like all social investigators, cannot help being emotionally involved with the society of which they are members: "every social scientist approaches his task with a wish, consciously or unconsciously, to demonstrate the workability or unworkability of the social order he is investigating."
Heilbroner uses the term "value judgements" to include any and all untestable metaphysical propositions that color the vision of an economist, making up what Lakatos called the "hard core" of his theories. For Heilbroner, there is no distinction between an economist's hard-core vision and his value judgements; without this distinction, the fact that social science is value-laden becomes trivial, since it is now a feature of all theoretical propositions, including those in the physical sciences.
Like Heilbroner, Gunnar Myrdal attacked the belief in a value-free social science, though his solution is neither to suppress value judgements nor merely to make clear where they enter the argument, but rather to declare them boldly at the outset of the analysis. According to Myrdal, it is impossible to distinguish positive from normative economics, and to pretend to do so involves only self-deception.
According to Blaug, this belief leads only to a style of relativism in which all economic opinions are simply a matter of personal choice; the positive-normative distinction should be maintained as clearly as possible, even at the cost of persuasiveness.
Historical sketch of the positive-normative distinction: Senior and Mill believed that in passing from the "science" to the "art" of economics, value judgements come in, and that economists cannot give advice as economists, not even if the science of economics is supplemented with appropriate value judgements. Cairnes stated that "economic science has no more connection with our present industrial system than the science of mechanics has with our present system of railways".
Neville Keynes distinguished between (1) a positive science, (2) a normative science, and (3) an art, a system of rules for the attainment of given ends. The object of a positive science is the establishment of "uniformities"; of a normative science, the determination of "ideals"; and of an art, the formulation of "precepts". The notion of a "normative science" as a bridge between the "positive science" and the "art" of political economy appears to be the aspiration of welfare economics.
Walras and Pareto drew the line not between positive and normative science but between pure and applied economics, where pure economics included only positive economics. What we now call Pareto optimality was for Pareto simply a definition of maximum ophelimity (he avoided the term "utility" because of its cardinal overtones).
Hicks and Kaldor provided "compensation tests" by defining an improvement in economic welfare as any change that "might" make someone better off in his own terms without making anyone else worse off. This subtle distinction between a possible improvement and a desirable one set the stage for the "new" value-free welfare economics, powerfully assisted by the Robbinsian thesis that the arch-villain among value judgements is the making of cardinal comparisons between the utilities of different parties.
This new value-free welfare economics took the prevailing distribution of factor services and resources as given, thus invoking no value judgement so long as compensation payments are not actually recommended. Bergson's paper on the social welfare function and Samuelson's Foundations planted the idea that society, expressing itself through its political representatives, does in fact compare the utilities of different individuals, these comparisons being recorded in a social welfare function. Once in possession of such a function, an economist can assess a given change in policy as a potential Pareto improvement, after which the social welfare function can be consulted to determine whether compensation payments should be made...moving welfare economics into the normative corner.
Some economists have gone back to Pareto himself in regarding Paretian welfare economics as a branch of positive economic theory. According to Archibald, Paretian welfare economics investigates the efficiency of alternative arrangements for satisfying given wants in the light of the choices that individuals themselves make in their own interests...value judgements only come into the picture when the crucial step to prescription is taken.
According to Hennipman, "propositions such as that, under certain assumptions, perfect competition is a sufficient condition of Pareto optimality, and that monopoly, tariffs, and externalities cause a welfare loss are positive statements, which are true or false independently of ideological beliefs". The statement that monopoly, tariffs, and externalities bring about welfare loss is not to be construed as a recommendation to take action to eliminate them. The three postulates of Paretian welfare economics (consumer sovereignty, individualism of social choice, and unanimity) are believed to be widely accepted and thus value-free, which implies that value judgements are only those that are controversial.
According to Blaug, all scientific hypotheses contain some philosophical, social, and political undertones, which may prejudice the scientist; however, there are special biases to which economists are prone that have no parallel in the physical sciences, the source of which lies in the intimate association between certain propositions in positive economics and something very like those same propositions in normative economics.
According to Samuelson, since the time of Adam Smith there has been a pervading feeling that perfect competition represents an optimal situation, i.e., the "invisible hand theorem": given certain conditions, every long-run, perfectly competitive equilibrium yields a Pareto-optimal allocation of resources, and every Pareto-optimal allocation of resources is a long-run, perfectly competitive equilibrium. According to Blaug, the limitations of economics as an empirical science stem mainly from the fact that the theorems of welfare economics are forever spilling over from normative economics into the appraisal of evidence in positive economics. Economists tend to polarize into "planners" and "free marketeers" and are inclined to read empirical evidence in light of these attitudes.

The Theory of Consumer Behavior
The case studies that Blaug focuses on in this chapter were not randomly selected; each constitutes a satellite research program within a larger core program that is frequently called "Neoclassical economics". Though economists now prefer using the term "theorem" over the term "law", there seems to be a general consensus that the law of demand is indeed a law...the difficulty lies in determining whether it is a "deterministic law", a "statistical law", or a "causal law".
If the law of demand refers to the market conduct of a group of consumers of a homogeneous commodity, rather than to conduct at the individual level, it can probably be regarded as a deterministic law, i.e., an empirical regularity that simply admits of no exceptions. Since Marshall, however, it has been regarded as a statistical law of market behavior, having a probability of occurrence close to, but not equal to, unity. On the other hand, the law of demand may be construed as a causal law: one that explains human actions in terms of the reasons, desires, and beliefs of "rational" human agents, which form the causal mechanism leading from a decline in price to an increase in quantity demanded.
According to Blaug, the theory underlying the law of demand, the modern static theory of consumer behavior, is the nearest thing in economics to a completely axiomatized theory. This theory has a long and complex history that has frequently been told, proceeding from the introspective cardinalism of Jevons, Menger, Walras, and Marshall, to the introspective ordinalism of Slutsky, Allen, and Hicks, to the behaviorist ordinalism of Samuelson's revealed preference theory, to the behaviorist cardinalism of the Neumann-Morgenstern theory of expected utility, to Lancaster's theory of commodity characteristics, not to mention more recent stochastic theories of consumer behavior. Throughout, the purpose has been to justify the notion of a negatively sloped demand curve from fundamental and compelling axioms of individual behavior.
Neither individual nor market demand curves are directly observable entities; all that is observed at any time is a single point on the market demand curve for a commodity. We are thus driven to estimate the curves statistically, which is possible only where we can make strong assumptions about the conditions of supply in the relevant market. This identification problem left the early pioneers of demand theory with two options: to assert downward-sloping demand curves as a crude empirical generalization, as Cournot and Cassel did, or to deduce the law of demand from a set of primitive assumptions about economic behavior. They chose the latter.
It was Marshall who noted "Giffen's paradox", in which the positive income effect of a price change is so large in absolute terms as to cancel out the negative substitution effect of that change. He also dealt with a constant-real-income interpretation of demand curves, in which the prices of all closely related goods are varied inversely with the price of the good in question so as to "compensate" the consumer for any change in real income caused by the price change; such a curve is, by its very nature, negatively inclined.
According to Blaug, this curve is never observed, whereas we at least observe one point on the constant-money-income demand curve; the construction is therefore merely an evasion of the issue, since the income effect of a price change is as integral a part of consumer behavior as the substitution effect.
The Slutsky-Allen-Hicks decomposition of price responses into income and substitution effects, together with the invariably negative sign of the substitution effect, is important, but it is silent on the consumer's decisions to buy durable goods, to save, and to hold wealth in one form rather than another, and it cannot predict which particular perishable goods will be purchased.
Indifference theory, coming after a generation of ineffective criticism of marginal utility theory by institutionalists, reaffirmed the concept of economic man as possessed of what J. M. Clark called an "irrationally rational passion for dispassionate calculation", while taking inordinate pride in deriving all the classical results from an ordinal rather than a cardinal utility calculus.
The concept of "indifference", involving as it does pairwise comparisons between commodity bundles that are infinitesimally close to each other, is just as introspective and unobservable as the concept of cardinal comparisons between marginal utilities, and it is also of no help in telling us beforehand which demand curves have negative slopes. Since we cannot observe either the income or the substitution effect, we cannot measure the size of one and add it to the other in order to predict the total change in quantity demanded resulting from a change in price.
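Although the two effects cannot be observed separately in market data, the Slutsky decomposition itself is easy to illustrate numerically. The following is a minimal sketch assuming a Cobb-Douglas consumer; the functional form and all parameter values are illustrative assumptions, not drawn from the text.

```python
# Numerical Slutsky decomposition for an assumed Cobb-Douglas consumer.
# The functional form and parameter values are illustrative, not from the text.

def marshallian_x(p, m, a=0.3):
    """Demand for good x under Cobb-Douglas utility x^a * y^(1-a)."""
    return a * m / p

def slutsky_decomposition(p, m, dp, a=0.3):
    x0 = marshallian_x(p, m, a)
    # Total (observed) effect of the price change on quantity demanded
    total = marshallian_x(p + dp, m, a) - x0
    # Slutsky compensation: the extra income needed to afford the old bundle
    m_comp = m + x0 * dp
    substitution = marshallian_x(p + dp, m_comp, a) - x0
    income = total - substitution
    return total, substitution, income

total, sub, inc = slutsky_decomposition(p=2.0, m=100.0, dp=0.01)
# The substitution effect is negative, and for this normal good the income
# effect of a price rise is also negative, so quantity demanded falls.
```

By construction the two components sum to the total effect; the empirical difficulty noted in the text is that only the total is ever observed.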
The classic exposition of indifference theory is Hicks's Value and Capital, which appeared at about the same time as Samuelson's revealed preference theory (RPT), a theory containing fewer assumptions. RPT proposed to purge the theory of consumer behavior of the last vestiges of utility by restricting it to operational comparisons between value sums. This "fundamental theorem of consumption theory" has the advantage of inferring consumers' preferences from their revealed behavior, and not vice versa. Also, the income effect is measurable in principle, being the change in income, opposite in sign to the change in price, that is required to restore the original bundle of goods purchased.
Subsequent developments succeeded in axiomatizing RPT to the point where its assumptions and conclusions were so firmly connected that the established truth of one was sufficient to establish the truth of the other, showing how the logical distinction between "assumptions" and "implications" disappears in a perfectly axiomatized theory. What is called "rational" choice in utility theory translates into "preferring more to less", "consistency", and "transitivity" in RPT. But the predictive powers of RPT with respect to demand relationships are no better than those of the older theories of consumer behavior, and RPT remains a theory of the choices of a single consumer rather than of market behavior.
According to Brown and Deaton, there has been no attempt to build truly aggregate systems of demand relations. The assumption that demand functions are homogeneous of degree zero in prices and money incomes, a standard assumption in price theory, has in fact been rejected in some tests of complete systems of demand equations. The problem of how changes in the distribution of income affect average per capita consumption is the most important missing link in the construction of an adequate, empirically applicable theory of consumer demand.
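The homogeneity restriction being tested can be stated concretely: multiplying all prices and money income by the same positive factor should leave the quantities demanded unchanged ("no money illusion"). A minimal sketch, with an assumed Cobb-Douglas demand system whose budget shares are illustrative:

```python
# Homogeneity of degree zero: scaling all prices and money income by the
# same factor should leave demand unchanged ("no money illusion").
# The demand system below is an assumed Cobb-Douglas example.

def demand(p1, p2, m):
    """Marshallian demands with fixed budget shares 0.4 and 0.6."""
    return 0.4 * m / p1, 0.6 * m / p2

base = demand(2.0, 5.0, 100.0)
scaled = demand(4.0, 10.0, 200.0)  # every price and income doubled
# base and scaled coincide; the empirical tests cited in the text ask
# whether estimated demand systems actually satisfy this restriction.
```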
The pure theory of consumer behavior is not empirically refutable; the statistical law of demand is only derivable from that theory by the addition of an extra auxiliary assumption, asserting the likelihood that any positive income effect will be too small to offset the negative substitution effect of a price change.
According to Hicks, Giffen goods are rarely observed because positive stretches in demand curves tend to produce unstable equilibria, and most equilibria observed in the real world are patently stable. Lancaster's new approach to consumer behavior takes as its starting point the old idea that consumers value goods not for their own sake but for the services they render. He adds that these services are usefully conceived as objectively measurable characteristics, the same for all consumers, which are combined in fixed proportions to make up an individual good, these goods being in turn combined into a bundle of consumption "activities". Thus, consumers are seen as maximizing not a utility function but a transformation function, which depicts the utility derived by transforming a particular collection of goods into a particular collection of characteristics.
The major implication of Lancaster's theory is that consumers occupy corner equilibria in most dimensions of choice, jumping from corner to corner in response to price changes, so that continuous adjustment along an indifference curve is in fact never observed. The theory also throws new light on the "intrinsic" substitutability and complementarity between goods, on occupational choices, on asset holdings, and on the role of advertising in promoting the introduction of new goods. Lancaster conjectured that his theory creates new presumptions for the improbability of Giffen goods, that is, for the greater likelihood of negatively inclined demand curves.
According to Blaug, the question that must be answered for the viability of this new theory is: What are the refutable predictions about market behavior that are generated by the new theory, and are these in fact predictions of "novel facts" that are capable of discriminating between the old and new theory?
The Theory of the Firm
The function of the orthodox theory of the firm is to justify the notion of positively inclined supply curves. Cournot's orthodox theory of the single-product firm, which uses only output or price as a strategic variable in a static but highly competitive environment, has often been criticized for its central assumption that businessmen strive to maximize profit subject to technology and demand constraints.
The classic defense against these numerous critiques has been Machlup's argument that marginal analysis and the theory of the firm do not aim to provide a complete explanation of business behavior but rather to predict the effects of specific changes in market forces. The theory is simple, elegant, and internally consistent, and it produces qualitative predictions that are well corroborated. Blaug argues that the predictive ability of this theory is somewhat weak and gives the following example: though it predicts that a profit-maximizing firm in a perfectly competitive market will not advertise, because it faces a perfectly elastic demand curve, we see in fact that most firms do advertise. Similarly, the prediction that a rise in money wages will lead to a fall in the volume of employment has not been borne out by evidence on short-run employment functions, and it would suggest well-behaved Phillips curves, which have not been the case.
Another example is the prediction that a proportionate tax on business income, such as the corporate income tax, is not shifted by the firm to its customers in the short run, because the tax reduces the level of profit but not the volume of output at which profit is maximized; evidence shows otherwise. Baumol's constrained sales maximization theory and Williamson's managerial theory imply very different comparative static predictions from standard theory, yet few attempts have been made to compare the track records of these competing theories, the reason being that in appraising the conventional theory of the firm we must necessarily pass judgement on the power of neoclassical price theory, of which it forms an integral part.
Spiro Latsis argued that the theories of perfect, imperfect, and monopolistic competition may all be considered as forming part of the same neoclassical research program in business behavior, with one identifiable "hard core", one "protective belt", and one "positive heuristic". The hard core consists of 1) profit maximization, 2) perfect knowledge, 3) independence of decisions, and 4) perfect markets. These are supplemented by the following auxiliary assumptions: 1) product homogeneity, 2) large numbers, and 3) free entry and exit. The "positive heuristic" consists of a set of directives that reduce to one rule, derive the comparative static properties of theories: 1) divide markets into buyers and sellers; 2) specify the market structure; 3) create "ideal type" definitions of the behavioral assumptions; 4) set out the ceteris paribus conditions; 5) translate the situation into a mathematical extremum problem and examine the first- and second-order conditions. According to Latsis, the businessman with a well-behaved profit function in a perfectly competitive market and perfect information is left, according to neoclassical theory, with only the decision to produce a unique level of output or else go out of business, and the predictions would be the same if we moved from the assumption of profit maximization to that of bankruptcy avoidance.
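The fifth directive, translating the firm's situation into a mathematical extremum problem, can be sketched in a few lines. The quadratic cost function and the market price below are illustrative assumptions, not taken from the text.

```python
# Directive 5: translate the firm's situation into an extremum problem.
# Assumed quadratic cost C(q) = 10 + 2q + 0.5q^2 and market price p = 10;
# all numbers are illustrative.

def profit(q, p=10.0):
    cost = 10.0 + 2.0 * q + 0.5 * q ** 2
    return p * q - cost

# First-order condition: price = marginal cost, 10 = 2 + q, hence q* = 8.
# Second-order condition: C''(q) = 1 > 0, so q* is a maximum, not a minimum.
q_star = 8.0
max_profit = profit(q_star)  # 10*8 - (10 + 16 + 32) = 22
```

The comparative statics the heuristic asks for then follow from re-solving the first-order condition as the parameters (here the price) change.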
The purpose of this theory is to answer two questions: 1) Why do commodities exchange at given prices? 2) What are the effects of changes in parameters? Blaug concludes that Latsis's characterization of the neoclassical theory of the firm as "degenerating" is actually based on an examination of the theory's assumptions rather than its testable implications. Empirical evidence has provided robust conclusions even in situations, such as monopoly and oligopoly, where the auxiliary assumptions are violated.
According to Blaug, though the myriad critiques of the assumption of maximization under certainty make sense, there has been no other theory capable of making general pronouncements on the decision-making process, nor should its predictive ability be overlooked.

General Equilibrium Theory
Leon Walras, in 1874, first suggested that the maximizing behavior of consumers and producers will, under certain conditions, result in an equilibrium in every product and factor market of the economy. Because of the widespread occurrence of economies of scale and externalities, we cannot infer from the fact that the economy exhibits disequilibrium in some markets that theories such as utility and profit maximization are false; GE theory is therefore inapplicable, not false.
Arrow-Debreu proofs of the existence of GE depend critically on two assumptions: that consumption and production sets are convex, and that every economic agent owns some resources valued by other agents. The global stability of such an equilibrium depends on some dynamic process guaranteeing that every economic agent has knowledge of the level of aggregate demand and that no final transactions are carried out except at equilibrium prices. These assumptions may be relaxed somewhat to accommodate increasing returns to scale in a minority of industries and a measure of monopolistic competition in all industries, though the existence of oligopoly and externalities destroys all GE solutions. Since GE theory has no empirical content, its continued refinement has been defended on the following grounds: 1) the necessary and sufficient conditions required to attain GE can throw light on the way in which equilibrium is attained in the real world, and 2) it facilitates the decisive refutation of commonly held but invalid arguments. This last defense assumes that GE theory and the invisible hand theorem are identical.
According to Blaug, GE theory is strong on equilibrium and very weak on how it comes about, whereas the Smith-Marshall analysis is weak on equilibrium and very strong on how it comes about; it is more a study of competitive processes than of the end results of competitive equilibrium, which amounts to an evaluative claim about the nature of perfect competition. According to Hahn, GE contains no presumption that a sequence of actual economic states will terminate in an equilibrium state, but only that no plausible sequence of economic states will terminate, if at all, in a state which is not an equilibrium.
Blaug believes that the widespread conviction that every economic theory must be fitted into the GE mold if it is to qualify as rigorous science has perhaps been more responsible than any other intellectual force for the purely abstract and nonempirical character of so much modern economic reasoning.

Marginal Productivity Theory
This theory derives the input demand functions from the production function as inverse forms of the marginal productivity equations. If factor and product markets are competitive, firms will hire workers, machines, and space until wage rates, machine rentals, and land rentals are equal to their respective marginal value or marginal revenue products. For the firm, factor prices "determine" marginal products, and not vice versa (assuming factor supplies are given).
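The competitive hiring rule can be sketched numerically: the firm employs labour until the wage equals the marginal value product, and inverting that condition yields the input demand function. The Cobb-Douglas technology and all parameter values below are illustrative assumptions, not from the text.

```python
# Competitive hiring rule: employ labour until the wage equals the marginal
# value product. Assumed Cobb-Douglas technology F(K, L) = K^0.3 * L^0.7
# with output price p = 1; parameters are illustrative.

def marginal_product_of_labour(K, L, alpha=0.3):
    return (1.0 - alpha) * K ** alpha * L ** (-alpha)

def labour_demand(w, K, p=1.0, alpha=0.3):
    """Invert w = p * MPL(K, L) to obtain the firm's demand for labour."""
    return (p * (1.0 - alpha) * K ** alpha / w) ** (1.0 / alpha)

L_star = labour_demand(w=0.7, K=1.0)
# With K = 1, MPL = 0.7 * L^(-0.3); setting it equal to w = 0.7 gives L* = 1.
```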
According to Robertson, factor prices "measure" the marginal products, and what "determines" factor prices is not so much the first derivatives of the production function as the maximizing behavior of producers. The notion that the functional distribution of income may be explained simply by invoking the principles of marginal productivity, as enshrined in an aggregate production function of the simple Cobb-Douglas variety, was first broached in Hicks's Theory of Wages.
After Solow's article in 1957, the estimation of aggregate production functions for the purpose of measuring the sources of growth and drawing inferences about the nature of technological change became widespread practice in economics, ignoring the conceptual problems of an "aggregate" production function. This "measurement without theory" rested on the simpliste marginal productivity theory: one or two outputs; two inputs; a twice-differentiable production function with constant returns to scale; malleable, homogeneous capital; a monotonic relationship between the capital-labor ratio and the rate of return on capital; disembodied technical progress classified as neutral or factor-saving; perfect competition; instantaneous adjustment; and costless information.
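Solow's growth-accounting procedure can be sketched in a few lines: output growth is split into share-weighted input growth plus a residual attributed to technical change. The growth rates and the capital share below are illustrative assumptions, not Solow's actual estimates.

```python
# Solow-style growth accounting: output growth is split into share-weighted
# input growth plus a residual attributed to "technical change".
# The growth rates and capital share below are illustrative assumptions.

def solow_residual(gY, gK, gL, capital_share=0.3):
    """Residual = output growth minus share-weighted growth of K and L."""
    return gY - capital_share * gK - (1.0 - capital_share) * gL

# Output grows 4% a year, capital 5%, labour 1%:
residual = solow_residual(0.04, 0.05, 0.01)
# 0.04 - 0.3*0.05 - 0.7*0.01 = 0.018: 1.8 points of growth are "unexplained".
```

The criticism in the text is that everything in this calculation, shares, aggregates, and the residual's interpretation, presupposes the simpliste aggregate production function.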
Thus, dramatic conclusions about the past are derived from the global measurement of a few well-selected macroeconomic variables. Radical critics of orthodox economics criticize the theory of distribution as a theory of distributive shares, whereas for orthodox economists it is a theory of factor pricing. On that view, the theory does not preclude the belief that the class struggle has a lot to do with the determination of distributive shares or of the rates of wages and profit.
According to Hicks, "neutral" technical change leads to an unchanged capital-labor ratio at constant relative factor prices; according to Harrod, it leads instead to a constant capital-output ratio at a given rate of interest; both agree that it would leave the relative shares of wages and profits unaffected. Measurement with aggregate data usually confirmed an elasticity of substitution of unity, but at the industry level it proved necessary to fit production functions with nonunitary elasticities of substitution, such as the CES (constant elasticity of substitution) production function.
According to Johnson, "the elasticity of substitution, as employed in distributive theory, is a tautology, in the same way as the Marshallian concept of the elasticity of demand is a tautology"; the problem is measurement, not statements about the implications of hypothetical measurements. Because the aggregate production function is only tenuously related to microeconomic behavior, interpretation of the outcome is not possible.
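The CES form referred to above can be written down directly: a single exponent pins down the elasticity of substitution, with the Cobb-Douglas unitary case as a limit. Parameter values in this sketch are illustrative assumptions.

```python
# CES production function Q = A * (d*K^(-rho) + (1-d)*L^(-rho))^(-1/rho).
# The exponent rho pins down the elasticity of substitution
# sigma = 1/(1 + rho); parameter values here are illustrative.

def ces_output(K, L, rho, delta=0.5, A=1.0):
    return A * (delta * K ** (-rho) + (1.0 - delta) * L ** (-rho)) ** (-1.0 / rho)

def sigma(rho):
    """Elasticity of substitution implied by the CES exponent rho."""
    return 1.0 / (1.0 + rho)

# sigma(1.0)  = 0.5: inputs are poor substitutes
# sigma(-0.5) = 2.0: inputs are good substitutes
# As rho -> 0, sigma -> 1 and the CES form approaches Cobb-Douglas.
```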
Hicks's theory of induced innovation was for a while used to explain technical change endogenously, as a process whereby firms "learn" to extrapolate past trends in the factor-saving bias of technology; but, for lack of a microfoundation, this line of work has petered out.
According to Hahn, the neoclassical theory of distribution has nothing to offer in answer to the question of why the shares of wages and profits are what they are; this question is prompted by our interest in the distribution of income between social classes, and social class is not an explanatory variable in neoclassical theory. The marginal productivity theory of factor pricing is a modest and highly abstract theory because of its general grouping of workers. It is therefore not of much use in accounting for the observed pattern of relative wages, though it has been successful in predicting extremely long-run changes in interindustry and interoccupational wage differentials.
According to Blaug, this theory has suffered more than most from its failure to specify its range of application to concrete problems; it is a perfectly general thesis without specific content.

Switching, Reswitching, and All That
In the 1950s, Joan Robinson and other Cambridge economists attacked the marginal productivity theory of distribution and the Hicksian two-inputs-one-output theory of factor pricing by arguing that the stock of capital in an economy, being a collection of heterogeneous machines rather than a homogeneous fund of purchasing power, cannot be valued in its own technical units, although "labor" and "land" can be so measured. The valuation of capital presupposes a particular rate of interest, which means that the rate of interest cannot be determined by the marginal product of capital without reasoning in a circle; hence, marginal productivity theory cannot explain how the rate of interest is determined.
This critique falls away if the simpliste formulation of marginal productivity theory is replaced by the disaggregated Walrasian version, which invokes neither the concept of an aggregate production function nor an aggregate capital stock as a variable. The problem of finding a natural unit in which to measure capital does not arise when measuring marginal variations around an equilibrium position, where different capital goods can be aggregated into a fund of purchasing power. The other Cambridge critique is that it is not possible to demonstrate that a fall in the interest rate will always alter the ranking of the most profitable of all currently available techniques in a unidirectional manner, so as to increase the overall capital intensity of an economy, because of the phenomenon of "reswitching".
If there is no strict monotonic relationship between a change in the rate of interest and the capital-labor ratio, we cannot explain the rate of interest in terms of the relative scarcity of capital in an economy, which is the essence of the marginal productivity theory of interest; in other words, the demand for capital is not an inverse function of the interest rate.
Samuelson illustrated reswitching with an example involving two processes that require the same length of time to make a given product with unequal amounts of labour, but without any machines; the process using less labour will not necessarily be the more profitable one at all rates of interest. If its labor is applied at an earlier date in the production cycle, it is the more expensive of the two processes at high rates of interest, because its wage bill accumulates faster at compound interest, and vice versa. This is due to the compound-interest effect of changes in the interest rate on the comparative costs of labour inputs applied at different dates in technical processes of identical length producing the same good. The result of reswitching can thus be higher instead of lower capital-labour ratios as the rate of interest rises.
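Samuelson's arithmetic can be reproduced directly. The labour quantities below follow his well-known 1966 numerical example (7 units of labour applied two periods before output, versus 2 units three periods before plus 6 units one period before), with the cost of each technique equal to its wage bill compounded forward to the date of output at interest rate r and the wage set to one:

```python
# Samuelson's reswitching arithmetic: two techniques of equal total length
# produce one unit of output using only dated labour; the unit cost of each
# is its wage bill compounded forward at interest rate r (wage set to 1).

def cost_A(r):
    return 7.0 * (1.0 + r) ** 2                    # 7 labour, 2 periods early

def cost_B(r):
    return 2.0 * (1.0 + r) ** 3 + 6.0 * (1.0 + r)  # 2 labour 3 periods early,
                                                   # 6 labour 1 period early

# r = 0.00: A = 7.00,  B = 8.00  -> A is cheaper
# r = 0.75: A = 21.44, B = 21.22 -> B is cheaper
# r = 1.50: A = 43.75, B = 46.25 -> A is cheaper again: "reswitching"
# The two switch points are r = 0.5 and r = 1.0.
```

Technique A is the cheaper choice at both low and high interest rates, with B chosen only in between, which is exactly the nonmonotonic ranking that undermines a well-behaved demand curve for capital.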
Robinson and the others have yet to test empirically the significance of reswitching, arguing instead that to rule out reswitching one must assume that 1) all inputs in the economy enter into the production of the capital good, and 2) the capital good is itself produced by a smooth neoclassical production function with variable coefficients, both of which they find highly unlikely. It has been shown that the empirical significance of switching depends on 1) whether the rate of interest falls below a critical level, and 2) whether product prices decline as firms readopt previously used techniques.
The measurement of the likelihood of reswitching rests on the measurement of the degree of input substitutability in an economy. The Cambridge model involves linear Leontief technologies, with each product in each sector produced by only one fixed-coefficient technique; even so, some degree of input substitution is reintroduced by the pattern of final demand, including the demand of overseas buyers. This leads Blaug to agree with Ferguson that the neoclassical version is the most realistic version at this time.
Robinson and her followers believe that reswitching is the norm, but such claims are propositions about alternative equilibrium states and thus can never be observed in the real world, even in principle.
This claim succeeds in rendering the whole of neoclassical research impervious to empirical refutation and exchanges the methodology of falsificationism for the methodology of essentialism. Though it may be conceded that reswitching and capital reversing are possible, until they are shown to be empirically important rather than just logically possible, economists are ill-advised to jettison price theory, labour economics, and development economics merely because the models in them contain some indigestible anomalies.

The Heckscher-Ohlin Theory of International Trade
Ricardo believed in the importance of foreign trade because of the relative immobility of capital across national frontiers. The commodity composition of world trade was attributed to persistent differences in the productivity of labour between nations (assuming relative commodity prices vary proportionately with relative labour costs); thus, free trade will cause each country to export those goods in which it possesses a comparative price advantage, and such trade will result in mutual gain.
In the Heckscher-Ohlin theory (HOT), these productivity differences are traced back to intercountry differences in initial factor endowments. According to HOT, a country exports those goods whose production is intensive in the country's relatively abundant factor and imports goods that use intensively the country's relatively scarce factor, thus explaining foreign trade in terms of supply conditions. Inasmuch as international trade is a substitute for international factor movements, Heckscher and Ohlin felt that free trade would work to equalize factor scarcities and hence factor prices around the world, though Ohlin gave reasons why this might not happen.
According to Samuelson's factor-price equalization theorem (FPET), under classical assumptions free trade will bring about complete equalization of factor prices.