by Alex M. Thomas
‘Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed.’ – Albert Einstein
1776. Adam Smith formulates a system of economic theory capable of explaining the levels and movements of commodity prices, real wages and gross output. Smith’s system has much in common with those of his predecessors, especially Francois Quesnay. David Ricardo and Karl Marx successively refine Smith’s system, in particular by developing a coherent theory of distribution. Thomas Malthus and Marx, in opposition to Ricardo, point out the possibility of aggregate demand insufficiency, which can cause an economic crisis. All these authors provide us with an explanation of select economic variables – prices, real wages, the rate of profit, unemployment and output levels.
Like any other theory, theirs makes certain assumptions. One assumption is that capital and labour move freely across sectors. This engenders a tendency for profit rates to become uniform across sectors: a higher profit rate in one sector attracts capital and labour from other sectors, and this process stops when profit rates are uniform. Of course, in the real world there are several obstacles to such free mobility; they could be legal or cultural in nature.
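This gravitation of profit rates can be made concrete with a toy simulation. The sketch below is purely illustrative and not part of the essay’s argument: the two-sector setup, the fixed profit pools and the adjustment speed are all assumptions of my own, chosen only to show the mechanism of capital chasing the higher rate until rates equalize.

```python
def gravitate(profit_a, profit_b, cap_a, cap_b,
              speed=0.05, tol=1e-9, max_iter=100_000):
    """Toy model: capital migrates toward the higher-profit sector
    until the two profit rates (profit / capital) are equal.

    profit_a, profit_b -- fixed profit pools of the two sectors (assumed)
    cap_a, cap_b       -- initial capital stocks (assumed)
    speed              -- fraction governing how fast capital moves (assumed)
    """
    for _ in range(max_iter):
        r_a, r_b = profit_a / cap_a, profit_b / cap_b
        if abs(r_a - r_b) < tol:
            break
        # Capital flows from the low-profit sector to the high-profit one;
        # total capital is conserved.
        flow = speed * (r_a - r_b) * (cap_a + cap_b)
        cap_a += flow
        cap_b -= flow
    return r_a, r_b

# Sector A starts twice as profitable as sector B;
# migration erodes the gap until both earn the same rate.
r_a, r_b = gravitate(20.0, 10.0, 100.0, 100.0)
```

With these hypothetical numbers both rates converge to 0.15, since total capital (200) ends up split in proportion to the profit pools (20 : 10).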
2013. Research in economics attains greater credibility when theory is tested against data through econometric analyses. Within data analysis there is a further hierarchy, with qualitative data analysis placed on a lower footing than quantitative data analysis. This essay raises two epistemological questions in this context and ends with some suggestions, not all of them novel.
(1) Is quantitative data more credible than qualitative data?
(2) How reliable are macroeconomic data in settling theoretical disputes?
The Stanford Encyclopedia of Philosophy defines epistemology as ‘the study of knowledge and justified belief’ and the Oxford English Dictionary as ‘the theory or science of the method or grounds of knowledge’. What makes one method of enquiry more reliable than another? Can macroeconomic data justify our belief in a theory?
An assessment of the quality of data requires us to know the process by which the data is collected. In economics, quantitative data is usually collected by governments as part of their routine administration, and also specifically for the purpose of providing public services (Aadhaar, in India, is a case in point). Public limited companies are obliged by law to make their financial statements public, for the benefit of existing shareholders and possibly for potential ones too. Researchers employed in universities and research institutes, both private and public, also develop questionnaires and collect information from respondents. International organizations, such as the UN, ILO, WHO and World Bank, also collate and classify socio-economic data. The commonly used technique for gathering data is a sample survey. Detailed interviews also provide socio-economic information of a qualitative nature. Field observations are another valuable source of information. Photographs capture different facets of economic life, especially the stark inequalities of income and wealth.
There will always be errors caused by factors other than those associated with the selection of the sample. Duplication, non-response, incorrect recording of responses and improper data classification are some such errors. The greater the errors, the lower the reliability of the data. Quantitative data is also collected and tabulated by humans with the aid of computers. So is qualitative data. In fact, one complements the other. Errors are present in both kinds of data; nothing can be said a priori about the degree of error. Specialization in the social sciences has unfortunately led to the favouring of, for instance, quantitative data analysis over other forms of analysis in economics. But it is not epistemologically clear why one kind of data should be considered more credible than the other.
In the theoretical world (known as classical economics) developed by the economists mentioned in section 1, real wages are determined by wider social and political forces. That is, they are exogenously determined. Collective bargaining and employers’ power play an important role in the determination of wages. Contrast this with how wages are determined in mainstream marginalist (more popularly known as neoclassical) economics. The demand and supply schedules/curves for labour together determine both wages and employment. That is, wages are endogenously determined. In classical economics, there is no tendency for labour to be fully employed. But in marginalist economics, labour tends to be fully employed: wages move such that any labour unemployment is removed, and marginalist economists explain the presence of unemployment by appealing to ‘rigidities’. How can such different theoretical explanations for wage determination, arising from conceptually distinct frameworks, be tested with wage and employment data? Incommensurability is one issue. The other issue lies with the inherently problematic logic of marginalist economics, which can be used to explain contrasting sets of data. This is, of course, dangerous.
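The contrast between the two wage theories can be illustrated with a stylized numerical sketch. Everything below is hypothetical: the linear labour demand and supply schedules, the bargained wage, and the output and productivity figures are assumptions chosen only to show the two mechanisms side by side, not estimates of any real economy or a full statement of either theory.

```python
# Marginalist apparatus: hypothetical linear labour schedules.
def labour_demand(w):
    """Firms hire fewer workers at higher wages (assumed coefficients)."""
    return 100 - 20 * w

def labour_supply(w):
    """More workers offer labour at higher wages (assumed coefficients)."""
    return 40 + 10 * w

# Marginalist story: the wage adjusts until demand equals supply,
# so unemployment is eliminated. Solve 100 - 20w = 40 + 10w.
w_star = (100 - 40) / (20 + 10)        # endogenous wage: 2.0
employment_m = labour_demand(w_star)   # 60, equal to supply: full employment

# Classical story: the wage is fixed exogenously by bargaining power,
# and employment follows from output and technique, not from a wage-clearing
# mechanism. The residual labour can remain unemployed indefinitely.
w_bar = 2.5                            # bargained wage (assumed)
output, productivity = 500, 10.0       # assumed output and output per worker
employed_cl = output / productivity    # 50 workers needed
unemployed = labour_supply(w_bar) - employed_cl   # 65 - 50 = 15, persistent
```

The point of the sketch is the epistemological one made above: the same wage and employment observations (say, a wage of 2.5 with 15 workers unemployed) would be read by the marginalist as evidence of a ‘rigidity’ holding the wage above 2.0, and by the classical economist as a normal outcome of bargaining and demand-determined output. The data alone do not adjudicate between the frameworks.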
We will have a better understanding of the economy if quantitative data is supplemented with rich qualitative data. The credibility of an analysis depends on the strength of its data sources, the explanation provided and the overall argument. It is scientifically regressive to pass judgment prior to the analysis. In other words, the credibility of a particular analysis can only be ascertained ex post. Finally, it is of utmost importance that the underlying theory is logically sound and makes reasonable assumptions about the economy.
About the author
Alex M. Thomas is a PhD candidate at the School of Economics, University of Sydney, Australia. He is mainly interested in classical economics and history of economic thought. He blogs at the Undergraduate Economist.