In assessing what is wrong with the performance of countries, the system should not be confused with its parts

Is the Union government on track to bring about “good days”? Has it done enough in its first 100 days? Is the competitiveness of the Indian economy improving? Is our economic progress sustainable? Are we creating enough jobs? Are we on track to achieve our goal of skilling 500 million Indians? These are very important questions on many people’s minds. They cannot be answered with the prevalent methods of measuring progress.
If we aim to train 100 million people in five years, our instinct is to check whether around 40 million have been trained in the first two years to gauge whether we are on track to the target. We deride “hockey-stick” projections in which the large outcomes of a programme arrive only at the end. Yet whenever a system must be created to enable large results, the first years have to be devoted to building that system, during which time there may be no fruits at all. If a tree is expected to produce 1,200 apples in a year, it will not produce 100 apples every month. The apples will come later, in season, and only if and when the tree is capable of producing them. Therefore, to attain big goals, we should begin by measuring the progress being made in strengthening the roots, not by counting the apples.
You can only improve what you measure, scientists and managers say. Therefore, when systems and institutions must be improved, measurement must focus on the condition of those systems and institutions. By definition, systems have many components. A serious conceptual error in measuring the condition of a system is to measure the conditions of its individual components and add these measurements into one number to indicate the condition of the whole system.
This is how the competitiveness of countries, the state of innovation in their economies, and the ease of doing business in them are rated. It is very difficult to assign different weights to the components unless one understands very well the precise role each plays in the performance of the whole system. So an intellectually lazy shortcut is taken: all components are given equal weights in arriving at the overall measure. For example, components of an innovation system, such as expenditure on R&D, the availability of funds for start-ups, and the state of higher education, are individually rated with marks for each factor. Countries are then ranked by the total marks they get.
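A minimal sketch, with invented component scores for two hypothetical countries, shows how this equal-weight method works in practice, and how it can rank very different profiles identically:

```python
# Hypothetical component scores out of 10 (invented for illustration).
# Country A has a lopsided innovation system; Country B a balanced one.
scores = {
    "Country A": {"r_and_d_spend": 9, "startup_funding": 2, "higher_education": 7},
    "Country B": {"r_and_d_spend": 6, "startup_funding": 6, "higher_education": 6},
}

def composite(components):
    # Equal weights: every component counts the same, regardless of
    # the role it actually plays in the whole system.
    return sum(components.values()) / len(components)

# Rank countries by their composite score, highest first.
ranking = sorted(scores, key=lambda c: composite(scores[c]), reverse=True)
for country in ranking:
    print(country, composite(scores[country]))
```

Both countries score 6.0, so the index cannot tell them apart, even though their innovation systems would behave very differently.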
This conceptually flawed method of measurement can lead to fatally dangerous conclusions. For example, to gauge the health of our bodies, we should know how all the vital sub-systems are functioning. Compare two people: one with a very strong liver and a very weak heart, and another with an adequately functioning heart and an adequately functioning liver. Whose life is in greater danger? If the conditions of the heart and liver were added into one number, both totals could be the same, whereas in fact one person is clearly in greater danger. This way of measuring the conditions of systems, which is the usual way in which the conditions of countries are measured and compared, can result in complacency or in wrong prescriptions for improvement.
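The heart-and-liver comparison can be made concrete with invented vital-sign scores (0 = failed, 10 = perfect). The additive measure hides the danger; a weakest-link measure, one plausible systems-oriented alternative, reveals it:

```python
# Invented vital-sign scores for the two people in the example.
person_1 = {"heart": 2, "liver": 10}   # very weak heart, very strong liver
person_2 = {"heart": 6, "liver": 6}    # both sub-systems adequate

def added_score(vitals):
    # The flawed method: add component conditions into one number.
    return sum(vitals.values())

def weakest_link(vitals):
    # A systems view: overall health is bounded by the weakest vital sub-system.
    return min(vitals.values())

assert added_score(person_1) == added_score(person_2) == 12  # indistinguishable
assert weakest_link(person_1) < weakest_link(person_2)       # danger revealed
```

The additive score rates both people identically at 12, while the weakest-link score correctly flags the person with the failing heart.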
The systemic relationships between the causal factors must be understood. Which components of the system affect which others, and how? Components may be individually strong. For example, a firm may spend a lot on R&D and may also have a large manufacturing capability. But if the interactions between R&D and manufacturing are not productive, the company’s innovation capability will be weak. So too with countries whose good R&D labs are disconnected from manufacturers. To gauge the health of a system, the quality of the connections between its parts must be assessed. Very often the right prescription for improving the capability of a system is to strengthen the connections rather than to strengthen the parts further.
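One way to picture this is a toy model (the formula is an assumption for illustration, not a real metric): a firm's effective innovation capability depends on its parts and on the hand-off between them, all scored 0.0 to 1.0:

```python
# Toy model, assumed for illustration: effective capability is limited by
# the weaker part, scaled by the quality of the connection between parts.
def capability(r_and_d, manufacturing, connection):
    # Strong parts cannot compensate for a weak hand-off between them.
    return min(r_and_d, manufacturing) * connection

# Excellent R&D and manufacturing, but a poor connection between them.
strong_parts_weak_link = capability(0.9, 0.9, 0.2)

# Merely adequate parts, but a strong, productive connection.
modest_parts_strong_link = capability(0.6, 0.6, 0.9)

assert modest_parts_strong_link > strong_parts_weak_link
```

Under these assumed numbers, the firm with modest parts and a strong connection (0.54) outperforms the one with strong parts and a weak link (0.18), which is the article's prescription in miniature.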
The absurdity of merely adding up the values of components to assess the health of a complex system is seen in the most widely used measure of the health and progress of countries: gross domestic product (GDP). GDP measures the quantum of economic activity, whatever its purpose may be. Thus an environmental disaster such as the BP oil spill, which generated a huge amount of clean-up and legal activity, increased the US’s GDP. Was the oil spill therefore desirable? Some economic activities should be subtractions: not all can be additions to a measure of the health of the whole system.
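The point can be sketched with invented figures: GDP adds the value of every activity, so disaster clean-up raises the total, whereas a health-oriented measure might subtract remedial activity instead:

```python
# Invented figures for a toy economy (in arbitrary units).
activities = [
    ("manufacturing", 500),
    ("education", 200),
    ("oil_spill_cleanup", 100),     # triggered by an environmental disaster
    ("litigation_over_spill", 50),  # likewise remedial, not productive
]

# GDP-style accounting: every activity is an addition.
gdp = sum(value for _, value in activities)

# A hypothetical alternative: subtract activity that only repairs damage.
remedial = {"oil_spill_cleanup", "litigation_over_spill"}
net_wellbeing = sum(-v if name in remedial else v for name, v in activities)

print(gdp, net_wellbeing)
```

With these numbers GDP rises to 850 because of the spill, while the subtractive measure gives 550, making the disaster visible as a loss rather than a gain.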
The concept of “balanced score-cards”, gaining currency in the private sector and now sneaking into measurements of countries’ economies as measures of environmental sustainability and social inclusion are included, is an improvement on the predominant economic paradigm that has governed our lives over the past fifty years. However, by treating economic, social, and environmental forces as separate parts of a system, albeit all important ones, such score-cards can suffer from the same fatal conceptual flaws: not only giving the parts equal weights, but also treating them as independent of each other.
It is imperative that we change our measures of progress and our simplistic measurements of the conditions of our institutions and our lives. They do not reflect the fundamentals we must understand and manage. We are therefore mis-measuring and mis-reporting, with great accuracy, our progress towards futures we may not want.