During the pandemic, a frequently reiterated claim was that the economic contraction was not dramatically milder in areas that adopted less stringent policies but reported higher mortality. This was taken as evidence that the trade-off of slowing business activity in order to slow contagion and avoid overwhelming hospitals was not too costly. After all, countries like Sweden (which had less stringent policies) did not fare noticeably better economically than Denmark, Norway, and Finland, all of which adopted more stringent policies.
But this may be a false perception, resulting from how the pandemic affected the data used to convert nominal monetary amounts (such as personal consumption, personal income, and business expenditures) into real, inflation-adjusted amounts.
In a recent paper published in the Canadian Journal of Economics, Erwin Diewert, the father of much of the theory underlying the modern consumer price indexes (CPI) that statistical agencies produce, and Kevin Fox pointed out that many lockdowns created a “disappearing product” bias. The bias originates in how a CPI is built: prices must be weighted by the importance of each good in total household expenditures. These weights form a fixed basket that statistical agencies regularly update using expenditure surveys.
During the pandemic, many goods and services became unavailable or simply could not be consumed. The weights thus lost some of their validity, and inflation measures during the pandemic were biased. Early on, many economists found multiple reasons to believe that the deflation recorded in the first year of the pandemic was far less severe than official data suggested.
Government agencies attempted some fixes, but as Diewert and Fox show, those fixes fell short. The “disappearing product” problem came paired with a “new products” bias: numerous goods that were unknown to consumers or consumed only sparingly (and thus went unmeasured) became major consumption items. Overall, Diewert and Fox argue that the deflation during the lockdowns was overstated.
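To see how a fixed basket can go wrong, consider a minimal numerical sketch of a fixed-weight index. The categories, weights, and price changes below are entirely invented for illustration; this shows the general mechanism, not the specific corrections that Diewert and Fox analyze.

```python
# Minimal sketch of a fixed-weight (Laspeyres-style) price index and the
# "disappearing product" problem. All categories, weights, and price changes
# are invented for illustration; this is not Diewert and Fox's method.

# Pre-pandemic expenditure weights from household surveys (they sum to 1.0).
survey_weights = {"restaurants": 0.30, "air_travel": 0.20, "groceries": 0.50}

# Price relatives: current-period price divided by base-period price.
price_relatives = {"restaurants": 0.95, "air_travel": 0.70, "groceries": 1.08}

# Official-style index: the old weights are kept even though restaurant meals
# and air travel were largely unavailable during the lockdown.
fixed_weight_index = sum(survey_weights[g] * price_relatives[g] for g in survey_weights)

# What households could actually buy: spending collapsed onto groceries, so the
# effective weights looked very different from the survey weights.
lockdown_weights = {"restaurants": 0.05, "air_travel": 0.00, "groceries": 0.95}
effective_index = sum(lockdown_weights[g] * price_relatives[g] for g in lockdown_weights)

print(f"Fixed-weight index: {fixed_weight_index:.3f}")  # below 1 -> measured deflation
print(f"Effective index:    {effective_index:.3f}")     # above 1 -> prices actually paid rose
```

In this toy case, the falling prices of goods nobody could buy drag the official-style index down, while the prices households actually faced went up.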
Why would this matter? Because we use these price indexes to convert “nominal” (i.e., current dollars, not adjusted for inflation) variables into “real” (i.e., constant dollars, adjusted for inflation) variables!
Hypothetically, if nominal consumption falls by 20 percent during a lockdown while the measured price index also falls by 20 percent, “real” consumption is unchanged. However, if the true price index (i.e., one that accounts for the problems raised by Diewert and Fox) fell only 10 percent, then inflation-adjusted consumption actually fell, by roughly 11 percent (0.80/0.90 ≈ 0.89).
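To make the arithmetic explicit, here is the same hypothetical written out in a short script. The 20 percent and 10 percent figures are the illustrative numbers above, not estimates taken from the paper.

```python
# Deflating a nominal change by a price index: real = nominal / prices.

def real_change(nominal_change: float, price_change: float) -> float:
    """Percentage change in real terms, given nominal and price-index changes."""
    return (1 + nominal_change) / (1 + price_change) - 1

nominal = -0.20            # nominal consumption falls 20 percent
measured_deflator = -0.20  # measured price index also falls 20 percent
true_deflator = -0.10      # "true" index falls only 10 percent

print(f"Real change with measured index: {real_change(nominal, measured_deflator):+.1%}")  # +0.0%
print(f"Real change with true index:     {real_change(nominal, true_deflator):+.1%}")      # about -11%
```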
This means that had we used the true price index, we would have found a larger contraction during lockdowns, and thus that we understated their cost. Moreover, one logical implication of Diewert and Fox's work (though they do not address this particular point) is that areas with stricter policies probably overstated deflation by more. As such, they are also the areas whose official statistics most understate the economic contraction during lockdowns. In turn, this means that we misunderstood the true costs of lockdown policy.
Does that mean that lockdowns were not worth it? Some of my colleagues would likely argue that this only confirms their viewpoint that, no, they were not. Others might argue that it does not change their opinion of the desirability of lockdown policies.
I do not wish to draw lessons like that. The lesson I draw is that there is much we do not know and can only discover with hindsight. In work like that of Diewert and Fox, I see the need for humility among policymakers and policy advisors (i.e., the economists and social scientists who seek to advise them). If so much about the consequences of a policy can only be known with hindsight, there is a need for modesty when designing it. It may be a boring policy conclusion, but not rushing headfirst into things whose consequences we cannot properly measure until after they are done seems like a reasonable one.
This article was originally published in AIER and is republished here with permission.