Tuesday 16 February 2016

And more about that Mexican study

In January, I'd noted a couple of problems with the latest BMJ study on soda taxes.

To recap, they found that poor cohorts in Mexico substantially reduced their soda consumption on the implementation of a peso-per-litre SSB tax. In the public health press, that tax has been characterised as a 10% tax, and that's made it easy for those who want to see big effects to read the results as showing strong price elasticities in poorer communities.

I'd noted that if Mexico has discount brands similar to New Zealand's, the peso-per-litre tax on a cheap off-brand product would be far more than a 10% tax, so the implied elasticities would be overestimated. I'd also noted that a peso is just under 2% of the daily minimum salary reported in the paper's sample, and that if we wanted a tax here with comparable effects on affordability, it would need to be more like $2 per litre. Normally we'd just go with elasticities at means, but here the main effects were really concentrated in the poorest households. I do not know what the price of off-brand soda is in Mexico, though.
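
Just to make that arithmetic concrete, here's a toy calculation. The prices below are placeholders I've invented, not actual Mexican shelf prices; the only point is that a fixed per-litre excise is a much bigger proportional hit on a cheap product than on a dear one.

```python
# Toy calculation: the same peso-per-litre excise as a share of price.
# These prices are invented placeholders, NOT actual Mexican retail prices.
peso_per_litre_tax = 1.0

illustrative_prices = {             # pesos per litre, assumed for illustration
    "name-brand soda": 10.0,        # here the excise works out to about 10%
    "discount off-brand soda": 5.0, # the same excise is now about 20%
}

for product, price in illustrative_prices.items():
    print(f"{product}: tax is {peso_per_litre_tax / price:.0%} of the pre-tax price")
```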

But that's as far as I went with the critique. I also worried about the difference-in-difference empirical technique, but I left that to one side. Let's get to that now.

Difference-in-difference actually started in public health. Angrist and Pischke's text notes its first use in figuring out whether cholera was spread by water or by air. When one part of town flipped to an upriver water source and saw a big reduction in cholera as compared to the district that maintained the downstream water source, the difference in differences (the change in the difference between the two places over time) pointed to water. Nothing in the control district should have been affected by the treatment district's change in water supply and so it gives a nice counterfactual trend. Since the treatment district dropped sharply and control didn't, we had an answer.
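
In the simple two-group, two-period case, the whole estimator is a few lines of arithmetic. The numbers here are invented purely to show the mechanics, not the actual cholera figures.

```python
# The two-group, two-period diff-in-diff from the cholera example, with
# invented numbers purely to show the arithmetic.
deaths = {
    ("switched to upriver water", "before"): 130,
    ("switched to upriver water", "after"):   40,
    ("kept downstream water",     "before"): 120,
    ("kept downstream water",     "after"):  110,
}

treated_change = deaths[("switched to upriver water", "after")] - deaths[("switched to upriver water", "before")]
control_change = deaths[("kept downstream water", "after")] - deaths[("kept downstream water", "before")]

# The control district's change stands in for what would have happened to the
# treated district anyway; the diff-in-diff nets it out.
print(treated_change - control_change)   # -90 - (-10) = -80
```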

A normal diff-in-diff needs a control group. Everyone in Mexico was subject to the soda tax, so it won't be a "this group of people were affected and this group weren't affected" diff-in-diff. It would instead have to get a counterfactual consumption trend from consumption of some other good that isn't itself affected by the soda tax.

Suppose that soda consumption is affected by the weather and by general wealth. When it's hot and you have money, you buy soda to cool off. In that case, you could take, say, consumption of sunscreen as your control group - if the trend in sunscreen sales were predictably related to the trend in soda sales in the pre-tax period. I don't know whether it is or isn't: I'm just pointing to the kind of control group we should be looking for. Maybe t-shirts. Or fans. Anything that had a regular correlation with consumption of taxed beverages before the tax took effect would work - so long as that thing weren't itself affected by the tax. If bottled water sales co-moved with soda sales before the tax, you couldn't use bottled water to predict things after the tax: if the tax did induce a shift away from taxed beverages, some of that would flow over to bottled water. If you then forecast what soda consumption would have been but for the tax based on the observed increase in water sales, you'll overestimate the tax's effect.
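
Here's a small simulation of that contamination problem, with numbers I've made up: soda and bottled water share a seasonal component before the tax, the tax then cuts soda by five units a month and pushes three of those into water, and a naive diff-in-diff using water as the control attributes both to the tax.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24)
post = months >= 12                     # 12 pre-tax months, 12 post-tax months

# Shared seasonal/wealth component, so water looks like a fine control pre-tax.
common = 100 + 10 * np.sin(months * 2 * np.pi / 12) + rng.normal(0, 1, 24)

true_tax_effect = -5                    # the tax cuts soda by 5 units a month...
substitution = 3                        # ...and 3 of those units flow into water

soda = common + true_tax_effect * post + rng.normal(0, 1, 24)
water = common + substitution * post + rng.normal(0, 1, 24)

# Naive diff-in-diff treating bottled water as the control:
did = (soda[post].mean() - soda[~post].mean()) - (water[post].mean() - water[~post].mean())
print(f"diff-in-diff estimate: {did:.1f}, true tax effect: {true_tax_effect}")
# The estimate comes out near -8, not -5: the substitution into water gets
# counted as part of the tax's effect on soda.
```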

The only comparison group talked about in the BMJ paper is untaxed beverages, and that cannot have been the basis for a difference-in-difference: consumption of untaxed beverages will have gone up because they weren't taxed, so using that would strongly bias upwards the estimated effect of the tax.

I think, but do not know for sure, that what they've done is take the pre-tax trend in sales and the post-tax trend in sales, and call the difference between the two a difference-in-difference because a trend is itself the difference between two points. It's the only thing that makes sense, but I've not really seen this kind of thing described as a difference-in-difference before.
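
If that reading is right, the mechanics would be something like the sketch below: household fixed effects on log volumes, a pre-existing trend, and post-tax terms whose contribution gets read as the tax effect, with the counterfactual coming from switching those post-tax terms off. This is my guess at the structure, run on fake data; it isn't the paper's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fake household-by-month panel, just to show the structure I have in mind.
rng = np.random.default_rng(1)
n_hh, months = 200, np.arange(24)                     # months 0-11 pre-tax, 12-23 post-tax
df = pd.DataFrame([(h, m) for h in range(n_hh) for m in months],
                  columns=["hh", "month"])
df["post"] = (df["month"] >= 12).astype(int)
hh_taste = rng.normal(0, 0.3, n_hh)
df["log_vol"] = (4.0 + hh_taste[df["hh"]]             # household fixed effect
                 - 0.002 * df["month"]                # slow pre-existing decline
                 - 0.05 * df["post"]                  # assumed "tax" effect
                 + rng.normal(0, 0.1, len(df)))

# Household fixed effects, a pre-existing trend, and a post-tax level/trend break:
fit = smf.ols("log_vol ~ month + post + month:post + C(hh)", data=df).fit()

# "Counterfactual" = the pre-tax trend carried forward with the post-tax terms
# switched off; the gap to the fitted post-tax values is the claimed effect.
post_rows = df[df["post"] == 1]
gap = fit.predict(post_rows) - fit.predict(post_rows.assign(post=0))
print(f"average post-tax gap, in log points: {gap.mean():.3f}")
```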

In the abstract, and in the text, the paper's authors talk about how they've used a difference-in-difference approach to estimate the effects of the SSB tax. Here are some examples.
To test whether the post-tax trend in purchases was significantly different from the pretax trend, the authors used a difference in difference fixed effects model, which adjusts for both macroeconomic variables that can affect the purchase of beverages over time, and pre-existing trends.
Difference in difference fixed effects analyses
As the tax was implemented nationally, it was not possible to construct a true experimental design to study the association between the tax on sugar sweetened beverages and purchases. Therefore we applied a pre-post quasiexperimental approach using difference in difference analyses along with fixed effects models,36 37 with fixed effects at the household level. Fixed effect models have several advantages, mainly that they account for non-time varying unobserved characteristics of households (for example, preference for certain types of beverages). As such, non-time varying measures (for example, region of household’s residence) are omitted in the model.
Model predicted differences in beverage purchases in stores: overall findings

Supplemental table 2 presents the coefficient estimates for each of the beverage categories from the difference in difference fixed effects models at the household level controlling for socioeconomic status, age, and sex, and for contextual measures of households. Based on these estimates, we back transformed the predicted log volumes for each of the 12 post-tax months using Duan smearing.38 We compared estimated counterfactual volumes purchased in the post-tax period based on pretax trends (expected volumes if the tax had not been implemented) to adjusted volumes purchased in the post-tax period (based on predicted values from the model) and derived the absolute and relative differences from January to December 2014.

Table 2 and figure 1 show that for taxed beverages the absolute and relative differences between the post-tax volume and its counterfactual widened over the 12 post-tax months from −11 mL/capita/day (−5.6% relative to the counterfactual) in June to −22 mL/capita/day (−12% relative to the counterfactual) by December 2014, giving an average change of −6.1% over 2014. In total, during 2014 the average urban Mexican purchased 4241 mL (seven 600 mL or 20 oz bottles) fewer taxed beverages than expected (based on pretax trends). This was related to a decrease in purchases of non-carbonated sugar sweetened beverages (−17% relative to the counterfactual) and taxed sodas (−1.2% relative to the counterfactual). See supplemental Figure 2.
And here is Supplemental Table 2:
And here's Figure 1, where they illustrate things.

[Figure 1 from the paper: purchases of taxed beverages against the counterfactual]

A standard diff-in-diff would let you have rather more confidence in what's going on. For example, suppose that beverage consumption is a lot higher when it's hotter out, and that your control group tracked that well. If 2014 were cooler than 2013, you'd expect less consumption. If your diff-in-diff already tracked that, you wouldn't need to worry about it. If what you have instead is just a panel fixed effects study, you need to account for more of the seasonality yourself. Just putting in month-by-month dummies might not do it: they force June 2013 and June 2014 to share the same seasonal effect, so if June 2014 is more than a degree colder than June 2013, the difference gets attributed to the tax.*
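
Here's a quick fake-data illustration of that worry. There's no tax effect in the data at all; 2014 just runs two degrees cooler than 2013 and consumption tracks the heat. Calendar-month dummies can't tell the years apart, so the cool weather loads onto the post-tax term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"t": np.arange(24)})                # 2013 then 2014, monthly
df["month_of_year"] = df["t"] % 12
df["post"] = (df["t"] >= 12).astype(int)

# Assume 2014 runs two degrees cooler, and consumption tracks the heat;
# there is NO tax effect in this fake data at all.
df["temp"] = 25 + 8 * np.sin(df["month_of_year"] * 2 * np.pi / 12) - 2 * df["post"]
df["volume"] = 100 + 3 * df["temp"] + rng.normal(0, 1, 24)

dummies_only = smf.ols("volume ~ C(month_of_year) + post", data=df).fit()
with_temp = smf.ols("volume ~ temp + post", data=df).fit()

print(dummies_only.params["post"])   # about -6: cool weather dressed up as a "tax effect"
print(with_temp.params["post"])      # about 0 once temperature is actually controlled for
```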

In short, I'm way less confident that anything going on here is causal. When I see diff-in-diff, I expect something that at least leers a lot more suggestively at causality than we'd get out of a plain panel fixed effects study. And this isn't that.

Meanwhile, Geoff Simmons and David Farrar have conflicting views about the nature of the underlying data.

Here's Geoff:
The researchers looked at actual sales data from a sample of over 6,000 households in Mexico. They found that the 10% tax reduced soft drink consumption over the year by 6%, but this had risen to 12% by the end of the year. For poor people the effect was even greater – reducing consumption by 9% on average but 17% by the end of the year. Consumption of untaxed beverages – especially bottled water – rose by 4%.
And here's David:
You see Katherine Rich points out in this article that the Popkin paper relies on reported data from respondents, not actual sales data. So this entire paper is based on people saying they think they are now drinking less. It is far from robust, despite peer review.
It can't both be sales data and not sales data, can it? Well, it's a bit of both, kinda. Here's the paper:
Enumerators visited the households every two weeks to collect diaries, product packaging from special bins provided for this study (scanned by the enumerators), and receipts, and to carry out pantry surveys. Bar code information provided all other data.
It isn't sales data, but it isn't just survey data either - if the paper's description of the Nielsen household survey is right. It's rather a mix: diary data, cross-checked by enumerators against receipts, pantry surveys, and the discarded packaging collected in the special bins.

It's probably the best that can be done in terms of data on what that sample of households is up to, and I wouldn't dismiss it as being subject to the same recall problems as straight diary data.

But it also squares poorly with aggregate sales data.


And this puzzles me. If household consumption is dropping, why would aggregate sales be going up? The time period is too short for changes in demographics to have generated a Simpson's Paradox.

Anyway, if we take the study entirely at face value, and consider it to be entirely causal despite not really having a difference-in-difference method, and don't ask too many questions about why they used Duan smearing rather than just GLM or Poisson with a log-link despite likely heteroskedasticity, then the paper says the peso-per-litre tax reduced consumption of sugar-sweetened beverages for poor households by 35 mL per person per day. About two tablespoons per day, or one 600 mL bottle every 17 days. For a per-litre tax equivalent to 1.7% of the daily minimum salary reported in their survey.
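
For anyone who hasn't run into it, Duan smearing is the standard retransformation trick for getting level predictions out of a regression run in logs: multiply the exponentiated fitted values by the average of the exponentiated residuals. Here's a minimal sketch on fake data, with the GLM log-link alternative alongside; none of this is the paper's code, and the smearing correction itself leans on homoskedastic errors, which is the worry above.

```python
import numpy as np
import statsmodels.api as sm

# Duan smearing on fake data: regress in logs, then retransform to levels by
# multiplying exp(fitted values) by the average of exp(residuals).
rng = np.random.default_rng(3)
x = rng.normal(0, 1, 1000)
y = np.exp(1.0 + 0.5 * x + rng.normal(0, 0.6, 1000))   # positive, skewed outcome

X = sm.add_constant(x)
log_fit = sm.OLS(np.log(y), X).fit()

smear_factor = np.exp(log_fit.resid).mean()             # Duan's estimate of E[exp(error)]
naive = np.exp(log_fit.fittedvalues)                     # understates E[y|x]
smeared = naive * smear_factor

print(f"mean(y) = {y.mean():.2f}, naive retransform = {naive.mean():.2f}, smeared = {smeared.mean():.2f}")

# The alternative flagged above: a GLM with a log link models E[y|x] directly,
# so there's no retransformation (or smearing) step to worry about.
glm_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(f"GLM log-link mean prediction = {glm_fit.fittedvalues.mean():.2f}")
```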

Meanwhile, an experimental study in the US subsidising folks' purchases of healthy foods wound up increasing purchases of unhealthy foods through income effects. Smaller numbers in that sample, but a proper experiment rather than observational correlations with poor adjustment for potentially time-varying trends.



*August 2014 turned out warmer than August 2013. I'm not going to go through and pull each month.
