Eating Highly Processed Food is Correlated with Death

In “Hints for Healthy Eating from the Nurses’ Health Study” I write:

The trouble with observational studies of diet and health that don't include any intervention is the large number of omitted variables that are likely to be correlated with the variables that are directly studied. Still, it is worth knowing for which things one can say:

Either this is bad, or there is something else correlated with it that is bad. 

When multivariate regression is used, one might be able to strengthen this to

Either this is bad, or there is something else bad correlated with it that is not completely predictable from the other variables in the regression.

In discussing “Association Between Ultraprocessed Food Consumption and Risk of Mortality Among Middle-aged Adults in France” by Laure Schnabel, Emmanuelle Kesse-Guyot and Benjamin Allès, I need to go further to elaborate on my interpretation of multivariate regression results that show a coefficient in the undesirable direction for “this”:

Either this is bad, or there is something else bad correlated with it that is not completely predictable from the other variables in the regression.

To make sure the message isn’t lost, let me say this more pointedly: In observational studies in epidemiology and the social sciences, variables that authors say have been “controlled for” are typically only partially controlled for. The reason is that almost all variables in epidemiological and social science data are measured with substantial error. Although things can be complicated in the multivariate context, typically variables that are measured with error get an estimated coefficient smaller than the underlying true relationship. (A bigger coefficient multiplies the noise in the measured variable by a bigger number, and ordinary least squares penalizes that, since it is looking for the best linear predictor and noise multiplied by a big coefficient hinders prediction.) If the estimated coefficient on a variable meant to control for something is smaller than the true relationship with the true variable underneath the noise, then that variable is only partially controlled for. The only way to truly control for a variable is to do a careful measurement-error model. As a practical matter, anyone who doesn’t mention measurement error and how they are modeling it is almost always not fully controlling for the variables they say they are controlling for.
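To see the attenuation mechanism concretely, here is a minimal simulation sketch (my own illustration with made-up numbers, not anything from the study): a predictor measured with error gets an estimated coefficient well below the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z_true = rng.normal(size=n)            # true underlying variable
y = 1.0 * z_true + rng.normal(size=n)  # outcome: true coefficient on z_true is 1.0

# z is observed with substantial measurement error (half the variance is noise)
z_observed = z_true + rng.normal(size=n)

def ols_slope(x, y):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(ols_slope(z_true, y))      # ~1.0: the true relationship
print(ols_slope(z_observed, y))  # ~0.5: attenuated toward zero by measurement error
```

With equal signal and noise variance, the estimate comes out at roughly half the true coefficient; the attenuation factor is the reliability ratio var(true)/var(observed).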

I like to think of an observed variable as partially capturing the true underlying variable. When, because of measurement error, an observed variable only partially captures the true underlying variable, simply including that variable in multivariate regression will only partially control for the true underlying variable.

If the coefficient of interest is knocked down substantially by partially controlling for a variable Z, it would be knocked down a lot more by fully controlling for Z. It is very common in epidemiology and social science papers to find statements like: “We are interested in the effect of X on Y. Controlling for Z knocks the coefficient on X (the coefficient of interest) down to 2/3 of the value it had without that control, but it is still statistically significantly different from zero.” This is quite worrisome for the qualitative conclusion of interest. If measurement error biases the coefficient on Z down to only half of the underlying relationship, fully controlling for Z using a measurement-error model would be likely to reduce the coefficient of interest by about twice as much, and a coefficient on X that was only 1/3 of its no-controls size might well be something that could easily happen by chance—that is, not statistically significantly different from zero.
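Here is a minimal simulation sketch of partial controlling (again my own illustration, with made-up variable names and effect sizes, not the French data): the exposure X has no true effect, all of the action is in the confounder Z, yet “controlling” for a noisily measured Z leaves a sizable coefficient on X.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z_true = rng.normal(size=n)                       # true confounder (e.g., overall diet quality)
x = 0.7 * z_true + rng.normal(size=n)             # exposure of interest, correlated with the confounder
y = 0.0 * x + 1.0 * z_true + rng.normal(size=n)   # outcome: X has NO true effect; only z_true matters
z_observed = z_true + rng.normal(size=n)          # the confounder as actually measured, with error

def ols(design, y):
    """OLS coefficients (intercept added automatically, then dropped from the output)."""
    X = np.column_stack([np.ones(len(y)), *design])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(ols([x], y))              # coefficient on x with no controls: clearly positive (~0.47)
print(ols([x, z_observed], y))  # "controlling" for noisy z: smaller (~0.28), but still positive
print(ols([x, z_true], y))      # controlling for the true confounder: ~0, as it should be
```

Swapping in the true Z drives the coefficient on X to zero; the noisy proxy only gets it partway there, which is exactly the sense in which the variable is only partially controlled for.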

Let’s turn now to the fact that eating highly processed food is positively correlated with mortality. Personally, my prior is that eating highly processed food does, in fact, increase mortality risk. So the statistical point I am making is questioning the strength of the statistical evidence from this French study for a proposition I believe. But ultimately, understanding the statistical tools we use will get us to the truth—and, I believe, through knowing the truth—to a better world.

One of the controls Laure Schnabel, Emmanuelle Kesse-Guyot and Benjamin Allès use is overall adherence to dietary recommendations by the French government (the Programme National Nutrition Santé Guidelines). The trouble is that the true relationship between eating highly processed food and eating badly in other ways is likely to be stronger than what can be shown by the imperfect data they have. That means that, at the end of the day, it is hard to tell whether the extra mortality is coming from the highly processed food or from other dimensions of bad eating that are correlated with eating a lot of highly processed food. Another set of controls is income and education. Even if the income and education variables measured francs earned last year and number of years of schooling perfectly, what is really likely to be related to people’s causally health-related behavior is probably something more like permanent income on the one hand, and knowledge of health principles on the other—which would depend a lot on dimensions of education such as college major and learning on the job in a profession as well as years of school. Hence, all the things that might stem from being poor in the sense of low permanent income (low income not just in one particular year, but chronically) or from having little knowledge of health principles are undercontrolled for when they are represented only by typical income and education data. The same kind of argument can be made about controlling for exercise: for health purposes, there are no doubt higher- and lower-quality dimensions of exercise that are not fully captured by the exercise data in the French NutriNet-Santé Study that Laure, Emmanuelle and Benjamin are using. If exercise quality were better measured, controlling for more dimensions of exercise would likely knock the coefficient on ultraprocessed food consumption down a bit more.

The bottom line is that there is definitely something about what people who eat a lot of highly processed food do, or about the situations people who eat a lot of highly processed food are in that leads to death, but it is not clear that the highly processed food itself is doing the job. Highly processed food might have been merely driving the getaway car rather than firing the bullet that accomplished the hit job. Highly processed food is clearly hanging out with some bad actors if it didn’t fire the gun, but it is hard to convict it of committing the crime itself.

In the absence of clearcut statistical evidence of causality, theory becomes important in helping to establish priors that will affect how one reads ambiguous data. I lay out theoretical reasons for being suspicious of processed food in “The Problem with Processed Food.” One of the problems with processed food is its typical reliance on sugar in some form to make processed food tasty. Laure Schnabel, Emmanuelle Kesse-Guyot and Benjamin Allès give other theoretical reasons to worry about processed food in this passage (from which, for the sake of readability, I have omitted the many citation numbers that pepper it):

First, studies have documented the carcinogenicity of exposure to neoformed contaminants found in foods that have undergone high-temperature processing. The European Food Safety Authority stated in 2015 that acrylamide was suspected to be carcinogenic and genotoxic, and the International Agency for Research on Cancer classified acrylamide as “probably carcinogenic to humans” (group 2A). Some studies reported a modest association between dietary acrylamide and renal or endometrial cancer risk. Further research is necessary to confirm these speculative hypotheses. Similarly, meat processing can produce carcinogens. The International Agency for Research on Cancer reported in 2015 that processed meat consumption was carcinogenic to humans (group 1), citing sufficient evidence for colorectal cancer. Moreover, the agency found a positive association between processed meat consumption and stomach cancer.

In addition, ultraprocessed foods are characterized by the frequent use of additives in their formulations, and some studies have raised concerns about the health consequences of food additives. For instance, titanium dioxide is widely used by the food industry. However, findings from experimental studies suggest that daily intake of titanium dioxide may be associated with an increased risk of chronic intestinal inflammation and carcinogenesis. Likewise, experimental studies have suggested that consumption of emulsifiers could alter the composition of the gut microbiota, therefore promoting low-grade inflammation in the intestine and enhancing cancer induction and metabolic syndrome. In addition, some findings suggest that artificial intense sweeteners could alter microbiota and be linked with the onset of type 2 diabetes and metabolic diseases, which are major causes of premature mortality.

Food packaging is also suspected to have endocrine-disrupting properties. During storage and transportation of food products, chemicals from food-contact articles can migrate into food, some of which might negatively affect health, such as bisphenol A. Epidemiologic data have suggested that endocrine disruptors are associated with an increased risk of endocrine cancers and metabolic diseases, such as diabetes and obesity.

After the passage of time has led to more deaths and therefore increased the statistical power available from the French data set, one way to test the importance of these forces will be to look at the association of the consumption of highly processed food with different causes of death. For example, it would be tantalizing evidence about causal channels if eating ultraprocessed food predicted a higher risk of death due to cancer by a bigger ratio than the ratio by which it predicted a higher risk of death due to cardiovascular disease.

What are highly processed foods—or, in the term used by the authors, “ultraprocessed foods”? Laure, Emmanuelle and Benjamin write:

Each of the 3000 foods in the NutriNet-Santé Study composition table was classified according to the NOVA food classification system, which categorizes food products into 4 groups according to the nature, extent, and purpose of processing. This current study focused on 1 group classified as ultraprocessed foods, which are manufactured industrially from multiple ingredients that usually include additives used for technological and/or cosmetic purposes. Ultraprocessed foods are mostly consumed in the form of snacks, desserts, or ready-to-eat or -heat meals.

I have been struck by how cutting out sugar almost automatically cuts out the vast majority of highly processed foods. So it is not easy to tell apart harm from sugar and harm from highly processed foods. (I give tips for going off sugar in “Letting Go of Sugar.”) But it also means that currently it is not that important for one’s own personal efforts to avoid early death to distinguish between the harm from sugar and the harm from highly processed food. (In the future, it might become extremely important to distinguish between the two if large numbers of people started avoiding sugar and food companies started reformulating their processed food to leave out sugar.)

My recommendation is to cut back on sugar and highly processed foods—efforts in which there are many opportunities to kill both birds—sugar and processed food—with one stone. For more detailed recommendations on good and bad foods, see:

For annotated links to other posts on diet and health, see:

Chris Kimball: The Language of Doubt

A pond and stand of aspens near Chris’s home


I am pleased to have another guest post on religion from my brother Chris. (You can see other guest posts by Chris listed at the bottom of this post.) In what Chris has written below, he is wrestling not just with what he thinks and feels about Mormonism, but also with what he thinks and feels about Christianity and about belief in God itself.


The Skeptic

I am a skeptic, an empiricist, a Bayesian. These are not moral statements or value judgments. They are simply observations about how and whether I know anything. Essentially a matter of epistemology.

This is me, and a few other people I know. I’m sure many people think differently and that’s OK. I know there are discussions about the nature of the world, of human beings generally. Interesting discussions. Ones I’m not equipped to argue, but am interested to listen to. This is not about the general case. Just about me.

A skeptic questions the possibility of certainty or knowledge about anything (even knowledge about knowing). An empiricist recognizes experience derived from the senses. A Bayesian views knowledge as constantly updating degrees of belief. In a functional sense, in the way it works in my life, I only know anything as a product of neurochemicals and hormones in the present. 

In science, in law, in everyday life, being a skeptic is not a big deal; it is even usual or typical to talk like a skeptic. But in religion it is a big deal.

The Spirit: When the topic of doubt comes up, a common biblical reply is that the Spirit is the ultimate witness. “The Spirit beareth witness with our spirit” (Romans 8:16 KJV). I don’t suppose everybody has the same idea what that means; I hear it as a Platonic ideal of Truth or Knowledge, and a dualist body-soul where God or the Spirit speaks directly to the soul in some form of indisputable ultimate knowledge. “For to one is given by the Spirit the word of wisdom; to another the word of knowledge by the same Spirit.” 1 Corinthians 12:8

However this works for others, it doesn’t work for me. I don’t recognize any Spirit-to-soul ultimate knowledge communication. Only neurochemicals and hormones. I am reasonably satisfied there is an external world that impinges on my senses. I am willing to make room for an external supernatural world, although I am more likely categorized agnostic than believing. But the external world, natural or supernatural, ultimately registers through neurochemicals and hormones. There is no back channel, no source of certainty.  

Testing Faith: Sometimes religious people talk about overcoming doubt with tested faith. “That the trial of your faith, being much more precious than of gold that perisheth, though it be tried with fire, might be found unto praise and honour and glory at the appearing of Jesus Christ.” 1 Peter 1:7. A Mormon version is the frequently referenced Alma 32:24, which compares the word to a seed, which if given place will begin to swell, “and when you feel these swelling motions, ye will begin to say within your selves—It must needs be that this is a good seed, or that the word is good.” Plant the seed, watch it grow, come to “know” by proof of results.

I do in fact notice that giving place for a seed can lead to good feelings and positive experiences. However, cause and effect are mysterious to me, pattern searching is a real phenomenon, confirmation bias happens. In other words, I feel the swelling motions, but I never get to “must needs be.” The “must” is forever elusive.

Sensus Divinitatis: French Protestant reformer John Calvin used the term sensus divinitatis (“sense of divinity”) to describe a hypothetical human sense:

That there exists in the human mind and indeed by natural instinct, some “sensus divinitatis,” we hold to be beyond dispute, since God himself, to prevent any man from pretending ignorance, has endued all men with some idea of his Godhead … [T]his is not a doctrine which is first learned at school, but one as to which every man is, from the womb, his own master; one which nature herself allows no individual to forget. (John Calvin, Institutes of the Christian Religion, Vol I, Chapter III)

Not me. It has been suggested that this sense does not work properly in some humans due to sin. (Alvin Plantinga, Warranted Christian Belief). I don’t believe it, but that is certainly a theory I have heard in Mormon circles as well.  

Gift of the Holy Ghost: The way Mormons often talk about the gift of the Holy Ghost sounds a lot like Calvin’s sensus divinitatis, including Plantinga’s theory that the sense may not work due to sin. From The Church of Jesus Christ of Latter-day Saints’ Gospel Principles, a reasonable indicator of how Mormons talk, even if not doctrine by some definitions:

The Holy Ghost usually communicates with us quietly. His influence is often referred to as a “still small voice” . . .  The Holy Ghost speaks with a voice that you feel more than you hear . . . [T]he Holy Ghost will come to us only when we are faithful and desire help from this heavenly messenger. To be worthy to have the help of the Holy Ghost, we must seek earnestly to obey the commandments of God. We must keep our thoughts and actions pure. (Gospel Principles, Chapter 21: The Gift of the Holy Ghost, p. 123.)

I read the “thoughts and actions pure” worthiness requirement and recognize that saying “not me” can come across as a confession. But . . .  not me.

In short, for me there is no back channel, there is no Spirit-to-soul communication, there is no Gift, that I recognize as anything more than neurochemicals and hormones. Everything I know or think I know is subject to the limitations and failings of mortality. I am not certain of my own memories, my perceptions, or my emotions. I recall “burning in the bosom” experiences; I have dreamed dreams; I have seen visions. Some of these experiences have changed my life. Some of these experiences feel fresh in memory because I tell stories about them. But where they came from and what they mean seem forever a matter of interpretation. I attach meaning in the present over the surface of uncertain memory of an inherently ambiguous experience. I don’t recognize a back door, a sysop or root, an access to indisputable ultimate knowledge. I am not certain, always and forever.

The Language of Doubt

As a skeptic, the language of “doubt” can be misleading or misconstrued. Doubt can be a simple synonym for skepticism. There is a sense in which I live in a state of doubt always and forever. Because it is too all-encompassing, “doubt” is not a useful concept for me. Therefore, I find it useful to expand the vocabulary and to use words like cynicism, belief, probability, and trust.

Cynicism: A friend once said “assume goodwill.” I’m sure it was not original with him, but it stuck as good advice, reinforced by the example of his good life.

The cynic does not assume goodwill. The cynic disbelieves the sincerity or goodness of human motives and actions. The cynic sees the natural man, the economic maximizer, the selfish gene, in human interaction. The cynic sees the church’s decisions and policies in terms of the collection plate or tithing receipts.

I know people who play the intellectual game of explaining everything in selfish self-centered terms. But I am convinced there is more--love and altruism and God and friendship and loyalty and long-term perspectives that extend beyond any one person’s lifetime. Therefore, I generally think I am not a cynic.

Probability and Belief: Belief sometimes sounds like a binary—you believe or you disbelieve. But it doesn’t have to be a binary. I experience life as a swarm of probabilities. It is not clear there is any propositional statement I could 100% agree or 100% disagree with.

I can adopt the language of belief and disbelief by taking high probability propositions and calling them belief, and low probability propositions and calling them disbelief. And there is substance here. It’s not all word play. It is very possible for my high probability propositions to correspond to your beliefs or certainties or knowledge. Living in a swarm of probabilities does not mean anything goes.

On the other hand, church people often use “I know” or “beyond a shadow of doubt” phrases and I struggle to fit in. Narrowly speaking, high probability is not the same as knowledge, and turning a highly probable proposition into an “I know” would feel like playing to the crowd—using the words the community expects. More broadly, living in a swarm of probabilities makes me constantly aware of uncertainty and I wonder whether there is anything in the nature of a statement about faith or belief about which I have enough confidence to consider the move to “I know.”

Trust: I find “trust” the most useful concept to structure my religious thinking and conversation. Trust feels like a principle of action. Is my confidence level sufficient to make choices or turn my life or make a commitment? That’s trust.

Instead of asking “do I believe in God?” or “does God exist?” the question becomes Ivan’s question (Ivan of Dostoevsky’s Brothers Karamazov): “How can I trust God if he allows the most unthinkable evils to destroy innocents like the little girl?” In formal terms, this is not a logical theodicy (is it rational or is there an explanation that makes sense?). It is not exactly the evidential theodicy (does the weight of evidence, including the amount of evil in the world, argue for or against God?). It is more like an existential or pastoral theodicy that asks whether God makes sense in my life, in my circumstances, in light of my pain.

Instead of asking “do I believe in Christ?”—an existence proof kind of question--the question becomes one of confidence in an atonement. Is the child Yeshua born of Mariam someone I can trust in as a Savior? Is there a Christology (a theology regarding the person, nature, and role of Christ), a soteriology (a doctrine of salvation), that makes sense to me? That motivates me? That I am willing to trust in? Enough that I am willing to take up the cross and follow?

All this leading up to the questions that arise when turning the lens of trust on the Church. As I think about trusting the Church, the catalog of standard truth claims does not strike me as very important. Instead, I think about questions like: Do I trust leaders? Or, which leaders do I trust? Do I trust the disciplinary system, the process by which some human representative judges my qualifications? Is the doctrine, the description of how God works, the Plan, a reliable representation of reality? Do I trust the history as taught in the standard curriculum? Do I trust the Church’s claim to effect salvation? Do I trust the process of extending callings or making assignments? Do I trust the Church as custodian of tithes and offerings?

For me, every one of these trust questions is thought-provoking. Not quickly answered by reference to truth claims or “I know” kinds of belief statements. For me these trust questions lead to a very nuanced relationship with God and Christ and the Church. Neither all in nor all out but forever tentative and questioning.

 Don't miss these posts on Mormonism:

Also see the links in "Hal Boyd: The Ignorance of Mocking Mormonism."

Don’t miss these other guest posts by Chris:

In addition, Chris is my coauthor for

Don’t miss these Unitarian-Universalist sermons by Miles:

By self-identification, I left Mormonism for Unitarian Universalism in 2000, at the age of 40. I have had the good fortune to be a lay preacher in Unitarian Universalism. I have posted many of my Unitarian-Universalist sermons on this blog.

Making the Monopsony Argument for Minimum Wages More Evidence-Based—José Azar, Emiliano Huet-Vaughn, Ioana Elena Marinescu, Bledi Taska and Till Von Wachter

The monopsony argument for minimum wages is a sound one where it applies. It is not a good argument for a one-size-fits-all minimum wage like most minimum wage policies. And it points to a particular range of magnitudes for a minimum wage as helpful, not magnitudes outside that range. It is good to see some research aimed at making the monopsony argument more concrete and clarifying where it applies.
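To see why the monopsony argument points to a particular range of magnitudes, here is a minimal textbook-style sketch (my own illustration with made-up linear labor supply and marginal-revenue-product curves, not anything from the paper): a minimum wage between the monopsony wage and the competitive wage raises employment, while a minimum wage far above that range lowers it.

```python
import numpy as np

# Illustrative (made-up) linear curves:
#   labor supply:             L_s(w) = 10 * w      (workers willing to work at wage w)
#   marginal revenue product: MRP(L) = 30 - L      (value of the marginal worker)
def labor_supply(w):
    return 10.0 * w

def labor_demand(w):             # L at which MRP(L) = w
    return 30.0 - w

# Unconstrained monopsony: equate MRP with the marginal cost of labor.
# With supply wage w(L) = L/10, total labor cost = L^2/10, so marginal cost = L/5.
# 30 - L = L/5  =>  L = 25, paid the supply wage w = 2.5.
# Competitive benchmark: 30 - L = L/10  =>  L ~ 27.3, w ~ 2.73.
L_monopsony, w_monopsony = 25.0, 2.5

def employment(w_min):
    """Employment under a minimum wage w_min in this made-up monopsony model."""
    if w_min <= w_monopsony:                               # not binding
        return L_monopsony
    return min(labor_supply(w_min), labor_demand(w_min))   # binding: short side at w_min

for w_min in [2.0, 2.5, 2.73, 3.0, 4.0, 6.0]:
    print(f"minimum wage {w_min:4.2f} -> employment {employment(w_min):5.2f}")
# Employment rises with the minimum wage up to the competitive wage (~2.73),
# then falls, and eventually drops below even the monopsony level.
```

In this made-up example, employment peaks when the minimum wage equals the competitive wage and eventually falls below the monopsony level once the minimum wage is high enough; the empirical work is about locating that helpful range in real labor markets, where it will differ across places and occupations.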

In Honor of Martin Weitzman

In “Why I am a Macroeconomist: Increasing Returns and Unemployment” I write:

… during the first few months of calendar 1985, I stumbled across Martin Weitzman’s paper “Increasing Returns and the Foundations of Unemployment Theory” in the Economics Department library. Marty’s paper made me decide to be a macroeconomist.

I am also a big fan of Martin’s book The Share Economy. It should be required reading for all macroeconomists. (I mention it in my posts “The Costs and Benefits of Repealing the Zero Lower Bound...and Then Lowering the Long-Run Inflation Target” and “The Equilibrium Paradox: Somebody Has to Do It.”)

I met Martin in person only when he came to give a seminar at the University of Michigan about how agnosticism about the true variance of asset prices could explain the equity premium puzzle. After a long argument over dinner, I came to think he was applying Humean skepticism to asset pricing, but that what really mattered was what investors actually had in their minds—which is something we can in principle establish with survey measures of expectations. (Existing survey measures of expectations on the Health and Retirement Study actually show that most people think the mean return on the stock market is quite low. The particular survey questions don’t really tell much about people’s expectations of the tail extremes.)

Suffice it to say that Martin Weitzman has had a big influence on me in my professional life. So I am saddened that he, like Alan Krueger (see “In Honor of Alan Krueger”), is dead by his own hand. The Wall Street Journal article shown above says tersely:

Prof. Weitzman died Aug. 27 in what the Massachusetts medical examiner determined was a suicide. He was 77 years old.

That article says this about The Share Economy:

His 1984 book, “The Share Economy,” caught the attention of politicians searching for ways to prevent the economy from lurching from periods of rapid inflation to spells of high unemployment. He believed inflationary pressures would abate and companies would be less likely to lay off workers in bad times if wages could be adjusted down as well as up, depending on profits.

“The major macroeconomic problems of our day can be traced back ultimately to the wage system,” he said at a political forum in 1987. “We try to award every worker a predetermined piece of the income pie before it’s out of the oven, before the size of the pie is even known.”

On his work on climate change, I like Holman Jenkins’s discussion in “CNN Climate Show Wasn’t Just Boring,” which also appeared on September 6, 2019. Holman writes:

… the shocking suicide of Harvard climate economist Martin Weitzman, rightly praised in obituaries for an insight lacking in the CNN town hall: A climate disaster is far from guaranteed. It’s the low but not insignificant chance of a “fat tail” worst-case disaster that we should worry about. (Mr. Weitzman put the odds at 3% to 10%.)

It comes as Weitzman’s student, collaborator and co-author, Gernot Wagner, tellingly has focused his own attention lately on geoengineering rather than the seemingly lost cause of carbon reduction.

The logic here is that if things get really, really bad, so that the world is getting as hot as hell, then we should not, and will not, reject geoengineering out of hand. As Holman puts it:

So to answer CNN’s non-debate and the worries of the late Prof. Weitzman, if the small but not negligible chance of a climate catastrophe is borne out, we already know what the answer is going to be: to throw a bunch of particles into the atmosphere, at a cost of perhaps $2 billion a year, in order to block the estimated 1% of sunlight necessary to keep earth’s temperature in check.

Elizabeth Warren mentioned a policy action to tame climate change that Martin pushed forward: a carbon tax. Holman writes:

… Elizabeth Warren had an interesting moment when she admonished a network personality for trying to rile up viewers “around your light bulbs, around your straws and around your cheeseburgers.”

As the New York Times also noted, “For the first time, Ms. Warren explicitly embraced a carbon tax before quickly pivoting away . . .”

What’s Ms. Warren afraid of? A carbon tax would hardly be prohibitive. Weitzman advocated $40 a ton—the equivalent of 36 cents per gallon of gasoline. Such a tax could be implemented without raising the overall tax burden; it could be used to trim taxes on work, saving and investment, improving the economy overall. It could be embraced and copied by other nations out of self-interest rather than self-abnegation (unlike the absurd Green New Deal).

Short of a carbon tax or tradable carbon dioxide permits in limited supply, right now the most honest, effective way of reducing the amount of carbon dioxide we put into the atmosphere would be to ban the burning of coal worldwide. Politically, demonizing coal is an achievable and laudable goal that would do a lot more good than many, mostly symbolic policy measures that are being taken now.

What connected Martin’s work on taming climate change to his work on taming the business cycle was a penchant for attacking hard problems. The article at the top of this post says this:

“I’m drawn to things that are conceptually unclear, where it’s not clear how you want to make your way through this maze,” he said at a 2018 Harvard seminar honoring his career.

We need trailblazers like Martin. It is sad that we now have one fewer on our troubled planet.

Less Than 6 or More than 9 Hours of Sleep Signals a Higher Risk of Heart Attacks

It is a good thing that our culture’s attitudes toward sleep are turning more positive—from thinking of sleep as a sign of lazy slothfulness to thinking of sleep as an important contributor to creativity and good health. The authors of the Journal of the American College of Cardiology paper “Sleep Duration and Myocardial Infarction” (Iyas Daghlas, Hassan S. Dashti, Jacqueline Lane, Krishna G. Aragam, Martin K. Rutter, Richa Saxena, and Céline Vetter) have made an important contribution to this ongoing cultural shift. The strength of the signal of heart attack risk provided by habitually short or habitually long sleep is substantial, as the paper’s “Central Illustration” shown above reports: when reporting an integer number of hours, integers 5 or below predict 1.2 times as high a risk as integers 6-9, while integers 10 and above predict 1.34 times as high a risk as integers 6-9.

Because habitually short or long sleep reported at a baseline interview predicted later heart attacks, it is clear that those with unusually short or long sleep should take extra efforts to reduce heart attack risk. The authors of “Sleep Duration and Myocardial Infarction” also go some part of the way toward suggesting that interventions to moderate habitually short sleep might help reduce heart attack risk. They are careful not to oversell this possibility, saying:

“… randomized trials of sleep extension will be the most rigorous test of causality.”

and

“… recent work has demonstrated that sleep extension for short sleepers is a feasible intervention.”

referencing:

Al Khatib HK, Hall WL, Creedon A, et al. Sleep extension is a feasible lifestyle intervention in free-living adults who are habitually short sleepers: a potential strategy for decreasing intake of free sugars? A randomized controlled pilot study. Am J Clin Nutr 2018; 107:43–53.

Mendelian Randomization

In “Sleep Duration and Myocardial Infarction,” the closest the authors get to causal evidence that interventions to moderate habitually short sleep might reduce heart attack risk is through their “Mendelian Randomization” analysis. Mendelian Randomization is a technique that treats genes as an instrument for traits the genes foster. I discussed one of the few truly convincing Mendelian Randomization studies in “Data on Asian Genes that Discourage Alcohol Consumption Explode the Myth that a Little Alcohol is Good for your Health.” The Mendelian Randomization evidence in “Sleep Duration and Myocardial Infarction” is more convincing than that in many Mendelian Randomization studies, but still far from fully convincing. Let me explain. The point estimate in need of interpretation is that an extra hour of sleep caused by genes for more sleep predicts .8 times as high a risk of heart attack, with a 95% confidence interval from .67 to .95. (In percentage terms, an extra hour of sleep caused by genes for more sleep predicts a 20% lower risk of heart attacks relative to the base rate for heart attacks, with a 95% confidence interval from a 5% reduction to a 33% reduction relative to the base rate.)
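For readers who want the mechanics, here is a minimal sketch of the instrumental-variables logic behind Mendelian Randomization (my own simulation with made-up effect sizes and a linear outcome, not the paper’s data): the “Wald ratio” (the gene–outcome association divided by the gene–exposure association) recovers the causal effect when the gene affects the outcome only through the exposure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

g = rng.binomial(2, 0.3, size=n)          # genotype: number of "long sleep" alleles (made up)
u = rng.normal(size=n)                    # unobserved confounder of sleep and heart attack risk
sleep = 0.2 * g + 0.8 * u + rng.normal(size=n)        # exposure
risk = -0.3 * sleep + 0.5 * u + rng.normal(size=n)    # outcome: true causal effect of sleep is -0.3

def slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(slope(sleep, risk))                # naive regression of risk on sleep: confounded by u, far from -0.3
print(slope(g, risk) / slope(g, sleep))  # Wald ratio (gene as instrument): ~ -0.3
```

The ratio works here only because, by construction, the gene is independent of the confounder and affects risk solely through sleep; the interpretive worries discussed below are about exactly those assumptions.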

The big interpretative issue can be stated succinctly this way: an intervention that gets people to sleep longer is not likely to have the same effects as hypothetical engineering that altered an individual’s genes at conception.

Another issue, one that will soon be resolved by data sets that include genetic data for the parents as well as for the individuals themselves, is that an individual’s genes represent half of the genes the individual’s two biological parents have, and those genes in the parents are likely to affect the kind of parenting they give in relation to sleep.

Typically, hypothetical genetic engineering will have a bigger effect because it works through many channels throughout life. Let me give a simple example from economics. There are no doubt genes that predispose people to take courses on financial planning. Indeed, I would bet that with a data set that had the genes of 100 million people and detailed data on the courses they have taken, we would have no trouble finding those genes. But it would be a mistake to take those genes as good instruments for the effect of taking courses on financial planning on patterns of retirement saving. The reason is that genes for taking courses in financial planning are likely to be genes for intelligence—especially mathematical intelligence—and for interest in finance. Mathematical intelligence and being interested in finance would be likely to help people do a better job of retirement saving in many ways even if for some reason they never take a financial planning class.

The fact that the biggest, most important genes for cardiovascular risk are known, and that sleep duration is predicted by many genes, each of which is a small part of the story, allows the authors of “Sleep Duration and Myocardial Infarction” to use a variety of techniques to reasonably rule out the possibility that causally distinct genes nearby on the chromosome—and thereby highly correlated with the sleep duration genes—are confounding the inference that the sleep duration genes affect heart attack risk. But the key issues of interpretation remain.

After correlated gene effects are ruled out, the fact that genes for getting more sleep predict that someone will be less likely to have a heart attack says that either getting more sleep reduces heart attack risk, or something that causes people to get more sleep (intermediate in the causal chain between those genes and sleep duration itself) reduces heart attack risk, or both. It is easy to think of many, many things that might both cause someone to get more sleep and to have reduced heart attack risk. For example, many things that operate through the cognitive and social realm could both cause someone to get more sleep and to have reduced heart attack risk: intelligence, education, the likelihood a relative is a doctor, the type of friends one has. And the biology of sleep has enough unknowns to leave room for many biological states that could both cause someone to get more sleep and to have reduced heart attack risk.
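Here is a minimal simulation sketch (my own illustration, with made-up numbers) of that alternative explanation: the sleep genes work entirely through an intermediate factor that also lowers heart attack risk, sleep itself does nothing, and yet the Mendelian Randomization estimate looks like a protective effect of sleep.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

g = rng.binomial(2, 0.3, size=n)       # "sleep" genotype
z = 0.5 * g + rng.normal(size=n)       # intermediate factor caused by the gene (unknown biology, say)
sleep = 1.0 * z + rng.normal(size=n)   # z raises sleep duration
risk = -0.4 * z + rng.normal(size=n)   # z also lowers heart attack risk; sleep itself has NO effect on risk

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The Wald ratio is far from zero even though the true effect of sleep on risk is exactly zero.
print(slope(g, risk) / slope(g, sleep))   # ~ -0.4, wrongly read as "more sleep lowers risk"
```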

The authors of “Sleep Duration and Myocardial Infarction” try to control for some of the most obvious things that might cause both more sleep and fewer heart attacks. But statistical controls are seldom adequate to do the necessary “controlling.” Even for the things they are intended to measure, available data is usually error-ridden—and careful measurement-error correction is uncommon. And a host of things that could be relevant (especially given the limited state of our knowledge) may not be measured at all.

Nevertheless, knowing that either getting more sleep reduces heart attack risk, or that something that causes people to get more sleep (intermediate in the causal chain between those genes and sleep duration itself) reduces heart attack risk, or both, is extremely helpful. Even if an intervention trial showed no effect on heart attack risk from directly getting people to sleep more, the search for something that causes both more sleep and reduced heart attack risk could be a very fruitful one in terms of medical insights.

Two related reminders may jump-start interpretation in other cases when Mendelian Randomization is used as a technique:

  1. Genes for sleep causing reduced heart attack risk does not mean more sleep causes reduced heart attack risk. A third thing downstream from genes for sleep could cause both more sleep and reduced heart attack risk. (See “Cousin Causality.”) To generalize, genes for X causing Y does not mean X causes Y. The key alternative is that “genes for X” cause Z, and Z causes both X and Y.

  2. A feasible intervention might be very different in its effects from what would happen if one could really do genetic engineering at conception (as well as genetic engineering at the conception of the parents).

The authors cite the following paper as saying Mendelian Randomization is not susceptible to reverse causality:

Davey Smith G, Ebrahim S. ‘Mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? Int J Epidemiol 2003;32:1–22.

This is simply not true in any useful sense. Suppose one is interested in whether X causes Y. Genes for X can easily be “genes for X” precisely because they cause Y, which then in turn causes X. What is most helpful here is the concept of “genetic correlation”: loosely, some of the “genes for X” also being “genes for Y.” Just as correlation does not imply causation, genetic correlation doesn’t imply causation. There is a danger that some scientists will be tempted (and fall for the temptation) to use the rhetoric of Mendelian Randomization to try to get causation from genetic correlation. Instead, one should think of getting an implication of causation from a genetic correlation as requiring additional arguments that are just as difficult as—and indeed are mostly analogous to—the arguments needed to get an implication of causation from any other type of correlation.
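A minimal simulation sketch of the reverse-causality point (again my own illustration with made-up numbers): the gene acts directly on Y, Y in turn affects X, and the naive Wald ratio “finds” an effect of X on Y even though there is none.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

g = rng.binomial(2, 0.3, size=n)
y = 0.5 * g + rng.normal(size=n)     # the gene actually acts on the outcome Y
x = -0.6 * y + rng.normal(size=n)    # Y then affects X, so g looks like a "gene for X"
# X has no causal effect on Y at all.

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

print(slope(g, y) / slope(g, x))     # Wald ratio ~ -1.67, not zero: naive MR "finds" an effect of X on Y
```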

The Bottom Line

Despite all of the care that should be taken in interpreting the science, the results of “Sleep Duration and Myocardial Infarction” should certainly raise one’s Bayesian assessment to a substantial posterior probability that arranging your life so that you get nearer to 7 or 8 hours of sleep will reduce your risk of a heart attack. For now, that is the bottom line you should take in terms of advice for your own life.

I should also mention that if more sleep does reduce heart attack risk, one important possible channel is that lack of sleep leads people to eat worse by reducing their self-control. If so, then having a pattern of eating that doesn’t depend on so much self-control in order to eat well could help reduce the harm if one can’t find time to get enough sleep. (See “Live Your Life So You Don't Need Much Self-Control.”) And, of course, arranging your life so it doesn’t take so much self-control to get enough sleep can help if sleep reduces heart-attack risk through other channels.

For annotated links to other posts on diet and health, see:

John Locke: Even Monarchists Admit there are Some Circumstances When It is Appropriate to Rise Up Against a King

In Sections 223-227 of John Locke’s 2d Treatise on Government: Of Civil Government, Chapter XIX, “Of the Dissolution of Government,” John Locke is insistent that rulers, like the ruled, can commit crimes that deserve punishment. And all too often, an uprising is the only way to deal with very serious crimes by a ruler. That view I discussed in “If Rebellion is a Sin, It is a Sin Committed Most Often by Those in Power.”

In Sections 230, 231 and the first half of 232, John Locke repeats this basic view that (a) rulers are subject to the law (including the law of nature) as much as the ruled are and (b) if a ruler commits an infraction big enough to engender an uprising with any real chance of success, it is likely that the ruler has made truly grave invasions of the people’s liberty and done truly grave harm to their welfare:

§. 230. Nor let any one say, that mischief can arise from hence, as often as it shall please a busy head, or turbulent spirit, to desire the alteration of the government. It is true, such men may stir, whenever they please; but it will be only to their own just ruin and perdition: for till the mischief be grown general, and the ill designs of the rulers become visible, or their attempts sensible to the greater part, the people, who are more disposed to suffer than right themselves by resistance, are not apt to stir. The examples of particular injustice, or oppression of here and there an unfortunate man, moves them not. But if they universally have a persuasion, grounded upon manifest evidence, that designs are carrying on against their liberties, and the general course and tendency of things cannot but give them strong suspicions of the evil intention of their governors, who is to be blamed for it? Who can help it, if they, who might avoid it, bring themselves into this suspicion? Are the people to be blamed, if they have the sense of rational creatures, and can think of things no otherwise than as they find and feel them? And is it not rather their fault, who put things into such a posture, that they would not have them thought to be as they are? I grant, that the pride, ambition, and turbulency of private men have sometimes caused great disorders in commonwealths, and factions have been fatal to states and kingdoms. But whether the mischief hath oftener begun in the people’s wantonness, and a desire to cast off the lawful authority of their rulers, or in the rulers’ insolence, and endeavours to get and exercise an arbitrary power over their people; whether oppression or disobedience, gave the first rise to the disorder, I leave it to impartial history to determine. This I am sure, whoever, either ruler or subject, by force goes about to invade the rights of either prince or people, and lays the foundation for overturning the constitution and frame of any just government, is highly guilty of the greatest crime, I think, a man is capable of, being to answer for all those mischiefs of blood, rapine, and desolation, which the breaking to pieces of governments bring on a country. And he who does it, is justly to be esteemed the common enemy and pest of mankind, and is to be treated accordingly.

§. 231. That subjects or foreigners, attempting by force on the properties of any people, may be resisted with force, is agreed on all hands. But that magistrates, doing the same thing, may be resisted, hath of late been denied: as if those who had the greatest privileges and advantages by the law, had thereby a power to break those laws, by which alone they were set in a better place than their brethren: whereas their offence is thereby the greater, both as being ungrateful for the greater share they have by the law, and breaking also that trust, which is put into their hands by their brethren.

§. 232. Whosoever uses force without right, as every one does in society, who does it without law, puts himself into a state of war with those against whom he so uses it; and in that state all former ties are cancelled, all other rights cease, and every one has a right to defend himself, and to resist the aggressor.

John Locke’s repetition of this point indicates how important he thought the point to be.

Beginning with the second half of Section 232 and continuing through Section 239, John Locke shows that even the Monarchists Barclay and Winzerus, plus, he claims, Bilson, Bracton, Fortescue, the author of The Mirrour and Hooker, admit of some circumstances in which it is appropriate to rise up against a king, or someone who had been thought of as a king. In the quotation that follows, I delayed two long Latin passages to the end of this post, leaving the English translations in their original locations:

This is so evident, that Barclay himself, that great assertor of the power and sacredness of kings, is forced to confess, That it is lawful for the people, in some cases, to resist their king; and that too in a chapter, wherein he pretends to shew, that the divine law shuts up the people from all manner of rebellion. Whereby it is evident, even by his own doctrine, that, since they may in some cases resist, all resisting of princes is not rebellion. His words are these: [First Latin passage from Barclay] In English thus:

§. 233. “But if any one should ask, Must the people then always lay themselves open to the cruelty and rage of tyranny? Must they see their cities pillaged, and laid in ashes, their wives and children exposed to the tyrant’s lust and fury, and themselves and families reduced by their king to ruin, and all the miseries of want and oppression, and yet sit still? Must men alone be debarred the common privilege of opposing force with force, which nature allows so freely to all other creatures for their preservation from injury? I answer: Self-defence is a part of the law of nature; nor can it be denied the community, even against the king himself: but to revenge themselves upon him, must by no means be allowed them: it being not agreeable to that law. Wherefore if the king shall shew an hatred, not only to some particular persons, but sets himself against the body of the commonwealth, whereof he is the head, and shall, with intolerable ill usage, cruelly tyrannize over the whole, or a considerable part of the people, in this case the people have a right to resist and defend themselves from injury: but it must be with this caution, that they only defend themselves, but do not attack their prince: they may repair the damages received, but must not for any provocation exceed the bounds of due reverence and respect. They may repulse the present attempt, but must not revenge past violences: for it is natural for us to defend life and limb, but that an inferior should punish a superior, is against nature. The mischief which is designed them, the people may prevent before it be done; but when it is done, they must not revenge it on the king, though author of the villany. This therefore is the privilege of the people in general, above what any private person hath; that particular men are allowed by our adversaries themselves (Buchanan only excepted) to have no other remedy but patience; but the body of the people may with respect resist intolerable tyranny; for when it is but moderate, they ought to endure it.”

§. 234. Thus far that great advocate of monarchical power allows of resistance.

§. 235. It is true, he has annexed two limitations to it, to no purpose: First, He says, it must be with reverence. Secondly, It must be without retribution, or punishment; and the reason he gives is, because an inferior cannot punish a superior. First, How to resist force without striking again, or how to strike with reverence, will need some skill to make intelligible. He that shall oppose an assault only with a shield to receive the blows, or in any more respectful posture, without a sword in his hand, to abate the confidence and force of the assailant, will quickly be at an end of his resistance, and will find such a defence serve only to draw on himself the worse usage. This is as ridiculous a way of resisting, as Juvenal thought it of fighting; ubi tu pulsas, ego vapulo tantum. And the success of the combat will be unavoidably the same he there describes it: —“Libertas pauperis hæc est: Pulsatus rogat, & pugnis concisus, adorat, Ut liceat paucis cum dentibus inde reverti.” This will always be the event of such an imaginary resistance, where men may not strike again. He therefore who may resist must be allowed to strike. And then let our author, or any body else, join a knock on the head, or a cut on the face, with as much reverence and respect as he thinks fit. He that can reconcile blows and reverence, may, for aught I know, desire for his pains, a civil, respectful cudgeling wherever he can meet with it. Secondly, As to his second, An inferior cannot punish a superior; that is true, generally speaking, whilst he is his superior. But to resist force with force, being the state of war that levels the parties, cancels all former relation of reverence, respect, and superiority: and then the odds that remains, is, that he, who opposes the unjust aggressor, has this superiority over him, that he has a right, when he prevails, to punish the offender, both for the breach of the peace, and all the evils that followed upon it. Barclay therefore, in another place, more coherently to himself, denies it to be lawful to resist a king in any case. But he there assigns two cases, whereby a king may unking himself. His words are, [Second Latin passage from Barclay] Which in English runs thus:

§. 237. “What then, can there be no case happen wherein the people may of right, and by their own authority, help themselves, take arms, and set upon their king, imperiously domineering over them? None at all, whilst he remains a king. Honour the king, and he that resists the power, resists the ordinance of God; are divine oracles that will never permit it. The people therefore can never come by a power over him, unless he does something that makes him cease to be a king: for then he divests himself of his crown and dignity, and returns to the state of a private man, and the people become free and superior, the power which they had in the interregnum, before they crowned him king, devolving to them again. But there are but few miscarriages which bring the matter to this state. After considering it well on all sides, I can find but two. Two cases there are, I say, whereby a king, ipso facto, becomes no king, and loses all power and regal authority over his people; which are also taken notice of by Winzerus. “The first is, If he endeavour to overturn the government, that is, if he have a purpose and design to ruin the kingdom and commonwealth, as it is recorded of Nero, that he resolved to cut off the senate and people of Rome, lay the city waste with fire and sword, and then remove to some other place. And of Caligula, that he openly declared, that he would be no longer a head to the people or senate, and that he had it in his thoughts to cut off the worthiest men of both ranks, and then retire to Alexandria: and he wished that the people had but one neck, that he might dispatch them all at a blow. Such designs as these, when any king harbours in his thoughts, and seriously promotes, he immediately gives up all care and thought of the commonwealth; and consequently forfeits the power of governing his subjects, as a master does the dominion over his slaves whom he hath abandoned.

§. 238. “The other case is, When a king makes himself the dependent of another, and subjects his kingdom which his ancestors left him, and the people put free into his hands, to the dominion of another: for however perhaps it may not be his intention to prejudice the people; yet because he has hereby lost the principal part of regal dignity, viz. to be next and immediately under God, supreme in his kingdom; and also because he betrayed or forced his people, whose liberty he ought to have carefully preserved, into the power and dominion of a foreign nation. By this, as it were, alienation of his kingdom, he himself loses the power he had in it before, without transferring any the least right to those on whom he would have bestowed it; and so by this act sets the people free, and leaves them at their own disposal. One example of this is to be found in the Scotch Annals.”

§. 239. In these cases Barclay, the great champion of absolute monarchy, is forced to allow, that a king may be resisted, and ceases to be a king. That is, in short, not to multiply cases, in whatsoever he has no authority, there he is no king, and may be resisted: for wheresoever the authority ceases, the king ceases too, and becomes like other men who have no authority. And these two cases he instances in, differ little from those above mentioned, to be destructive to governments, only that he has omitted the principle from which his doctrine flows; and that is, the breach of trust, in not preserving the form of government agreed on, and in not intending the end of government itself, which is the public good and preservation of property. When a king has dethroned himself, and put himself in a state of war with his people, what shall hinder them from prosecuting him who is no king, as they would any other man, who has put himself into a state of war with them; Barclay, and those of his opinion, would do well to tell us. This farther I desire may be taken notice of out of Barclay, that he says, “The mischief that is designed them, the people may prevent before it be done:” whereby he allows resistance when tyranny is but in design. “Such designs as these,” says he, “when any king harbours in his thoughts and seriously promotes, he immediately gives up all care and thought of the commonwealth;” so that, according to him, the neglect of the public good is to be taken as an evidence of such design, or at least for a sufficient cause of resistance. And the reason of all, he gives in these words, “Because he betrayed or forced his people, whose liberty he ought carefully to have preserved.” What he adds, into the power and dominion of a foreign nation, signifies nothing, the fault and forfeiture lying in the loss of their liberty, which he ought to have preserved, and not in any distinction of the persons to whose dominion they were subjected. The people’s right is equally invaded, and their liberty lost, whether they are made slaves to any of their own, or a foreign nation; and in this lies the injury, and against this only they have the right of defence. And there are instances to be found in all countries, which shew, that it is not the change of nations in the persons of their governors, but the change of government, that gives the offence. Bilson, a bishop of our church, and a great stickler for the power and prerogative of princes, does, if I mistake not, in his treatise of Christian subjection, acknowledge, that princes may forfeit their power, and their title to the obedience of their subjects; and if there needed authority in a case where reason is so plain, I could send my reader to Bracton, Fortescue, and the author of The Mirrour, and others, writers that cannot be suspected to be ignorant of our government, or enemies to it. But I thought Hooker alone might be enough to satisfy those men, who relying on him for their ecclesiastical polity, are by a strange fate carried to deny those principles upon which he builds it. Whether they are herein made the tools of cunninger workmen, to pull down their own fabric, they were best look. 
This I am sure, their civil policy is so new, so dangerous, and so destructive to both rulers and people, that as former ages never could bear the broaching of it; so it may be hoped, those to come, redeemed from the impositions of these Egyptian under-task-masters, will abhor the memory of such servile flatterers, who, whilst it seemed to serve their turn, resolved all government into absolute tyranny, and would have all men born to, what their mean souls fitted them for, slavery.

The brute fact is that the history of kings and other rulers includes cases of rulers who so flagrantly violated their trust and so powerfully damaged the welfare of those they ruled, that it is hard for any serious scholar to deny that there are some cases where it is appropriate to rise up against a ruler, or against an individual who was a ruler at one time. Thus, John Locke’s doctrine that “Bad Rulers May Be Removed” differs from other views in degree and in where the line is drawn, not in kind.

Closely related to the question of where the line should be drawn that rulers step over at their peril is the question of who can rightfully judge that a ruler has stepped over that line. That is the subject of the following sections, which I will discuss in a couple of weeks.

For links to other John Locke posts, see these John Locke aggregator posts: 

First Latin passage from Barclay: “Quod siquis dicat, Ergone populus tyrannicæ crudelitati & furori jugulum semper præbebit? Ergone multitudo civitates suas fame, ferro, & flammâ vastari, seque, conjuges, & liberos fortunæ ludibrio & tyranni libidini exponi, inque omnia vitæ pericula omnesque miserias & molestias à rege deduci patientur? Num illis quod omni animantium generi est à naturâ tributum, denegari debet, ut sc. vim vi repellant, seseq; ab injuriâ tueantur? Huic breviter responsum sit, Populo universo negari defensionem, quæ juris naturalis est, neque ultionem quæ præter naturam est adversus regem concedi debere. Quapropter si rex non in singulares tantum personas aliquot privatum odium exerceat, sed corpus etiam reipublicæ, cujus ipse caput est, i. e. totum populum, vel insignem aliquam ejus partem immani & intolerandâ sævitiâ seu tyrannide divexet; populo, quidem hoc casu resistendi ac tuendi se ab injuriâ potestas competit, sed tuendi se tantum, non enim in principem invadendi: & restituendæ injuriæ illatæ, non recedendi à debitâ reverentiâ propter acceptam injuriam. Præsentem denique impetum propulsandi non vim præteritam ulciscenti jus habet. Horum enim alterum à naturâ est, ut vitam scilicet corpusque tueamur. Alterum verò contra naturam, ut inferior de superiori supplicium sumat. Quod itaque populus malum, antequam factum sit, impedire potest, ne fiat, id postquam factum est, in regem authorem sceleris vindicare non potest: populus igitur hoc ampliùs quàm privatus quispiam habet: quod huic, vel ipsis adversariis judicibus, excepto Buchanano, nullum nisi in patientia remedium superest. Cùm ille si intolerabilis tyrannus est (modicum enim ferre omnino debet) resistere cum reverentiâ possit,” Barclay contra Monarchom. l. iii. c. 8.

Second Latin passage from Barclay: “Quid ergo, nulline casus incidere possunt quibus populo sese erigere atque in regem impotentius dominantem arma capere & invadere jure suo suâque authoritate liceat? Nulli certe quamdiu rex manet. Semper enim ex divinis id obstat, Regem honorificato; & qui potestati resistit, Dei ordinationi resistit: non aliàs igitur in eum populo potestas est quam si id committat propter quod ipso jure rex esse desinat. Tunc enim se ipse principatu exuit atque in privatis constituit liber: hoc modo populus & superior efficitur, reverso ad eum sc. jure illo quod ante regem inauguratum in interregno habuit. At sunt paucorum generum commissa ejusmodi quæ hunc effectum pariunt. At ego cum plurima animo perlustrem, duo tantum invenio, duos, inquam, casus quibus rex ipso facto ex rege non regem se facit & omni honore & dignitate regali atque in subditos potestate destituit; quorum etiam meminit Winzerus. Horum unus est, Si regnum disperdat, quemadmodum de Nerone fertur, quod is nempe senatum populumque Romanum, atque adeo urbem ipsam ferro flammaque vastare, ac novas sibi sedes quærere decrevisset. Et de Caligula, quod palam denunciarit se neque civem neque principem senatui amplius fore, inque animo habuerit interempto utriusque ordinis electissimo quoque Alexandriam commigrare, ac ut populum uno ictu interimeret, unam ei cervicem optavit. Talia cum rex aliquis meditatur & molitur serio, omnem regnandi curam & animum ilico abjicit, ac proinde imperium in subditos amittit, ut dominus servi pro derelicto habiti dominium.

§. 236. “Alter casus est, Si rex in alicujus clientelam se contulit, ac regnum quod liberum à majoribus & populo traditum accepit, alienæ ditioni mancipavit. Nam tunc quamvis forte non eâ mente id agit populo plane ut incommodet: tamen quia quod præcipuum est regiæ dignitatis amisit, ut summus scilicet in regno secundum Deum sit, & solo Deo inferior, atque populum etiam totum ignorantem vel invitum, cujus libertatem sartam & tectam conservare debuit, in alterius gentis ditionem & potestatem dedidit; hâc velut quadam regni ab alienatione effecit, ut nec quod ipse in regno imperium habuit retineat, ne in eum cui collatum voluit, juris quicquam transferat; atque ita eo facto liberum jam & suæ potestatis populum relinquit, cujus rei exemplum unum annales Scotici suppeditant.” Barclay contra Monarchom. l. iii. c. 16.

Give Central Banks Independence and New Political Pressures to Balance the Old Ones

There is a debate about whether central banks should follow a formal rule for monetary policy. (See my view on rules versus discretion in monetary policy in “Next Generation Monetary Policy.”) But there is broad agreement that central banks should follow some kind of systematic monetary policy. As long as carrying out a systematic monetary policy requires any kind of judgment, and as long as politicians have short-run interests contrary to good monetary policy, central banks need tactical independence. But over a longer horizon central banks don’t need fewer political pressures; they need new political pressures to balance out the old ones.

I see political pressures on central banks through the lens of negative interest rate policy. I know from my travels to talk about negative interest rate policy at central banks around the world that central bankers worry about the political blowback from changing paper currency policy, as I address in the links in “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide.” And I worry that, when central bankers get criticized for a recovery that is agonizingly slow, many of them fail to register that criticism as implicit criticism for not using vigorous negative interest rate policy. In the scary new monetary landscape, there is no refuge from criticism. But central bankers can, if they choose, get criticized inappropriately for doing the right thing instead of getting criticized appropriately for doing the wrong thing.

There is a longstanding set of arguments that have been developed by monetary “hawks,” who in almost all situations argue that interest rates should be higher. There has been little innovation in this area in the last few years, so the set of arguments by John Taylor that I discuss in “Contra John Taylor” can serve as a handy guide to many of them. (Because of his rule, John Taylor does occasionally think that interest rates should be lower; but on this occasion, he retails the standard hawkish arguments.) In addition to the standard hawkish arguments that negative interest rates themselves arouse, the idea of changing paper currency policy arouses another set of anxieties about overweening government power. By keeping paper currency in the picture in my proposals (laid out in most detail in work with Ruchir Agarwal 1, 2), I have avoided the vitriol that greets any threat to the existence of paper currency, but people also get anxious about changing the rules for paper currency.

What is most needed right now is for those who are tempted to lobby for lower interest rates in general to shift gears to lobbying for the elimination of any lower bound on interest rates. In general, I think the Fed makes reasonable decisions in particular circumstances given its overall policy, but there are many dimensions in which that overall policy can be improved, beginning with eliminating any lower bound on rates. Here is the list of proposed upgrades to systematic monetary policy that I argue for in “Next Generation Monetary Policy”:

  1. eliminating the zero lower bound or any effective lower bound on interest rates

  2. tripling the coefficients in the Taylor rule (see the sketch after this list)

  3. reducing the penalty for changing directions

  4. reducing the presumption against moving more than 25 basis points at any given meeting

  5. a more equal balance between worrying about the output gap and worrying about fluctuations in inflation

  6. focusing on a price index that gives a greater weight to durables

  7. adjusting for risk premia

  8. pushing for strict enough leverage limits for financial firms that interest rate policy is freed up to focus on issues other than financial stability.

  9. having a nominal anchor.
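
To illustrate how much difference items 1 and 2 make together, here is a minimal sketch of a Taylor-type rule, assuming the textbook coefficients of 0.5 on the inflation gap and the output gap and simply tripling them to 1.5. The 2% natural real rate, 2% inflation target, and recession numbers are my own illustrative assumptions, not figures from “Next Generation Monetary Policy”:

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate (in percent) implied by a Taylor-type rule."""
    return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

# Illustrative recession numbers: 1% inflation and a -3% output gap.
standard = taylor_rule(1.0, -3.0)                      # classic 0.5 coefficients
tripled  = taylor_rule(1.0, -3.0, a_pi=1.5, a_y=1.5)   # coefficients tripled

print(standard)  # 1.0  -- barely above zero
print(tripled)   # -3.0 -- well below zero, feasible only if item 1 is also done
```

The point of the sketch is that more aggressive coefficients quickly push the prescribed rate below zero in a recession, which is why eliminating any lower bound on interest rates comes first on the list.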

The Fed knows that it will get political criticism for negative interest rates and changes in paper currency policy. It would help even the odds for good macroeconomic outcomes if the Fed knew it would get criticism for not implementing negative rates, including changes in paper currency policy. Unfortunately, the chances that deep negative rates will be unnecessary in the future are very slim. So it matters greatly for the economy in the future whether the Fed feels political pressure in only one direction or faces an equipoise of political forces.

Related reading: Don’t miss the links to negative interest rate papers and blog posts in “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide.”

Should the Typical Person be Restricting Salt Intake?

The idea that almost all of us should be cutting back on salt intake to avoid high blood pressure is well-entrenched, not only in the popular mind, but in the minds of physicians. But how strong is the evidence for this view? James J. DiNicolantonio, Varshil Mehta and James O'Keefe question how valuable salt restriction is for the typical person in their article in The American Journal of Medicine, “Is Salt a Culprit or an Innocent Bystander in Hypertension? A Hypothesis Challenging the Ancient Paradigm.” All quotations in this blog post are from that article.

Let’s clear up a few things:

  1. There are enough different physical conditions that it seems likely that some people should cut back on salt intake. But are those conditions rare?

  2. For those with high blood pressure, medications that reduce blood pressure do seem to save lives. That doesn’t speak to whether reducing salt intake is important.

  3. People who eat a lot of salty food do seem to be less healthy. But maybe it is all the other bad things in typical types of salty food. After all, processed food tends to have a lot of salt and a lot of sugar.

One reason it seems unlikely that restricting salt intake is necessary for the typical person is that, evolutionarily, the body would have needed to evolve mechanisms to get sodium to the right level:

The human brain (hypothalamus) is wired to maintain salt (sodium) balance and hence controls our salt intake.12,42, 43 The biological reason behind this tight homeostatic regulation is that the maintenance of normal sodium levels in the extracellular fluid is required for life and for cellular processes to function properly. The transition process from marine milieu to land-based existence required the evolution of cells that were able to simulate the salty environment of their progenitor cells that existed in sea water.

If sodium levels get too high, the bodies of healthy individuals are good at getting rid of sodium:

Dietary salt has been considered as one of the most important etiologic causes of hypertension. However, according to a study conducted by Hall,82 increasing salt intake in individuals with normal kidney function “usually does not increase arterial pressure much because the kidneys rapidly eliminate the excess salt and blood volume is hardly altered.” Other studies also mention the same principal, saying that individuals with normal kidney functions eliminate excess dietary salt with ease.83, 84, 85, 86

If sodium levels get too low, people start (appropriately) craving salt:

In other words, bodily need drives salt intake. In fact, the low-salt advice may lead to salt cravings and an overconsumption of more processed foods to obtain the salt our physiology desires.12 However, nowadays, to get the salt our body needs we end up consuming salty processed foods (instead of naturally salty foods) and thus consume a greater amount of harmful dietary substances (eg, excess calories, added sugars, harmful fats, and artificial flavorings).12 Indeed, low-salt diets may inadvertently cause us to eat more added sugars. When we are deficient in salt there is an enhanced craving for it, but this does not mean we are addicted to salt.42, 43, 48, 49

If sodium levels get too low and you don’t eat salt, bad things can happen:

…exercise or physical work (particularly when done in a warm or hot environment) on a low-salt diet causes a 10-fold increased risk of heat exhaustion and prostration (characterized by nausea, vomiting, tachycardia, hypotension, vertigo, dehydration, and collapse).36 Moreover, following the advice to consume <2300 mg of sodium per day can lead to negative sodium balance, as well as negative calcium and magnesium balance.37 Thus, low-salt diets may predispose to calcium and magnesium deficiency and all the negative consequences that come with it (including osteoporosis, hypertension, cardiovascular events, arrhythmias, coronary vasospasm, sudden death, and more).

Overall, chronic salt restriction has minuses as well as pluses:

Most importantly, a meta-analysis of almost 170 studies noted that sodium restriction only lowers blood pressure by approximately 1%-3% in normotensives and 3.5%-7% in hypertensives98; however, restricting sodium increases aldosterone, renin, noradrenaline, and blood lipids. It is hard to justify dietary sodium restriction when the overall cardiovascular risk seems to worsen rather than improve when all risk factors are taken into account.
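
To put those percentage reductions into millimeters of mercury, here is some back-of-the-envelope arithmetic; the baseline systolic pressures of 120 and 150 mmHg are my own illustrative assumptions, not numbers from the article:

```python
# Assumed baselines for illustration: normotensive ~120 mmHg systolic,
# hypertensive ~150 mmHg systolic.
for label, baseline, low, high in [("normotensive", 120, 0.01, 0.03),
                                   ("hypertensive", 150, 0.035, 0.07)]:
    print(f"{label}: {baseline * low:.1f} to {baseline * high:.1f} mmHg reduction")

# normotensive: 1.2 to 3.6 mmHg reduction
# hypertensive: 5.2 to 10.5 mmHg reduction
```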

Unfortunately, salt restriction is now so much a part of conventional wisdom, in a way that goes beyond the evidence, that it is hard to know where to get reliable advice about it. But as long as you are avoiding sugar and processed food, it is not clear you need to worry about salt. Of course, you should worry about high blood pressure, but beyond taking blood pressure medication, avoiding sugar and processed food may do a lot more to reduce your blood pressure than cutting back on salt. Also, I’d like to see a study on the effects of fasting on blood pressure. Blood pressure is easy to measure frequently, so it is possible to find out how your blood pressure responds to avoiding sugar and processed food and to fasting.

Don’t miss:

For annotated links to other posts on diet and health, see:

New Evidence on the Genetics of Homosexuality

Since I moved to the University of Colorado Boulder in the Summer of 2016, I have added social science genetics to my research portfolio. Recently I was made a Faculty Fellow at the Institute for Behavioral Genetics at the University of Colorado Boulder. In the last three years, I have been impressed with the quality of the research being done in this area. Social science genetics suffered its replication crisis early compared to other areas of social science—the era of the “candidate gene study” in which testing out many genes led to de facto p-hacking. Since then, sample sizes for human genetics data have become large enough that one can get significant results even after careful correction for multiple hypothesis testing across a huge number of genetic variants.
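
To make “careful correction for multiple hypothesis testing across a huge number of genetic variants” concrete, the conventional genome-wide significance threshold comes from a Bonferroni-style correction. The figure of roughly one million effectively independent common variants is the usual rule of thumb, an assumption on my part rather than a number from this post:

```python
alpha = 0.05                      # conventional significance level for a single test
independent_tests = 1_000_000     # rule-of-thumb count of independent common variants
print(alpha / independent_tests)  # 5e-08, the conventional genome-wide threshold
```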

The basic finding for most traits of interest to social scientists is that a large number of genes each have a tiny effect. So, outside of a few diseases, there typically isn’t “a gene for” a given trait. In the absence of a single determinative gene, there are two key things that can be done: (a) look at what kinds of genes are related to a particular trait, how they compare to the genes for other traits, and how high an R-squared genes could in principle achieve, and (b) construct linear combinations of genes (called “polygenic scores”) that can predict the trait, always with a lower R-squared than is possible in principle because the weights in the linear combination have estimation error.
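
To make the idea of a polygenic score concrete, here is a minimal sketch; the SNP identifiers and weights are made up for illustration, whereas real scores use weights estimated from a genome-wide association study for anywhere from thousands to millions of variants:

```python
# Hypothetical per-SNP weights from a GWAS (made-up identifiers and numbers).
weights = {"rs0000001": 0.02, "rs0000002": -0.01, "rs0000003": 0.005}

def polygenic_score(genotype):
    """Weighted sum of allele counts (0, 1, or 2 copies of each effect allele)."""
    return sum(w * genotype[snp] for snp, w in weights.items())

person = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
print(round(polygenic_score(person), 3))  # 0.045
```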

The August 30, 2019 issue of Science includes an article “Large-scale GWAS reveals insights into the genetic architecture of same-sex sexual behavior” looking at the genetics of homosexuality in this careful way. The authors were able to use data on almost half a million individuals. Here is their “Structured Abstract”:

INTRODUCTION

Across human societies and in both sexes, some 2 to 10% of individuals report engaging in sex with same-sex partners, either exclusively or in addition to sex with opposite-sex partners. Twin and family studies have shown that same-sex sexual behavior is partly genetically influenced, but previous searches for the specific genes involved have been underpowered to detect effect sizes realistic for complex traits.

RATIONALE

For the first time, new large-scale datasets afford sufficient statistical power to identify genetic variants associated with same-sex sexual behavior (ever versus never had a same-sex partner), estimate the proportion of variation in the trait accounted for by all variants in aggregate, estimate the genetic correlation of same-sex sexual behavior with other traits, and probe the biology and complexity of the trait. To these ends, we performed genome-wide association discovery analyses on 477,522 individuals from the United Kingdom and United States, replication analyses in 15,142 individuals from the United States and Sweden, and follow-up analyses using different aspects of sexual preference.

All quotations in this post are from “Large-scale GWAS reveals insights into the genetic architecture of same-sex sexual behavior.”

Unsurprisingly, there is no “gay gene,” but instead many, many genes, each of which has a small effect on the likelihood of having had at least one same-sex sexual partner:

The SNPs that reached genome-wide significance had very small effects (odds ratios ~1.1) (table S7). For example, in the UK Biobank, males with a GT genotype at the rs34730029 locus had 0.4% higher prevalence of same-sex sexual behavior than those with a TT genotype (4.0 versus 3.6%). Nevertheless, the contribution of all measured common SNPs in aggregate (SNP-based heritability) was estimated to be 8 to 25% (95% CIs [Confidence Intervals], 5 to 30%) of variation in female and male same-sex sexual behavior, in which the range reflects differing estimates by using different analysis methods or prevalence assumptions … same-sex sexual behavior, like most complex human traits, is influenced by the small, additive effects of very many genetic variants, most of which cannot be detected at the current sample size (22). Consistent with this interpretation, we show that the contribution of each chromosome to heritability is broadly proportional to its size (fig. S3) (14). In contrast to linkage studies that found substantial association of sexual orientation with variants on the X-chromosome (8, 23), we found no excess of signal (and no individual genome-wide significant loci) on the X-chromosome (fig. S4).
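
As a check on how that 0.4-percentage-point difference in prevalence lines up with an odds ratio of roughly 1.1, here is the arithmetic using the 4.0% and 3.6% figures from the quotation:

```python
p_gt, p_tt = 0.040, 0.036           # prevalence of same-sex behavior by genotype
odds_gt = p_gt / (1 - p_gt)
odds_tt = p_tt / (1 - p_tt)
print(round(odds_gt / odds_tt, 2))  # 1.12 -- an odds ratio of about 1.1
```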

Homosexuality is rare enough that a sample of half a million or so is still not enough to get precise estimates of just what fraction of the variation in homosexual behavior could in principle be predicted by genes. For linear combinations of common genetic variants, the key quotation from above is:

… the contribution of all measured common SNPs [single nucleotide polymorphisms] in aggregate (SNP-based heritability) was estimated to be 8 to 25% (95% CIs, 5 to 30%) of variation.

Based on a wider range of genetic variation, the key quotation is as follows:

By modeling the correspondence of relatedness among individuals and the similarity of their sexual behavior, we estimated broad-sense heritability—the percentage of variation in a trait attributable to genetic variation—at 32.4% [95% confidence intervals (CIs), 10.6 to 54.3] (table S4). This estimate is consistent with previous estimates from smaller twin studies (7).

In any case, don’t fall into the fallacy that “genetic” means “unmodifiable” and “environmental” means “modifiable.” Many things that are environmental are hard to modify because it is hard to modify the environment, while many things that are genetic are easy to modify. For example, nearsightedness can easily be corrected by eyeglasses and contact lenses. In the case of homosexuality, there have been, in effect, many messily conducted experiments in modifying homosexuality that directly show that in many, many cases it is essentially impossible to modify. Genetic evidence does not speak directly to “modifiability.” In other words, you can’t use genetic evidence to say whether something is a “choice” or not.

The notion that genetic effects are hardwired physical effects is not always on track. The genetics of complex traits often operates through an effect on people’s preferences. That doesn’t mean those preferences are a small thing. Except to protect other people, it is cruel to block the expression of people’s preferences. For example, whether to be an economist or not is clearly a choice, but it would both make some of us miserable and get in the way of important contributions to the world if it were made illegal or socially disfavored to be an economist. Inhibitions on freedom, whether legal or social, need a strong justification in terms of reducing harm to others.

Also, there can be physical effects that are not genetic but that are at least as hardwired as genetic effects. Men who have more older brothers are more likely to be gay. (See the Wikipedia article “Fraternal birth order and male sexual orientation.”) People theorize this might be due to maternal immune-system reactions to previous male fetuses.

Surprises in the Genetic Data

There are several important findings in the genetic data that will surprise some and confirm the prior beliefs of others. First, having had at least one same-sex sexual partner seems to be a different thing for men than for women:

To assess differences in effects between females and males, we also performed sex-specific analyses. These results suggested only a partially shared genetic architecture across the sexes; the across-sex genetic correlation was 0.63 (95% CIs, 0.48 to 0.78) (table S9). This is noteworthy given that most other studied traits show much higher across-sex genetic correlations, often close to 1 (18–21).

A genetic correlation of 0.63 between the genetic influences on men having had a same-sex sexual partner and the genetic influences on women having had a same-sex sexual partner is still substantial, but it also means there are important differences between the sexes.

Second, having had at least one as opposed to no same-sex sexual partners is not the same thing as having predominantly same-sex partners:

To maximize our sample size and increase the power to detect SNP associations, we defined our primary phenotype as ever or never having had a same sex partner. … the genetic effects that differentiate heterosexual from same-sex sexual behavior are not the same as those that differ among nonheterosexuals with lower versus higher proportions of same-sex partners. This finding suggests that on the genetic level, there is no single dimension from opposite-sex to same-sex preference. The existence of such a dimension, in which the more someone is attracted to the same-sex the less they are attracted to the opposite-sex, is the premise of the Kinsey scale (39), a research tool ubiquitously used to measure sexual orientation. Another measure, the Klein Grid (40), retains the same premise but separately measures sexual attraction, behavior, fantasies, and identification (as well as nonsexual preferences); however, we found that these sexual measures are influenced by similar genetic factors. Overall, our findings suggest that the most popular measures are based on a misconception of the underlying structure of sexual orientation and may need to be rethought. In particular, using separate measures of attraction to the opposite sex and attraction to the same sex, such as in the Sell Assessment of Sexual Orientation (41), would remove the assumption that these variables are perfectly inversely related and would enable more nuanced exploration of the full diversity of sexual orientation, including bisexuality and asexuality.

In other words, the authors suggest a model with two parameters: attraction to men and attraction to women, with some people attracted to both, some people only attracted to men or only to women, and some people attracted to neither.
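
To see how that two-parameter model differs from a single Kinsey-style dimension, here is a toy sketch; the names and numbers are made up purely for illustration:

```python
# Two independent parameters instead of one bipolar scale (illustrative only).
people = {
    "A": {"attraction_to_men": 0.9, "attraction_to_women": 0.1},
    "B": {"attraction_to_men": 0.8, "attraction_to_women": 0.7},   # bisexual
    "C": {"attraction_to_men": 0.05, "attraction_to_women": 0.05}, # asexual
}

# A single dimension records only the *balance* between the two attractions,
# so the bisexual and asexual profiles end up looking nearly the same.
for name, p in people.items():
    balance = p["attraction_to_men"] - p["attraction_to_women"]
    print(name, round(balance, 2))
# A 0.8
# B 0.1
# C 0.0
```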

Scientifically, one of the interesting questions is how genes that increase the probability of homosexual behavior have survived in the gene pool. The authors mention this issue and its importance, but do not suggest any resolution:

We observed in the UK Biobank that individuals who reported same-sex sexual behavior had on average fewer offspring than those of individuals who engaged exclusively in heterosexual behavior, even for individuals reporting only a minority of same-sex partners (Fig. 1B). This reduction in number of children is comparable with or greater than for other traits that have been linked to lower fertility rates (fig. S1) (14). This reproductive deficit raises questions about the evolutionary maintenance of the trait, but we do not address these here.

Conclusion

Before solid evidence about the genetics of homosexuality was available, many people talked as if the genetics of homosexuality could usefully inform how gays should be treated. But it doesn’t. What the genetics of homosexuality does do is help us to appreciate the complexity of sexual attraction.


Deeper Learning in Macroeconomics

Sarah Fine and Jal Mehta have a new book In Search of Deeper Learning. Their focus is on deeper learning in high schools. I want to explore the possibility of deeper learning in college macroeconomics.

What is Deep Learning?

In an interview with Liz Mineo for the Harvard Gazette, Jal Mehta defined “deeper learning” this way:

Deeper learning is the understanding of not just the surface features of a subject or discipline, but the underlying structures or ideas. If we were talking about a biological cell, shallow learning would be able to name the parts of the cell; deeper learning would be able to understand the functions of the cell and how they interrelate. 

When you listen to the show “Car Talk,” you are listening in on a conversation between someone who has a shallow understanding of their car and someone who has a deeper understanding. A person will call in and say, “My car tends to slow down when it rains.” And then one of the guys will say, “Well, does it happen more in hot weather or cold weather?” The caller can only see the symptoms; the person at the other end of the phone can see the system and has some underlying theory or diagnosis of what might be happening.

In the same interview, Jal also explains where and when deeper learning is most likely to happen:

… deeper learning tends to emerge at the intersection of mastery, identity, and creativity. Mastery is developing significant knowledge and skill; identity is seeing yourself as connected to doing the work; and creativity is not just taking in knowledge but doing something in the field. When those three elements come together, it often yields deep learning. …

When we visited schools, we asked students, teachers, and administrators to point us to the most powerful learning spaces in their schools. They frequently pointed to elective classes and extracurricular spaces.  

… we did a deep dive on theater and debate, and those were really different domains, but they shared a number of elements. It started with purpose — students knew why they were there, what they were trying to produce, and why it mattered. There was also a much stronger sense of community in extracurriculars; students described these places as like “family.” And there was lots of opportunity for student leadership as opposed to passively receiving knowledge. There was lots of intrinsic motivation and passion — that’s the identity and creativity parts of deep learning. But there was also a lot of careful feedback, practice, and refinement — that’s the mastery part. 

… the best core classes shared the same characteristics as the extracurriculars; there was a purpose created either by a project, an essential question, or by an authentic thing that was trying to be produced. There was a real attention to trying to build the right kind of community; there was a lot of peer learning by watching how other students were doing work or making comments.

Engaging with the Big Questions in Macroeconomics (or in Economics More Broadly)

At the University of Colorado Boulder, I teach a class of about 100 students in Intermediate Macroeconomics. That stage of learning about macroeconomics and the sheer size of the class make it difficult to assign a large term project that would force students to dig deeply into an issue in macroeconomics, but I give students the opportunity to dig deeply if they are willing to seize it. I assign weekly blog posts (on an internal class blog) that can be used to examine different angles of a given topic or to explore different topics. And even when a student explores ostensibly different topics in each blog post, I am a big believer that there is almost always a theme that answers the questions “Why am I interested in this set of things? What is the connection between them?”

In my view, the degree of personal initiative needed to seize the opportunity to make the writing assignments a deep learning experience is an appropriate level of personal responsibility for college as contrasted with high school. Those students who take that initiative will find the class much more rewarding.

Macroeconomics, in particular, offers many deep questions to wrestle with. Every day I see people wrestling with big questions in Economics Twitter. Here are just some of the big questions:

  1. What caused the take-off into modern economic growth?

  2. What policies and what political equilibria can get countries that are still poor onto the track toward getting richer as Japan, Southeast Asian countries, China and India have?

  3. Should fiscal policy or monetary policy take the lead in taming business cycles?

  4. How should monetary policy adjust to the increasingly frequent situations in which the short-term interest rate needed in order to provide enough stimulus is lower than the traditional zero interest rate on paper currency?

  5. What will it take to avoid another financial crisis like the Financial Crisis of 2008?

  6. What should we do about rising inequality? What are the side-effects of different ways of trying to address inequality?

  7. What is the best way of aligning the interests of corporations with the common good? Was Milton Friedman right in saying that telling them to maximize shareholder value will yield the best outcomes for the economy?

  8. What causes trade deficits? How could we reduce the trade deficit? Should we?

  9. Is immigration good for the economy or bad for the economy? If it is good for some people and bad for others, who is it good for and who is it bad for? How is the answer different when immigration policy is designed to shift the balance of immigrants to high-skill immigrants?

  10. How do labor market policies affect the economy as a whole?

  11. Have colleges lost their way? How effective are colleges at helping their students build human capital?

  12. How should we be evaluating the performance of governments? For example, should GDP be supplemented or supplanted by a National Well-Being Index? If so, on what principles should it be designed?

  13. In helping get people what they want, what is the right balance between the four main domains of the economy: the government, non-profits, for-profit activity, and household production that is not exchanged in the market?

  14. Which economic regulations are bad and which are good? What are the essential economic regulations needed to effectively establish property rights?

  15. How can we slow global warming in a way that has the lowest cost to other economic objectives that we have? How can we build a political coalition to do that?   

Math and Deep Learning

In discussing deeper learning, Jal Mehta tends to give examples that are about thinking with words or honing words. And he talks about algorithms as if they were the antithesis of deep learning:

The bad news was that in these schools, which had been recommended as places that did 21st-century learning or particularly rigorous forms of traditional learning, students still experienced a lot of unchallenging instruction; they were doing a lot of worksheets and tasks that were pretty low level, where they were expected to memorize content and apply algorithms rather than analyze, synthesize, and create.

For me, designing an algorithm is one of the true challenges for deep learning. Using an algorithm may or may not be an occasion for deep learning. The deep learning in relation to using algorithms is often about learning which algorithm to use in different situations, how to identify the inputs into the algorithm and how to interpret the numbers an algorithm produces. The algorithm itself may seem simple, without much depth, but once all of these challenges in using an algorithm appropriately are thought of as part of learning the algorithm, an algorithm can be an occasion for deep learning.

I teach many algorithms in my “Intermediate Macro” class because I think the questions “What happens?” and “How much?” are essential for macroeconomics. As just one example of where the question “How much?” matters, there are many, many people on Twitter and in politics who think they can get enough resources to fund vast government programs from printing money. Seignorage, the effective government revenue from printing money, just isn’t that big. People can get very excited about something like seignorage because it is an unusual type of thing, but then overestimate just how big a deal it is if they don’t do the arithmetic. Sometimes deep learning can be figuring out the difference between an effect existing and an effect being substantial in size.
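
To see why seignorage is small, here is some back-of-the-envelope arithmetic; the currency-to-GDP ratio of 8% and nominal growth of 4% are round illustrative assumptions, not official statistics:

```python
# Rough steady state: the stock of currency grows in line with nominal GDP,
# so seignorage per year is roughly (currency / GDP) * nominal GDP growth.
currency_to_gdp = 0.08   # assumed ratio of currency held by the public to GDP
nominal_growth = 0.04    # assumed annual nominal GDP growth rate

seignorage_share_of_gdp = currency_to_gdp * nominal_growth
print(f"{seignorage_share_of_gdp:.2%} of GDP per year")  # 0.32% of GDP per year
```

A third of a percent of GDP is real money, but it is nowhere near enough to fund vast government programs.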

Less is More

There is a tradeoff between deep learning and “covering” a lot of material. “Covering” a lot of material often amounts to a lot of time spent giving the instructor the delusion that the students have a large set of ideas firmly in hand. True engagement by students requires taking longer for each key idea. Jal Mehta says this in his interview with Liz Mineo:

We define them as compelling teachers when they give their students a challenging, higher-order thinking task, and where at least three-quarters of the students were highly engaged with that task. … These teachers created spaces where they brought together rigor and joy and which were intellectually demanding, but also open, playful, and warm. … They emphasized coverage less and seeing things from different angles more.

Students as Scientists

Any deep learning requires students to exercise their own minds in many ways. Jal says this to Mineo:

Our most compelling teachers viewed their students as essentially inquirers in the subjects they were pursuing; the students were the historians or the scientists. They were trying to help students to own the standards of their fields or disciplines and also inspire them to get interested in their subjects in the long run.

… It takes time to develop knowledge, skill, and mastery over a domain, and these teachers were trying to get students excited about this trajectory.

In economics, the answers to many questions are still disputed, so taking the role of a scientist is not only a good way to learn, it is necessary in order to make a decision for yourself among competing ideas. Even when I lay out only one view in my lectures, that view often differs enough from the view in the textbook or the view in an earlier class that there are plenty of different views for students to wrestle with if they are willing. A key here is to realize that a state of being confused is a gateway to deep learning.

What is wrestling with different views? It is asking how someone could believe each view, and then asking yourself what you believe after you have the backup for each view kicking around in your head. Euclid, who thousands of years ago formalized the geometry you learned in high school, is reputed to have said “There is no royal road to geometry.” I am saying “There is no road to deep learning that does not pass through a period of feeling confused.” Feeling confused along the way is not a problem. Thinking that feeling confused means you should give up is a big problem. I can tell you that cutting-edge research almost always involves going through a period of feeling confused. There is great honor in feeling confused because you are trying to understand something deeply.

Motivating Students and Making Things Fun

To me, economics is fascinating. I tend to teach as if everyone found it as intrinsically fascinating as I do. But I realize that, in fact, students come into my class with a wide variety of different motivations, almost none of which I understand. I would love to have more students tell me what it is they hope to get out of their Intermediate Macro class. If I understood better what interests my students come into the class with, I could thicken the connection between what we are doing in class and those interests. Sometimes the connection might be that those interests stem from what I see as a faulty view of macroeconomics, but there should always be some way to connect things.

I would also love to have more students tell me what kinds of spice they like to have added to lectures. My primary goal will always be learning that lasts. But if, within my abilities, I can see how to make lectures more fun without sacrificing learning that lasts, I’ll try to do it.

Finally, I’d love to get feedback from other professors about what they think motivates students and what keeps things fun for students in macroeconomics. They might be interested in turn in what I have to say about my approach in “On Teaching and Learning Macroeconomics.” One possible motivation for wanting to learn macroeconomics is to be able to understand the newspaper as it talks about the big important events of the day. That is the objective for learning macroeconomics that, so far, I have been most focused on in my teaching.