Less is More in Mormon Church Meetings

Other Mormon Church leaders defer so strongly to the President of the Mormon Church that the views and actions of an apostle who rises to the Presidency by seniority, and so has no one left to defer to, can be surprising. My July 22, 2018 post, “New Mormon Prophet Russell Nelson Shakes Things Up,” reported several big changes approved by Russell Nelson, who became President of the Mormon Church only in January 2018.

Major changes continue in the Mormon Church. The biggest is the reduction in the Sunday meeting schedule from three hours to two. Sunday School and the sex-segregated Priesthood Meeting/Relief Society period will now alternate Sundays, while Sacrament Meeting, which everyone attends together, including young children, will continue to be held every week.

The official reason given for the change is to free up time for religious instruction and study at home: “It is time for a home-centered church,” Russell Nelson said.

The most intriguing aspect of the change is that it seems to have been spurred in part by social-science research. Quoting from the article flagged above:

Leaders also considered a study that found that individual scripture study and prayer did the most to help young Latter-day Saints feel the influence of the Holy Ghost, Elder Cook said.

Here, if I were a Mormon Church leader, I would worry that the research was based only on a correlation and was not causal. The sort of person who has the right psychological profile for feeling powerful subjective spiritual experiences might well be more attracted to scripture study and prayer to begin with, while a broader range of psychological profiles might lead people to show up at church on Sunday. Efforts to convince Mormons to spend more time on individual scripture study and prayer may therefore do more to produce the desired effect than having Mormons spend an extra hour in church on Sunday.
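The confounding worry can be made concrete with a toy simulation, with all numbers invented purely for illustration: if an unobserved disposition drives both scripture study and reported spiritual experience, the two will be strongly correlated even when study has no causal effect at all.

```python
import random

random.seed(1)

n = 50_000
study_hours, felt_experience = [], []

for _ in range(n):
    # An unobserved psychological trait drives BOTH variables;
    # study has no causal effect on experience in this toy model.
    disposition = random.gauss(0, 1)
    study_hours.append(2 + disposition + random.gauss(0, 1))
    felt_experience.append(5 + disposition + random.gauss(0, 1))

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

r = corr(study_hours, felt_experience)
print(f"correlation = {r:.2f} despite zero causal effect")
```

In this sketch the correlation comes out around 0.5, entirely from the shared disposition. A study that saw only the two observed variables could easily mistake it for a causal effect of study on experience.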

Intervention studies provided some evidence of satisfactory effects:

The church had been testing the new curriculum in congregations around the world with success, Elder Cook said. One pilot program was in Brazil and others were reported in Iowa and Tooele.

The big empirical issue here is the Hawthorne effect: an experiment that makes its participants feel special often gets a good result, regardless of what the intervention is. Of course, many Mormon Church leaders have been businessmen who are used to changing things up in order to use the Hawthorne effect intentionally. (Russell Nelson himself was a heart surgeon rather than a businessman. He performed heart surgery on my grandfather.) But cutting the Sunday meetings down from three hours to two is a much bigger change than one would make if one were relying only on the Hawthorne effect.

The other change is a continuing attempt to get those outside the Mormon Church not to call it the Mormon Church any more, but to use its official name, “The Church of Jesus Christ of Latter-day Saints.” The choir formerly known as “The Mormon Tabernacle Choir” is now officially “The Tabernacle Choir at Temple Square.” Despite these efforts, I plan to continue to refer to The Church of Jesus Christ of Latter-day Saints as the “Mormon Church” on this blog.

Don't miss these posts on Mormonism:

Also see the links in "Hal Boyd: The Ignorance of Mocking Mormonism."

Don’t miss these Unitarian-Universalist sermons by Miles:

By self-identification, I left Mormonism for Unitarian Universalism in 2000, at the age of 40. I have had the good fortune to be a lay preacher in Unitarian Universalism. I have posted many of my Unitarian-Universalist sermons on this blog.

The US Military Needs to Beef Up Its Artificial Intelligence and Cyberwar Capabilities

                                                      Link to the article shown above

Artificial intelligence is becoming more important to warfare. The US, China and Russia are all working on their military artificial intelligence capability and other computer capabilities. In their March 2, 2018 Wall Street Journal article "The New Arms Race in AI," Julian E. Barnes and Josh Chin give a very useful rundown of some of the things happening in this area. The sheer growth in computing power is a key driver:

Fueling the AI race is processing power, an emerging area of strategic competition between China and the U.S. Chinese state media reported in January that researchers with the National University of Defense Technology and National Supercomputer Center in Tianjin had made a breakthrough in building a conventional supercomputer at exascale—10 times faster than today’s supercomputers—scheduled for completion by 2020. “That’s a revolutionary, generational leap up,” said Dr. Cheung.

Distinct from raw processing power is the rise of quantum computing. For now, the low-hanging fruit from quantum computing is its value in breaking codes and, on the flip side, in enabling essentially unbreakable encryption:

In the city of Hefei in eastern China, work began last year on a $1 billion national quantum-information-sciences laboratory. Slated to open in 2020, it will build on research already under way nearby in the lab of physicist Pan Jianwei, who led the team that launched the world’s first quantum communications satellite. The project propelled China far ahead of others in transmitting information with essentially unbreakable quantum encryption.

But quantum computing could be important in the long run for sheer processing power. My intuition for the power of quantum computing comes from the many-worlds interpretation of quantum mechanics: quantum computing has the potential to harness a multitude of copies of a computer in nearby alternate universes that are still entangled and haven't fully separated yet.

Here are some of the things artificial intelligence could do on the battlefield, as described by quotations from Julian Barnes and Josh Chin's article:

  • ... scan video from drones and find details that a human analyst would miss—identifying, for instance, a particular individual moving between previously undetected terrorist safe houses.
  • The F-35, one of America’s most advanced jet fighters, uses AI to evaluate and share radar and other sensor data among pilots, expanding their battlefield awareness. AI stitches together information and highlights what is likely most important to the pilot.
  • The U.S. Army is working on tactical augmented reality systems—sort of a Google Glass for war—using goggles or a visor that could display video from drones flying above, current position and enhanced night vision. AI-powered computing could add information about incoming threats, targets and areas that have to be protected.
  • AI also could vastly improve the effectiveness of airstrikes, ... launch a cluster of missiles at the target. ... China is developing similar technology. In January, the country’s military TV network broadcast footage of researchers testing such “swarm intelligence,” which could eventually link dozens of armed drones into an automated attack force.
  • AI could speed up warfare to a point where unassisted humans can’t keep up—a scenario that retired U.S. Marine Gen. John Allen calls “hyperwar.” In a report released last year, he urged the North Atlantic Treaty Organization to step up its investments in AI, including creating a center to study hyperwar and a European Darpa, particularly to counter the Russian effort. ... “In hyperwar, the side that will prevail will be the side that is able to respond more quickly,” Gen. Allen said. “Artificial intelligence will collapse the decision-action loop in a very big and very real way.”
  • Russia is investing in AI as well. Moscow has focused on creating autonomous weapons powered by AI and hopes in the coming decade to have 30% of its military robotized, which could transform how it fights. Russia’s sophisticated drone development lags behind the U.S., but it has exceptional expertise in electronic warfare, and AI technologies could boost it further.

The US has great expertise in computer hardware and software in the private sector, but many private companies are leery of being involved in military research. 

In addition to the role of artificial intelligence and other computational capabilities in kinetic warfare, cyberespionage and cyberwar that attack the internet itself or spread computer viruses and worms are a great danger. I'd like to see the US government devote more resources to addressing all of these threats.

 

Don't miss these other posts on national security:  

 

Best Health Guide: 10 Surprising Changes When You Quit Sugar

Even clickbait can sometimes be on target. Here are the 10 points in the article above, “10 Surprising Changes When You Quit Sugar”:

  1. Your teeth improve.

  2. You feel more energetic.

  3. You lose weight.

  4. You reduce your diabetes risk.

  5. Food tastes better.

  6. You get smarter.

  7. You reverse a range of health problems.

  8. Your heart health improves.

  9. You sleep better.

  10. You extend your lifespan.

If you can quit sugar, it will be one of the best things you have ever done for yourself. I have some tips on quitting sugar in “Letting Go of Sugar.” And I have written about some of the arguments against sugar in many of the posts flagged below.

Don't miss these other posts on diet and health and on fighting obesity:

I. The Basics

II. Sugar as a Slow Poison

III. Anti-Cancer Eating

IV. Eating Tips

V. Calories In/Calories Out

VI. Wonkish

VIII. Debates about Particular Foods and about Exercise

IX. Gary Taubes

X. Twitter Discussions

XI. On My Interest in Diet and Health

See the last section of "Five Books That Have Changed My Life" and the podcast "Miles Kimball Explains to Tracy Alloway and Joe Weisenthal Why Losing Weight Is Like Defeating Inflation." If you want to know how I got interested in diet and health and fighting obesity and a little more about my own experience with weight gain and weight loss, see “Diana Kimball: Listening Creates Possibilities” and my post "A Barycentric Autobiography."

John Locke: Defense against the Black Hats is the Origin of the State

In “The Social Contract According to John Locke” I write:

John Locke's version of social contract theory is striking in saying that the only right people give up in order to enter into civil society and its benefits is the right to punish other people for violating rights. No other rights are given up, only the right to be a vigilante.

But why would people give up even that right? John Locke explains in Section 123 of his Second Treatise of Government, “Of Civil Government” (in Chapter IX, “Of the Ends of Political Society and Government”):

§ 123. IF man in the state of nature be so free, as has been said; if he be absolute lord of his own person and possessions, equal to the greatest, and subject to nobody, why will he part with his freedom? why will he give up this empire, and subject himself to the dominion and controul of any other power? To which it is obvious to answer, that though in the state of nature he hath such a right, yet the enjoyment of it is very uncertain, and constantly exposed to the invasion of others: for all being kings as much as he, every man his equal, and the greater part no strict observers of equity and justice, the enjoyment of the property he has in this state is very unsafe, very unsecure. This makes him willing to quit a condition, which, however free, is full of fears and continual dangers: and it is not without reason, that he seeks out, and is willing to join in society with others, who are already united, or have a mind to unite, for the mutual preservation of their lives, liberties and estates, which I call by the general name, property.

Note here his definition of “property” as “lives, liberties and estates”; the ordinary meaning of property was different enough in the 18th century that the Declaration of Colonial Rights used the phrase “life, liberty, and property,” and the Declaration of Independence famously included close to the full range of things that enter people’s utility functions by the expansive phrase “life, liberty and the pursuit of happiness.” In the absence of some sort of mutual protection association, many things people care about are endangered by bad guys.

What does it take to restrain bad guys? Here is John Locke’s answer:

  • clearly stated rules

  • impartial judges

  • the brute force needed to enforce sentences

Why are these needed?

  • people are reluctant to admit they have transgressed

  • the desire for revenge makes people want to go too far in punishing offenses against themselves

  • lack of caring makes people not want to go far enough in punishing offenses against others they are not emotionally close to

  • punishing bad guys is hard.

Here is how John Locke makes those points:

§ 124. The great and chief end, therefore, of men’s uniting into commonwealths, and putting themselves under government, is the preservation of their property. To which in the state of nature there are many things wanting. First, There wants an established, settled, known law, received and allowed by common consent to be the standard of right and wrong, and the common measure to decide all controversies between them: for though the law of nature be plain and intelligible to all rational creatures; yet men being biassed by their interest, as well as ignorant for want of study of it, are not apt to allow of it as a law binding to them in the application of it to their particular cases. 

§ 125. Secondly, In the state of nature there wants a known and indifferent judge, with authority to determine all differences according to the established law: for every one in that state being both judge and executioner of the law of nature, men being partial to themselves, passion and revenge is very apt to carry them too far, and with too much heat, in their own cases; as well as negligence, and unconcernedness, to make them too remiss in other men’s. 

§ 126. Thirdly, In the state of nature there often wants power to back and support the sentence when right, and to give it due execution. They who by any injustice offended, will seldom fail, where they are able, by force to make good their injustice; such resistance many times makes the punishment dangerous, and frequently destructive, to those who attempt it.  

There remains the question “Why are people willing to subject themselves to actual, imperfect rulers?” The basic answer is that there are bad guys out there who are worse than the rulers:

§ 127. Thus mankind, notwithstanding all the privileges of the state of nature, being but in an ill condition, while they remain in it, are quickly driven into society. Hence it comes to pass, that we seldom find any number of men live any time together in this state. The inconveniences that they are therein exposed to by the irregular and uncertain exercise of the power every man has of punishing the transgressions of others, make them take sanctuary under the established laws of government, and therein seek the preservation of their property. It is this makes them so willingly give up every one his single power of punishing, to be exercised by such alone, as shall be appointed to it amongst them; and by such rules as the community, or those authorized by them to that purpose, shall agree on. And in this we have the original right and rise of both the legislative and executive power, as well as of the governments and societies themselves.

When the rulers become as bad as the worst of the bad guys, then people will often take their chances rejecting any authority of the state over them, or will try to form alternative dispute resolution mechanisms that can be seen as the rudiments of an alternative state.

For links to other John Locke posts, see these John Locke aggregator posts: 

On Guilt by Association

During the September 27 Senate Judiciary Committee hearings on Brett Kavanaugh’s nomination to the Supreme Court, Orrin Hatch said this:

"Let's at least be fair and look at facts or the absence thereof," Mr. Hatch said. "Guilt by association is wrong. Immaturity does not equal criminality. That Judge Kavanaugh drank in high school or college does not make him guilty of every terrible thing that he's recently been accused of."

My reaction is that it depends on what “guilt by association” means. Certainly, you should be able to associate with any other human being, no matter how bad they are, without their crimes tainting you—unless through your association you gain knowledge of awful crimes that it would be useful for the police or other constituted authorities to know and that you fail to share. If there was a rape culture at Georgetown Prep, and Brett Kavanaugh knew about it and did nothing, that is a serious black mark against him.

Note here that while attorney-client privilege is close to absolute, even therapists are required by law to report any suspicion of sexual abuse. So whatever privilege one thinks should extend to loyalty to what one’s friends tell one in confidence, it should not extend to helping one’s friends conceal sexual assault. Part of the test is whether someone could tell, and let themselves be moved by the fact, that sexual assault is an awful crime.

It seems it should be possible to determine by investigation whether there was such a rape culture at Georgetown Prep and whether Brett was in the in-crowd to such an extent that it would have been hard for him not to know about it. For a juvenile to know about and do nothing about such wrongdoing may not be worthy of jail time (whatever the statute says), but it should be disqualifying for a seat on the Supreme Court, even 35 years later, short of a forthright renunciation of such inaction in one’s past. We want to have on the Supreme Court folks who were straight arrows as kids—or who have admitted wrongdoing—whether by action or inaction—and turned over a new leaf.

Of course, Brett Kavanaugh may be guilty of worse than inaction. But even in that case, it may be easier to prove that he didn't lift a finger to stop a rape culture among his friends.

John Ioannidis, T. D. Stanley and Hristos Doucouliagos: The Power of Bias in Economics Research

Unfortunately, I couldn't find an ungated version of this article. But the abstract above says a lot. In case it is hard to read the image, here it is:

We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under‐powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one‐third inflated by a factor of four or more.
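The exaggeration the abstract describes follows mechanically from selecting on statistical significance when power is low. Here is a hedged sketch of that mechanism in Python; the true effect, standard error, and significance rule are all invented for illustration and are not taken from the paper:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # hypothetical true parameter value
SE = 0.15           # hypothetical standard error of each study's estimate
N_STUDIES = 20_000

# Each simulated study reports one noisy estimate; only "significant"
# estimates (|t| > 1.96) tend to get emphasized in a literature.
significant = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, SE)
    if abs(estimate / SE) > 1.96:
        significant.append(estimate)

power = len(significant) / N_STUDIES
exaggeration = statistics.mean(significant) / TRUE_EFFECT

print(f"power = {power:.2f}")
print(f"significant estimates overstate the true effect by about {exaggeration:.1f}x")
```

With these invented numbers, power comes out well under 80%, and the significant estimates overstate the true effect by roughly a factor of two, in the same spirit as the paper's finding that reported effects are typically exaggerated by a factor of two.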

Yes, Sugar is Really Bad for You

In a September 19, 2018 bbc.com article, journalist Jessica Brown acts as a defense attorney for sugar, while recognizing attacks that have been made on sugar. Here is the opening statement:

It’s hard to imagine now, but there was a time when humans only had access to sugar for a few months a year when fruit was in season. Some 80,000 years ago, hunter-gatherers ate fruit sporadically and infrequently, since they were competing with birds.

Now, our sugar hits come all year round, often with less nutritional value and far more easily – by simply opening a soft drink or cereal box. …

But so far, scientists have had a difficult time proving how it affects our health, independent of a diet too high in calories. 

Is it just calories?

One key way to cast doubt on the role of sugar is to say that it is really just total calories. Jessica quotes Luc Tappy:

Luc Tappy, professor of physiology at the University of Lausanne, is one of many scientists who argue that the main cause of diabetes, obesity and high blood pressure is excess calorie intake, and that sugar is simply one component of this.

“More energy intake than energy expenditure will, in the long term, lead to fat deposition, insulin resistance and a fatty liver, whatever the diet composition,” he says. “In people with a high energy output and a matched energy intake, even a high fructose/sugar diet will be well tolerated.”

Tappy points out that athletes, for example, often have higher sugar consumption but lower rates of cardiovascular disease: high fructose intake can be metabolised during exercise to increase performance. 

The trouble with this way of absolving sugar is that a key mechanism through which sugar can cause trouble is by leading people to eat more total calories—both through the insulin mechanism I emphasize in “Obesity Is Always and Everywhere an Insulin Phenomenon” and through making things tasty. So holding calories constant when thinking about the effects of sugar is ignoring one of the primary mechanisms through which eating sugar can cause harm. And to the extent athletes are harmed less by eating sugar, an interesting hypothesis is that exercise might reduce the effect of sugar in making people hungry a little while later.

Is it just correlational, not causal?

Some of the best evidence about the malign effects of sugar comes from data on soft drink and fruit juice consumption. Here Jessica conflates the criticism that some studies are only correlational with the status of the research on soft drinks and fruit juice as a whole. Indeed, there are some interesting studies that have to be taken with a grain of salt because they are only correlational:

Still, studies have demonstrated other ways in which sugar affects our brains. Matthew Pase, research fellow at Swinburne’s Centre for Human Psychopharmacology in Australia, examined the association between self-reported sugary beverage consumption and markers of brain health determined by MRI scans. Those who drank soft drinks and fruit juices more frequently displayed smaller average brain volumes and poorer memory function. Consuming two sugary drinks per day aged the brain two years compared to those who didn’t drink any at all. But Pase explains that since he only measured fruit juice intake, he can’t be sure that sugar alone is what affects brain health. …

One 15-year study seemed to back this up: it found that people who consumed 25% or more of their daily calories as added sugar were more than twice as likely to die from heart disease than those who consumed less than 10%. Type 2 diabetes also is attributed to added sugar intake. Two large studies in the 1990s found that women who consumed more than one soft drink or fruit juice per day were twice as likely to develop diabetes as those who rarely did so.

But there is a great deal of non-correlational evidence. Jessica writes:

Meanwhile, sugary drinks, which usually use high fructose corn syrup, have been central to research examining the effects of sugar on our health. One meta-analysis of 88 studies found a link between sugary drinks consumption and body weight. In other words, people don’t fully compensate for getting energy from soft drinks by consuming less of other foods – possibly because these drinks increase hunger or decrease satiety.

Following the link through to the meta-analysis, it reports:

… larger effect sizes were observed in studies with stronger methods (longitudinal and experimental vs cross-sectional studies). …

We found 7 studies that examined the connection between soft drink intake and body weight in an experimental or intervention context. Five reported a positive association. In 3 of these studies, participants who were given soft drinks to consume gained weight over the course of the experiment. Two intervention studies aimed at decreasing soft drink consumption among high school students showed that students in the intervention groups essentially maintained their weight over the treatment period, whereas those in the control groups exhibited significant weight gain. Two studies reported no statistically significant effect of soft drink consumption on weight gain. The average effect size for experimental studies was 0.24 (P < .001; Q7 = 24.57, P = .001).

Jessica never pulls this out.

Does sugar sometimes do something good?

There are hundreds of effects of any one thing; so if one is looking for a good effect—as one might have an incentive to do—one is likely to find one. For sugar, perking up old folks may be one:

One recent study found that sugar may even help improve memory and performance in older adults. Researchers gave participants a drink containing a small amount of glucose and asked them to perform various memory tasks. Other participants were given a drink containing artificial sweetener as a control. They measured the participants' levels of engagement, their memory score, and their own perception of how much effort they’d applied.

The results suggested that consuming sugar can make older people more motivated to perform difficult tasks at full capacity – without them feeling as if they tried harder. Increased blood sugar levels also made them feel happier during the task.

And of course, if you wait to eat sugar until your expected remaining life is only a couple of years, it may well be that death will interrupt any serious harm of sugar. It is young people—say anyone with an expected remaining life span of more than two years—who should quit eating sugar.

Is being against sugar crazy or cultlike?

One way to distract from the evidence against sugar is to shift the focus to other, less scientific reasons people might be against sugar. Jessica writes:

There is also a growing argument that demonising a single food is dangerous …

… dietitian Renee McGregor says it’s important to understand that a healthy, balanced diet is different for everyone.

McGregor, whose clients include those with orthorexia, a fixation with eating healthily, says that it isn’t healthy to label foods as ‘good’ or ‘bad’. And turning sugar into a taboo may only make it more tempting. “As soon as you say you can’t have something, you want it,” she says. “That’s why I never say anything is off-limits. I’ll say a food has no nutritional value. But sometimes foods have other values.”

Associate professor at James Madison University Alan Levinovitz studies the relationship between religion and science. He says there’s a simple reason we look at sugar as evil: throughout history, we’ve demonised the things we find hardest to resist (think of sexual pleasure in Victorian times).

Today, we do this with sugar to gain control over cravings.

“Sugar is intensely pleasurable, so we have to see it as a cardinal sin. When we see things in simple good and evil binaries, it becomes unthinkable that this evil thing can exist in moderation. This is happening with sugar,” he says.

He argues that seeing food in such extremes can make us anxious about what we’re eating – and add a moral judgment onto something as necessary, and as everyday, as deciding what to eat.

For well-adjusted folks, I think there is a good reason to cut out sugar almost entirely that they don’t give due credit to: after about three weeks off sugar, everything tastes sweeter, and the desire for sugar goes down, so it becomes much easier to avoid sugar. I talk about this in “Letting Go of Sugar.”

Will cutting out sugar lead to eating worse things or cutting out something essential?

One of the weakest arguments in defense of sugar is that cutting it out will lead to other bad dietary adjustments. Give me a break! Because sugar is in almost all processed foods, cutting out sugar leads to avoiding most processed foods—highly likely to be a step in the right direction. (See “The Problem with Processed Food” and the evidence discussed in “Why a Low-Insulin-Index Diet Isn't Exactly a 'Lowcarb' Diet.”)

While Jessica is doing her best to come up with pro-sugar arguments, she is cavalier in assuming that dietary fat is bad and that fruit is so wonderful that eating fruit sparingly because of its sugar content is a bad thing. She writes:

Taking sugar out of our diets can even be counterproductive: it can mean replacing it with something potentially more calorific, such as if you substitute a fat for a sugar in a recipe.

… we risk confusing those foods and drinks with added sugar that lack other essential nutrients, like soft drinks, with healthy foods that have sugars, like fruit.

On fruit, see the section “The Conundrum of Fruit” in “Forget Calorie Counting; It's the Insulin Index, Stupid.” On dietary fat, see:

Also, see “Faye Flam: The Taboo on Dietary Fat is Grounded More in Puritanism than Science.” Here, as I noted above, “not science” is more important than “yes Puritanism.” But it is worth noting that if there is anything biasing non-scientists’ sense of the virtues and vices of different types of food, the fact that dietary fat uses the same word “fat” as body fat is highly likely to distort some people’s intuitions. (This could be tested by looking at attitudes toward dietary fat in countries where the word for dietary fat is fully distinct from the word for body fat.)

It may be surprising that one has to defend the idea that sugar is very bad, but that is the world we live in. In “The Trouble with Most Psychological Approaches to Weight Loss: They Assume the Biology is Obvious, When It Isn't” I give examples of things people will be saying when the serious harms of sugar really are conventional wisdom. We are not there yet. I am proud to be an anti-apologist for sugar.

Don't miss these other posts on diet and health and on fighting obesity:

Also see the last section of "Five Books That Have Changed My Life" and the podcast "Miles Kimball Explains to Tracy Alloway and Joe Weisenthal Why Losing Weight Is Like Defeating Inflation." If you want to know how I got interested in diet and health and fighting obesity and a little more about my own experience with weight gain and weight loss, see my post "A Barycentric Autobiography."

Netflix as an Example of Clay Christensen's 'Disruptive Innovation'

I am a great admirer both of Clay Christensen personally and of his theory of disruptive innovation. “Disruptive innovation” doesn’t mean simply a big innovation, it means an innovation for which doing a good job at giving your usual customers what they want and will pay for is not enough. You can see posts I have written referencing Clay here:

https://blog.supplysideliberal.com/?tag=clay

Netflix is a fascinating example of disruptive innovation. I urge you to read the entire article by Alex Sherman shown above:

How Netflix sent the biggest media companies into a frenzy, and why Netflix thinks some are getting it wrong

As teasers, here are the two passages most relevant to the idea of disruptive innovation. I added boldface to the most important sentence in each:

While traditional media is racing to catch up, Netflix CEO Reed Hastings is not looking back at the runners he's passed.

Hastings has never really feared legacy media, said Neil Rothstein, who worked at Netflix from 2001 to 2012 and eventually ran digital global advertising for the company. That's because Hastings bought into the fundamental principle of "The Innovator's Dilemma," the 1997 business strategy book by Harvard Business School professor Clayton Christensen.

That book, often cited in tech circles, explains how disruptive businesses often start off as cheaper alternatives with lesser functionality, making it difficult for big incumbents to respond without cannibalizing their cash-rich businesses. Over time, the newcomer adds features and builds customer loyalty until it's just as good or better than the incumbent's product. By the time the old guard wakes up, it's too late.

"Reed brought 25 or 30 of us together, and we discussed the book," Rothstein said of an executive retreat he remembered nearly a decade ago. "We studied AOL and Blockbuster as cautionary tales. We knew we had to disrupt, including disrupting ourselves, or someone else would do it."

 

Hastings derived many of his strategy lessons from a Stanford instructor named Hamilton Helmer. Hastings even invited him to Netflix in 2010 to teach other executives.

One of Helmer's key concepts is called counter-positioning, which Helmer defines as: "A newcomer adopts a new, superior business model which the incumbent does not mimic due to anticipated damage to their existing business."

"Throughout my business career, I have often observed powerful incumbents, once lauded for their business acumen, failing to adjust to a new competitive reality," Hastings writes in the foreword to Helmer's book "7 Powers," published in 2016.

What Steven Gundry's Book 'The Plant Paradox' Adds to the Principles of a Low-Insulin-Index Diet

I have been reading a remarkable book: The Plant Paradox by Steven Gundry. I find the book persuasive, both due to its logical arguments and because of my experience in experimenting myself with its recommendations. Starting from a low-insulin-index diet (see “Forget Calorie Counting; It's the Insulin Index, Stupid”) that tries to avoid too much animal protein (see “Meat Is Amazingly Nutritious—But Is It Amazingly Nutritious for Cancer Cells, Too?”), the changes needed to follow Steven Gundry’s recommendations weren’t huge, but in my assessment have made me feel better than ever. In particular, I have noticed feeling especially clearheaded, and I have felt an almost indestructible cheerfulness in the last couple of months while following those recommendations, despite being in the stressful situation of writing two grant proposals at once.

In this post, I’ll focus on Steven’s logical arguments and the most basic practical implications of those arguments. For the most part I’ll leave writing in detail about how I have reoptimized my diet to later posts. But there is one post in which I already did an update based on The Plant Paradox: “Our Delusions about 'Healthy' Snacks—Nuts to That!" There, I added this, which can serve as an executive summary for today’s post:

I am currently reading Steven Gundry's book The Plant Paradox. So far, I am impressed. He argues that peanuts and cashews (technically legumes and drupes respectively, not nuts) are unhealthy because they contain a lot of lectins (natural insecticides that plants produce) that humans are ill-adapted to detoxify because they were plants from the Americas. Even those of Native American descent have had at most about 14,000 years for evolution to develop defenses against these particular lectins. For the rest of us, it is only 526 years since Columbus. And it is a lot more complex to evolve defenses against particular lectins than, say, the extension of the ability to digest milk into adulthood.

I was never a big fan of peanuts, so avoiding them is no change for me, but currently I have cut out cashews. Also, based on Steven Gundry's advice, I am currently switching to blanched (deskinned) Marcona almonds and shifting away from almond milk toward coconut milk. (I was annoyed to find that Costco's Marcona almonds are roasted in peanut oil, which means I have to get my Marcona almonds online.) 

The War Between Plants and Animals

The first key point in Steven’s argument is that plants have evolved natural insecticides—many of them in the class of chemicals called lectins. These natural insecticides are not healthy for humans, but some are worse than others. The immediately following quotations are from Chapter 1: “The War Between Plants and Animals.”

… plants don’t want to be eaten—and who can blame them? Like any living thing, their instinct is to propagate the next generation of their species. To this end, plants have come up with devilishly clever ways to protect themselves and their offspring from predators. …

… Plants have actually evolved an awesome array of defensive strategies to protect themselves, or at least their seeds, from animals of all shapes and sizes, including humans. Plants may use a variety of physical deterrents, such as color to blend into their surroundings; an unpleasant texture; sticky stuff such as resins and saps that entangle insects, provide protective cover by making sand or soil clump, or attract grit that makes them unpleasant to eat; or a simple reliance on a hard outer coating, such as a coconut, or spine-tipped leaves, such as an artichoke. Other defensive strategies are far subtler. Plants are great chemists—and alchemists, for that matter: they can turn sunbeams into matter! They have evolved to use biological warfare to repel predators—poisoning, paralyzing, or disorienting them—or to reduce their own digestibility to stay alive and protect their seeds, enhancing the chances that their species will endure. Both these physical and chemical defensive strategies are remarkably effective at keeping predators at bay, and even sometimes at getting animals to do their bidding. Because their initial predators were insects, plants developed some lectins that would paralyze any unfortunate bug that tried to dine on them. Obviously, there is a quantum size difference between insects and mammals, but both are subject to the same effects. (If you are suffering from neuropathy, take notice!) Clearly, most of you won’t be paralyzed by a plant compound within minutes of eating it, although a single peanut (a lectin) certainly has the potential to kill certain people. But we are not immune to the long-term effects of eating certain plant compounds. Because of the huge number of cells we mammals have, we may not see the damaging results of consuming such compounds for years. And even if this is happening to you, you don’t know it yet. 
I learned of this connection via hundreds of my patients who respond almost instantly, often in fascinating ways, to these mischievous plant compounds. For this reason, I call these patients my “canaries.” Coal miners used to take caged canaries into the mines with them because the birds are especially subject to the lethal effects of carbon monoxide and methane. As long as the canaries sang, the miners felt safe, but if the chirping stopped, it was a clear signal to evacuate the mine posthaste. My “canaries” are more sensitive to certain lectins than the average person, which is actually an advantage in terms of seeking help sooner rather than later.

Here is some of the biochemistry of why lectins are bad:

[Lectins] also bind to sialic acid, a sugar molecule found in the gut, in the brain, between nerve endings, in joints, and in all bodily fluids, including the blood vessel lining of all creatures. Lectins are sometimes referred to as “sticky proteins” because of this binding process, which means they can interrupt messaging between cells or otherwise cause toxic or inflammatory reactions, as we’ll discuss later. For example, when lectins bind to sialic acid, one nerve is unable to communicate its information to another nerve. If you have ever experienced brain fog, thank lectins.

Grains have both lectins and other unpleasant chemicals:

IN THE CASE of naked seeds, plants use a divergent strategy. … Instead of a hard casing, the naked seed contains one or more chemicals that weaken predators, paralyze them, or make them ill, so they won’t make the mistake of eating the plant again. These substances include phytates, often referred to as antinutrients, which prevent absorption of minerals in the diet; trypsin inhibitors, which keep digestive enzymes from doing their job, interfering with the predator’s growth; and lectins, which are designed to disrupt cellular communication by, among other things, causing gaps in the intestinal wall barrier, a condition known as leaky gut. Whole grains actually contain all three of these defensive chemicals in the fibrous hull, husk, and bran. (Teaser alert: This is just one reason that the idea of “whole-grain goodness” is a huge misconception, as you’ll learn in chapter 2.)

Nightshades such as tomatoes and potatoes also have tannins:

Still other plant-predator dissuaders include tannins, which impart a bitter taste, and the alkaloids found in the stems and leaves of the nightshade family. You may already know that nightshades, which include such culinary favorites as tomatoes, potatoes, eggplants, and peppers, are highly inflammatory. We’ll come back to the nightshade family, which also includes goji berries …

Old and New Natural Insecticides: We and Our Gut Microbiome Have Evolved to Deal With Some Natural Insecticides, But Not Others

Not all natural insecticides are equally toxic—those in grains, legumes and nightshades are some of the worst because we and our gut microbiome have had less time to adapt to them:

The lectins in beans and other legumes, wheat and other grains, and certain other plants are especially problematic for humans. First, not enough time has elapsed to allow our species to develop immunological tolerance to these substances; nor has sufficient time elapsed for the human gut microbiome to become fully capable of breaking down these proteins.

In Chapter 2, “Lectins on the Loose,” Steven points to four major human dietary transitions that have challenged the ability of human evolution to keep up. (As I noted above, some adaptations, such as being able to digest milk as an adult, require the change of only a few genes, while other adaptations require the change of many genes.)

CHANGE #1: The Agricultural Revolution.

The advent of the agricultural revolution about ten thousand years ago meant that a totally new source of food—grain and beans—became the dietary staple of most cultures relatively quickly. At that point, the human diet shifted from primarily leaves, tubers, and some animal fat and protein to primarily grains and beans. Until then, the human microbiome had never encountered lectins in grasses (grains) or legumes, and therefore the human gut bacteria, microbes, and immune system had zero experience handling them. …

CHANGE #2: A Mutation in Cows

About two thousand years ago, a spontaneous mutation in Northern European cows caused them to make the protein casein A-1 in their milk instead of the normal casein A-2. During digestion, casein A-1 is turned into a lectinlike protein called beta-casomorphin. This protein attaches to the pancreas’s insulin-producing cells, known as beta cells, which prompts an immune attack on the pancreas of people who consume milk from these cows or cheeses made from it. This is likely a primary cause of type 1 diabetes.6 Southern European cows, goats, and sheep continue to produce casein A-2 milk, but because casein A-1 cows are hardier and produce more milk, farmers prefer them. The most common breed of cows worldwide is the Holstein, whose milk contains this problematic lectinlike protein. If you think that drinking milk gives you a problem, it’s almost certainly the cow’s breed that is at fault, not milk per se. The black and white Holstein is the classic example of the A-1 cow, while the Guernsey, Brown Swiss, and Belgian Blues are all casein A-2. That’s why I recommend that if you consume dairy, you opt for only casein A-2 dairy products, which grocery stores have recently started selling, particularly on the West Coast. Alternatively, use goat or sheep milk products to be safe. …

CHANGE #3: Plants from the New World

It would seem that we should have become pretty tolerant of these new lectins over the past ten thousand years, but let’s take one more trip back in time. Five centuries ago, the last of the major changes in lectin exposure—and perhaps the biggest disruption of all—occurred when Europeans reached the Americas. The explorers brought New World foods back to their native countries, and the Columbian Exchange, named after Christopher Columbus, exposed the rest of the world to a whole array of new lectins. They include the nightshade family, most of the bean family (legumes, including peanuts and cashews), grains, pseudo-grains such as amaranth and quinoa, the squash family (pumpkins, acorn squash, zucchini), and chia and certain other seeds. All are foods that until then no European, Asian, or African had ever seen, much less eaten. Half of the foods you have been told to eat for good health are actually New World plants that most of mankind had no prior exposure to, meaning your body, your gut bacteria, and your immune system are ill prepared to tolerate them. Getting to know a new lectin in five hundred years is equivalent to speed dating in evolution!

CHANGE #4: Contemporary Innovations

In the last five decades we have faced yet another unleashing of lectins in processed foods and most recently in genetically modified organisms (GMOs), including soybeans, corn, tomatoes, and rapeseed (canola). Our bodies have never before encountered any of these lectins. Moreover, with the introduction of broad-spectrum antibiotics, other drugs, and a vast array of chemicals, we have totally destroyed the gut bacteria that would have normally given us a chance to process these lectins and educate our immune system about them. We’ll discuss these deadly disruptors further in chapter 4.

Comparing Steven Gundry’s Recommendations to a Low-Insulin-Index Diet that Tries to Avoid Too Much Animal Protein

Steven’s recommendations amount to true paleo: paleo that recognizes that our distant African ancestors weren’t eating grain, New World plants, cow milk, or processed food; ate fruit only in season and honey quite seldom; and probably had many, many meatless days, apart from seafood for those on the coast and the occasional quite small animal.

Rewinding the agricultural revolution: With very few exceptions, a low-insulin-index diet already means avoiding grain-based products and beans, as can be seen from the tables in “Forget Calorie Counting; It's the Insulin Index, Stupid.” The one direct exception in my diet was plain oatmeal; I now avoid oatmeal for Gundry reasons even though its insulin index is moderate.

The other big issue is corn-fed or otherwise grain-fed cattle and chickens. Avoiding them has involved only a shift to higher-quality grass-fed beef, chickens allowed to forage, and omega-3 eggs. (If chickens are fed enough omega-3-rich foods to make their eggs omega-3 eggs, that is that much less grain they are being fed.) Farmed fish are also grain-fed; it takes a little attention to avoid them, but it isn’t too hard.

Rewinding the Holstein revolution: As I write in “Is Milk OK?” I have been worried about dairy in any case. The cheapest and easiest way to deal with the problems of milk would be to simply avoid dairy. However, I love dairy too much to give it up entirely. Steven has given me hope that some of the evidence for health problems from milk has to do with the A1 protein that arose in what I am calling “The Holstein revolution.”

It is not easy to find milk from cows with only the A2 protein, but Whole Foods does carry A2 cow milk under the brand name “A2,” as well as goat butter (goats and sheep are all A2). In the Boulder area, it is also possible to buy goat milk, unsweetened goat kefir, and crème fraîche made from the milk of French cows, which is more likely to come from A2 cows. The Manchego cheese from Costco that I have recommended is sheep cheese and so is fine, and many restaurants have goat cheese (“chèvre”) on the menu. What I still can’t find is liquid A2 cow cream (as distinct from the crème fraîche) or A2 half-and-half. So I have been combining A2 milk with organic cream from Costco that probably contains some A1 protein, though hopefully not too much.

Rewinding the Columbian exchange: To me, the logic behind avoiding New World plants was the most eye-opening element in The Plant Paradox. Many of the nightshades are New World plants, as is corn. Like most people, I didn’t even have “New World plant” in my head as a relevant distinction for diet and health. Whatever you think now about the claim that we may not be well-adapted to eat New World plants, it is worth trying this distinction on for size: try noticing which foods are New World plants and which are Old World plants. Since beginning to read The Plant Paradox, I have found myself googling foods to read about their history. It has been interesting.

Potatoes and corn have a very high insulin index, so I didn’t need to think about Old World vs. New World foods to know I should avoid potatoes and corn—and all the many processed foods made with corn. But avoiding tomatoes was a new idea to me, and something of a sacrifice. I have bulked up my Giant Salad with other vegetables, such as broccoli, cauliflower and bok choy, to compensate. (Steven also recommends avoiding culinary “vegetables” that are botanically fruits, such as cucumbers. For me that wasn’t much of a sacrifice.)

Rewinding the rise of modern food processing: At this point in history, avoiding sugar implies avoiding most processed foods. In addition to the ubiquity of sugar in processed food and the issues I raise in “The Problem with Processed Food”, the dominance of corn in processed food that Michael Pollan points out is another strike against processed food.

Steven also warns against GMOs (genetically modified organisms). In my life as a whole, I have been a defender of GMOs. I have no objection in principle to genetically modifying the plants for our food. But it matters what genetic modification is being made! Many GMOs are genetically modified to add genes for natural insecticides. Others are genetically modified so that pesticides can be applied externally without killing the plant. So the point of the most important genetic modifications so far has been to increase the natural insecticides in our food and to make it easier to add pesticides externally to our food chain.

Conclusion

Any dietary change is an adjustment. But beginning from where I was before starting to read The Plant Paradox a few months ago, the benefit/cost ratio for me from going the extra Gundry mile has been excellent.

In addition to the ideas that were new to me in The Plant Paradox—a warning against New World plants, a milder warning against culinary vegetables that have seeds in them and so are botanical fruits (such as cucumbers and squash), and some nuances about dairy—I have noted in The Plant Paradox that Steven Gundry has read the same books I have read and discussed on this blog, and has come to many of the same conclusions. There is one big difference: while Steven is in favor of fasting, he doesn’t push it very hard. I do.

I consider fasting—drinking water, but not eating anything for a period of time—the magic bullet for weight loss. (See “4 Propositions on Weight Loss” and “Magic Bullets vs. Multifaceted Interventions for Economic Stimulus, Economic Development and Weight Loss.”) From reading The Plant Paradox, I realized an additional virtue of fasting: even if we are totally ignorant about something unhealthy in the food we are eating, fasting can give our bodies a chance to heal from the damage caused by those foods.

Sometimes people claim that fasting gives them an unparalleled mental clarity. One way that could be true is that fasting, by avoiding all food, necessarily entails avoiding any specific foods that can cause “brain fog.” If we figure out which specific foods cause “brain fog,” then it could be possible to have that level of mental clarity all the time. In that endeavor, I think Steven Gundry is on to something.

Be sure to also read my post “Reexamining Steve Gundry's ‘The Plant Paradox’.”

Don’t miss my other posts on diet and health:

I. The Basics

II. Sugar as a Slow Poison

III. Anti-Cancer Eating

IV. Eating Tips

V. Calories In/Calories Out

VI. Other Health Issues

VII. Wonkish

VIII. Debates about Particular Foods and about Exercise

IX. Gary Taubes

X. Twitter Discussions

XI. On My Interest in Diet and Health

See the last section of "Five Books That Have Changed My Life" and the podcast "Miles Kimball Explains to Tracy Alloway and Joe Weisenthal Why Losing Weight Is Like Defeating Inflation." If you want to know how I got interested in diet and health and fighting obesity and a little more about my own experience with weight gain and weight loss, see “Diana Kimball: Listening Creates Possibilities” and my post "A Barycentric Autobiography." I defend the ability of economists like me to make a contribution to understanding diet and health in “On the Epistemology of Diet and Health: Miles Refuses to ‘Stay in His Lane’.”