Nudging, Big Data, And Well-Being
This article originally appeared in the Monday Magazine of 3 Quarks Daily.
We often make bad choices. We eat sugary foods too often, we don’t save enough for retirement, and we don’t get enough exercise. Helpfully, the modern world presents us with a plethora of ways to overcome these weaknesses of will. We can use calorie-tracking applications to monitor our sugar intake, we can have funds automatically deducted from our accounts into retirement schemes, and we can use our phones and smartwatches to make us feel bad if we haven’t exercised in a while. All of this might seem innocuous and relatively unproblematic: what is wrong with using technology to try to be a better, healthier version of yourself?
Well, let’s first take a step back. In all of these cases, what are we trying to achieve? Intuitively, the story might go something like this: we want to be better and healthier, and we know we often struggle to be so. We are weak when faced with the Snickers bar, and we can’t be bothered to exercise when we could be bingeing The Office for the third time this month. What seems to be happening is that our desire to do what, all things considered, we think is best is overridden by the temptation in front of us. Therefore, we try to introduce changes to our behaviour that might help us overcome these temptations. We might always eat before going shopping, reducing the chances that we are tempted by chocolate, or we could exercise first thing in the morning, before our brains have time to process what a godawful idea that might be. These solutions are based on the idea that we sometimes, predictably, act in ways that go against our own self-interest. That is to say, we are sometimes irrational, and these “solutions” are ways of getting our present selves to do what we determine is in the best interests of our future selves. Key to this, though, is that we as individuals get to intentionally determine the scope and content of these interventions. What happens when third parties, such as governments and corporations, try to do something similar?
Attempts at this kind of intervention are often collected under the label “nudging”, a term used to pick out a particular kind of behavioural modification programme. The term was popularised by the now-famous book Nudge, in which Richard Thaler and Cass Sunstein argue in favour of “libertarian paternalism”.
In line with what I have said above, the authors recommend that, because there are predictable ways in which we get things wrong (and hence act against our own self-interest), we ought to implement certain “choice architectures” that guide our behaviour in desirable directions. The claim is therefore that this type of intervention makes it possible to maximise individual welfare. This kind of interference has been made even more effective by the rise of Big Data analytics.
Talk of “Big Data” refers to the fact that corporations and governments hold massive amounts of data on each and every one of us, which they can put to their own purposes. Usually that purpose is to make money or reduce costs, but sometimes, at least ostensibly, these groups say they want to help us be better. They want to “nudge” us in the right direction: from “opt-out” retirement plans to flies in urinals, these seem to be innocuous, and indeed genuinely beneficial, ways in which our choice architecture can be influenced by third parties to our own advantage. Moreover, with the massive amounts of data now available, it is possible to tailor nudges to specific individuals. The data may suggest that I am more prone to making impulsive decisions at night, while you are more likely to buy things early in the morning. I might be swayed by appeals to emotion, while you might be susceptible to framing effects. Nudgers can take advantage of this to streamline their interventions and make them more effective, perhaps even turning a “nudge” into a “shove”. The combination of Big Data and the success of nudging could thus be conceived of not only as an aid to our decision-making, but also as a threat.
The most intuitive objection to such a programme is that it goes against our individual liberty: nudgers get to decide what is best for nudgees, and so those on the receiving end of these interventions seem to have their freedom undermined. Ordinary folk are not trusted to make the “right” choice. While there might be cases where this is justified (such as opt-out models for retirement savings and organ donation), it is not justified in all cases and contexts.
However, I think there is an even deeper problem than this critique from liberty. Those in favour of nudging assume that “welfare” ought to be understood in a “hedonic” fashion. What this means is that an individual can be said to be “doing well” so long as they, on average, have more pleasurable than unpleasurable experiences. Nudging can thus be justified because it leads agents to act in desirable ways, meaning that they might indeed have more pleasant experiences than they would have had in the absence of the nudge. So even if it erodes the liberty of agents, nudging can be justified because it allows them to avoid pain (which follows from bad decisions) and maximise pleasure (which follows from good decisions). What I want to do, briefly, is outline why I think this narrow understanding of well-being is problematic, in the case of nudging in particular, but also more generally.
That is because this hedonic account of well-being is far too narrow. It assumes that we are mere pleasure-maximisers, and that whatever maximises our pleasure must therefore be good for our welfare. This is just wrong, and perhaps even exactly backwards. Oftentimes, it is precisely in the pursuit of what we find most meaningful in life that we experience a great deal of pain. Bodybuilding, marathon training, and demanding career choices are all examples of willingly putting ourselves through painful experiences in pursuit of worthwhile goals. Moreover, it is often less the goal itself and more the process, and the habits formed in pursuit of the goal, that we find especially valuable. Yes, running a marathon is a great accomplishment, but the discipline and mental fortitude one needs in training are not conducive to well-being in the narrow hedonic sense. Rather, they might contribute positively to our character over time, and are thus not merely instrumentally valuable. Nudging, by assuming a narrow and simplistic conception of well-being, might therefore undermine well-being in this broader, more holistic sense. If we are not trusted to make our own mistakes and learn from them, how can we hope to become better versions of ourselves? How can we develop moral and intellectual virtues if we are never confronted with situations in which we can actively engage the moral reasoning that underpins many virtuous character traits? It seems that to account for these concerns we need a broader (or perhaps entirely different) conception of well-being, and of what it means to be “living well”.
Based on this, defenders of nudging would do well to adopt a different conception of well-being on which to build their theory. It might even turn out that certain nudges are inimical to our well-being, and a broader account of well-being would allow us to specify and get a handle on this. This approach is exemplified in a recent paper by Steffen Steinert and Matthew Dennis, who argue for a eudaimonic conception of well-being, one that shares many of the characteristics I think a more inclusive account of the concept would have. Of course, I have not defended a particular account of well-being here (although I hope to do so soon). The point, for now, has just been to highlight some problems with hedonic accounts, and to show that the threat to freedom is not the only problem with nudging.