Imagine an algorithm trying to be a restaurant waiter:
“Good
evening, would you like to start with a starter?”
…
“Now
that you’ve ordered a starter, would you like to re-order your starter, or try another
one of our starter range?”
…
“Welcome
back! Since you tried that starter last time, how about trying it again this
time?”
Has no-one in the technology
sector ever considered the value of variety?
A.I. has been the hot new thing for a few decades now, with
commentators often praising the geniuses who write such profitable algorithms, and
occasionally criticising them.
Let me be clear for those new to the subject: the algorithms
that are the basis of the world’s biggest technology companies are not A.I. They
are computational statistics; just a lot of different datasets and computers
powerful enough to test many possibilities and find correlations between results.
Intelligence requires insight: the ability to understand mechanisms, to form
expectations about correlations and dismiss coincidences, and to interpret results
and predict outcomes before testing begins. Machines that do these
things are still complete fiction.
Algorithms are not artificial intelligence: they are a tool
to amplify human stupidity (or intelligence, if any can be found). It should be
simple to do better. Yet no-one does. Why, I can’t say; I can guess.
Examples of problems
Amazon
Amazon is now one of the biggest companies in the world, famed for
mistreating its low-paid employees, and everyone seems to be familiar with some of
its website’s failings. Why would anyone want to buy exactly the same
product they just bought… especially when they had the choice to buy multiple
copies the first time round?
I have bought a mouse and been offered another one; a
computer, and been offered another one; toothpaste, and been offered more… until,
a few months later, I’ve used up that batch and Amazon has forgotten that
I ‘like’ toothpaste products. I’ve looked at products, decided that I don’t
want them, and had Amazon suggest them to me again.
Amazon recommends junk and is forever trying to take extra
money from me, tricking me towards a Prime subscription by hiding the tiny
‘skip’ button that gets me normal delivery. Yet its search is so limited
that, to find a bin I liked that would also fit the space in my kitchen,
I had to scroll through endless listings noting the dimensions myself.
Facebook
My world has shrunk over the last few years. My life is
smaller and poorer because I relied on Facebook as a tool of choice.
I originally found Facebook
a fun tool for staying in touch with acquaintances: I could see what they
posted and share my own life without the awkwardness of imposing on them, or
the forwardness of e-mailing directly. It was a fantastic innovation, and one
for which we should thank the inventors Zuckerberg stole it from.
However, Facebook has gradually decided that it isn’t a
networking tool, but an anti-network tool. Acquaintances I liked reading about
have disappeared; some of them still post on Facebook, but I never see their posts. I
have no social network any more. The hundreds of friends I’ve accumulated from
different lives and interests have shrunk to maybe half a dozen. Everything else, whether I scroll for five minutes or fifty, is junk. Much
of it is repeated, as if I’d rather read the thing I deliberately avoided five
seconds ago than see something interesting.
I didn’t join a friendship network to follow celebrity or
corporate content, but even Facebook’s ‘chronological’ option in the ‘news’
feed is not a chronological list of friends’ activity. It is the usual curated feed
with the junk I’m not interested in re-ordered. Facebook refuses to show me the
activity of the friends who are the reason I’m on Facebook.
I am not
surprised that it is haemorrhaging users. It no longer serves any purpose even
for me, one of its more enthusiastic early converts thanks to my social
awkwardness with other means of connection.
Netflix
This has rankled for ages. Netflix is a subscription
service. It shouldn’t suffer from any incentive that competes with giving me
the best value. The Netflix algorithm uses the bland categories of Blockbuster,
a company that it bankrupted, and refuses to allow me to block things I’ve
already watched. My whole opening screen is now things I have seen; it is the
work of some minutes to search through the junk I’ll never watch for anything
new. Their filing system is a disgrace. Just think of the advanced
computational statistics you could run with modern computing: what part of a
programme does this individual stop at? Is that gory, horrific, excruciatingly
embarrassing, dull… could you write a programme to judge that? Did the person
pick the show up again, or give up? How can we help this person get more of
what he wants?
Instead, Netflix has gone with the easy approach: keeping
customers addicted rather than satisfied –
judging by how similar its approach is to Amazon’s or YouTube’s. Maybe that’s
just what the experts know how to do, having worked at those other companies.
If Netflix stops me hiding shows, I can’t see how thin its
offering is: I still don’t see things I want to watch, but I’m left with the possibility
that something new is hidden amongst all the junk, making Netflix seem
like a well-stocked provider.
YouTube
I grew up with dogs and recently watched a YouTube video of
some working dogs ratting at a farm. I’d never seen dogs hunting before (except
African wild dogs on a BBC nature documentary) and I was curious. YouTube
decreed that I had an interest in rats! I’ve watched a few puzzle
presentations, so now I have calculus as one of my suggested subjects. I
watched a clip of a film scene I liked, so another suggestion is ‘Fandango
Movieclips’.
There might be skilled programmers writing code in distant
servers that allows Google’s almighty machines to crunch correlations, but if
there’s any insight or intelligence in what I’ve seen, it’s hard to find. How could
anyone be interested in ‘Fandango Movieclips’? Why would anyone assume that I
care about film clips in general? And if I watch the final scene of ‘The Good,
the Bad and the Ugly’, why would I want to watch it again immediately
afterwards on a different channel?
Sure, it might be a channel that posts clips of well-liked
films, but it’s hardly an interest is it? Not like, say, stamp-collecting or
painting.
YouTube is the biggest in my personal litany of computational
statistics failures. I started by listening to music on YouTube, just to try to
work out if I liked classical music enough to buy any. From there I found some
occasional videos on other subjects. The videos are often diverting, but rarely
truly valuable viewing – the same problem as everywhere else. Diverting is good
enough for online engagement, but doesn’t actually improve lives. It steals
them away. But viewing is easy to measure, so diverting enough to watch is what
we get; genuine value is hard for a computer to measure and so no-one worries
about it.
I have recently watched quite a few videos about computer
games, and now I see endless videos repeating the same content at me. If I
watch one video about the (genuinely) exciting release of Total War: Warhammer
III, then I must want to watch every single commentator’s take on it, because
they are all similar content.
So far, so familiar. But alongside these wastes of screen
space is an insidious new one. I watched a few film reviews, and noticed that
the reviewer has a thing against preachy, poor writing. Another online
reviewer also has a thing against preachy, poor writing, but YouTube has
combined my viewings of an anti-woke film reviewer with computer gaming and now
wants me to watch Jordan Peterson, Laurence Fox and ‘Moron DESTROYS
feminist!!11!’ videos. Facebook thinks I might like nutjob ideas from Nigel
Farage or Turning Point.
The overall effect
The path is predetermined; I cannot choose my own way. If I
watch one gaming video, I am compelled to watch them all. And once I’ve watched
them all, I must be radicalised, shoehorned into the neat category that YouTube
has chosen for me. My individuality, if I watch much more of this, will be
erased. Strange that the lunatic Right, which cares so much about individuality
and not being sheeple, only exists because it can herd its recruits like
livestock, using advertising and malgorithms.
Anything that might have been good in these online
treasure-troves is buried beneath the malgorithm that serves as hellish gatekeeper,
confounding and redirecting me from what I really want. I am in a constant
state of warfare with an invisible enemy, yet all over the news I can read
celebrations of ‘artificial intelligence’.
However clever the programming (and statistics, if any),
this fundamentally misunderstands what benefits humanity. Maybe it’s good for
distraction: the ‘content that’s just good enough to distract someone for a few
more moments’ approach, or maybe ‘just enough to fool someone into a purchase
at this moment’. But it’s not adding value to people’s lives. At best, the
economists’ ‘consumer surplus’ – the extra value customers get
above their purchase price – is driven to zero. At worst, it’s taking advantage
of people’s weaknesses to give less than no benefit.
As far as I can tell, the algorithms I encounter have two mechanisms:
what connections have already been made (i.e. what content is consumed by other
people who consumed this content) and offering more that is exactly like what
has just been shown. There is no clever AI, no insight or genius: just data
processing. I can get better advice from a six-year-old.
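If I had to sketch those two mechanisms myself, they would come to little more than this. A minimal sketch with invented data; nothing here is any real company’s code:

```python
# A minimal sketch of the only two mechanisms I can detect, with
# invented data; this is my caricature, not any real system.
co_consumed = {"mouse": ["mouse mat", "another mouse"]}

def recommend(just_consumed: str) -> list[str]:
    same_again = [just_consumed]                          # mechanism 2: repeat exactly what was shown
    what_others_had = co_consumed.get(just_consumed, [])  # mechanism 1: existing co-consumption links
    return same_again + what_others_had

print(recommend("mouse"))  # ['mouse', 'mouse mat', 'another mouse']
```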
The answers
The cause
I can see
only two plausible options for why AI is so stupid: either
i) the programmers are too caught up in the complicated
mathematics and computing to think more broadly about what they’re actually
doing (i.e. they’re not the geniuses some commentators like to say); or
ii) there is more profit in having bad algorithms.
There’ll be some of each, varying
from problem to problem, but in my experience it’s very easy to run even big
and respected organisations without real insight, innovation or thoughtfulness.
From the risk management failures that caused the financial crisis through to
things I have directly observed, there is more than enough evidence to justify assuming
institutional stupidity as a default explanation.
Of course, where there’s an obvious financial incentive, or
documented evidence, we shouldn’t dismiss greed: the other thing that large
organisations achieve is to separate bad consequences from the decision-makers,
freeing them from responsibility and guilt.
These two things are not entirely
separate; trusting, obeying or creating a system frees people from the need to
consider each decision, whether to work to invent a new answer or to choose a
moral one. And organisations necessarily involve systems of control,
communication and interaction, or they are not organisations but
disorganisations. Anyway, that’s a bit off-topic… and yet system creation is
rather relevant to a discussion of algorithms.
Personally, I think it’s obvious
that programmers were not geniuses who knew all the consequences and outcomes
of their work beforehand. They merely had to be clever enough to do the work,
not to understand all its implications too. But repeated testing and
observation allow them, wilfully or thoughtlessly, to recreate what others chanced
upon.
The idea of the wisdom of crowds, thoughts of market
forces and simple unthinking hopefulness all combine to make people stick with
assumptions and methods that merit more consideration. If something achieves
results, it must be right. That’s why Newtonian physics has never been
superseded by quantum mechanics. There is no improvement needed on the success
we already have.
1. Variety and radicalisation
Innovation, learning and wisdom
come from exposure to new things. Humans need to expand their minds; those who
find their mental worlds confined often turn to drugs for such expansion. And
too much stagnation leads not only to depression but to senility. The
malgorithms do the exact opposite of what humans need: they give us more of
what we’ve already had.
If I have bought an item, I will be funnelled into more
such items. What was a passing interest will become a life-consuming myopic
tunnel – if I was tempted into giving you profits once by such content, I can
be tempted again, provided you bombard me for long enough to catch me at a weak moment.
The only way I can escape these myopic tunnels is by being
so temptable and free with my money/eyeballs that I’ll respond to whatever
content I am shown, creating such a wide array of ‘interests’ that tunnelling
is temporarily stymied (although it will always be the most stable outcome of the
system). Even in this case my interests are still not my own: they become
random, controlled by the great malgorithm in the sky once again, and this time
hugely profitable, because the malgorithm can offer guaranteed profits from me
to whoever pays for its promotion.
Innovation comes from linking disparate ideas and concepts
together. The tech sector itself will tell you that fulfilment comes from
solving problems; from creating your own answers or artworks. You will be
pushed towards creativity by seeing myriad different things, not by being given
exactly the same thing, either as a direct repeat (Netflix and Amazon),
something that does the same thing (YouTube and Amazon) or something that’s
next on a path that others have already beaten (YouTube and Amazon again).
Novelty is key to human flourishing. We know that the best
way to learn is to be challenged just within your ability. Things need to be
recognisable, but new. Algorithms leap on any expressed preference (of which
more later…) and convert it into your only preference. Imagine a food
algorithm: ‘you bought chocolate just now, so here’s a meal of chocolate. Ooh,
that tempted you yesterday, so a meal of chocolate will tempt you today. Others
who buy chocolate also buy cakes, so why not have a cake for pudding?’ And then
a rash of news articles praising the geniuses who have written code to give
people what they want, plus a few opinion pieces controversially suggesting
that the population might be getting unhealthier…
When most of us eat food, we do not want that type of food
for a while later: we have a refractory period for consumption of a particular
product. The same for sex: once someone has had sex a couple of times, he needs
a few minutes to recover. The same applies to viewing habits, or shopping
habits: it’s not beyond the wit of man to note a preference, but only act on it
later, when it might be helpful. Could a clever computer work out how long a
tube of toothpaste might last – or even how long it lasted for that individual
– and only suggest a repeat purchase that much time later? Could one note a
gaming news/opinion video and only suggest another one when there’s a chance
there’d be something new to say? There’s a product refractory period and the
individual’s refractory period for that product… a whole new world of variables
to calculate and feed back into an algorithm.
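Here is a minimal sketch of that toothpaste calculation, assuming nothing more than a list of past purchase dates; the product name and the median-gap rule are my own illustrative choices:

```python
from datetime import date, timedelta

# Invented purchase history for one customer; the product id is hypothetical.
purchase_history = {
    "toothpaste-75ml": [date(2021, 1, 4), date(2021, 3, 2), date(2021, 4, 29)],
}

def next_useful_suggestion(purchases: list[date]) -> date | None:
    """Estimate when a repeat suggestion might actually be welcome."""
    if len(purchases) < 2:
        return None  # no observed cycle yet: don't assume one
    gaps = sorted((b - a).days for a, b in zip(purchases, purchases[1:]))
    typical_gap = gaps[len(gaps) // 2]  # median gap is robust to one-off gifts
    return purchases[-1] + timedelta(days=typical_gap)

def should_suggest(product: str, today: date) -> bool:
    due = next_useful_suggestion(purchase_history.get(product, []))
    return due is not None and today >= due
```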
In the meantime, show us something new. Things distantly
linked to our viewing: leap ten links down a chain if it’s too hard for a
computer to grasp the concept of a video or product and suggest something
intelligently. Instead of ‘people who bought this looked at this’, work through
a chain: ‘people who looked at your thing also looked at content B, and people
who looked at that also looked at C…. so here is item J’. If 10 steps doesn’t
work, experiment, but with a weighting against lower links. Think of creating
whole new markets for products, rather than simply strip-mining anything that
looks like one.
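A sketch of the chain idea, assuming co-view data already exists; the graph and the step count are invented, and a real version would add the weighting against lower links:

```python
import random

# Invented 'people who looked at X also looked at...' data.
also_looked_at = {
    "A": ["B", "C"], "B": ["C", "D"], "C": ["D", "E"],
    "D": ["E", "F"], "E": ["F", "G"], "F": ["G", "H"],
    "G": ["H", "A"], "H": ["A", "B"],
}

def distant_suggestion(start: str, steps: int = 10) -> str:
    """Follow the co-view chain several links, rather than stopping at one."""
    item = start
    for _ in range(steps):
        neighbours = also_looked_at.get(item)
        if not neighbours:
            break  # dead end: settle for however far we got
        item = random.choice(neighbours)
    return item

print(distant_suggestion("A"))  # 'item J': something several links removed from A
```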
I can search for very niche content. I don’t need AI crunching
my data for that. AI should improve on search results rather than simply repeat
them. AI often seems to me to be worse than the search box. I get to type
things into the search box; AI has chosen what to type for me. The joy of going
into a shop is seeing things I wouldn’t think to look for. I’m already capable
of searching for what I already like. AI often removes that ‘shop joy’ –
Facebook and YouTube being the biggest offenders. Amazon, amazingly enough, looks
to have done some work to keep a little of it.
In sum, exaggerating expressed preference (I really want to
emphasise that’s just a name for it: we could have a long argument about how
much preference is expressed…) makes the system unstable with equilibrium at
complete radicalisation (i.e. just one interest consumed all the time). Most
people do not get that far because their usage of the algorithm is limited; because
there are occasional, random additional inputs; and because even relaxed users
eventually rebel against the mental confinement of the algorithm.
Imagine a
programmer who put limits on what his algorithm would radicalise: time delays
on repeat content, limits on the amount of display space dedicated to any one
category of content, random content great leaps of connections away from the user’s
previous usage… someone who would create a programme to help every individual
reach their most preferred content and yet also keep them healthy and
alert by never letting them get stuck in a rut.
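Such a programmer’s feed-builder might look something like this sketch, with all three guards in it; every structure and threshold is invented:

```python
import random
from collections import Counter

def build_feed(ranked_candidates, recently_shown, wildcard_pool,
               slots=10, max_per_category=3):
    """ranked_candidates: (item, category) pairs, best first (invented format)."""
    feed, shown_per_category = [], Counter()
    for item, category in ranked_candidates:
        if len(feed) >= slots - 1:
            break
        if item in recently_shown:                    # time delay on repeat content
            continue
        if shown_per_category[category] >= max_per_category:
            continue                                  # cap display space per category
        feed.append(item)
        shown_per_category[category] += 1
    feed.append(random.choice(wildcard_pool))         # one deliberate great leap away
    return feed
```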
2. Customisation
On the
subject of finding ways to show us new things that aren’t entirely random, an
algorithm for each individual makes more sense than one for the whole
population, trying to treat us as all the same. How did supposedly intellectual
people forget to include individual (meta-) preferences as a variable in their
computational statistics? Is it beyond the wit of man to allow individuals to
customise their preferences? Not in the Facebook manner of ‘news feed’ or ‘news
feed [with a different name]’, or by selecting individual items and asking it
to show more or less of them, without knowing how they’ve been categorised, but
by choosing more complex sets of criteria?
Just imagine what you might learn about humanity if you
gave people conscious control over what they were shown, rather than relying on
revealed preferences! What if the programmers’ interpretation of what is
revealed is itself wrong? If I watch a video someone placed in front of me, did
I like it? Do I like it as much as one I sought out for myself? If I don’t
engage with someone, is it because I dislike them? Or is it possible I’m not
just a simpering moron thoughtlessly reacting to my passing whims: that I am
giving that person space, not crowding them, worried about what message I will
send them, not sure if it’d feel weird to invade something that might be meant
for closer friends… talk about American bias! I know that Americans are taught
to be outgoing at all costs, that being shy is practically evil and being pushy
is being a balanced adult, but even many Americans manage to reject this weird
propaganda, and the rest of us prefer to laugh at it than mimic it.
Imagine how much I would rely on an algorithm that
supported my attempts to better myself rather than undermined them: one that
allowed me to ask to stay in touch with friends and showed me posts from people
I had NOT seen or interacted with for a while! How much joy would that bring to
isolated people, trapped in lockdowns?
The difference between the customisation I envisage and
search engines is that most complex systems have sensitivities or assumptions
fed in. A population average fed into an algorithm might work reasonably well,
but it might be even better if users could change those inputs to suit
themselves. Show me more variety; place little weight on low numbers of
interactions with content as I’m just testing it out; limit the amount of this
type of content because I want help with self-control rather than attacks on it…
I might genuinely only be interested in so much of a certain type of content: if
a company knows when I reach my limit and offers new things at that point, it’s
much more likely to keep my interest.
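A sketch of what those user-adjustable inputs might look like; every field name and default here is my own invention, standing in for whatever sensitivities a real ranking actually takes:

```python
from dataclasses import dataclass, field

@dataclass
class FeedPreferences:
    """Knobs a user could set, instead of inheriting the population average."""
    variety_weight: float = 0.5            # how strongly to favour unfamiliar topics
    min_interactions_for_signal: int = 3   # 'I'm just testing it out' threshold
    category_daily_limits: dict = field(default_factory=dict)  # self-control aids

    def interaction_signal(self, count: int) -> float:
        """Place little weight on low interaction counts, as asked for above."""
        return 0.0 if count < self.min_interactions_for_signal else 1.0

prefs = FeedPreferences(variety_weight=0.8,
                        category_daily_limits={"gaming videos": 5})
```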
Trust the users. Rather than only asking them for data points
and trusting yourself to join the dots however you choose, allow them to suggest shapes.
Give them a tool to tailor to enhance their lives rather than one that tries to
force them to be like everyone else and never change. Using averages makes it
easier to be average: deviations from what the algorithm expects are not
catered for. Perhaps there are large tails of the distribution who could be served
so much better?
People were
probably initially thrilled to find any sort of meaning from population data;
that itself seemed brilliant. And the algorithm’s self-referential ‘vicious
cycle’ nature (see later) ensures that small signals become large ones, so
no-one ever needed to think about improving the initial signal.
3. Noise
Anyone who has ever done any
analysis of anything, let alone real sound engineering, knows about noise. Is something
a signal, or is it noise? Has the extra-sensitive algorithm detected a revealed
preference from my behaviour, or was it mere chance that I did those things,
and next time I’ll want to do something else? If next time the algorithm shows
me the same things, that’s a response to my signal and we’ll never know whether
the signal was true or not. The lack of any consideration of noise in all these
algorithms is detestable, and yet understandable. They want to find chinks in
our armour of self-control. They can leap on a potential weakness, and if they
fail, leap on the next one without worry: it costs them nothing to try.
That’s the only way to understand the way an algorithm can
decide I like rats after watching one video of working dogs; or decide that I
like jigsaw puzzles after looking at just one puzzle (people only have one
birthday a year, and I’m hardly going to buy puzzles for years in a row). But
it would be more accurate to wait. Intelligent programming, rather than brute
force attacks on users, would calculate the criteria for judging whether
something was a thought for a birthday present or a burgeoning hobby.
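A sketch of that waiting criterion: treat a topic as a hobby only once interactions recur over a decent span. The thresholds are invented:

```python
from datetime import date

def looks_like_a_hobby(interaction_dates: list[date],
                       min_events: int = 3, min_span_days: int = 21) -> bool:
    """One puzzle in one week is a birthday present; a spread of events is a hobby."""
    if len(interaction_dates) < min_events:
        return False
    span = (max(interaction_dates) - min(interaction_dates)).days
    return span >= min_span_days

# A single burst of browsing stays classified as a one-off:
print(looks_like_a_hobby([date(2021, 5, 1), date(2021, 5, 2)]))  # False
```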
Just think how much revenue you might get from me if you
could show me things a friend might like in the few weeks before his birthday,
rather than showing me things that might be good for the friend who just had
his birthday!
What if a ‘like’ on Facebook didn’t mean ‘give me more
interactions with the person who posted this content’, but had intelligent
analysis behind it, trying to place it into one or more of, say, the following
categories: ‘this is just a random interaction’; ‘I enjoy things on this
subject no matter who posts it’; ‘I am interested in many things this person
has to say’; and ‘I enjoyed this particular instance of this sort of content
because it was better than others’.
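A sketch of how that four-way sorting might start, from nothing more than counts; the features and thresholds are all assumptions:

```python
def classify_like(likes_on_subject: int, likes_on_author: int,
                  user_daily_like_rate: float) -> str:
    """Guess which of the four readings of a 'like' applies (invented thresholds)."""
    if likes_on_author >= 5:
        return "interested in many things this person has to say"
    if likes_on_subject >= 5:
        return "enjoys this subject no matter who posts it"
    if user_daily_like_rate < 1.0:
        # sparing likers tend to be flagging unusual quality
        return "this instance was better than others of its sort"
    return "just a random interaction"
```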
If Facebook ever had anyone trying such a thing, he must
not have got very far. GIGO is the greatest rule of statistics, or any
analysis: ‘garbage in, garbage out’. I have always used my ‘likes’ sparingly,
being far too laconic and shy for the modern bare-all world. For me they were a
way to signal quality, the last option on my list: ‘I enjoyed this particular
instance of this sort of content because it was better than others’.
Programmers need to find a way to weight every single
interaction – currently used merely as a vote for ‘show more of our
categorisation’ – as contributing, in varying degrees, to my categories above. And they
should leave dummy variables open in case their categorisations are not the ones I am using.
For example, when I pick my viewing, I can very simply define my television
choices as a) a preference for action and adventure of any sort; b) a firm veto
of anything that uses cringeworthiness (for drama or comedy) and c) decent scriptwriting
without massive plotholes or idiots as protagonists. I am surprisingly accurate
at judging (b) and (c) from a mere written description, let alone a short clip.
Netflix cannot comprehend (b) or (c) and so its attempts to find content for me
are doomed.
4. Quality
If technology companies have a notion
of quality, it seems to be only to get people hooked: show the most popular
videos, and then show more of exactly the same thing once someone has watched
them. I doubt that they have a separate rating from ‘views’ or sales.
What if I only want to watch quality videos, and only one
or two on any one subject?
We could try to work out if quality could be teased out
from popularity, especially given the runaway nature of popularity, which often
feeds more on itself than intrinsic quality. Are there some people, like me,
who give a better signal of quality by rarely interacting with things? How do
you judge quality before you have many likes, and once you have some data, how
do you taper your analysis into ever greater certainty as more and more data
accumulate?
I would start with measuring quality by the level of
interaction by people not usually interested in that content. Or perhaps by
finished viewings. Amazon can use reviews, perhaps noting the proportion of
total purchases who review items, but YouTube, Facebook and the like are stuck
with ‘likes’, a very poor measure of quality.
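As a starting point, a sketch blending those two measures; the field names and the equal weighting are assumptions:

```python
def quality_score(views: int, completed_views: int,
                  likes_from_regulars: int, likes_from_outsiders: int) -> float:
    """Blend completion rate with out-of-niche appeal (weights are guesses)."""
    if views == 0:
        return 0.0
    completion_rate = completed_views / views
    total_likes = likes_from_regulars + likes_from_outsiders
    outsider_share = likes_from_outsiders / total_likes if total_likes else 0.0
    # a like from someone not usually interested says more than a fan's like
    return 0.5 * completion_rate + 0.5 * outsider_share
```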
Dating apps have ‘superlikes’. These have limited supply
and are intended to make users spend on buying more, as they also circumvent
some barriers to matching. But the limited supply should also make them a
better signal of attractiveness.
Clever data analysts would weight ‘likes’ more from those
who use less and are less interested in the content. A like from someone who
gives out two a day should be 500 times as valuable as from someone who gives
out 1,000. Although I can imagine that perhaps first exposure to some content
might generate a like no matter how good it is. For example, someone might like
an anti-vaccination video because it seems to share important new information no
matter whether it was well-produced or a basement rant. And, in fact, more
persuasive anti-vaccination content is actually worse content, as it better
promotes harmful lies.
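The weighting itself is a one-liner. A sketch using the numbers above:

```python
def like_weight(daily_like_rate: float) -> float:
    """The scarcer a user's likes, the more each one says."""
    return 1.0 / max(daily_like_rate, 1e-9)  # guard against a zero rate

print(like_weight(2) / like_weight(1000))  # 500.0: two a day vs a thousand a day
```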
Can you measure truth? Probably not without any human
assessment. But you could rate truth values of some content, and then check who
likes truthful content but not false content, and give their ‘likes’ or
interactions high value in automated judgements. You could multiply your
inputs.
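A sketch of that multiplication: hand-rate a small seed set, score each user by how their likes align with it, and weight their future likes by that score. All data here is invented:

```python
# Hand-rated seed set: post id -> judged truthful? (invented)
seed_truth = {"post-1": True, "post-2": False, "post-3": True}

def rater_credibility(liked_posts: set[str]) -> float:
    """Fraction of a user's seed-set likes that fell on truthful content."""
    verdicts = [seed_truth[p] for p in liked_posts if p in seed_truth]
    if not verdicts:
        return 0.5  # unknown raters sit mid-scale until they touch the seed set
    return sum(verdicts) / len(verdicts)

print(rater_credibility({"post-1", "post-2"}))  # 0.5
```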
There are plenty of places to start. I’ve seen no evidence
anyone has even tried!
If people are still not convinced that popularity is a poor
measure of quality, may I invoke the Nazis, as every online debate must always
do? They were popular, but not good.
What should algorithms do with a measure of quality? That’s
another interesting question. Established quality might drown out new, unrated
content. But I doubt tech companies really care about such high-level,
long-term side effects. Quality might be essential for a
healthy society. Quality might help drive user engagement. After all, if
you really can measure it well, then you don’t have to avoid any addictive or otherwise
dodgy practices; you can simply do all the usual malpractice but with higher-quality
content!
Good content isn’t actually needed. Popularity is
what is required to keep users engaged; good content was simply how
people used to try to be popular. If popularity can be achieved some other way
(cf. the Conservative party and Donald Trump) then quality is entirely
irrelevant to the business in hand, despite being far more beneficial for
people’s lives. But imagine if you could have both!
The incentive not to care about
quality comes from problems with acting on that judgement. If even a derived
variable in an algorithm looks like a judgement of quality, it becomes harder,
mentally, to claim not to be a publisher. This fear might be one of PR rather
than reality, as using popularity is just as much a judgement of quality as any
more complex calculation: it’s merely a very poor one.
5. Feedback
The feedback loop of algorithmic
control ruins the data, rather than reinforcing it. If I exert control over
something, I can no longer treat its new state as the natural state; as raw
data. If I chop a tree down, I can’t conclude that it grows horizontally. These
feedback loops exist elsewhere too: in the stock markets, where the ‘wisdom’ of
crowds is itself a signal, and all that matters is outperforming the crowd, the
feedback loop drowns out any price discovery. In any popularity contest, across
friendship, politics, media influencers, writing or music, success begets more
success: people are drawn to, and more likely to hear from, those who are
already popular.
Yet algorithms are built on the strange idea that if people
respond to the algorithm then they must have been interested in that to begin
with. At the same time as trumpeting the power of algorithms to persuade people
(to advertisers), the data analysts are assuming that no power was exerted and
the results are all innate preference.
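There is even a standard, if crude, correction for exactly this problem, borrowed from causal inference: inverse propensity weighting, which discounts responses to items the algorithm pushed hard. A simplified sketch with invented data:

```python
def exposure_corrected_rate(observations: list[tuple[bool, float]]) -> float:
    """observations: (clicked, probability the algorithm showed the item).

    A simplified inverse-propensity estimate: a click on something shown
    constantly is weak evidence of preference; a click on something
    shown rarely is strong evidence.
    """
    weighted = [int(clicked) / max(shown_prob, 0.01)
                for clicked, shown_prob in observations]
    return sum(weighted) / len(weighted) if weighted else 0.0

# The click on the rarely-shown item dominates the heavily-pushed one:
print(exposure_corrected_rate([(True, 0.9), (True, 0.1), (False, 0.5)]))
```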
Imagine you’ve just bought a new computer (and maybe you’ve
moved to a new house and new ISP and whatever else companies measure). The
algorithm doesn’t know you yet. You’re at the top of a hill with a whole world
to explore. What direction do you choose? Some people know where they want to
go: they have one hobby and it’s all-consuming. But others might prefer to
explore. Tough luck: once your ball starts rolling downhill, that’s it. The
algorithm will whizz you into the deepest rut in that direction and keep you
there. Did you like computer gaming? Well, now you hate women and want to
massacre school students in America. If you want to see what’s in the other
direction, or even in the next valley over; if perhaps there’s a mine of
information elsewhere that you might prefer, tough luck. You have to work your
way out of the hole yourself. The malgorithm will try to feed you misogyny,
over-eager to guess where you’re going, desperate to push you into the rut,
even though the nature of ruts in the road is that you don’t want to get stuck
in the mud everyone else has created. Your first tiny push and willingness to
explore has been turned into a ‘signal’ that you want to wallow in filth.
It’s an untenable position. Knowing humanity well, and
these companies a little, I feel confident asserting that very few people
within them have even thought about this. It’s a fundamental logical
inconsistency that has got lost in the rush to complete discrete subroutines or
‘improvements’.
But one could, if feeling more charitable, assume that the
controlling ‘intelligence’ in the companies is a little intelligent. Perhaps
they know that exerting power changes the environment and that therefore their
conclusions about people’s preferences are unfounded. If that’s the case, there
is only one possibility remaining: that they are evil, exactly as the backlash against
the tech sector claims. They know that they are distorting, rather than
discovering, people’s preferences, and continue down that path because it is
profitable.
Maybe they believe that distorting
people’s preferences – bombarding them in the hope of a moment of weakness, or
showing them content that is just distracting enough rather than as worthwhile
as possible – is more profitable than adding value to people’s lives by
discovering and serving their real preferences. I think that’s an unproven
assumption.
This is what most advertising is:
it is about creating new preferences. But it’s a strangely self-consuming world
in which these content-providers steal more and more of your life through
distorting your preferences in order to show you more adverts for other things
which want to fit into your dwindling life! The advertising platforms are
crowding out the products that want to use them.
But if it really is true that
people’s real, consistent preferences, in the most virtuous versions of
themselves that they aspire to be, are for less consumption, it says a lot
about our world that the companies promoting global destruction through massive
consumption – through encouraging people to be the impulsive, lesser versions
of themselves – are valued so highly, and praised as the height of innovation.
If, on the other hand, people would want to waste their lives and the planet even
unguided by malign influences, then perhaps nudging us to be better, rather than
indulging our weaknesses, would be the right thing to do.