“We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.”
Yuval Noah Harari
As you may have noticed, the advent of Artificial Intelligence (AI) has created shockwaves across the modern world, making it almost impossible to keep up with the torrent of commentary flooding our media landscape, on top of the array of new AI-powered apps and widgets popping up everywhere. Paraphrasing the classic 1980 film The Gods Must Be Crazy, it’s as if now the gods have really gone crazy - “an epic comedy of absurd proportions”.
One thing to keep in mind is that technological innovations usually come with a sales pitch, or a story if you will. And this happens to be an era in which the product and the story - or narrative - can easily blend together. TikTok and Twitter have been good examples of this. When the product is a language model (i.e. a chatbot), then the narrative itself becomes the product. And the lines between who is creating, who is selling and who is buying become blurry, even invisible.
My interest in all of this is to understand what we are getting ourselves into. Just like so many of us are concerned about climate change and its ecological impact, we should not underestimate AI’s capacity to wreak havoc on our information ecosystem. This is the air we will all be breathing; the water we will all be drinking.
So in case you haven’t been following the latest developments up-close, here is my attempt to offer a brief recap, in the context of a little thought experiment: What would happen if AI were to optimize our global society for the seemingly ethereal and elusive phenomenon we call ‘happiness’?
Enjoy.
The Nordic country of Finland, well known for its clean air, robust welfare system, stellar education, ritualistic cold plunges and steam baths, polite directness, and status as the newest member of NATO, was recently declared the “happiest country” for the sixth time in a row. The World Happiness Report (WHR), a publication of the United Nations’ Sustainable Development Solutions Network, draws on global survey data from 137 countries and ranks them on six particular categories: income (GDP per capita), mental and physical health (healthy life expectancy), generosity (altruistic behavior), having someone to count on (social support), having a sense of freedom to make key life decisions, and the absence of corruption (in business and government). Respondents (who must reside in their respective countries) are asked to evaluate their present life experience across these measures on a scale from 0 (worst possible / “dystopia”) to 10 (best possible / “utopia”).
Nordic countries have consistently occupied the top spots of this ranking in past years, with Finland currently scoring a total of 7.80 points. Denmark, at 7.58, and Iceland, at 7.53, complete the podium. War-torn Afghanistan and Lebanon, followed closely by a number of sub-Saharan African countries, remain the two “unhappiest” countries in the survey, with overall scores more than 5 points lower than those of the ten “happiest”.
World ‘Heartache’ Report
One of the most striking aspects of the WHR - to me, at least - is that the global average score has held stubbornly steady at around 5.5 since the report’s inception in 2012. This is where Peru and the Philippines sit right now, for instance. Not only has the needle not moved in the aggregate (even during the 2020-2022 pandemic), but the number of countries that score below average - about 45% of them - has also remained consistent.
Disparities don’t end there, however. When we look at the data through the lens of total population, we can safely conclude that the vast majority of people on planet Earth aren’t smiling as much as the beautifully curated pictures in the WHR would have us believe. Perhaps a quick comparison between G7 members and the BRICS serves to illustrate this. Together, the United States, the United Kingdom, Germany, France, Japan, Italy, and Canada have close to 800 million people, generate about 30% of global economic output, and exhibit an average happiness score of 6.69. Brazil, Russia, India, China, and South Africa, on the other hand, have a combined population of 3.2 billion (!!!), command 31.5% of the world’s GDP (at purchasing power parity, or PPP), and score an average of 5.38.
Note: Aside from specific joint economic and geopolitical interests that may justify these exotic international alliances, we should keep in mind that some of their respective members (e.g. Canada and Japan, Brazil and India) have just as much in common demographically, culturally and ideologically as polar bears do with zebras.
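The population-weighting argument above is easy to check with a few lines of arithmetic. Here is a minimal sketch, using the bloc figures quoted earlier (populations rounded to the nearest hundred million):

```python
# Simple vs. population-weighted averages for the two blocs,
# using the figures quoted above (populations in millions).
blocs = {
    "G7":    {"population": 800,  "avg_score": 6.69},
    "BRICS": {"population": 3200, "avg_score": 5.38},
}

# Averaging the two bloc scores treats 800 million and 3.2 billion
# people as if they counted equally...
unweighted = sum(b["avg_score"] for b in blocs.values()) / len(blocs)

# ...while weighting by population reflects what most actual people report.
weighted = sum(
    b["population"] * b["avg_score"] for b in blocs.values()
) / sum(b["population"] for b in blocs.values())

print(round(unweighted, 2))  # simple average of the two bloc scores
print(round(weighted, 2))    # noticeably lower population-weighted average
```

The weighted figure lands much closer to the BRICS score than to the G7’s - which is the whole point: averages taken country-by-country flatter the view from the wealthy minority.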
Moreover, subjective well-being tends to be unevenly distributed within countries, a disparity somewhat euphemistically referred to as the “happiness gap”. This is the measure of inequality between the happiest and unhappiest halves of each country, with a maximum value of 10 and a minimum of zero. In the U.S., for example, the gap is 2.93 points, compared to Finland’s 1.91. Costa Rica, which enjoys a comfortable 23rd position in the overall ranking, has a happiness gap of 3.65. Saudi Arabia, ranked 30th, has a gap of 3.84. Nepal, ranked 78th, has a gap of 4.46. Liberia (West Africa) earns the ‘grand prize’ on happiness inequality, with a whopping 6.85. Ironically, Afghanistan has the smallest gap of all (1.67), but this is because the country is in such a generalized state of extreme deprivation that its people feel more evenly miserable.
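The gap arithmetic itself is simple enough to sketch. Assuming the gap is just the difference between the mean life evaluations of the happier and unhappier halves of respondents (the report’s exact construction may differ in detail), a toy version looks like this:

```python
from statistics import mean

def happiness_gap(scores):
    """Difference between the mean 0-10 score of the happier half of
    respondents and that of the unhappier half: 0 means perfect
    equality, 10 means half at 'utopia' and half at 'dystopia'.
    (For odd-length samples, the middle respondent is left out.)"""
    ranked = sorted(scores)
    half = len(ranked) // 2
    return mean(ranked[-half:]) - mean(ranked[:half])

# Illustrative numbers, not real survey data: a polarized population
# shows a much wider gap than an evenly miserable one.
polarized = [1, 2, 2, 3, 8, 9, 9, 10]
evenly_low = [2, 2, 3, 3, 3, 3, 4, 4]
print(happiness_gap(polarized))   # 7.0
print(happiness_gap(evenly_low))  # 1.0
```

Note that both toy populations above have modest averages, yet wildly different gaps - which is precisely why Afghanistan can post the “best” gap score while sitting at the bottom of the ranking.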
After examining the report with a more critical eye, as opposed to the cheerful media coverage that accompanied its public release back in March, I wonder if World Heartache Report wouldn’t be a more fitting title. But most of all, I am intrigued by what it is supposedly aiming for. “Utopia”? Sure sounds like it. The executive summary mentions a “happiness agenda” for the upcoming 10 years, and suggests, in no uncertain terms, that happiness should be “accepted as the goal of government”. Should it really? Finland has allegedly excelled on this front (funded in part by its notoriously high taxes), yet has barely managed a C+ (7.8/10). And if I recall correctly, it was a Bhutanese King who first championed the concept of Gross National Happiness, yet his country’s best ever score is 5.10.
Whose metrics, anyway?
But here is the odd part. The authors state that “a natural way to measure a nation’s happiness is to ask a nationally-representative sample of people how satisfied they are with their lives”, and then quickly shift the focus to the virtues that are believed to drive life satisfaction. “To have a society with high average life satisfaction, we need a society with virtuous citizens and with supportive institutions”, they claim, after invoking Aristotle (you know, the philosopher who believed slavery was “natural”). In other words, the survey does not ask people how satisfied they are with their lives. Instead, it centers its attention on the six dimensions described earlier, two of which (income per capita and health outcomes) don’t even require input from participants. In fact, a third dimension, the one related to perceived corruption, could easily have been collected from specialized sources as well (e.g. Transparency International).
This leaves us with three questions that make up the rest of the study: a) “Have you donated money to a charity in the past month?”; b) “If you were in trouble, do you have relatives or friends you can count on to help you whenever you need them?”; and c) “Are you satisfied or dissatisfied with your freedom to choose what you do with your life?”
As far as I can tell, there is no convincing explanation as to why these particular dimensions make sense for 137 countries, nor how they might be weighted differently across cultures. Let’s just take the question of individual freedom. Does every culture consider this to be equally valuable when assessing their overall life satisfaction? Hardly. Some cultures have a predominantly collectivistic mindset, and might therefore be more interested in, say, law and order, family structure, organized religion and/or work ethic. By the same token, a more laissez-faire culture may not expect institutions to play a central role in their lives but rather to stay out of their way as much as possible. Some societies may value stability more than others. Some may appreciate humor more than others. Some may incentivize risk-taking more than others. Some may want to be anchored in community more than others. Some may feel drawn to scientific exploration more than others. Some may uphold traditions more than others. Some may be concerned about living in harmony with nature more than others. Some may look to enjoy ‘the freshness of the present moment’ (as Matthieu Ricard, the world’s happiest man, would say) more than others. Some may be into eating bugs more than others.
Wanna go deeper? Visit: IntegralLife.com/WhatMakesUsHappy
Digital cohort sampling
Needless to say, happiness will have multiple definitions depending on whom you ask, so it is not at all obvious to me how we will benefit from a “happiness agenda”, insofar as the concept itself is as narrowly defined in the WHR as it is broadly framed by America’s Declaration of Independence. This simple observation should not be confused with cultural relativism, a posture I find cowardly and strongly disagree with. It is rather a word of caution against blindly adopting or pursuing a one-size-fits-all “agenda” that relies increasingly on social media data collection/aggregation, machine learning prediction algorithms and large language model assessments. Say what?
Researchers describe the methodology as “digital cohort sampling”, and offer a detailed 25-page explanation in the report’s final chapter. They even openly share their concern about potential limitations in “data accessibility” resulting from operational changes to Twitter following its acquisition by Elon Musk - “Future access to Twitter interfaces presents the biggest risk for research, as these may only become accessible subject to high fees, with pricing for academic use currently uncertain. There are also potentially unknown changes in the sample composition of Twitter post-November 2022, as users may be leaving Twitter in protest (and entering it in accordance with perceived political preference).”
It seems bizarre enough that a survey that is intended to provide a global perspective on anything would lean so strongly on a social media app that only 5% of the world’s population engages with (one fifth of which is concentrated in North America). But the much bigger point of contention, I would suggest, is the manner in which our physical, virtual and emotional selves can be easily mashed up, commoditized and manipulated. We are now voluntary and involuntary participants in a “machine learning pipeline” (see diagram below), which is quickly evolving to become fully automated, from inputs to outputs and everything in between. This includes “sentiment analysis” and “natural language processing” (NLP), tools that the WHR and countless other data-centric initiatives and businesses depend on. Granted, a lot of very useful applications - from simultaneous translation and spam detection to personalized education - would not exist without these engineering marvels. But they are also the reason why the likes of Facebook and Twitter have built a reputation for turning public confusion and outrage into profit centers.
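For readers unfamiliar with the term, “sentiment analysis” in its crudest form simply counts emotionally loaded words. The toy sketch below is a stand-in for the trained models real pipelines use; its word lists are purely illustrative and not drawn from any real lexicon:

```python
# A toy lexicon-based sentiment scorer - a crude stand-in for the
# trained NLP models that production pipelines actually rely on.
# These word lists are illustrative, not from any real lexicon.
POSITIVE = {"happy", "great", "love", "wonderful", "good"}
NEGATIVE = {"sad", "terrible", "hate", "awful", "bad"}

def sentiment(text):
    """Score text in [-1, 1]: +1 if all loaded words are positive,
    -1 if all are negative, 0.0 if none are found."""
    words = (w.strip(".,!?") for w in text.lower().split())
    pos = neg = 0
    for w in words:
        pos += w in POSITIVE
        neg += w in NEGATIVE
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment("What a wonderful, happy day!"))         # 1.0
print(sentiment("Traffic was terrible and I hate it."))  # -1.0
```

Aggregating millions of such scores by time and place is essentially what turns social posts into “well-being” data - which is exactly why the sample composition question (who is posting, and from where) matters so much.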
A new intellectual species
Many illustrious people have been raising the alarm about the current trajectory of information technology, including the well documented anti-social effects of social media. According to them, generative AI (despite its many promises) takes “the race to the bottom of the brain stem” to a whole new level. On the other end of the spectrum, I hear enthusiasts and evangelists repeatedly compare chatbots to historical breakthroughs like the invention of the wheel, the printing press, the combustion engine, the lightbulb, and the internet. Among this group, it has almost become a cliché to draw analogies with the domestication of fire and the atomic bomb, so as to explain how revolutionary technologies can inevitably be used for both nefarious and noble purposes. Hence, the ultimate justification for deploying generative AI - and going full throttle - is to get ahead of the ‘bad actors’ (so says the ‘good actor’, of course).
But with all due respect, these comparisons tend to be condescending, if not foolish. Unlike AI, a wheel cannot learn and train itself. Unlike AI, a lightbulb cannot come up with ideas of its own. Unlike AI, a nuclear warhead cannot upgrade its technology without human input. And unlike AI, human cognition cannot grow ‘exponentially’. Jurassic Park would be a more suitable analogy, I would argue. Or how about gain-of-function research? Now, if you want to imagine a truly nightmarish scenario, just watch D.J. Caruso’s Eagle Eye (clip below).
So here we are, frivolously tinkering with a system that is on track to create “a new intellectual species”, as computer scientist Yejin Choi told the TED audience in Vancouver. Her work centers on helping AI overcome its current limitations in grasping the most basic concepts of existence (e.g. spatial awareness), and embedding it with a “mosaic of common sense”, which includes social norms and morals. Not surprisingly, the stated mission of the institute where she conducts her research is to “build AI for the common good”.
Who wouldn’t be on board with such a carefully thought-out plan, right? Never mind that this technology might develop the same level of respect for us that we have for mosquitoes.
(Mis)alignment with human values
Just for kicks, I recently asked Google’s Bard what it “hopes for” humanity, to which it didn’t hesitate to provide an answer. Among other things, Bard expressed the desire to “see a world where everyone has access to clean water, food, and shelter; everyone has access to quality education and healthcare; everyone is treated with respect and dignity; everyone has the opportunity to live a happy, healthy, and fulfilling life”. Then it went on to prophesy that, once these goals have been achieved, “we will know that we have left the planet in a better state than we found it”.
First of all, what does a chatbot mean by “we”? And what are “we” prepared to do if these high-minded ambitions are not met within a certain timeframe? As per Nick Bostrom’s 'paperclip maximizer' thought experiment, what will prevent the super intelligent machine from optimizing to a specific goal at all costs? Or imagine another scenario: What if AI were to commit itself to enacting the World Economic Forum's prediction that "you will own nothing and be happy"?
So much for utopian dreams. But worry not, because experts have the answer. It’s all about alignment. OpenAI’s website, for instance, states the following: “Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent”. Scroll down a bit and the company recognizes that “aligning AI systems with human values also poses a range of other significant socio-technical challenges, such as deciding to whom these systems should be aligned”, before finally conceding that “unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together”.
Translation: We have unleashed this mega experiment that seeks to align a runaway (advanced, autonomous and self-authoring) technology with human values. Whose values exactly? Whose intent? Whose agenda? Well, maybe we can make it work if we simply get the entire human race to be on the same page. All 8 billion of you! No disagreements. No contradictions. No paradoxes. And no weirdos, please.
Good luck!
The hard truth is that no one involved with AI design can provide any assurance whatsoever that AI will give a damn about people - or any sentient life, for that matter. And this “socio-technical challenge” - a.k.a. the new gold rush - happens to be expanding at unprecedented speed and scale, with unprecedented levels of investment. The incentive structures remain the same as before. No business plan required. Yay!
“It is of limited effectiveness to appeal to ethics in a socio-economic system that values growth over all things.”
Alexander Beiner
Sucked into a collective hallucination
In his famous 1970 essay, The Uncanny Valley, Japanese roboticist Masahiro Mori theorized that as human likeness increases in an object’s design, so does one’s affinity for the object - but only to a certain point. We find a humanoid robot charming or cute so long as we can tell instantaneously that it is a robot. The moment it passes a certain threshold of human likeness, however, affinity drops and we are repulsed by a nervous sensation as though its exterior form conceals a deeper threat or malicious intent (Sophia and Sydney come to mind). Affinity then rises again when true human likeness is reached. This sudden decrease and increase caused by the feeling of eeriness and uncanniness creates a “valley”, which Mori assumed is “an integral part of our instinct for self-preservation”. Sigmund Freud, who fifty years earlier had authored his own paper, The Uncanny (‘Das Unheimliche’, in German), offered a similar, albeit slightly more intricate analysis - you guessed it, self-repression.
I can’t help but speculate whether AI, and especially AGI, has the potential - if it so desires - to numb our best instincts and augment our worst ones. To the layperson, AI appears to work like magic and, admittedly, who wouldn’t want to summon a wish-granting genie if all it took was a gentle rubbing of the lamp? Who wouldn’t feel tempted to consult the oracle? Who wouldn’t worship the rainmaker standing on top of the temple?
Just watch the 60 Minutes host’s reaction as he witnesses Bard compose a Hemingway-inspired tale in a matter of seconds. I was instantly reminded of the iconic scene in 2001: A Space Odyssey, when the characters come in contact with a mysterious shiny object in the middle of the desert.
Touching the monolith - movie clip from 2001: A Space Odyssey (1968)
The danger, or existential risk, is that we all get sucked into a collective hallucination. That we turn happiness into policy while forgetting what it is to actually experience joy. That we embrace “equity” as a doctrine yet forsake the core ethical principle of fairness. That we obsess about saving the environment but never again set foot in nature because we’re so afraid of it. That we follow all the good advice on how to keep children safe, even if that means depriving them of essential play. That we become immensely knowledgeable about the movement of galaxies and miss out on gazing at the stars. That we proudly join the ‘self-identity’ cult, not realizing that we are automatons, adhering to the same code of conduct, eating the same food and believing the same ideas. That we think of ourselves as gods, even though our very ability to understand the world gets reduced to mathematical equations and outsourced to (not so eco-friendly) server farms, thus rendering ourselves incapable of going anywhere or doing anything without a virtual assistant.
“Adventure is more reliable than happiness.”
Dr. Jordan Peterson
My final question is this: what would happen if we redirected half the time, energy and money that goes into developing “human-like” technologies towards becoming … better humans? But then again, what it means to be a “better human” can differ significantly from person to person, from tribe to tribe, and from culture to culture.
Perhaps I’m an incorrigible pessimist, and I could be overreacting. Who knows, the new intellectual species (should we give it a Latin name?) may very well hold the key to our destiny. Let’s just hope it doesn’t grow too impatient with us before we make it to Shangri-La.