A trap to end human employment?

Why AI is not the full story about our new class based society

Mark Burgess
14 min read · Jun 13, 2023

Plus ça change, plus c’est la même chose. I stumbled across a website of public speakers recently–individuals who could be hired to talk at company events on a wide variety of topics for a (not inconsiderable) fee. At first I thought: this is interesting, perhaps I could join! But then I baulked. I had only heard of one person on the list out of a hundred or more, so who were these people? They seemed to be simply a club of business colleagues who had figured out that they could leverage the trust they had in one another to make money, pontificating on subjects in a hopefully entertaining way. But what did they really know? They had no obvious credentials. Why should anyone trust what they had to say? But, of course, the answer is simple: trust is not a rational calculation.

Trust in the Internet era

Trust is an interesting aspect of human judgement. It works the same online as offline, but our habits are somewhat different. Its two parts (trust and trustworthiness) seem to work together as a coarse regulator of impressions, shaping our preferences and keeping us out of trouble. The role of trust in the 21st century is interesting, not only for conventional society, but on web forums, in computer games, and in virtual reality. But it’s a lazy judgement, so we afford it to things we believe we “know” well by default: starting with tribe and kin.

The names on the speaker site were all people from Norway’s business elite, with only a few public figures like journalists to lend credibility: a group of well-to-do local socialites involved in the money and leadership business. Who would trust them? Of course, their own tribe. The question of what they might know is perhaps not really important to their listeners. Trust means you’re not so much listening to the details as accepting the promise. We live for good stories after all, not for accurate facts. Trust is not about accuracy, yet it’s a powerful impulse that keeps people both in and out of exclusive clubs.

The use of storytelling to build and maintain trust is something that makes the world of leadership different from the world of the “hands-on” domain-experts. For trained workers, the stories told are more like recipes and yarns about work experience. Skilled jobs are easier to automate, because such clearly identifiable procedural stories about concrete things make the playbook more mechanical. Leadership is less automatable for the opposite reason. The irony, then, is that leaders are the ones, perhaps without any formal training, who will get to keep their jobs in the future, when automation challenges employment.

This distinction between leaders and workers has little to do with subordination, and more to do with algorithms. We have become Eloi and Morlocks, a dual class society, just as H.G. Wells predicted. That division is widening in the age of technological enhancements and software automation–along with its associated wealth gap (which has nothing to do with skill).

Hiring by trust or by risk management

The distinction between these two classes is most keenly apparent in the way we fill their job positions. We hire “senior managers” in a very different way from the way we hire “skilled workers”. For senior positions (where Human Resources might not even be involved) we ask: do I trust this person to be an asset? Is he or she part of my network of peers and trusted individuals? If so, we give them the benefit of the doubt and find a way to afford them, sometimes in spite of lamentable track records. For everyone else, we ask: how much is this person going to cost me, and what do I stand to lose by hiring them? How can I shrink wrap my needs into the cheapest disposable asset?

For class A we manage trust via an inscrutable grapevine of tribal connections: are they like me, are they accepted by my group? For class B we manage risk by probing a much larger and more open market of unknowns: can they give me immediately and precisely what I need? How much do I stand to lose?

Workers want to believe that employers need them and therefore ought to take care of them, but in the 21st century we’ve found alternatives and we’ve made interpersonal relationships a more disposable commodity. In a sense, our modern prioritization of individual freedoms has served to undermine the traditional notion of keeping a single workplace as a home for life, so companies may now feel the need to check that they can rely on employees to do as they’re told and to keep promises previously taken for granted. On top of that, we’re in a time of extreme judgementalism and righteousness about the right to “self-identify” and thus treat one another. If you prioritise “me” over “we”, why should anyone prioritise you? Nowhere is the tension between freedom and authority more keenly in focus than in the workplace.

Two cities

Class A leaders “network” (verb) with their peers to ensure the flow of money in a business. They look at workers (with a market lens) as being part of the ambient machinery not as part of a family, in spite of much rhetoric to the contrary. They want ready-programmed workers with precisely tailored experience: robots (in the original sense). They ask: could I get someone or something cheaper to do class B work? Could I outsource this task to another country to bring down costs? I don’t need long term creative input or innovation under normal circumstances, I just need execution to bring in the money! Could I simply replace a person with a machine that will just cut out the middle-man and save me some taxes?

Some like to blame these attitudes on globalisation, but that’s a red herring. Globalisation potentially has huge benefits to the human race, in terms of coherence, economies of scale, and the ending of conflict. But companies don’t treat offshore suppliers as people either: essentially they are just more machinery. To a jealous national workforce wondering where jobs went, foreigners are just easier to despise than abstract technology, so now we can add “AI” as a catchall banner to the list of “foreign” suppliers that can take our jobs too.

Software might not be as good in every detail as a trained human, but it might be good enough, because work is usually trying to shrinkwrap and suppress human qualities anyway. Also software doesn’t get tired and it doesn’t try to offer ideas. Sometimes machines are just a better fit, but you do have to pay overtime when you rent them as a service.

Age and individualism

If for no other reason than demographics, we’re edging closer to a world of dependency on automation. We can’t scale processes to raise the standard of living for all, at least not using human effort alone, because no matter how well we train humans to act like machines, for speed and for consistency, they burn out quickly and have limited speed and endurance.

Lately, the phenomenal success of ChatGPT has already seen people laid off because, in these nascent moments, it’s actually cheaper to generate “good enough” answers with a free AI service. A company flying close to the financial edge could save a little by doing this in the short term, but we should all be cautious of this implicit bait and switch. Apart from the inscrutability of the result, it won’t be cheap forever. Remember when renting cloud computers was so much cheaper than buying your own? Then suddenly it wasn’t. AI is not a public good, it’s just another business looking to monopolise a market.

Machinery and software are needed to secure our lifestyles. Part of the reason is that today’s human services are often sadly lacking, especially in the age of rights and individuality. A doctor makes us wait three hours without a word and then barely hears us when we finally get into the consulting room–then he or she simply feeds us the standard drugs prescribed by a computer, apparently without a care in the world. An expensive software developer is more interested in playing with a mashup of the latest toys than in engineering a disciplined design and forms a protest group when asked to focus. Against this backdrop, it’s not surprising that we look for something that will better take care of our needs when we’re on the receiving end of a relationship.

AI doesn’t envy us our work or want to steal it from us any more than foreigners do, but its masters are after the same money as everyone else. What makes AI so talked about is that it’s an effective strawman for discussing the inhuman side of redundancy–even as it shows us that any harm inflicted onto ourselves is ultimately our own doing.

Sense of purpose

Whether class A or B, human employment is not only about executing tasks. A large part of it is about bringing stability and coherence to the world we’ve inherited. It’s about keeping people satisfied with their lives, and offering a sense of meaning to our existence. Machines as tools are welcome when we face dangers or can’t manage alone; but we don’t always understand how best to scale effort without emulating humans. In the long run there has to be an improvement to the lives of people: enhancement rather than replacement. Finding new jobs is traumatic for many and it resets all our trust gauges.

Organisations typically feel no special obligation to the societies in which they operate. We might try to blame that on capitalism, or on competition, but more broadly it’s just a short sightedness in the way everyone manages promises and expectations. The poorly named Information Age has given us the misleading impression that we are all independent–that we can have anything at the push of a button without involving anyone else. The moving parts that make those wishes come true are invisible to us. As a result, most of us are Pavlovian tourists in an upending civilization we take too much for granted.

Class division and time management

In his book Capital in the 21st Century, Thomas Piketty argued that we are gradually reverting to a form of feudalism, consisting of a class of workers and the modern equivalent of some rich landowners (perhaps, four letter tycoons). He concluded this after examining the financial records of Britain, France, and a few other countries over several centuries. After the broad economic reset that followed two world wars, financial dynasties were wiped out and living standards became more equitable and trended upward. Social inequality then started growing again after the financial deregulation of the 1980s. Though many aspects of our lives are better today than ever before, the idea that people’s standard of living is on a trajectory of improvement is no longer true. In purely economic terms, society is splitting further into class A and class B once again. Many children will not be better off than their parents, for the first time in half a century.

We can each draw a circle around ourselves. On the inside is what we trust, on the outside we manage risk instead. Trust works in our favour because it means we don’t have to expend significant effort watching over everything in our environment that might harm us. It’s a bit like when Mankind discovered fire and the virtues of cooking to extract more nutrition from the same amount of food. That surplus energy enabled us to free up the time cost of living: to develop–not merely to survive. Trust is a software innovation along the same lines.

The question we face today is, how do we properly integrate the machines of our own making into this picture?


Robots

Asimov developed the idea that robot machinery could be a force for good in society. Robots were supposed to relieve us from menial physical work. We were supposed to go on to do better things, more fulfilling things. The truth is that most people don’t have anything more fulfilling to do than to go to their repetitive job, because work itself is both a meditation for us, and more than that it has a social dimension that brings coherence by building trust together.

Augmenting humans with technology is not too difficult, but fully replacing even simple manual tasks with robot workers is. Physical motion and realtime skills are hard to emulate with machinery, particularly outside a controlled environment. Much of the animal brain is devoted to sensing and to moving for a reason. Combine that with on the spot decision-making and it’s far beyond our capabilities today.

Human window cleaners are still the ones who dice with death to wash the windows of skyscrapers on the 99th floor, and human children are still the ones enslaved to scrabble around in mines in some parts of the world, extracting the rare metals used by their technology overlords. Anyone who comes to your home to install or repair an appliance needs those trivial yet non-reproducible manual skills.

We’ve had more success in transforming human tasks with software systems in the area of administrative work, because isolated procedural reasoning with controlled input and output is easier to scale than manual labour: accounting, online sales, scanning of medical images or satellite data, data analysis, weather forecasting, etc. All these can easily be done in cyberspace. Humans can stay on as advisors and tweakers of the software workhorses.

Author's painting: "Bureaucracy"

In other words, it’s not what we traditionally called “menial” jobs that are potentially being replaced or reduced first, but the mid-level administrative and professional jobs that form the backbone of the service economy–the middle class route to wealth. Accountants and cashiers will be the first to go, then architects and doctors, and so on. The rungs of the ladder are being removed. Those skilled advisors will work alongside services to train and improve them, but may lose their autonomy as independent consultants and become subordinate to a class A figurehead in larger monopolised firms.

In the twenty-first century Western narrative, our uniqueness and individuality are the keys to a better world. It’s a good yarn for rousing emotions during elections, but in a sense, the opposite is true: coherence and replaceability are what sustains the world that sustains us. If a single person’s absence were a major blow to the world, we would not survive very long. Coherence is what allows us to depend on one another en masse, to share the work, to scale up repeatable production, to be able to afford deep dive specialisation, and to identify and develop novelty. Trust allows us to prosper–because we don’t have to waste time fearing everything and everyone.

Unless we remember to value what humans bring to the table, and offer a path to betterment, the skills that begat any technology (including AI) become just as automatable as the next. In the worst case scenario, people could end up being just the eyes, ears, and hands of stagnant automated processes: mere vehicles to provide sensory data and follow orders. It’s a recipe for depression, so let’s not do that.

Procedure or leader?

When the knowledge economy came into being, no one really imagined that a highly trained group such as software engineers or scientists could lose their jobs. When jobs were disappearing in the 1980s, governments argued that the miners and factory workers should “simply retrain”. We tell our kids “learn to code and you’ll never be out of work”.

Retraining is a lot to ask of a human (mind and body), particularly one that has built and learned to trust in a particular set of skills at great personal cost. Today we’re being asked to endure that stress for our entire lifetimes: an endless game of snakes and ladders (chutes and ladders). Those advocating this are not usually those affected. Personally, I don’t see that as a realistic expectation. Even younger workers across the globe have begun to “lie flat” and “work to rule” rather than sacrifice themselves on the mill of toil. Mental exhaustion is becoming an issue. To ask this, alongside raising the retirement age, is untenable.

Software and AI will ultimately force us to change our attitudes to the way our economy works, otherwise there will be nowhere for trust to grow–but it’s a slow process. We are habitual creatures after all. No chef ever believed people would prefer a cheap but predictable frozen pizza to a freshly baked one, but many do. AI is a universal frozen pizza.

Why then are some willing to trust AI so early and so easily? It has partly to do with the economics of trust, but also with the perceived decline of trustworthiness in the services we get from humans today and the rift between the classes.

The new economics of knowledge

ChatGPT is a marvellous human accomplishment. Let’s not get into how it works or whether it’s truly smart or not. The key point is that, for certain tasks, it can imitate a corpus of experience that would take a human years to ingest (and which took humans many years to produce). Language models are not the last word in AI, and they are not our equals. They don’t clean windows on the 99th floor, for one thing. And yet we trust them more than we should.

We’re quick to trust answers from machines and from Internet prophets partly because we take less responsibility for our own knowledge than before. As humans, we’ve failed to scale our own knowledge management, confusing what’s available on the web with what we know ourselves. Stockpiling learning is not knowing it. We lean too easily on Google search and social media like crutches. We’ve stepped back from the idea of learning by rote, to rehearse experience in the human way. Many people are now almost purely reactive, as if life were a game of smartphone pinball, and everyone is used to getting their own way at the push of a button. If we believe Dunbar’s social brain hypothesis, the very reason for our large brains is to manage engagement in social processes. If we let go of that, what are we to do when The Machine Stops? Who will debug and repair our critical infrastructure if we don’t even know it’s there?

Our minds may have certain cognitive limitations, but today we’ve begun to voluntarily forego our own capabilities. We’re arguably a little too happy to have an artificial butler do everything for us, as we aspire to join class A. We shirk responsibility, by outsourcing what we once had to work hard for, relying on “trusted third parties” such as “social media influencers”, who feed us streams of entertaining factoids to trust. We have Twitter to rely on so we don’t have to pore over every issue ourselves, and it even tells us what we should think about the results, like an off the shelf opinion poll.

In the 21st century, we are expected to show up and be “seen” in online forums in order to get our information–and make our way in society like debutantes through new tribes, by sucking up at virtual society balls.

Postchat–the new trust game

Understanding all these entwined issues, through the lens of trust, is something I’m currently exploring in a small but forward-looking project for NLnet. How we trust one another, and how we trust tools like software automation and “AI”, is still a fluid issue as society adapts to its own size and to its technologies. We need to notice more about our changing social fabric than we’ve been paying attention to lately.

The results so far suggest that we humans prefer whatever is simple over what is complex, easy answers over good or detailed answers. It takes a lot of effort to invest in creative complexity, and we afford that more to ourselves than to others. We find it harder to get along with one another, because we’ve placed individuality on a pedestal, above a sense of common good. That tips the balance in favour of machinery like AI, and leads to a general dumbing down of human participation in our shared circumstances. AI doesn’t answer back or push back when we make ill considered requests. It doesn’t keep us waiting like doctors. Yet it delights with quirky and amusing results.

As a man of a certain age, with friends in the same boat, I can report that any class B skilled person who finds themselves out of work today–i.e. out of a context in which they are known and trusted–is viewed as more of a liability than an asset, no matter their past accomplishments. Trust is in short supply for most of us. Many businesses, after all, barely make ends meet, particularly post COVID and in the shadow of growing international conflicts, not to mention with European labour laws still tailored to the age of big industries. The future lies in these smaller businesses and looser cooperatives, which don’t have deep pockets to survive the legislative jeopardy. All this might make it quite risky for an employer to bet on a human versus a machine.

For now, the message is clear. If you’re a skilled worker and still in employment, hold onto it for dear life. But also, perhaps, get ready to clean windows.

Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author. See http://markburgess.org and https://chitek-i.org