Are we human yet?
Josephmark Product Designer Jai Mitchell was invited to attend a panel that posed the question, can technology make us more human? Here, he reveals how he went about dissecting that question and why.
Last month I was honoured to be invited to Sydney to speak on an AGDA panel for Vivid Ideas Festival. They had assembled some incredibly intelligent individuals for the panel, and a really interesting topic for discussion: Can technology make us more human?
While this topic and this kind of discussion are certainly something I’m deeply interested in and passionate about, it wasn’t something I felt I had any truly solidified ideas or opinions around.
So, I started writing.
A few thousand words later, I felt slightly better prepared to discuss the topic with such a stellar panel, and had accidentally written the following essay.
Can technology make us more human?
What does it mean to be human?
Before we can answer this question, we need to define what we mean by ‘human’. Are we talking about humanity as opposed to other animals, like a similar species of ape, or humanity as opposed to technology, such as a robot?
If we’re talking about humanity as the things that separate us from other animals, we can talk about things like language, art, music, social skills, philosophy, religion, and arguably, empathy.
In this case, it’s hard to disagree that technology has enabled more humans to devote more time, effort and collaboration to the parts of our lives we consider the humanities – and has made it easier for the rest of us to enjoy them.
From a holistic view, humans are better off today than we have been, at least since the agricultural revolution, on just about every scale that would seem to matter.
We enjoy lower levels of poverty, improved health and life expectancy, greater education, and more widespread peace and freedom – and at least part of that should be attributed to technological advancement.
All of this means that many more humans are equipped, capable and have the time (within their day and lifespan) to focus on those more positive aspects of humanity.
Technology may continue to push this trend as more traditional work roles are automated to allow us more time to enjoy these things – but the process of adapting to an automated society, and economy, may be a tricky patch to negotiate our way through.
Concepts like implementing a universal basic income or taxing robots are thrown around as potential solutions, but the effects these might have on the fundamental models of society can be quite a confronting and divisive topic.
How we’ll deal with the effects technology will have on humanity is a complex issue, and one I’m not going to go into here. But by this definition of ‘human’, I think it’s safe to say technology has increased our humanity, and will likely continue to do so.
But that’s only one way of looking at what it means to be human. What about another definition, ‘Humanity as opposed to technology’? To explore this, we first need to discuss how we are defining ‘technology’.
Technology is more than computers
It could be argued that language itself is a form of technology, and that the formation of language by an odd bunch of hairless apes, about 100,000 years ago, is the single technology platform that allowed the creation of everything we know of as ‘human’, as well as everything we know of as ‘technology’.
That might make the answer to the question a little obvious, so let’s not use that definition.
If we talk about technology simply as the tools, machines and innovations we’ve created, we talk about things like our incredible feats of engineering or the raw processing power of modern computers and the potential near futures of artificial intelligence.*
This definition of technology as specifically things humans made, rather than skills we’ve mastered, gives us a very different picture. By this measure, technology is increasingly becoming more human itself, and it’s being driven there by humans.
Corporations have chatbots or conversational voice interfaces, often with human names and personalities, like Apple’s Siri or Amazon’s Alexa. Google took it a step further with the Google Home, allowing you to hold a conversation with Google itself as your ‘assistant’.
Think about that. Originally a humble search website, Google became a verb (“I’ll Google it!”). It expanded into a corporation with countless other products and services, and now they promote an anthropomorphised concept of that corporation as the personality within voice and chat assistant interfaces.
Comparing ourselves to this kind of technology, it sometimes feels like technology is catching up with us – computers have languages, neural networks can create art, machines are making music, and Siri and Alexa have a level of social skills. This will only increase as we continue to push the boundaries of AI.
Sure, each of these creations to date lacks much of the nuance needed to consider it truly on the same level as a human, but the time may come when this gap is closed.
The human machine
We may then prefer to define our humanity by traits that computers are unlikely to develop – traits that we might share with other animal species: creativity, intuition, a complex set of emotions that can be over-simplified down to representations of either love or fear, and the one skill we still like to think makes us unique – abstract thought.
We developed these traits, as well as those listed above that separate us from apes, because we have these massive brains in our heads constantly perceiving and making sense of the world in real-time, in ways that animals and computers may never quite match. They also help us to remember those perceptions and understandings, which helps us learn, and thus influences our future perceptions, understandings and reflexes.
To cope with the phenomenal amount of constant information being fed from our sensory systems and from within our own minds, our brains have developed a few incredible skills of their own. Our minds are constantly filtering these overwhelming stimuli, based on our existing perceptions and experiences, just so we can make sense of the world.
The problem is that for every tricky rule our brains have developed to help us make sense of the world, there is an exception that creates a weakness in our perception. These are cognitive biases, and when they combine with the complex array of human emotions, they lead to a whole lot of problems once new technologies enter the mix.
However we want to see the world, we can always find seemingly legitimate sources to confirm our beliefs. Whatever we already think is a major issue, we’ll notice more often. And whatever we already believe, we’ll find someone else’s flawed logic to help us rationalise.
This was a lot harder when news came from fewer, arguably more highly scrutinised sources, and finding more information was a trip to the library.
We can easily remove ideas that challenge us from our carefully constructed filter bubbles, which then serve to reinforce the way we see the world. We couldn’t do that so easily when our social circles were predominantly in the meatspace.
That same mechanism could lead to an increased sense of ‘community’ online, as inside our echo chambers we feel quite positive about having found a group of people who talk, think and see the world the same way we do.
Data will tear us apart
At a higher level, these effects of technology on humanity have likely been instrumental in creating more extreme and sharply defined partisanship on all kinds of issues in society.
This is certainly not helped by the drastically increased ease with which anyone with an opinion can start an online publication. It’s led to a proliferation of seemingly legitimate news sources spanning every possible partisan spin on the spectrum.
The fight for eyeballs in the attention economy has led to the rise of ‘click-bait’ – intentionally inflammatory content (or just the headline) that hopes to gain extra views.
Then the human mind goes to work: being drawn to negative news far more readily than positive news (negativity bias), ignoring or denying facts or opinions that clash with our own beliefs (backfire effect), and always finding a counter-argument that backs up what we already believe (confirmation bias).
Once those extra clicks start coming in, publications quickly realise there are more clicks (read: money) in publishing more and more incendiary negative stories, and a particular subject area begins to trend.
Once that subject begins to trend, another little bug in our brains called the availability heuristic kicks in: because the subject is more heavily represented in the media, we start to believe it’s a more common occurrence in real life. That increases fear, which kicks off a spiralling cycle back through the confirmation and negativity biases and the backfire effect.
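The feedback loop the last few paragraphs describe can be sketched as a toy simulation – entirely my own illustration, with made-up numbers: readers click negative stories slightly more often, editors follow the clicks, and coverage drifts steadily toward negativity.

```python
# Toy model of the clickbait feedback loop (illustrative numbers only).
# Each step: negative stories earn proportionally more clicks, and
# editors adjust next step's mix of stories to follow those clicks.

def simulate_coverage(negativity_bias=1.5, steps=20, start_share=0.5):
    """Return the share of published stories that are negative after
    `steps` rounds of editors chasing last round's clicks."""
    share = start_share
    for _ in range(steps):
        neg_clicks = share * negativity_bias   # negativity bias: extra clicks
        pos_clicks = (1 - share) * 1.0
        # Editors publish in proportion to what got clicked last round.
        share = neg_clicks / (neg_clicks + pos_clicks)
    return share

print(simulate_coverage())  # share drifts toward all-negative coverage
```

Even a modest 1.5× click advantage for negative stories is enough for the mix to spiral toward nearly 100% negative coverage; with no bias (1.0×), the mix never moves.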
And slowly (or not so slowly), the good old days of the majority reading overlapping sets of the same handful of newspapers are being replaced by everyone having their own favourite heavily partisan and opinionated news source.
I have no data to back this up, but it seems as though people are more adamant than ever that they’re right (and that everyone else is wrong) on a wide range of subjects.
At its worst, this kind of pattern could lead to increases in extremism, tribalism and a whole host of other horrible things that also end in ‘-ism’. Excluding feminism, of course.
That’s not to say that all is lost, and that technology will destroy society. It’s my opinion that we’re going through growing pains as humanity adjusts to the concept of having all the world’s information, and opinions, literally in the palm of our hands.
Technology has led to an amplification of these things, but it can, and should, be harnessed to counteract them.
Should technology make us less human?
Perhaps we should actually be striving to develop technologies that make us less human, by finding ways to intentionally counteract those tendencies and by compensating for our innate human weaknesses.
We need to educate ourselves and each other about all the ways our all-powerful brains can be undermined by simple tricks that a few nefarious people and corporations have co-opted for their own benefit.
Recently, there seems to be a renewed urgency to discuss the ethics of design and technology. This may be due in part to increased awareness of the ways these weaknesses can be, and are being, used against us.
Organisations like Time Well Spent, DotEveryone, IDEO and a host of other passionate and intelligent people are pushing to raise awareness around these issues amongst the people actually building the products of tomorrow.
Anecdotally it seems as though the tide of designers who are paying attention to this, openly discussing it, and occasionally calling out their contemporaries on it is rising (though, that could be my confirmation bias and availability heuristic kicking in).
On the other side, many seemingly well-intentioned “thought leaders” of our industry are selling frameworks for human manipulation – for example, Nir Eyal’s Hooked model for building habit-forming products.
The model, while it might simply seem like an objective tool for product strategy, gives real persuasive power to the products that adopt it.
The designers, developers and product managers on those products may also have the best of intentions, simply trying to be good at their jobs by building successful products for their employer.
Their employers may have admirable intentions too: building a seemingly innocuous product that many people use and ideally love, and delivering profits to their investors or shareholders.
And the shareholders’ and investors’ general intention is simply to see a decent return on their dollar, trusting that everyone else in the equation builds a successful product.
Life in the fake news era
I don’t get the feeling that either Mark Zuckerberg or Jack Dorsey is intentionally trying to build products that have negative effects on society.
I highly doubt either of them intended to help create a political climate that could see Donald J. Trump become the President of the United States of America.
They both clearly believe in their products, and are able to focus on the unquestionable good each has done in connecting communities or raising awareness of certain issues. They’re probably more focussed on maintaining the behemoths they’ve created and delivering good news to their investors or the stock market.
So perhaps it falls back on those designers, developers, and product managers building these technologies to be careful how they define their metrics for success?
The people building the products of tomorrow need to be more aware of the potential side effects the next feature release or update might have on every edge case of society. We need to be actively working to identify and solve the problems our own products might amplify.
That’s a big ask. Our jobs are often already hard enough. Many of even the most senior and experienced designers, developers and product managers won’t be equipped to consider every possible side effect on every possible edge case. So how can we expect them to?
A good start is making sure our product teams have more diverse backgrounds, with more varied life experiences to draw from.
We should also encourage our teams and each other to adopt a growth mindset, to accept their flaws and mistakes, and use them as opportunities to learn.
Finally, we need to foster a culture of more open, constructive criticism in the rest of our industry, to make sure we’re talking about these issues, and calling each other out when we’ve overlooked something.
I think I’m ready to answer the question now…
As for the original question, can technology make us more human? We can probably each answer this ourselves by looking at humans in the Stone Age, as little as ~10,000 years ago, or as much as a few million years ago, and asking: are we more human now, than they were?
If your answer to that is no, then by your definition technology is unlikely to make us more human – but whether it’s through merging our own brains with artificial super intelligence, or something else, technology may still lead us to something beyond human.
If your answer is yes – then of course it will continue to evolve what it means to be human. In fact, perhaps the question itself needs to change.
Technology is already influencing our evolution, and the rate of technological advancement is increasing exponentially.
My personal bet is that in 1,000, or maybe even 100 years from now – if humans still exist – historians will look back on this period and see people that more closely resembled the Stone Age cave-dwellers than themselves.
Perhaps we haven’t yet reached the final form of human evolution? Perhaps the question shouldn’t be whether tech can make us more human, but rather, are we even fully human yet?
As the ones developing, distributing and helping to popularise the technologies that may influence that evolution, we must take care with every step not to further supercharge our human inadequacies.
*Note: I didn’t even want to touch on the concept of direct human brain-to-computer interfaces too much – this conversation gets WAY too complex for my computer-unassisted brain to handle once you get into that. But you should definitely go read Wait But Why’s amazing essay on Elon Musk’s Neuralink – if you have a spare three or four hours.
Thanks to Jesse Richardson, Georgia Dixon and everyone else who let me bounce these ideas off them until they made some sense. Also, massive thanks to Anita, Brent, Mike and everyone at AGDA NSW for inviting me to be on the panel.
Finally, thanks to you if you read this far. I’d love to hear other ideas, or learn more about any of this through further discussion and other input, so please leave a response, tweet me or email me at email@example.com.
Josephmark is a digital ventures studio based in Brisbane, Sydney and Los Angeles. We design, develop and launch meaningful digital products that change the way we work, play and connect. Find us on Facebook, Instagram and Twitter.