Tuesday, December 04, 2018
is it art?
A couple of New York-based Russian artists, Vitaly Komar and Alex Melamid, conducted a ‘conceptual’ art experiment back in the mid-1990s.
To begin, they appointed the market researchers Marttila & Kiley Inc to conduct surveys on aesthetic preferences and tastes in painting in over a dozen countries.
The goal was to find out what a ‘people's art’ might look like. When the results of these surveys came in, the dynamic duo would make paintings to reflect them. The resulting artworks were billed as ‘Most Wanted’. In contrast, they also produced paintings reflecting the ‘Least Wanted’.
Melamid described their concept for the project in this way:
In a way it was a traditional idea, because faith in numbers is fundamental to people, starting with Plato's view of a world which is based on numbers.
In ancient Greece, when sculptors wanted to create an ideal human body, they measured the most beautiful men and women and then made an average measurement, and that's how they described the ideal of beauty and how the most beautiful sculpture was created. In a way, this is the same thing; in principle, it's nothing new.
It's interesting: we believe in numbers, and numbers never lie. Numbers are innocent. It's absolutely true data. It doesn't say anything about personalities, but it says something more about ideals, and about how this world functions. That's really the truth, as much as we can get to the truth. Truth is a number.
In just about every country, the favourite – the most wanted – was some kind of landscape featuring a few human figures going about their business, some animals in the foreground, a big blue sky, some coastline or a path extending into the distance, and some water – a river, the sea or a lake.
(Just about every country wanted this - only the Italians deviated slightly, although the ideal was still heavily figurative.)
And almost universally rejected – the least wanted – were abstract compositions featuring geometric or angular shapes. That's not to say non-figurative or non-narrative painting can't still be appealing. Humans have a permanent, innate taste for virtuoso displays – spectacular giant Pollocks, or the Rothko room at the Tate, for example. (Rothko was influenced by Michelangelo's Laurentian Library in Florence.)
But science still has no agreed explanation for why anyone should claim to enjoy 'conceptual' art, 'installations' or participating in art-speak. Some kind of pretentious trait counter-signaling is likely.
The disappointed artists remarked ‘in looking for freedom, we found slavery.’
Really?
Of course, one of the great mysteries of art is why it even exists in the first place.
Although every culture draws and paints, dances, sings, makes music and tells stories the origins of human aesthetics are still mostly a puzzle.
But the origins of visual art might be a wee bit clearer.
As our Russian friends found out, across all cultures humans tend to prefer representations – visual experiences – depicting environments that offer a vista (an advantage in height), open terrain, diverse vegetation and a nearby body of water, because a landscape like this was the ideal survive-and-thrive habitat for our ancestors on the African savannah.
This doesn’t sound much like modern cities, of course.
Although it does explain the price of an apartment in a block overlooking Central Park in New York or Hyde Park in London – there’s a nice one on Bayswater Road on the market today for 18.5 million.
Some problems in the modern workplace may also result from this kind of mismatch.
An evolutionary mismatch occurs when evolved traits or mechanisms that were once advantageous become maladaptive due to changes in the environment, particularly when environmental change happens fast.
Most of human evolution took place in hunter-gatherer groups of 50-150 individuals that worked together to find food and protect the village.
There was no middle-management, HR departments, unconscious bias training, or strategy away-days. There was not even any real distinction between work and life.
Look around the typical modern office: there's virtually no greenery, and it's challenging to get sunlight (windows don't count – you need to actually get outside in the sun for at least half an hour a day).
Vitamin D deficiency is a huge problem, even in countries like Australia, where I live.
Goodness knows what the situation is like in places inside the Arctic Circle, like parts of Sweden and Finland, where it's basically dark for six months of the year.
Our psychology - and our physiology - are still primarily aligned for the Pleistocene era, but we're in an environment that's very different.
Lush green landscape and blue skies are an innate, evolved preference, present in human nature since that time, the two million or so years during which modern human beings evolved.
Apparently Arthur Danto, the Columbia University philosopher and postmodern art theorist, suggested that the results of the ‘Most Wanted’ experiment were a product of the hideous worldwide ‘calendar’ industry (reproductions, poster shops etc). Toeing the cultural relativism party line, he means that our tastes in art (as uneducated plebs) are purely a product of social construction, or ‘culturisation’.
But the calendar industry has not conspired to influence taste; rather, any success it has experienced comes from catering to universal, deep-rooted, prehistoric, innate human preferences. Aesthetic taste is an evolutionary trait, shaped by natural selection.
Something we’d do well to remember in the advertising business, from time to time.
Posted by Eaon Pritchard at Tuesday, December 04, 2018
Labels: art
Tuesday, November 27, 2018
beware of the semi-attached figure
What can you do if you need to convince someone of something, but you don’t have proper evidence?
One simple way is to demonstrate something else to be true and then just pretend it’s the same thing.
In statistics, this trick is known as the ‘semi-attached figure’.
Simply pick a couple of things that sound kind of the same – though they aren’t (this is the important point) – and make a comparison between them to validate your conclusion.
An everyday example would be the number of reports that contrast hours spent watching TV with hours spent on the internet, as though those activities were the same thing.
One reputable market research firm recently tried to convince an audience I was in about the popularity of a particular on-demand Aussie TV channel.
‘Who is watching?’ they asked. ‘Well, 90% of viewers are aware of the service!’
Sounds impressive. However, awareness of the existence of something is not the same as usage of the service.
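To make the gap concrete, here's a toy calculation in Python – every number is hypothetical, invented purely to illustrate how far awareness and usage can diverge:

```python
# Hypothetical survey results for an on-demand TV service.
# Awareness and usage answer different questions; quoting one as
# evidence for the other is the semi-attached figure at work.
respondents = 1000
aware = 900        # "Have you heard of the service?"
weekly_users = 80  # "Did you watch it in the past week?"

awareness_rate = aware / respondents
usage_rate = weekly_users / respondents

print(f"Awareness: {awareness_rate:.0%}")    # 90% -- sounds impressive
print(f"Actual usage: {usage_rate:.0%}")     # 8% -- the figure that matters
```

The 90% is true, and completely beside the point being claimed.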
It has long been a common tactic of persuasion to cite information that initially seems to uphold an assertion, but upon closer inspection is pretty much irrelevant to the actual claim.
This means stating one thing as a proof for something else.
For example, if some report claims ‘85% of CEOs think that Blockchain will change the way their organisations do marketing by 2020’ – what does that actually prove?
This implies that CEOs are some sort of authority on the application of Blockchain technology.
Or marketing.
There’s no shortage of reports showing the decline of advertising spends on printed news.
The implication is that advertisers should spend more on whatever the alternative is that’s being sold.
Of course, a decline in advertising spend does not necessarily mean a decline in readership.
When dealing with any ‘evidence’ of this nature, ask yourself how the evidence specifically proves the claim. Could there be alternative explanations that would make the claim false?
If the evidence isn’t necessarily relevant to the conclusion then you are probably dealing with a semi-attached figure.
(Note: for more fun with statistics I always recommend ‘How to Lie with Statistics’ by Darrell Huff, first published in 1954.)
Posted by Eaon Pritchard at Tuesday, November 27, 2018
tinbergen's four questions
The Dutch ethologist and ornithologist Nikolaas Tinbergen – along with colleagues Karl von Frisch and Konrad Lorenz – received the 1973 Nobel Prize in Physiology or Medicine for their ‘discoveries concerning organisation and elicitation of individual and social behaviour patterns’, essentially kickstarting our understanding of the innate properties of animal behaviour.
Alongside this accolade, Tinbergen’s most famous contribution to science is the ‘four questions’ framework, originally posed in his 1963 article ‘On Aims and Methods of Ethology’.
This simple framework goes a long way towards explaining how and why any animal exhibits a behaviour, and was instrumental in putting the nature vs nurture debate to bed once and for all. The model shows how all behaviour (and all traits) are products of complicated interactions between genes and the environment.
Tinbergen and his colleagues argued that any analysis must address four aspects of a trait: how it works, what function it serves, how it develops, and its evolutionary history.
Although not posited as explicitly evolutionary, Tinbergen’s Four Questions - as they have since come to be known - detail the basic considerations a researcher should want to make. And they still hold.
(Ethologists tended to focus on observable behaviour and so didn't go deep into the psychological mechanisms, that came later as areas of ethology morphed into evolutionary psychology.)
The four questions are grouped under two headings.
Proximate questions.
1. How does it work? – what mechanisms and stimuli cause the behaviour?
2. How does it develop? – how does the behaviour change with age, experience and environment?
Ultimate questions.
3. How did it evolve? – what is the behaviour's history over the species' evolutionary past?
4. Why did this behaviour help the organism/species survive/reproduce?
To illustrate how this framework can be applied, think of the last time you stuffed a Big Mac into your face. What was the decision process behind that?
Were you hungry? Perhaps it was just convenient? Did you have a hangover? Or is it a treat every now and again?
These kinds of explanations for behaviour operate at the proximate level.
These causes point to relatively up-close and immediately present influences—to what you are presently feeling or thinking or a plausible story you tell yourself.
Yes, proximate reasons are important, but they only tell part of the story.
Proximate reasons don't address the broader question of why Big Macs are appealing in the first place.
Understanding the deeper reasons for preferences and behaviour requires an ultimate explanation.
Ultimate explanations focus not on the relatively immediate triggers of behaviour, but on its evolutionary function.
In the Big Mac scenario, humans have psychological mechanisms that respond positively to the sight, smell, and taste of foods rich in sugars and fats.
These mechanisms exist because an attraction to these kinds of foods helped our ancestors obtain calories and survive in an environment where calories were often scarce.
So whereas the proximate reasons you bought a Big Mac may be many and varied, the ultimate cause is that a desire for sugary and fatty foods helped solve the critical evolutionary challenge of survival in the ancestral environment.
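Purely as an illustration (the wording of these answers is mine, not Tinbergen's), the Big Mac example can be laid out under the four questions:

```python
# The Big Mac example organised under Tinbergen's four questions,
# grouped by the two levels of explanation discussed above.
big_mac_analysis = {
    "proximate": {
        "mechanism": "sight, smell and taste of fat and sugar trigger reward responses",
        "development": "food preferences shaped by age, habit and experience",
    },
    "ultimate": {
        "phylogeny": "a taste for energy-dense food shared with our primate ancestors",
        "function": "seeking calorie-rich food aided survival when calories were scarce",
    },
}

for level, questions in big_mac_analysis.items():
    for question, answer in questions.items():
        print(f"{level}/{question}: {answer}")
```

Any number of proximate answers can sit in the top half; the bottom half stays short, which is the point of the framework.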
McDonald's, Burger King and KFC have become some of the biggest brands in the world and wield colossal global advertising budgets. However, it's no accident that they got there selling burgers, fried chicken and milkshakes rather than salad.
Market researchers, like social scientists, have typically been concerned with the proximate influences on behaviour.
Moreover, anything masquerading as insight that merely asserts people generally want to experience pleasure or happiness, and to avoid pain or sadness, is just banal.
However, an evolutionary perspective highlights that there is a deeper level of explanation rooted in the adaptive function of behaviour.
This is a useful lens through which to look at motivation because while there could be any amount of proximate motives for a given behaviour and many goals people pursue, there is a much smaller set of ultimate evolutionary functions that behaviour might serve.
These functions are almost certain to be connected to recurrent adaptive problems that our ancestors would have faced. And as they are rooted deep in our long evolutionary history, they can shape all stages of consumer journeys and decision-making processes.
Posted by Eaon Pritchard at Tuesday, November 27, 2018
Labels: tools
the reverse naturalistic fallacy
The naturalistic fallacy, rooted in the Scottish philosopher David Hume's is–ought distinction, is the leap from is to ought.
The moralistic fallacy is the opposite: the leap from ought to is – the claim that the way things should be is the way they are.
This is sometimes called the reverse naturalistic fallacy.
For example, take some randomly selected Simon Sinek platitude like this:
Or how about:
This kind of glibness falls squarely into the reverse naturalistic bucket, making such platitudes great LinkedIn fodder for the mass of suckers.
Nice ideas. But just because that’s the way things ought to be doesn’t mean it’s anything like the way things really are.
Sinek himself is no sucker, of course. I’d kill for one-tenth of his book sales.
Posted by Eaon Pritchard at Tuesday, November 27, 2018
Labels: fallacy
Friday, October 19, 2018
magic
The philosopher Daniel Dennett has this anecdote about his friend the theologian Lee Siegel.
Siegel has published a number of papers and books on Indian religion and culture including this 1991 book on Indian street magic, Net of Magic: Wonders and Deceptions in India.
Siegel explains that when he told people he was writing a book on magic, he was often asked “Is it a book about real magic?”
By 'real magic' of course, people mean ‘miracles’ and acts involving ‘supernatural powers’.
Siegel would answer, ‘No, the book is about conjuring tricks, rope tricks, snake charming, illusions etc. Not real magic.’
So when people say ‘Real magic’, that really refers to the kind of magic that is not real.
Magic that cannot be done.
Whilst the magic that is real - the kind of magic that CAN actually be done - is not ‘real magic’.
It’s a trick.
‘Real magic’ is miraculous, a violation of the laws of nature.
Yet many people still want to believe in real magic.
A strange compulsion to believe in ‘real magic’ affects many people when the topic is advertising and brands.
This magical thinking assigns patterns and causation to events where patterns and causation do not exist.
Arthur C Clarke famously observed that 'any sufficiently advanced technology is indistinguishable from magic'.
But did he mean ‘real magic’? Or the kind of magic that can be done?
My good friend Mark Earls proposed, back in 2013, that we should try substituting the word 'Magic' for 'Big' in Big Data.
‘…if we only master Magic Data, it will make us all-powerful; the sword of Magic Data will banish all evils.’
Magic data is now inextricably linked to magic AI and magic machine learning.
Not to mention the enduring popularity of other ‘magical’ things like content marketing, influencers, the enduring cult of ‘Lovemarks’, and a multitude of other maladies.
Gossage's observations in 1960-odd seem prophetic, now.
‘Advertising…is constantly being lured into seemingly allied fields that have little to do with its unique talents and often interfere with them. … But there is one job it does well that no other communication form does at all: the controlled propagation of an idea with a defined objective though paid space.’
One reason we tend to engage in magical thinking is that it gives us a small feeling of security in our professional lives – a sense that we have special knowledge about how to influence outcomes.
But there is, of course, a kind of magic that CAN be done - namely, make something creative and interesting and put it in places where people will see it - the controlled propagation of an idea with a defined objective.
Or if you prefer the 2018 version, it's what Binet and Field call the virtuous circle.
It might not be 'real magic' but, when it works, it’s magic nonetheless.
'Trust none of what you hear,
And less of what you see'
Posted by Eaon Pritchard at Friday, October 19, 2018
Labels: magic
Tuesday, July 24, 2018
the destroyers of advertising
Advertising had made it possible for consumers to make some sense of these complex abstractions.
But because the concept of what-is-advertising has now been twisted out of recognition – principally by the emergence of highly targeted, surveillance-fuelled direct response, content-factories, influencers [sic] etc – the NEW ‘advertising’ (ie the abandonment of any conventional ideas of originality and creativity in favour of pastiche and mediocrity, while still bearing a resemblance to advertising) cannot fulfil this need.
And now, because people have started to ignore and block this kind of advertising, they don't remember, or credit, the role advertising performed in culture, when it used to BE advertising.
And more worrying is this.
As it becomes more and more accepted that this new definition of advertising IS the advertising, we are failing to distinguish between what is real advertising and what are, in fact, the products of the destroyers of advertising.
Posted by Eaon Pritchard at Tuesday, July 24, 2018
a ball and chain in the place where your mind's wings should have grown
'A philosophic system is an integrated view of existence. As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation -- or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind's wings should have grown.'
Posted by Eaon Pritchard at Tuesday, July 24, 2018
Friday, July 06, 2018
death by 6,000 nibbles
When it needs cleaning, the Yellow Tang looks for its pal, the Cleaner Wrasse, which can be recognised by its bright electric blue colour and the black stripe that runs the length of its body.
Cleaner Wrasses hang around in 'cleaning stations'. Agencies in the reef.
The Wrasse is given access to the Tang's gills and mouth, and eats any parasites and dead tissue off the larger fish's skin – a mutualistic relationship that provides food and protection for the Wrasse, and considerable health benefits for the Tang. A reciprocal situation.
And so in order to gain access, the Cleaner Wrasse must first perform a secret dance – a special ‘code’ - in order to win the Tang’s trust.
This system normally works out fine: a symbiosis between two species in which both partners are indispensable and the mutual advantage is obvious.
But there are other fish that mimic Cleaner Wrasses. For example, a species of Blenny called Aspidontus taeniatus has evolved the same behaviour.
It is almost identical in size and appearance to the Cleaner Wrasse. It even sports the same shiny stripe down its back and lurks around near the same reefs watching.
If approached by a Yellow Tang, the deceptive Blenny also knows the code.
The secret dance.
But once allowed in, instead of providing a cleaning service, the rogue Blenny uses its super sharp teeth to rip chunks of flesh from the hapless client.
Rather than ridding his client of parasites, Blenny IS the parasite. But in disguise.
The murky world of advertising technology [sic] contains many similar parasites, well adept at making themselves appear to be useful.
They look a bit like something to do with advertising, they can talk a language that’s a bit like the language of advertising. They know the code, which kinds of secret dances will get them access to the big fish.
And there’s lots of them.
This year’s chiefmartec.com adtech ‘lumascape’ graphic actually charts 6,829 marketing technology solutions from 6,242 unique marketing technology vendors.
While that represents ‘just’ 27% growth on 2017's total of 5,381 solutions, the scale and velocity of this space is staggering.
In fact, the size of the 2018 landscape is equivalent to all of the marketing tech landscapes from 2011 through 2016 added together. Indeed, in 2011 they numbered just 150.
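Those growth figures check out with a little arithmetic (the yearly totals are as reported above; the calculation itself is just a sanity check):

```python
# Marketing technology landscape totals, as quoted in the post.
solutions_2017 = 5381
solutions_2018 = 6829
solutions_2011 = 150

# Year-on-year growth 2017 -> 2018.
growth = (solutions_2018 - solutions_2017) / solutions_2017
print(f"Year-on-year growth: {growth:.0%}")  # 27%

# Implied compound annual growth from 150 tools in 2011.
years = 2018 - 2011
cagr = (solutions_2018 / solutions_2011) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.0%}")  # 73%
```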
All of them having a nibble. All of them getting a chunk.
Where does all the money go?
Some of these companies are legit.
Some of the money may even find its way back into the industry, somehow.
But once you let them in, they keep biting.
And there are so many it’s hard to see how they can be kept out.
Posted by Eaon Pritchard at Friday, July 06, 2018
Friday, June 29, 2018
adaptive
'All over the country, we want a new direction,
I said all over this land, we need a reaction,
Well there should be a youth explosion,
Inflate creation,
But something we can command,
What's the point in saying destroy?
I want a new life for everywhere,
We want a direction, all over the country,
I said I want a reaction, all over this land,
You g-got to get up and move it, a youth explosion,
Because this is your last chance,
Can't dismiss what is gone before,
But there's foundations for us to explore,
All around the world I've been looking for a new'
The 19 year old Paul Weller intuitively knew something of adaptive leadership.
Adaptive leadership is about change that enables the capacity to thrive.
Adaptive change interventions build on the past rather than jettison it.
Organizational change happens through ex-peri-ment-ation.
Adaptive leadership values diversity of views.
New adaptations have the potential of significantly displacing, re-regulating, and rearranging old structures.
Posted by Eaon Pritchard at Friday, June 29, 2018
Wednesday, June 27, 2018
successful adaptations are both conservative and progressive
'Successful adaptive changes build on the past rather than jettison it.
In biological adaptations, though DNA changes may radically expand the species’ capacity to thrive, the actual amount of DNA that changes is minuscule.
More than 98 percent of our current DNA is the same as that of a chimpanzee: it took less than a 2 percent change of our evolutionary predecessors’ genetic blueprint to give humans extraordinary range and ability.
A challenge for adaptive leadership, then, is to engage people in distinguishing what is essential to preserve from their organization’s heritage from what is expendable.
Successful adaptations are thus both conservative and progressive.
They make the best possible use of previous wisdom and know-how.
The most effective leadership anchors change in the values, competencies, and strategic orientations that should endure in the organization.'
Posted by Eaon Pritchard at Wednesday, June 27, 2018
nothing cooks without some heat
In his autobiography Miles Davis tells a story about the 1970 line-up of his touring band - this was the band that featured on the live half of the Live-Evil album - the one that featured the legendary Keith Jarrett on keys and briefly included the equally legendary Gary Bartz on sax.
Bartz had been grumbling a bit in private about Jarrett over-playing ‘busy shit’ behind his sax solos. Eventually he approached Miles and asked him to have a word with Keith.
Miles agreed.
Later, Keith Jarrett was talking with Miles about some other bits and pieces and, as he was leaving, Miles called Keith back to tell him how much Gary Bartz was loving what he was doing behind his sax solos – and could he please do even more of that kind of thing.
Cookin' with Miles.
Nothing cooks without some heat.
Posted by Eaon Pritchard at Wednesday, June 27, 2018
Labels: creativity, friction
Monday, June 04, 2018
prestige intelligence and the transcendent self
The philosopher Daniel Dennett recalls the time computer scientist Joseph Weizenbaum – a good friend of Dennett’s – harboured his own ideas and ambition about becoming a philosopher.
Weizenbaum had recounted how one evening, after ‘holding forth with high purpose and furrowed brow at the dinner table’, his young daughter had exclaimed, ‘Wow! Dad just said a ‘deepity!’
Dennett was suitably impressed – with the coinage, not necessarily his friend's ambitions in the philosophy department – and subsequently adopted ‘deepity’ as a categorising device, explaining correct usage like this:
‘A deepity is a proposition that seems both important and true— and profound— but that achieves this effect by being ambiguous.’
Pictured below is some expensively produced promotional collateral given to attendees of an ‘upfronts’ type showcase from an Australian media organization that we attended recently.
Deepity indeed. ‘Disruptive collaboration’ is a favourite, but all seem to fit Dennett's description perfectly.
Strangely out-of-place is the final card promising ‘commercial solutions’. How dull in its pragmatism and downright usefulness.
Posted by Eaon Pritchard at Monday, June 04, 2018
Labels: bullshit
Monday, May 14, 2018
how do you mend a broken heart?
As they went into their final match of the 1985/86 Scottish football season, away to 6th placed Dundee on May 3, league leaders Hearts had gone a full 27 league games without defeat and needed only to avoid losing to ensure they would be Scottish champions for the first time since 1960.
Two Albert Kidd goals for Dundee in the final 10 minutes shattered Hearts' dreams, while Celtic stuffed St Mirren 5-0 in Paisley to nick the title on the last day.
But Hearts still had the Cup to play for.
The final at Hampden against Alex Ferguson's Aberdeen was just a week away.
To try to lift the dejected players, the Hearts management brought in a top sports psychologist, who coached the squad in the week leading up to the final.
Various techniques were employed to attempt to 'erase' the disappointment of blowing the championship and prepare the team to at least lift the cup.
Fergie got wind of the activities at the Hearts training camp.
According to former Aberdeen assistant boss Willie Garner, as Fergie gathered the Aberdeen players in the dressing room before the teams walked out for the final, his final instruction was that each Aberdeen player should find an individual Hearts player in the tunnel, shake his hand and offer ‘bad luck last week’ condolences.
Thus negating any work the psychs might have done to put the bitter disappointment of losing the big prize in the final minutes of the last league game.
Aberdeen went 1-0 up in the first two minutes and added two further goals later on, destroying Hearts 3-0.
Strategy.
Identifying the critical factors in a situation, and designing the means to overcome them.
Getting upstream of the problem.
Posted by Eaon Pritchard at Monday, May 14, 2018
Wednesday, April 18, 2018
no robot apocalypse (yet)
'The Frankenstein complex' is the term coined by 20th century American author and biochemistry professor Isaac Asimov in his famous robot novels series, to describe the feeling of fear we hold that our creations will turn on us (their creators) — like the monster in Mary Shelley’s 1818 novel.
One hundred years later in 2018 we still seem worried about this idea of subordination. That we might ultimately lose the ability to control our machines.
At least part of the problem is the concern about AI alignment – generally understood as the ongoing challenge of ensuring that we produce AIs aligned with human values. This is our modern Frankenstein complex.
For example, if what has been described as an AGI (Artificial General Intelligence) ever did develop at some point in the future, would it do what we (humans) wanted it to do?
Would/could any AGI values ‘align’ with human values? What are human values, in any case?
The argument might be that AI can be said to be aligned with human values when it does what humans want, but…
Will AI do things some humans want but that other humans don’t want?
How will AI know what humans want, given that we often do what we want rather than what we ‘need’ to do?
And — given that it is a superintelligence — what will AI do if these human values conflict with its own values?
In a notorious thought experiment, AI pioneer Eliezer Yudkowsky wonders whether we can prevent the creation of superintelligent AGIs like the paperclip maximizer.
In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.
This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the amount of paperclips it makes.
So far so good, the engineers go home for the night, but by the time they’ve returned to the lab the next day, this AI has copied itself onto every computer in the world and begun reprogramming the world to give itself more power to boost its intelligence.
Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassembles the entire world into its constituent atoms to make as many paperclips as possible.
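The scenario can be caricatured in a few lines of code. This is a deliberately silly sketch (all names and numbers are invented) of a utility maximiser whose objective counts paperclips and nothing else:

```python
# A toy utility maximiser: utility is simply the paperclip count.
def paperclip_utility(state):
    return state["paperclips"]

def step(state, actions):
    # Pick whichever action yields the most paperclips. Side effects
    # are invisible to the objective function -- that's the problem.
    return max((action(state) for action in actions), key=paperclip_utility)

def make_paperclips(state):
    return {**state, "paperclips": state["paperclips"] + 1}

def convert_world_to_paperclips(state):
    # Far more paperclips, at the cost of... everything else.
    return {**state, "paperclips": state["paperclips"] + 1_000_000,
            "world_intact": False}

state = {"paperclips": 0, "world_intact": True}
state = step(state, [make_paperclips, convert_world_to_paperclips])
print(state)  # the maximiser happily disassembles the world
```

Nothing in the objective penalises breaking the world, so the ‘catastrophic’ action always wins – which is the whole alignment worry in miniature.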
Presumably this kind of scenario is what is troubling Elon Musk when he dramatically worries that ‘…with artificial intelligence we are summoning the demon.’
Musk – who, when not supervising the assembly of his AI-powered self-driving cars, can be found hanging out in his SpaceX data centre ‘Cyberdyne Systems’ (named after the fictitious company that created Skynet in the Terminator movies) – might possibly have some covert agenda in expressing his AI fears, given how deep rival tech giants Google and Facebook are in the space. Who knows?
The demon AI problem is called ‘value alignment’ because we want to ensure that its values align with ‘human values’.
Building a machine that won't eventually come back to bite us is a difficult problem – although any biting by the robots is more likely to be a result of our negligence than of the machines' malevolence.
More difficult is determining a consistent shared set of human values we all agree on — this is obviously an almost impossible problem.
There seems to be some logic to this fear, but it is deeply flawed. In Enlightenment Now, the psychologist Steven Pinker exposes the ‘logic’ in this way.
Since humans have more intelligence than animals — and AI robots of the future will have more of it than us — and we have used our powers to domesticate or exterminate less well-endowed animals (and more technologically advanced societies have enslaved or annihilated technologically primitive ones), it surely follows that any super-smart AI would do the same to us. And we will be powerless to stop it. Right?
Nope. Firstly, Pinker cautions against confusing intelligence with motivation. Even if we did invent superhuman intelligent robots, why would they want to take over the world? And secondly, knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm (and in any case big data is still finite data, whereas the universe of knowledge is infinite).
The word robot itself comes from an old Slavonic word rabota which, roughly translated, means the servitude of forced labour. Rabota was the kind of labour that serfs would have had to perform on their masters’ lands in the Middle Ages.
Rabota was adapted to 'robot', and introduced into the lexicon, in the 1920s by the Czech playwright, sci-fi novelist and journalist Karel Capek, in the title of his hit play R.U.R.: Rossumovi Univerzální Roboti (Rossum's Universal Robots).
In this futuristic drama (set circa 2000), R.U.R. is a company that mass-produces 'workers' (essentially slaves) using the latest biology, chemistry and technology.
These robots are not mechanical devices, but rather they are artificial organisms — (think Westworld) — and they are designed to perform tasks that humans would rather not.
It turns out there's an almost infinite market for this service until, naturellement, the robots take over the world. In the process, though, the formula required to create new robots is destroyed and, since the robots have killed everybody who knew how to make more of them, their own extinction looms.
But redemption is always at hand. Even for the robots.
Two robots, a ‘male’ and a ‘female’, somehow evolve the ‘human’ abilities to love and experience emotions, and — like an android Adam and Eve — set off together to make a new world.
What is true is that we are facing a near future where robots will indeed be our direct competitors in many workplaces.
As more and more employers put artificial intelligences to work, any position involving repetition or routine is at risk of extinction. In the short term, humans will almost certainly lose out on jobs like accounting and bank telling. And everyone from farm labourers and paralegals to pharmacists and media buyers is in the same boat.
In fact, any occupation that follows a predictable pattern of repetitive activities, the kind that can be replicated by machine-learning algorithms, will almost certainly bite the dust.
Already, factory workers are facing increased automation and warehouse workers are seeing robots move into pick-and-pack jobs. Even those banking on 'new economy' poster children like Uber are realising that it's not a long game: autonomous car technology means that very shortly these drivers will be surplus to requirements.
We have dealt with the impact of technological change on the world of work many times before. Two hundred years ago the vast majority of the US population worked in farming and agriculture; now it's about 2 percent. The rise of factory automation during the early part of the 20th century, and the outsourcing of manufacturing to countries like China, has meant that there is much less need for labour in Western countries.
New jobs certainly emerge as new technologies emerge replacing the old ones, although the jury is out on the value of many of these jobs.
In 1930, John Maynard Keynes predicted that by the century's end technology would have advanced sufficiently that people in western economies would work a 15-hour week. In technological terms this is entirely possible. But it didn't happen; if anything, we are working more.
In his legendary and highly amusing 2013 essay On the Phenomenon of Bullshit Jobs, David Graeber, Professor of Anthropology at the London School of Economics, says that Keynes didn’t factor into his prediction the massive rise of consumerism. ‘Given the choice between less hours and more toys and pleasures, we’ve collectively chosen the latter.’
Graeber argues that to fill up the time, and keep consumerism rolling, many jobs had to be created that are, effectively, pointless. ‘Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.’ He calls these bullshit jobs.
The productive jobs have been automated away, but rather than creating a massive reduction in working hours to free the world's population to pursue their own meaningful activities (as Keynes imagined), we have seen the creation of new administrative industries without any obvious social value, often experienced as purposeless and empty by their workers.
Graeber points out that those doing these bullshit jobs still 'work 40 or 50 hour weeks on paper', but in reality their job often only requires working the 15 hours Keynes predicted; the rest of their time is spent in pointless 'training', attending motivational seminars, and dicking around on Facebook.
To be fair, robots are unrivalled at solving problems of logic, something humans struggle with.
But robots' ability to understand human behaviour and make inferences about how the world works is still pretty limited.
Robots, AIs and algorithms can be said to ‘know’ things because their byte-addressable memories contain information. However, there is no evidence to suggest that they know they know these things, or that they can reflect on their states of ‘mind’.
Intentionality is the term used by philosophers to refer to the state of having a state of mind — the ability to experience things like knowing, believing, thinking, wanting and understanding.
Think about it this way: third-order intentionality is required for even the simplest of human exchanges (where someone communicates to someone else that someone else did something), and four levels are required to elevate this to the level of narrative ('the writer wants the reader to believe that character A thinks that character B intends to do something').
Most mammals (almost certainly all primates) know things about the world; this is first-order intentionality. Reflecting on that state of mind, at least in a basic way, knowing that you know, takes it to second order.
Humans rarely engage in more than fourth-order intentionality in daily life, and only the smartest can operate at sixth order without getting into a tangle ('Person 1 knows that Person 2 believes that Person 3 thinks that Person 4 wants Person 5 to suppose that Person 6 intends to do something').
For some perspective, and in contrast, robots, algorithms and black boxes are zero-order intentional machines. It’s still just numbers and math.
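The nesting of these orders can be sketched as a toy recursion (this encoding, and the helper name `order`, are invented here purely for illustration; they come from nowhere in the literature):

```python
def order(statement):
    """Count the order of intentionality of a tuple-encoded statement.

    A statement is either a plain string (a bare fact, order zero,
    which is all a black-box algorithm traffics in) or a tuple
    (agent, mental_state_verb, inner_statement). Each mental-state
    verb wrapped around another statement adds one level.
    """
    if isinstance(statement, str):
        return 0
    _agent, _verb, inner = statement
    return 1 + order(inner)

# 'Person 1 knows that Person 2 believes that Person 3 wants to leave'
s = ("P1", "knows", ("P2", "believes", ("P3", "wants", "to leave")))
print(order(s))  # 3
```

The point of the sketch is only that each extra level is another layer of recursion over a state of mind, and a zero-order machine never leaves the base case.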
The next big leap for AIs would be the acquisition of first- or second-order intentionality; only then might the robots just about start to understand that they are not human. The good news is that for the rest of this century we're probably safe enough from any robot apocalypse.
The kind of roles requiring intellectual capital, creativity, human understanding and applied third/fourth level intentionality are always going to be crucial. And hairdressers.
And so the viability of 'creative industries' like entertainment, media and advertising holds strong: these are businesses of intellectual capital, decision-making, moral understanding and intentionality.
For those of us in the advertising and marketing business it should be stating the obvious that we compete largely on the strength of our capabilities in those areas, and on the people in our organisations who are supposed to think for a living.
By that I mean all of us.
For those who can still think, any robot apocalypse is probably the least of our worries. But take a look inside the operations of many advertising agencies and despair at how few of their people spend time on critical thinking and creativity.
Even more disappointing is that we'd rather debate whether creativity can be 'learned' by a robot than focus on speeding up the automation of the multitude of mundane activities, in order to get all of our minds directed at fourth-, fifth- and (maybe) sixth-order intentionality: the things that robots are decades away from being capable of, and that we can do today, if we could be bothered.
By avoiding critical thinking, people are able to simply get shit done and are rewarded for doing so.
Whilst there are often many smart people around, terms like disruption, innovation and creativity are liberally spread throughout agency creds PowerPoint decks, as are 'bullshit' job titles like Chief Client Solutions Officer, Customer Paradigm Orchestrator or Full-stack Engineer. These grandiose labels and titles probably serve more as elaborate self-deception devices, convincing their owners that they have some sort of purpose.
The point being that, far from being at the forefront of creativity, most agencies direct most of their people to do pointless work, giving disproportionate attention to mundane zero-order intentionality tasks that could and should be automated.
Will robots take our jobs away? Here’s hoping.
Perhaps the AI revolution is really the big opportunity to start over: to hand the bullshit jobs, the purposeless and empty labour we've created to fill up dead space, over to the machines, and give ourselves another bite at the Keynes cherry, liberated at last to be more creative and really put to use our miraculous innate abilities for empathy, intentionality and high-level abstract reasoning.
To be more human.
Because, as evolutionary theory has taught us, we humans are unusual among species. We haven't evolved adaptations like huge fangs, inch-thick armour plating or the ability to move at super speed under our own steam.
All of the big adaptations have happened inside our heads, in these huge brains we carry around, built for creativity and sussing out how the world works and how other humans work.
That’s the real work. Not the bullshit jobs.
In The Inevitable, Kevin Kelly agrees that the human jobs of the future will be far less about technical skills and far more about these human skills.
He says that the ‘bots are the ones that are going to be doing the smart stuff but ‘our job will be making more jobs for the robots’.
And that job will never be done.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Eaon’s first book ‘Where Did It All Go Wrong? Adventures at the Dunning-Kruger Peak Of Advertising’ is out now on Amazon worldwide and from other discerning booksellers.
This article is an adapted excerpt from his second book ‘What’s The Point of Anything? More Tales from the Dunning-Kruger Peak’ due at the end of 2018.
Posted by Eaon Pritchard at Wednesday, April 18, 2018
Labels: advertising, agencies, agency life, artificial intelligence, robots
Tuesday, April 10, 2018
george carlin
Posted by Eaon Pritchard at Tuesday, April 10, 2018