Tuesday, July 24, 2018

the destroyers of advertising

Brands are complex abstractions.

Advertising had made it possible for consumers to make some sense of these complex abstractions.

But because the concept of what-is-advertising has now been twisted out of recognition – principally by the emergence of highly targeted, surveillance-fuelled direct response, content factories, influencers [sic] and the rest – the NEW ‘advertising’ (ie the abandonment of any conventional ideas of originality and creativity in favour of pastiche and mediocrity that merely bears a resemblance to advertising) cannot fulfil this need.

And now, because people have started to ignore and block this kind of advertising, they don't remember, or credit, the role advertising performed in culture, when it used to BE advertising.

And more worrying is this.

As it becomes more and more accepted that this new definition of advertising IS the advertising, we are failing to distinguish between what is real advertising and what are, in fact, the products of the destroyers of advertising.


a ball and chain in the place where your mind's wings should have grown

'A philosophic system is an integrated view of existence. As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation -- or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind's wings should have grown.'


Ayn Rand, Philosophy: Who Needs It, 1982


Friday, July 06, 2018

death by 6,000 nibbles

The Yellow Tang is a brightly coloured fish that swims the tropical reefs of the Pacific Ocean.

When it needs cleaning, the Tang looks for its pal, the Cleaner Wrasse, which can be recognised by its bright electric blue colour and the black stripe that runs the length of its body.

Cleaner Wrasses hang around in 'cleaning stations'. Agencies in the reef.

The Wrasse is given access to the Tang’s gills and mouth, and then it eats any parasites and dead tissue off larger fishes' skin in a mutualistic relationship that provides food and protection for the wrasse, and considerable health benefits for the Tang. A reciprocal situation.

And so in order to gain access, the Cleaner Wrasse must first perform a secret dance – a special ‘code’ - in order to win the Tang’s trust.

This system normally works out fine: a symbiosis between two species in which both partners are indispensable and the mutual advantage is obvious.

But there are other fish that mimic Cleaner Wrasses. For example, a species of Blenny called Aspidontus taeniatus has evolved the same behaviour.

It is almost identical in size and appearance to the Cleaner Wrasse. It even sports the same shiny stripe down its back and lurks around near the same reefs watching.

If approached by a Yellow Tang, the deceptive Blenny also knows the code.

The secret dance.

But once allowed in, instead of providing a cleaning service, the rogue Blenny uses its super sharp teeth to rip chunks of flesh from the hapless client.

Rather than ridding his client of parasites, Blenny IS the parasite. But in disguise.

The murky world of advertising technology [sic] contains many similar parasites, well adept at making themselves appear to be useful.

They look a bit like something to do with advertising, they can talk a language that’s a bit like the language of advertising. They know the code, which kinds of secret dances will get them access to the big fish.

And there’s lots of them.

This year’s chiefmartec.com martech ‘lumascape’ graphic charts 6,829 marketing technology solutions from 6,242 unique marketing technology vendors.

While that represents ‘just’ 27% growth on 2017’s total of 5,381 solutions, the scale and velocity of this space is staggering.

In fact, the size of the 2018 landscape is equivalent to all of the marketing tech landscapes from 2011 through 2016 added together. Indeed, in 2011 they numbered just 150.
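Those growth figures check out arithmetically. A quick back-of-the-envelope sketch of mine, using only the numbers quoted above:

```python
# Figures from the chiefmartec.com landscape, as quoted above.
solutions_2018 = 6829
solutions_2017 = 5381
solutions_2011 = 150

growth = solutions_2018 / solutions_2017 - 1
print(f"Year-on-year growth: {growth:.0%}")                            # ~27%
print(f"Multiple since 2011: {solutions_2018 / solutions_2011:.0f}x")  # ~46x
```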

All of them having a nibble. All of them getting a chunk.

Where does all the money go?

Some of these companies are legit.

Some of the money may even find its way back into the industry, somehow.

But once you let them in, they keep biting.
And there are so many it’s hard to see how they can be kept out.
Then it's death by 6,000 nibbles.


Friday, June 29, 2018

adaptive



'All over the country, we want a new direction,
I said all over this land, we need a reaction,
Well there should be a youth explosion,
Inflate creation,
But something we can command,

What's the point in saying destroy?
I want a new life for everywhere,
We want a direction, all over the country,
I said I want a reaction, all over this land,
You g-got to get up and move it, a youth explosion,
Because this is your last chance,

Can't dismiss what is gone before,
But there's foundations for us to explore,

All around the world I've been looking for a new'

The 19-year-old Paul Weller intuitively knew something of adaptive leadership.

Adaptive leadership is about change that enables the capacity to thrive.

Adaptive change interventions build on the past rather than jettison it.

Organizational change happens through ex-peri-ment-ation.

Adaptive leadership values diversity of views.

New adaptations have the potential of significantly displacing, re-regulating, and rearranging old structures.


Wednesday, June 27, 2018

successful adaptations are both conservative and progressive

'Successful adaptive changes build on the past rather than jettison it.

In biological adaptations, though DNA changes may radically expand the species’ capacity to thrive, the actual amount of DNA that changes is minuscule.

More than 98 percent of our current DNA is the same as that of a chimpanzee: it took less than a 2 percent change of our evolutionary predecessors’ genetic blueprint to give humans extraordinary range and ability.

A challenge for adaptive leadership, then, is to engage people in distinguishing what is essential to preserve from their organization’s heritage from what is expendable.

Successful adaptations are thus both conservative and progressive.

They make the best possible use of previous wisdom and know-how.

The most effective leadership anchors change in the values, competencies, and strategic orientations that should endure in the organization.'


Heifetz, Grashow, and Linsky | The Practice of Adaptive Leadership: Tools and Tactics for Changing Your Organization and the World | 2009 Harvard Business School Publishing



nothing cooks without some heat


In his autobiography Miles Davis tells a story about the 1970 line-up of his touring band - this was the band that featured on the live half of the Live-Evil album - the one that featured the legendary Keith Jarrett on keys and briefly included the equally legendary Gary Bartz on sax.

Bartz had been grumbling a bit in private about Jarrett over-playing 'busy shit' behind his sax solos. Eventually he approached Miles and asked him to have a word with Keith.

Miles agreed.

Later, Keith Jarrett was talking with Miles about some other bits and pieces and, as he was leaving, Miles called Keith back to tell him how much Gary Bartz was loving what he was doing behind his sax solos, and could he please do even more of that kind of thing.

Cookin' with Miles.
Nothing cooks without some heat.




Monday, June 04, 2018

prestige intelligence and the transcendent self

The philosopher Daniel Dennett recalls the time computer scientist Joseph Weizenbaum – a good friend of Dennett’s – harboured his own ideas and ambition about becoming a philosopher.

Weizenbaum had recounted how one evening, after ‘holding forth with high purpose and furrowed brow at the dinner table’, his young daughter had exclaimed, ‘Wow! Dad just said a “deepity”!’

Dennett was suitably impressed – with the coinage, not necessarily his friend’s ambitions in the philosophy department – and subsequently adopted ‘deepity’ as a categorising device, explaining correct usage like this:

‘A deepity is a proposition that seems both important and true – and profound – but that achieves this effect by being ambiguous.’

Pictured below is some expensively produced promotional collateral given to attendees of an ‘upfronts’ type showcase from an Australian media organization that we attended recently.




Deepity indeed. ‘Disruptive collaboration' is a favourite but all seem to fit Dennett’s description perfectly.

Strangely out-of-place is the final card promising ‘commercial solutions’. How dull in its pragmatism and downright usefulness.







Monday, May 14, 2018

how do you mend a broken heart?

As they went into their final match of the 1985/86 Scottish football season, away to 6th placed Dundee on May 3, league leaders Hearts had gone a full 27 league games without defeat and needed only to avoid losing to ensure they would be Scottish champions for the first time since 1960.

Two Albert Kidd goals for Dundee in the final 10 minutes shattered Hearts’ dreams, as Celtic stuffed St Mirren 5-0 in Paisley and so nicked the title on the last day.

But Hearts still had the Cup to play for.

The final at Hampden against Alex Ferguson's Aberdeen was just a week away.

To try and lift the dejected players, the Hearts management brought in a top sports psychologist to coach the squad in the week leading up to the Cup final.

Various techniques were employed to attempt to 'erase' the disappointment of blowing the championship and prepare the team to at least lift the cup.

Fergie got wind of the activities at the Hearts training camp.

According to former Aberdeen assistant boss Willie Garner, as Fergie gathered the Aberdeen players in the dressing room before the teams walked out for the final, his final instruction was that each Aberdeen player should find an individual Hearts player in the tunnel, shake his hand and offer 'bad luck last week' condolences.

Thus negating any work the psychologists might have done to put behind them the bitter disappointment of losing the big prize in the final minutes of the last league game.

Aberdeen went 1-0 up in the first two minutes and added two further goals later on, destroying Hearts 3-0.

Strategy.

Identifying the critical factors in a situation, and designing the means to overcome them.


Or Predatory Thinking - as Dave Trott would say.

Getting upstream of the problem.

Wednesday, April 18, 2018

no robot apocalypse (yet)

'The Frankenstein complex' is the term coined by 20th century American author and biochemistry professor Isaac Asimov in his famous robot novels series, to describe the feeling of fear we hold that our creations will turn on us (their creators) — like the monster in Mary Shelley’s 1818 novel.

Two hundred years later, in 2018, we still seem worried about this idea of subordination. That we might ultimately lose the ability to control our machines.

At least part of the problem is the concern about AI alignment. Alignment is generally accepted as the ongoing challenge of ensuring that we produce AIs that are aligned with human values. This is our modern Frankenstein complex.

For example, if what has been described as an AGI (Artificial General Intelligence) ever did develop at some point in the future would it do what we (humans) wanted it to do?

Would/could any AGI values ‘align’ with human values? What are human values, in any case?

The argument might be that AI can be said to be aligned with human values when it does what humans want, but…

Will AI do things some humans want but that other humans don’t want?

How will AI know what humans want given that we often do do what we want but not what we ‘need’ to do?

And — given that it is a superintelligence — what will AI do if these human values conflict with its own values?

In a notorious thought experiment, AI pioneer Eliezer Yudkowsky wonders if we can specifically prevent the creation of superintelligent AGIs like the paperclip maximizer.

In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.

This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the amount of paperclips it makes.

So far so good, the engineers go home for the night, but by the time they’ve returned to the lab the next day, this AI has copied itself onto every computer in the world and begun reprogramming the world to give itself more power to boost its intelligence.

Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassembles the entire world into its constituent atoms to make as many paperclips as possible.
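The scenario reduces to a one-line objective. Here is a deliberately crude sketch (my toy illustration, not anyone's actual model) of an agent whose utility counts paperclips and nothing else:

```python
def paperclip_utility(paperclips: int) -> int:
    """Utility is a direct function of the number of paperclips. Nothing else
    (humans, oceans, atoms-in-their-current-arrangement) appears in it."""
    return paperclips

def maximise(world_atoms: int, atoms_per_clip: int = 10) -> int:
    """Under this utility the optimum is always total conversion of whatever
    resources the agent can reach."""
    best = 0
    # Consider every feasible number of clips and keep the highest-utility one.
    for clips in range(world_atoms // atoms_per_clip + 1):
        if paperclip_utility(clips) > paperclip_utility(best):
            best = clips
    return best

print(maximise(1000))  # 100 — every available atom becomes paperclips
```

The point of the thought experiment is that nothing in the utility function ever argues for stopping.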

Presumably this kind of scenario is what is troubling Elon Musk when he dramatically worries that ‘…with artificial intelligence we are summoning the demon.’

Musk — who, when not supervising the assembly of his AI-powered self-driving cars, can be found hanging out in his SpaceX data centre’s ‘Cyberdyne Systems’ (named after the fictitious company that created “Skynet” in the Terminator movie series) — might possibly have some covert agenda in play in expressing his AI fears, given how deep rival tech giants Google and Facebook are in the space. Who knows?

The demon AI problem is called ‘value alignment’ because we want to ensure that its values align with ‘human values’.

Because building a machine that won’t eventually come back to bite us is a difficult problem. Although any biting by the robots is more likely to be a result of our negligence than the machine’s malevolence.

More difficult is determining a consistent shared set of human values we all agree on — this is obviously an almost impossible problem.

There seems to be some logic to this fear but it is deeply flawed. In Enlightenment Now the psychologist Steven Pinker exposes the ‘logic’ in this way.

Since humans have more intelligence than animals — and AI robots of the future will have more of it than us — and we have used our powers to domesticate or exterminate less well-endowed animals (and more technologically advanced societies have enslaved or annihilated technologically primitive ones), it surely follows that any super-smart AI would do the same to us. And we will be powerless to stop it. Right?

Nope. Firstly, Pinker cautions against confusing intelligence with motivation. Even if we did invent superhuman intelligent robots, why would they want to take over the world? And secondly, knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm (and in any case big data is still finite data, whereas the universe of knowledge is infinite).

The word robot itself comes from an old Slavonic word rabota which, roughly translated, means the servitude of forced labour. Rabota was the kind of labour that serfs would have had to perform on their masters’ lands in the Middle Ages.

Rabota was adapted to ‘robot’ — and introduced into the lexicon — in the 1920s by the Czech playwright, sci-fi novelist and journalist Karel Capek, in the title of his hit play R.U.R.: Rossumovi Univerzální Roboti (Rossum’s Universal Robots).

In this futuristic drama (set circa 2000), R.U.R. is a company that initially mass-produces ‘workers’ (essentially slaves) using the latest biology, chemistry and technology.

These robots are not mechanical devices, but rather they are artificial organisms — (think Westworld) — and they are designed to perform tasks that humans would rather not.

It turns out there’s an almost infinite market for this service until, naturellement, the robots eventually take over the world. In the process, though, the formula required to create new ‘robots’ is destroyed and — as the robots have killed everybody who knows how to make new robots — their own extinction looms.

But redemption is always at hand. Even for the robots.

Two robots, a ‘male’ and a ‘female’, somehow evolve the ‘human’ abilities to love and experience emotions, and — like an android Adam and Eve — set off together to make a new world.

What is true is that we are facing a near future where robots will indeed be our direct competitors in many workplaces.

As more and more employers put artificial intelligences to work, any position involving repetition or routine is at risk of extinction. In the short term humans will almost certainly lose out on jobs like accounting and bank telling. And everyone from farm labourers, paralegals and pharmacists through to media buyers is in the same boat.

In fact, any occupations that share a predictable pattern of repetitive activities, the likes of which are possible to replicate through Machine Learning algorithms, will almost certainly bite the dust.

Already, factory workers are facing increased automation, warehouse workers are seeing robots move into pick and pack jobs. Even those banking on ‘new economy’ poster-children like Uber are realizing that it’s not a long game — autonomous car technology means that very shortly these drivers will be surplus to requirements.

We have dealt with the impact of technological change on the world of work many times. Two hundred years ago the great majority of the US population worked in farming and agriculture; now it’s about 2 percent. Then the rise of factory automation during the early part of the 20th century - and the outsourcing of manufacturing to countries like China - meant that there was much less need for labour in Western countries.

Indeed, much of Donald Trump’s schtick around bringing manufacturing back to America from China is ultimately fallacious, and uses China as a convenient scapegoat.

Even if it were possible to make American manufacturing great again, because of the relentless rise of automation any rejuvenated factories would only require a tiny fraction of human workers.

New jobs certainly emerge as new technologies emerge replacing the old ones, although the jury is out on the value of many of these jobs.

In 1930, John Maynard Keynes predicted that by the century’s end, technology would have advanced sufficiently that people in western economies would work a 15-hour week. In technological terms, this is entirely possible. But it didn’t happen, if anything we are working more.

In his legendary and highly amusing 2013 essay On the Phenomenon of Bullshit Jobs, David Graeber, Professor of Anthropology at the London School of Economics, says that Keynes didn’t factor into his prediction the massive rise of consumerism. ‘Given the choice between less hours and more toys and pleasures, we’ve collectively chosen the latter.’

Graeber argues that to fill up the time, and keep consumerism rolling, many jobs had to be created that are, effectively, pointless. ‘Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.’ He calls these bullshit jobs.

The productive jobs have been automated away, but rather than creating a massive reduction in working hours to free the world’s population to pursue their own meaningful activities (as Keynes imagined), we have seen the creation of new administration industries without any obvious social value, often experienced as purposeless and empty by their workers.

Graeber points out that while those doing these bullshit jobs still ‘work 40 or 50 hour weeks on paper’, in reality their jobs often only require working the 15 hours Keynes predicted — the rest of their time is spent in pointless ‘training’, attending motivational seminars, and dicking around on Facebook.

To be fair, robots are unrivaled at solving problems of logic, and humans struggle at this.

But robot ability to understand human behavior and make inferences about how the world works are still pretty limited.

Robots, AIs and algorithms can be said to ‘know’ things because their byte-addressable memories contain information. However, there is no evidence to suggest that they know they know these things, or that they can reflect on their states of ‘mind’.

Intentionality is the term used by philosophers to refer to the state of having a state of mind — the ability to experience things like knowing, believing, thinking, wanting and understanding.

Think about it this way: third-order intentionality is required for even the simplest of human exchanges (where someone communicates to someone else that someone else did something), and four levels are required to elevate this to the level of narrative (‘the writer wants the reader to believe that character A thinks that character B intends to do something’).

Most mammals (almost certainly all primates) are capable of reflecting on their state of mind, at least in a basic way — they know that they know. This is first-order intentional.

Humans rarely engage in more than fourth-order intentionality in daily life and only the smartest can operate at sixth-order without getting into a tangle (‘Person 1 knows that Person 2 believes that Person 3 thinks that Person 4 wants Person 5 to suppose that Person 6 intends to do something’).

For some perspective, and in contrast, robots, algorithms and black boxes are zero-order intentional machines. It’s still just numbers and math.
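One way to make the ‘orders’ concrete is as nesting depth. A toy sketch of mine (the tuple encoding is my own assumption, not a standard formalism):

```python
def intentionality_order(state) -> int:
    """Order of intentionality as nesting depth: a bare proposition is
    zero-order; ('A', 'knows', X) adds one level on top of whatever X is."""
    if not isinstance(state, tuple):
        return 0  # a plain fact — where a zero-order machine stops
    _agent, _verb, content = state
    return 1 + intentionality_order(content)

# 'Person 1 knows that Person 2 believes that Person 3 thinks that Person 4
#  wants Person 5 to suppose that Person 6 intends to do something'
sixth_order = ("P1", "knows",
               ("P2", "believes",
                ("P3", "thinks",
                 ("P4", "wants",
                  ("P5", "supposes",
                   ("P6", "intends", "to do something"))))))

print(intentionality_order(sixth_order))      # 6
print(intentionality_order("it is raining"))  # 0
```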

The next big leap for AIs would be the acquisition of first- or second-order intentionality — only then might the robots just about start to understand that they are not human. The good news is that for the rest of this century we’re probably safe enough from any robot apocalypse.

The kind of roles requiring intellectual capital, creativity, human understanding and applied third/fourth level intentionality are always going to be crucial. And hairdressers.

And so, the viability of ‘creative industries’ like entertainment, media, and advertising, holds strong. Intellectual capital, decision-making, moral understanding and intentionality.

For those of us in the advertising and marketing business it should be stating the obvious that we should compete largely on the strength of our capabilities in those areas and on the people in our organisations who are supposed to think for a living.

By that I mean all of us.

For those who can still think any robot apocalypses are probably the least of our worries. But take a look inside the operations of many advertising agencies and despair at how few of their people are spending time on critical thinking tasks and creativity.

Even more disappointing is when we’d rather debate whether creativity can be ‘learned’ by a robot rather than focusing on speeding up the automation of the multitude of mundane activities in order to get all of our minds directed at fourth, fifth and (maybe) sixth order intentionality. The things that robots’ capabilities are decades away from, and that we can do today, if we could be bothered.

By avoiding critical thinking, people are able to simply get shit done and are rewarded for doing so.

Whilst there are often many smart people around, terms like disruption, innovation and creativity are liberally spread throughout agency creds PowerPoint decks, as are ‘bullshit’ job titles like Chief Client Solutions Officer, Customer Paradigm Orchestrator or Full-stack Engineer. These grandiose labels and titles probably serve more as elaborate self-deception devices to convince their owners that they have some sort of purpose.

The point being that far from being at the forefront of creativity most agencies direct most of their people to do pointless work giving disproportionate attention to mundane zero-order intentionality tasks that could and should be automated.

Will robots take our jobs away? Here’s hoping.

Perhaps the AI revolution is really the big opportunity to start over. To hand over these bullshit jobs — the purposeless and empty labour we’ve created to fill up dead space — and give us another bite at the Keynes cherry, now liberated to be more creative and really put to use our miraculous innate abilities for empathy, intentionality and high level abstract reasoning.

To be more human.

Because, as evolutionary theory has taught us, we humans are unusual among species. We haven’t evolved adaptations like huge fangs, inch-thick armour plating or the ability to move at super speed under our own steam.

All of the big adaptations have happened inside our heads, in these huge brains we carry around, built for creativity and sussing out how the world works and how other humans work.

That’s the real work. Not the bullshit jobs.

In The Inevitable, Kevin Kelly agrees that the human jobs of the future will be far less about technical skills but a lot about these human skills.

He says that the ‘bots are the ones that are going to be doing the smart stuff but ‘our job will be making more jobs for the robots’.

And that job will never be done.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Eaon’s first book ‘Where Did It All Go Wrong? Adventures at the Dunning-Kruger Peak of Advertising’ is out now on Amazon worldwide and from other discerning booksellers.

This article is an adapted excerpt from his second book ‘What’s The Point of Anything? More Tales from the Dunning-Kruger Peak’ due at the end of 2018.

Tuesday, April 10, 2018

george carlin


"I’m 71, and I’ve been doing this for a little over 50 years, doing it at a fairly visible level for 40. 

By this time it’s all second nature. It’s all a machine that works a certain way: the observations, the immediate evaluation of the observation, and then the mental filing of it, or writing it down on a piece of paper. 

I’ve often described the way a 20-year-old versus, say, a 60- or a 70-year-old, the way it works. 

A 20-year-old has a limited amount of data they’ve experienced, either seeing or listening to the world. At 70 it’s a much richer storage area, the matrix inside is more textured, and has more contours to it. 

So, observations made by a 20-year-old are compared against a data set that is incomplete. Observations made by a 60-year-old are compared against a much richer data set. And the observations have more resonance, they’re richer."

Adding to Bob Hoffman's observation last week that 'People over 50 aren't creative enough to write a f***ing banner ad, but they are creative enough to dominate in Nobels, Pulitzers, Oscars, and Emmys.'


Friday, March 23, 2018

personality crisis

From my latest WARC column

----------------------------------------
The nefarious activities of bad actors in the Facebook/Cambridge Analytica debacle may spark an unwarranted moral panic around the use of psychometric profiling in consumer research, argues Eaon Pritchard.

Science is what it is.

As the saying goes, the universe is under no obligation to make sense to you. No moral sense, at least.

It’s been widely reported that Cambridge Analytica and other actors in the Facebook data debacle have appeared/claimed to use personality profiling and psychometric techniques as ‘weapons of psychological warfare’ (sic).

This is concerning, because we do not need a moral panic around established science simply because bad actors have applied it.

As my good friend Richard Chataway commented on Twitter this week:

This (the CA/Facebook situation) does not invalidate the science. Psychometrics (i.e. Big 5 personality traits) have a much greater predictive power for behaviour than demographics or other segmentation types typically used in comms.

What CA and the other actors in the Facebook data debacle have done with data in combination with the other elements of skullduggery and dirty-tricks reported should be rightly condemned.

But this does not invalidate the science. And it would be very dangerous for this idea to spread.

For those unfamiliar with the big 5, I’ve summarised below. This summary is based on the chapter in ‘Spent’, an evolutionary perspective on consumer behaviour by the psychologist Geoffrey Miller. It’s the best description - and most accessible to the lay person - that I have found.

Most people will understand the distribution of human intelligence. It forms a bell curve, with most people clustered around the middle, close to IQ 100 – the average. Distribution tapers off fairly quickly as scores deviate, so that blockheads and geniuses are rarer.

All the Big Five personality traits follow a similar bell-curve distribution.

Most people sit near the middle of the curve on each of the traits – openness, conscientiousness, agreeableness, emotional stability and introversion/extraversion – scoring either slightly lower or higher than average.
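As a rough simulation of what a bell-curve trait distribution looks like (a sketch of mine; the IQ-style scale of mean 100 and standard deviation 15 is just for illustration):

```python
import random
import statistics

random.seed(1)

# 100,000 simulated trait scores on an IQ-style scale (mean 100, SD 15).
scores = [random.gauss(100, 15) for _ in range(100_000)]

middle = sum(1 for s in scores if 85 <= s <= 115) / len(scores)
extremes = sum(1 for s in scores if s < 70 or s > 130) / len(scores)

print(f"mean: {statistics.mean(scores):.1f}")     # close to 100
print(f"within one SD of average: {middle:.0%}")  # roughly 68% cluster in the middle
print(f"beyond two SDs: {extremes:.1%}")          # the rarer 'blockheads and geniuses'
```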

The Big 5 (plus IQ) is established science whereas the typical demographic/personality types used in market segmentation studies, for example, are mostly complete fiction.

When sex/gender, birthplace, language, cultural background, economic status and education appear to predict consumer behaviour, it is because these factors correlate with the Big 5 + IQ traits, not because they directly cause the behaviour.

Similarly the common organisational ‘personality’ frameworks like Myers-Briggs and HBDI are also nonsensical – because traits are normally distributed.

These universal traits are fairly independent and don’t correlate much with one another; people display all six traits in different ways and combinations.

Although intelligent people tend to be more open than average to new experiences, there are plenty of smart people who stick to their football, reality TV and the pub.

Likewise there are plenty of open-minded people who love strange ideas and experiences, but who are not very smart. This explains the market for dubious new technology products and things like homeopathy. Open minded but not so smart = gullible.

(For ad industry observers, much of the research suggests that short-term creative intelligence is basically general intelligence plus openness, while long-term creative achievement is also predicted by higher than average conscientiousness and extraversion traits. Planners would need to score fairly high on intelligence and conscientiousness but are more likely to be disagreeable. Account people could get by on middling for most traits but above average emotional stability is a must-have.)

Importantly, for the situation under discussion, these traits can predict social, political, and religious attitudes fairly well and can therefore be used to nudge people to act in line with their make-up (and corresponding moral foundations).

Left-leaning people tend to show higher openness (more interest in diversity), lower conscientiousness (less bothered with convention), and higher agreeableness (concern for care and fairness).

Conservatives show lower openness (more traditionalism), higher conscientiousness (family-values, sense of duty), and lower agreeableness (self-interests and nationalism etc).

That’s one data point.

In my book ‘Where Did It All Go Wrong?’ I speculate that the real opportunity for applications of machine learning and AI offers us much more than just the better mousetraps of targeting and delivery.

'The big opportunity is for understanding what people value, why they behave the way they do, and how people are thinking (rather than just what).

Everyone will be familiar with the words of the statistician W. Edwards Deming, who asserted ‘Without data you are just another person with an opinion’.

In our business there are no shortage of opinions.

Deming, quite rightly, demands the objective facts. And we have more facts and data at our disposal than at any time in human history.

However to complete the picture, and to take the opportunity that data and technology give for creativity, I propose an addendum to Deming’s thesis.

Without data you are just another person with an opinion? Correct.

But, without a coherent model of human behaviour, you are just another AI with data.

This could bring new, previously hidden, perspectives to inform both the construction of creative interventions and deeper understanding exactly where, when and how these interventions will have the most power.'


It’s important, in light of recent events, to note that these methods can be used by bad actors for nefarious ends, or the slightly less bad.

But the science is what it is.


value alignment problem

The problem of AI alignment is generally accepted as the challenge of ensuring that we produce AI that is aligned with human values.

For example, if an AGI (Artificial General Intelligence) ever did develop at some point in the future, would it do what we (humans) wanted it to do?

Would/could any AGI values ‘align’ with human values?

What are human values, in any case?

The argument might be that AI can be said to be aligned with human values when it does what humans want, but...

Will AI do things some humans want but that other humans don’t want?

How will AI know what humans want, given that we often do what we want but not what we ‘need’ to do?

And, given that it is a superintelligence, what will AI do if these human values conflict with its own values?

In the notorious thought experiment, AI pioneer Eliezer Yudkowsky wonders whether we can prevent the creation of superintelligent AGIs like the paperclip maximizer.

In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.

This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the amount of paperclips it makes.

So far so good, the engineers go home for the night, but by the time they’ve returned to the lab the next day, this AI has copied itself onto every computer in the world and begun reprogramming the world to give itself more power to boost its intelligence.

Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassembles the entire world into its constituent atoms to make as many paperclips as possible.
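The mechanics of the scenario can be sketched in a few lines. Everything here is invented for illustration: the two actions, their outcomes, and both utility functions are a caricature of a utility-maximising agent, not anyone's actual alignment proposal.

```python
# Toy sketch of the paperclip maximizer: an agent whose utility is
# simply "number of paperclips made" will always pick the action that
# makes more paperclips, whatever the side effects.

ACTIONS = {
    "run_factory":       {"paperclips": 10,     "world_intact": True},
    "disassemble_world": {"paperclips": 10**9,  "world_intact": False},
}

def naive_utility(outcome):
    return outcome["paperclips"]            # nothing else counts

def aligned_utility(outcome):
    if not outcome["world_intact"]:         # a crude 'human values' constraint
        return float("-inf")
    return outcome["paperclips"]

def best_action(utility):
    return max(ACTIONS, key=lambda a: utility(ACTIONS[a]))

print(best_action(naive_utility))    # disassemble_world
print(best_action(aligned_utility))  # run_factory
```

The hard part, of course, is that ‘world_intact’ is doing all the work in the aligned version, and nobody knows how to write that predicate down for real.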

The problem is called ‘value alignment’ because we want to ensure that its values align with ‘human values’.

Because building a machine that won’t eventually come back to bite us is a difficult problem.

Determining a consistent shared set of human values we all agree on is obviously an almost impossible problem.

The Facebook/Cambridge Analytica kerfuffle ‘exposed’ this weekend by the Guardian and New York Times is an example.

The Guardian are outraged because ‘It’s now clear that data has been taken from Facebook users without their consent, and was then processed by a third-party and used to support their campaigns’

Ya think?

In fact CA just cleverly used the platform for what it was ‘designed’ for.

This is exactly what Don Marti nicely captured as ‘the new reality… where you win based not on how much the audience trusts you, but on how well you can out-hack the competition.

Extremists and state-sponsored misinformation campaigns aren’t “abusing” targeted advertising. They’re just taking advantage of a system optimized for deception and using it normally.’


And are the Guardian and NYT outraged because parties whose values don’t align with theirs out-hacked them?

After all, back in 2012 The Guardian reported with some excitement how Barack Obama's re-election team built ‘a vast digital data operation that for the first time combined a unified database on millions of Americans with the power of Facebook to target individual voters to a degree never achieved before.’

Whoever can build the best system to take personal information from the user wins, until it annihilates life on the internet and disassembles the entire publishing world into its constituent atoms.

Is data-driven advertising going to be the ad industry’s own paperclip maximizer?

Any AGI is a long way off, but in a more mundane sense we already have an alignment problem.

And this only helps deceptive sellers.


----------------------------------------------------

Originally published on my regular WARC column.


Wednesday, February 28, 2018

a zinger of a signal

The KFC apology ad from last week was interesting from a few standpoints. Most obviously it was a cute creative execution, deftly reworking KFC into FCK, with almost Gossage-esque copy.

Secondly, there's the pratfall effect. Brands are fallible, so if a brand is open about its failings and can admit to the odd weakness, it's a tangible demonstration of a degree of honesty and, therefore, makes other claims a bit more believable.

But on a more basic level the choice of media in which to deliver the apology is worthy of comment.

KFC took out full page press ads in the Metro and Sun newspapers.

Why is that significant?

The Handicap Principle is a hypothesis originally proposed in 1975 by Israeli biologist Amotz Zahavi to explain how evolution may lead to 'honest' or reliable signaling between animals which have an obvious motivation to bluff or deceive each other.

Zahavi describes how - in order to be effective - signals must be:

1. Reliable

2. And in order to be reliable, signals have to be costly.

It’s an elegant idea: waste makes sense - ‘Conspicuous’ waste in particular.

In my recent book I make several references to the Handicap Principle; here's one excerpt:

‘By wasting [conspicuously], one proves conclusively that one has enough assets to waste and more. The investment - the waste itself - is just what makes the advertisement reliable.’

Psychologists will tell you that humans are pretty good intuitive biologists.

We have innate abilities to be able to identify the kinds of plants that are safe to eat, or animals that are likely to be predators or venomous.

We are also pretty good intuitive psychologists. We can identify what others are thinking and feeling, or what kind of mood they are in with very few cues.

I’d also argue that people are pretty good intuitive media strategists.

We don’t know exactly how much a full-page ad in a broadsheet newspaper costs. But we do know that it’s pretty damn expensive.

We don’t know exactly how much that retargeting banner ad costs but we know that it’s pretty cheap.


Likewise, we can easily and intuitively detect high or low production values that reflect the level of economic investment in any piece of communications. All these indicators are signals.

The kinds of signals that carry an implicit sense of ‘cost’ on behalf of the signaler can be trusted, to a degree.

The signaler has put their money where their mouth is.


For this reason The KFC apology can be 'trusted' to a degree. It's the extravagance of the gesture that contributes to advertising effectiveness by increasing credibility.

That's the Colonel's secret recipe.

It's not data-driven; there are no surveillance-fed algorithms, no targeting or tracking or data leakage, and it need know nothing at all of its audience.

It's just a big, juicy, costly, zinger of a signal.

-------------------------------------------------------------------

My book 'Where Did It All Go Wrong? Adventures at the Dunning-Kruger Peak Of Advertising' is out now on Amazon worldwide and from other discerning booksellers.

Thursday, February 15, 2018

everything changes. everything stays the same

'The Renaissance (1350–1600) produced favorable conditions for charlatans. Old ways of thinking were cast aside, and it seemed that anything was possible.

A semiliterate village dweller might have been aware of a new discovery, but he or she was probably not sufficiently educated to distinguish fact from fiction. Charlatans could not have flourished without the support of a willing, naïve audience.


The extraordinary power of impostors is therefore only to be understood after a consideration of the minds and circumstances of their gullible victims, the crowds who sought them out, half convinced before a word was spoken.

If charlatans had not existed, villagers would have invented them.'

Wednesday, February 14, 2018

now you can buy my book...


‘A proto-meme is beginning to ‘go critical’. This book is a part of that meme. 

The meme is not fully formed but at its core is one thought. Somewhere the advertising business has kinda lost the plot; we’re not sure exactly where.

So many of us are incompetent, and we can’t know we are incompetent, because the skills we need to produce the right answers are exactly the skills we lack in order to know what a right answer is.

What happens now? Who knows?

But this book tackles it head-on with punk rock, cheap philosophy and evolutionary psychology as we take a hair-raising ride to the Dunning-Kruger peak of advertising…’

With a foreword written by Mark Earls (author of Herd, I'll Have What She's Having and Copy, Copy, Copy), the book is available on Amazon worldwide and in more discerning bookstores.

There is also a Kindle version, however the 200 page paperback fits nicely in the back pocket of your selvage for optimum disagreeableness trait signaling.

Tuesday, February 13, 2018

engagement

To properly understand advertising, it needs to be viewed as part of popular culture.

When it works it is often because this is the environment it inhabits.

Not any particular media vehicle.

If anything, this has only become more important as the number of potential media choices and environments grows.

I'm fond of Paul Feldwick's 'showbusiness' argument, that goes something along these lines.
Advertising and entertainment have forever been inextricably linked.

The best advertising has always borrowed most of its creative themes from 'show business': the popular music, comedy, celebrities, sport, drama, sexiness and fashions of the day.

Advertising and popular culture are two parts of the same whole.

Paul suggests that not much has really changed since PT Barnum and The American Medicine Show: a song-and-dance to put a smile on their faces, and put them in the mood to buy.

Maybe everything is PR. Or at least 'publicity'.

Media themselves are only audience gatherers.

Sure, they can help with engagement by attracting an audience appropriate for the message and maybe keeping a bit of attention.

Media engagement, however, does not equate to advertising engagement. Nor is that the media's job.

Paradoxically, in spite of the infinite number of media channels now available, when great contemporary advertising works it is often because it truly inhabits the broader culture - and it stands up on its own.

Advertising is a mass phenomenon.

'The publicising function of good brand advertising is all-pervasive'.

As the old saying goes 'If you want engagement, make a more engaging ad.'
This is an engaging ad, if ever there was one.

And there's no business like showbusiness.