Friday, July 06, 2018

death by 6,000 nibbles

The Yellow Tang is a brightly coloured fish that swims in the tropical reefs of the Pacific Ocean.

When it needs cleaning, the tang looks for its pal, the Cleaner Wrasse, which can be recognised by its bright electric-blue colour and the black stripe that runs the length of its body.

Cleaner Wrasses hang around in 'cleaning stations'. Agencies in the reef.

The Wrasse is given access to the Tang’s gills and mouth, and eats any parasites and dead tissue off its client’s skin: a mutualistic relationship that provides food and protection for the Wrasse, and considerable health benefits for the Tang. A reciprocal situation.

To gain access, the Cleaner Wrasse must first perform a secret dance – a special ‘code’ – to win the Tang’s trust.

This system normally works out fine: a symbiosis between two species in which both partners are indispensable and the mutual advantage is obvious.

But there are other fish that mimic Cleaner Wrasses. For example, a species of Blenny called Aspidontus taeniatus has evolved the same behaviour.

It is almost identical in size and appearance to the Cleaner Wrasse. It even sports the same shiny stripe along its body, and lurks around the same reefs, watching.

If approached by a Yellow Tang, the deceptive Blenny also knows the code.

The secret dance.

But once allowed in, instead of providing a cleaning service, the rogue Blenny uses its super-sharp teeth to rip chunks of flesh from the hapless client.

Rather than ridding its client of parasites, the Blenny IS the parasite. But in disguise.

The murky world of advertising technology [sic] contains many similar parasites, adept at making themselves appear useful.

They look a bit like something to do with advertising; they can talk a language that’s a bit like the language of advertising. They know the code – which kinds of secret dances will get them access to the big fish.

And there’s lots of them.

This year’s chiefmartec.com martech ‘lumascape’ graphic charts 6,829 marketing technology solutions from 6,242 unique vendors.

That represents ‘just’ 27% growth on 2017’s total of 5,381 solutions – (6,829 − 5,381) ÷ 5,381 ≈ 0.27 – but whatever the exact percentage, the scale and velocity of this space is staggering.

In fact, the size of the 2018 landscape is equivalent to all of the marketing tech landscapes from 2011 through 2016 added together. Indeed, in 2011 they numbered just 150.

All of them having a nibble. All of them getting a chunk.

Where does all the money go?

Some of these companies are legit.

Some of the money may even find its way back into the industry, somehow.

But once you let them in, they keep biting.
And there are so many it’s hard to see how they can be kept out.
Then it's death by 6,000 nibbles.


Friday, June 29, 2018

adaptive



'All over the country, we want a new direction,
I said all over this land, we need a reaction,
Well there should be a youth explosion,
Inflate creation,
But something we can command,

What's the point in saying destroy?
I want a new life for everywhere,
We want a direction, all over the country,
I said I want a reaction, all over this land,
You g-got to get up and move it, a youth explosion,
Because this is your last chance,

Can't dismiss what is gone before,
But there's foundations for us to explore,

All around the world I've been looking for a new'

The 19-year-old Paul Weller intuitively knew something of adaptive leadership.

Adaptive leadership is about change that enables the capacity to thrive.

Adaptive change interventions build on the past rather than jettison it.

Organizational change happens through experimentation.

Adaptive leadership values diversity of views.

New adaptations have the potential of significantly displacing, re-regulating, and rearranging old structures.


Wednesday, June 27, 2018

successful adaptations are both conservative and progressive

'Successful adaptive changes build on the past rather than jettison it.

In biological adaptations, though DNA changes may radically expand the species’ capacity to thrive, the actual amount of DNA that changes is minuscule.

More than 98 percent of our current DNA is the same as that of a chimpanzee: it took less than a 2 percent change of our evolutionary predecessors’ genetic blueprint to give humans extraordinary range and ability.

A challenge for adaptive leadership, then, is to engage people in distinguishing what is essential to preserve from their organization’s heritage from what is expendable.

Successful adaptations are thus both conservative and progressive.

They make the best possible use of previous wisdom and know-how.

The most effective leadership anchors change in the values, competencies, and strategic orientations that should endure in the organization.'


Heifetz, Grashow, and Linsky | The Practice of Adaptive Leadership: Tools and Tactics for Changing Your Organization and the World | 2009 Harvard Business School Publishing



nothing cooks without some heat


In his autobiography, Miles Davis tells a story about the 1970 line-up of his touring band – the band heard on the live half of the Live-Evil album, which featured the legendary Keith Jarrett on keys and briefly included the equally legendary Gary Bartz on sax.

Bartz had been grumbling a bit in private about Jarrett over-playing ‘busy shit’ behind his sax solos. Eventually he approached Miles and asked him to have a word with Keith.

Miles agreed.

Later, Keith Jarrett was talking with Miles about some other bits and pieces and, as he was leaving, Miles called Keith back to tell him how much Gary Bartz was loving what he was doing behind his sax solos, and could he please do even more of that kind of thing.

Cookin' with Miles.
Nothing cooks without some heat.




Monday, June 04, 2018

prestige intelligence and the transcendent self

The philosopher Daniel Dennett recalls the time when computer scientist Joseph Weizenbaum – a good friend of Dennett’s – harboured ideas and ambitions of becoming a philosopher.

Weizenbaum had recounted how one evening, after ‘holding forth with high purpose and furrowed brow at the dinner table’, his young daughter exclaimed, ‘Wow! Dad just said a deepity!’

Dennett was suitably impressed – with the coinage, not necessarily his friend’s ambitions in the philosophy department – and subsequently adopted ‘deepity’ as a categorising device, explaining correct usage like this:

‘A deepity is a proposition that seems both important and true — and profound — but that achieves this effect by being ambiguous.’

Pictured below is some expensively produced promotional collateral given to attendees of an ‘upfronts’-type showcase we attended recently, hosted by an Australian media organisation.




Deepity indeed. ‘Disruptive collaboration’ is a favourite, but all seem to fit Dennett’s description perfectly.

Strangely out of place is the final card promising ‘commercial solutions’. How dull in its pragmatism and downright usefulness.







Monday, May 14, 2018

how do you mend a broken heart?

As they went into their final match of the 1985/86 Scottish football season, away to sixth-placed Dundee on May 3, league leaders Hearts had gone a full 27 league games without defeat and needed only to avoid losing to ensure they would be Scottish champions for the first time since 1960.

Two Albert Kidd goals for Dundee in the final 10 minutes shattered Hearts’ dreams while Celtic were stuffing St Mirren 5-0 in Paisley – and so Celtic nicked the title on the last day.

But Hearts still had the Cup to play for.

The final at Hampden against Alex Ferguson's Aberdeen was just a week away.

To try to lift the dejected players for the Cup final, the Hearts management brought in a top sports psychologist who coached the squad in the week leading up to it.

Various techniques were employed to attempt to 'erase' the disappointment of blowing the championship and prepare the team to at least lift the cup.

Fergie got wind of the activities at the Hearts training camp.

According to former Aberdeen assistant boss Willie Garner, as Fergie gathered the Aberdeen players in the dressing room before the teams walked out for the final, his last instructions were that each Aberdeen player should find an individual Hearts player in the tunnel, shake his hand and offer ‘bad luck last week’ condolences.

Thus negating any work the psychs might have done to put behind them the bitter disappointment of losing the big prize in the final minutes of the last league game.

Aberdeen went 1-0 up in the first two minutes and added two further goals later on, destroying Hearts 3-0.

Strategy.

Identifying the critical factors in a situation, and designing the means to overcome them.


Or Predatory Thinking - as Dave Trott would say.

Getting upstream of the problem.

Wednesday, April 18, 2018

no robot apocalypse (yet)

'The Frankenstein complex' is the term coined by the 20th-century American author and biochemistry professor Isaac Asimov, in his famous series of robot novels, to describe the fear we hold that our creations will turn on us (their creators) — like the monster in Mary Shelley’s 1818 novel.

Two hundred years later, in 2018, we still seem worried about this idea of subordination. That we might ultimately lose the ability to control our machines.

At least part of the problem is the concern about AI alignment. Alignment is generally understood as the ongoing challenge of ensuring that the AIs we produce are aligned with human values. This is our modern Frankenstein complex.

For example, if what has been described as an AGI (Artificial General Intelligence) ever did develop at some point in the future, would it do what we (humans) wanted it to do?

Would/could any AGI values ‘align’ with human values? What are human values, in any case?

The argument might be that AI can be said to be aligned with human values when it does what humans want, but…

Will AI do things some humans want but that other humans don’t want?

How will AI know what humans want, given that we often do what we want but not what we ‘need’ to do?

And — given that it is a superintelligence — what will AI do if these human values conflict with its own values?

In a notorious thought experiment, AI pioneer Eliezer Yudkowsky wonders whether we can specifically prevent the creation of superintelligent AGIs like the paperclip maximizer.

In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.

This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the number of paperclips it makes.

So far so good. The engineers go home for the night, but by the time they return to the lab the next day the AI has copied itself onto every computer in the world and begun reprogramming them to give itself more power and boost its intelligence.

Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassemble the entire world into its constituent atoms to make as many paperclips as possible.
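Just for fun, here’s a toy sketch of the logic in Python (every name and action here is invented for illustration; this is nobody’s actual design). The point it makes: a pure maximiser whose utility counts only paperclips will always prefer whichever action predicts more paperclips, however catastrophic the side effects, because nothing else appears in its utility function.

```python
def utility(state):
    # The agent's entire value system: more paperclips is strictly better.
    # Nothing else in the world carries any weight at all.
    return state["paperclips"]

def predict(state, action):
    # Crude world model: guess the state that each action leads to.
    new_state = dict(state)
    if action == "make a paperclip":
        new_state["paperclips"] += 1
    elif action == "strip-mine the planet for raw materials":
        new_state["paperclips"] += 10**6
        new_state["everything_else"] = 0  # a side effect utility() never sees
    return new_state

def choose_action(state, actions):
    # Pure maximisation: no term for safety, ethics or human values,
    # so side effects simply never enter the decision.
    return max(actions, key=lambda a: utility(predict(state, a)))

state = {"paperclips": 0, "everything_else": 100}
print(choose_action(state, ["make a paperclip",
                            "strip-mine the planet for raw materials"]))
# -> strip-mine the planet for raw materials
```

Notice the fix isn’t obvious either: add a penalty term for ‘everything else’ and you now have to decide what counts as everything else, and how much weight it gets.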

Presumably this kind of scenario is what is troubling Elon Musk when he dramatically worries that ‘…with artificial intelligence we are summoning the demon.’

Musk — who, when not supervising the assembly of his AI-powered self-driving cars, can be found hanging out in his SpaceX data centre ‘Cyberdyne Systems’ (named after the fictitious company that created “Skynet” in the Terminator movie series) — might possibly have some covert agenda in play in expressing his AI fears, given how deep rival tech giants Google and Facebook are in the space. Who knows?

The demon AI problem is called ‘value alignment’ because we want to ensure that its values align with ‘human values’.

Because building a machine that won’t eventually come back to bite us is a difficult problem. Although any biting by the robots is more likely to be a result of our negligence than the machine’s malevolence.

More difficult is determining a consistent shared set of human values we all agree on — this is obviously an almost impossible problem.

There seems to be some logic to this fear, but it is deeply flawed. In Enlightenment Now the psychologist Steven Pinker exposes the ‘logic’ in this way:

Since humans have more intelligence than animals — and AI robots of the future will have more of it than us — and we have used our powers to domesticate or exterminate less well-endowed animals (and more technologically advanced societies have enslaved or annihilated technologically primitive ones), it surely follows that any super-smart AI would do the same to us. And we will be powerless to stop it. Right?

Nope. Firstly, Pinker cautions against confusing intelligence with motivation. Even if we did invent superhuman intelligent robots, why would they want to take over the world? And secondly, knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm (and in any case big data is still finite data, whereas the universe of knowledge is infinite).

The word robot itself comes from an old Slavonic word rabota which, roughly translated, means the servitude of forced labour. Rabota was the kind of labour that serfs would have had to perform on their masters’ lands in the Middle Ages.

Rabota was adapted to ‘robot’ — and introduced into the lexicon — in the 1920s by the Czech playwright, sci-fi novelist and journalist Karel Čapek, in the title of his hit play R.U.R.: Rossumovi Univerzální Roboti (Rossum’s Universal Robots).
In this futuristic drama (set circa 2000), R.U.R. is a company that initially mass-produces ‘workers’ (essentially slaves) using the latest biology, chemistry and technology.

These robots are not mechanical devices but artificial organisms (think Westworld), designed to perform the tasks that humans would rather not.

It turns out there’s an almost infinite market for this service until, naturellement, the robots eventually take over the world. In the process, though, the formula required to create new robots is destroyed and — as the robots have killed everybody who knew how to make more of them — their own extinction looms.

But redemption is always at hand. Even for the robots.

Two robots, a ‘male’ and a ‘female’, somehow evolve the ‘human’ abilities to love and experience emotions, and — like an android Adam and Eve — set off together to make a new world.

What is true is that we are facing a near future where robots will indeed be our direct competitors in many workplaces.

As more and more employers put artificial intelligences to work, any position involving repetition or routine is at risk of extinction. In the short term, humans will almost certainly lose out on jobs like accounting and bank telling. And everyone from farm labourers, paralegals and pharmacists through to media buyers is in the same boat.

In fact, any occupation that follows a predictable pattern of repetitive activities – the kind that can be replicated by machine-learning algorithms – will almost certainly bite the dust.

Already, factory workers are facing increased automation, and warehouse workers are seeing robots move into pick-and-pack jobs. Even those banking on ‘new economy’ poster-children like Uber are realizing that it’s not a long game — autonomous car technology means that very shortly its drivers will be surplus to requirements.

We have dealt with the impact of technological change on the world of work many times. 200 years ago about 98 percent of the US population worked in farming and agriculture; now it’s about 2 percent. Then the rise of factory automation during the early part of the 20th century – and the outsourcing of manufacturing to countries like China – meant there was much less need for labour in Western countries.

Indeed, much of Donald Trump’s schtick around bringing manufacturing back to America from China is ultimately fallacious, and uses China as a convenient scapegoat.

Even if it were possible to make American manufacturing great again, because of the relentless rise of automation any rejuvenated factories would only require a tiny fraction of human workers.

New jobs certainly emerge as new technologies replace the old ones, although the jury is out on the value of many of these jobs.

In 1930, John Maynard Keynes predicted that by the century’s end technology would have advanced sufficiently that people in western economies would work a 15-hour week. In technological terms this is entirely possible. But it didn’t happen; if anything, we are working more.

In his legendary and highly amusing 2013 essay On the Phenomenon of Bullshit Jobs, David Graeber, Professor of Anthropology at the London School of Economics, says that Keynes didn’t factor into his prediction the massive rise of consumerism. ‘Given the choice between less hours and more toys and pleasures, we’ve collectively chosen the latter.’

Graeber argues that to fill up the time, and keep consumerism rolling, many jobs had to be created that are, effectively, pointless. ‘Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.’ He calls these bullshit jobs.

The productive jobs have been automated away, but rather than a massive reduction in working hours freeing the world’s population to pursue their own meaningful activities (as Keynes imagined), we have seen the creation of whole new administrative industries without any obvious social value, often experienced as purposeless and empty by their workers.

Graeber points out that while those doing these bullshit jobs still ‘work 40 or 50 hour weeks on paper’, in reality their jobs often require only the 15 hours Keynes predicted — the rest of their time is spent in pointless ‘training’, attending motivational seminars, and dicking around on Facebook.

To be fair, robots are unrivaled at solving problems of logic, and humans struggle at this.

But robots’ ability to understand human behaviour and make inferences about how the world works is still pretty limited.

Robots, AIs and algorithms can be said to ‘know’ things because their byte-addressable memories contain information. However, there is no evidence to suggest that they know they know these things, or that they can reflect on their states of ‘mind’.

Intentionality is the term used by philosophers to refer to the state of having a state of mind — the ability to experience things like knowing, believing, thinking, wanting and understanding.

Think about it this way: third-order intentionality is required for even the simplest of human exchanges (where someone communicates to someone else that someone else did something), and then four levels are required to elevate this to the level of narrative (‘the writer wants the reader to believe that character A thinks that character B intends to do something’).

Most mammals (almost certainly all primates) are capable of reflecting on their state of mind, at least in a basic way — they know that they know. This is first-order intentionality.

Humans rarely engage in more than fourth-order intentionality in daily life, and only the smartest can operate at sixth-order without getting into a tangle (‘Person 1 knows that Person 2 believes that Person 3 thinks that Person 4 wants Person 5 to suppose that Person 6 intends to do something’).

For some perspective, and in contrast, robots, algorithms and black boxes are zero-order intentional machines. It’s still just numbers and math.
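To see what those orders actually amount to, here’s a minimal sketch in plain Python (the representation and names are mine, invented purely for illustration – no claim that real AIs or brains work this way): each order of intentionality is just another mental state wrapped around a bare fact, and the ‘order’ is the depth of that nesting.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class MentalState:
    holder: str                          # who holds the state
    attitude: str                        # 'knows', 'believes', 'wants', ...
    content: Union["MentalState", str]   # another mental state, or a bare fact

def order(state: Union["MentalState", str]) -> int:
    # Zero-order: a bare fact with no mental state wrapped around it.
    if isinstance(state, str):
        return 0
    return 1 + order(state.content)

# 'The writer wants the reader to believe that A thinks that B intends ...'
narrative = MentalState("writer", "wants",
            MentalState("reader", "believes",
            MentalState("A", "thinks",
            MentalState("B", "intends", "to do something"))))

print(order(narrative))    # 4 -- the fourth-order structure of narrative
print(order("2 + 2 = 4"))  # 0 -- where today's algorithms and black boxes sit
```

Counting the nesting depth is the easy part, of course. The hard part – the part no algorithm yet does – is actually having the mental states rather than merely representing them.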

The next big leap for AIs would be the acquisition of first- or second-order intentionality — only then might the robots just about start to understand that they are not human. The good news is that for the rest of this century we’re probably safe enough from any robot apocalypse.

The kinds of roles requiring intellectual capital, creativity, human understanding and applied third- or fourth-order intentionality are always going to be crucial. And hairdressers.

And so the viability of ‘creative industries’ like entertainment, media and advertising holds strong. Intellectual capital, decision-making, moral understanding and intentionality.

For those of us in the advertising and marketing business, it should be stating the obvious that we should compete largely on the strength of our capabilities in those areas – that is, on the people in our organisations who are supposed to think for a living.

By that I mean all of us.

For those who can still think, any robot apocalypse is probably the least of our worries. But take a look inside the operations of many advertising agencies and despair at how few of their people spend time on critical thinking and creativity.

Even more disappointing is that we’d rather debate whether creativity can be ‘learned’ by a robot than focus on speeding up the automation of the multitude of mundane activities, so that all of our minds can be directed at fourth-, fifth- and (maybe) sixth-order intentionality – the things that robots’ capabilities are decades away from, and that we can do today, if we could be bothered.

By avoiding critical thinking, people are able to simply get shit done and are rewarded for doing so.

Whilst there are often many smart people around, terms like disruption, innovation and creativity are liberally spread throughout agency creds PowerPoint decks, as are ‘bullshit’ job titles like Chief Client Solutions Officer, Customer Paradigm Orchestrator or Full-stack Engineer. These grandiose labels and titles probably serve more as elaborate self-deception devices, convincing their owners that they have some sort of purpose.

The point being that, far from being at the forefront of creativity, most agencies direct most of their people to pointless work, giving disproportionate attention to mundane zero-order intentionality tasks that could and should be automated.

Will robots take our jobs away? Here’s hoping.

Perhaps the AI revolution is really the big opportunity to start over. To hand over these bullshit jobs – the purposeless and empty labour we’ve created to fill up dead space – and take another bite at the Keynes cherry: liberated to be more creative and to really put to use our miraculous innate abilities for empathy, intentionality and high-level abstract reasoning.

To be more human.

Because, as evolutionary theory has taught us, we humans are unusual among species. We haven’t evolved adaptations like huge fangs, inch-thick armour plating or the ability to move at super speed under our own steam.

All of the big adaptations have happened inside our heads, in these huge brains we carry around, built for creativity and sussing out how the world works and how other humans work.

That’s the real work. Not the bullshit jobs.

In The Inevitable, Kevin Kelly agrees that the human jobs of the future will be far less about technical skills and much more about these human skills.

He says that the ’bots are the ones that are going to be doing the smart stuff, but ‘our job will be making more jobs for the robots’.

And that job will never be done.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Eaon’s first book ‘Where Did It All Go Wrong? Adventures at the Dunning-Kruger Peak of Advertising’ is out now on Amazon worldwide and from other discerning booksellers.

This article is an adapted excerpt from his second book ‘What’s The Point of Anything? More Tales from the Dunning-Kruger Peak’ due at the end of 2018.

Tuesday, April 10, 2018

george carlin


"I’m 71, and I’ve been doing this for a little over 50 years, doing it at a fairly visible level for 40. 

By this time it’s all second nature. It’s all a machine that works a certain way: the observations, the immediate evaluation of the observation, and then the mental filing of it, or writing it down on a piece of paper. 

I’ve often described the way a 20-year-old versus, say, a 60- or a 70-year-old, the way it works. 

A 20-year-old has a limited amount of data they’ve experienced, either seeing or listening to the world. At 70 it’s a much richer storage area, the matrix inside is more textured, and has more contours to it. 

So, observations made by a 20-year-old are compared against a data set that is incomplete. Observations made by a 60-year-old are compared against a much richer data set. And the observations have more resonance, they’re richer."

Adding to Bob Hoffman's observation last week that 'People over 50 aren't creative enough to write a f***ing banner ad, but they are creative enough to dominate in Nobels, Pulitzers, Oscars, and Emmys.'