Friday, March 23, 2018

personality crisis

From my latest WARC column

----------------------------------------
The nefarious activities of bad actors in the Facebook/Cambridge Analytica debacle may spark an unwarranted moral panic around the use of psychometric profiling in consumer research, argues Eaon Pritchard.

Science is what it is.

As the saying goes, the universe is under no obligation to make sense to you. No moral sense, at least.

It’s been widely reported that Cambridge Analytica and other actors in the Facebook data debacle claimed to have used personality profiling and psychometric techniques as ‘weapons of psychological warfare’ (sic).

This is concerning, because we do not need a moral panic around established science simply because bad actors have applied it.

As my good friend Richard Chataway commented on Twitter this week:

This (the CA/Facebook situation) does not invalidate the science. Psychometrics (i.e. Big 5 personality traits) have a much greater predictive power for behaviour than demographics or other segmentation types typically used in comms.

What CA and the other actors in the Facebook data debacle have done with that data, in combination with the other reported elements of skullduggery and dirty tricks, should rightly be condemned.

But this does not invalidate the science. And it would be very dangerous for this idea to spread.

For those unfamiliar with the Big 5, I’ve summarised them below. The summary is based on a chapter in ‘Spent’, the psychologist Geoffrey Miller’s evolutionary perspective on consumer behaviour. It’s the best description - and the most accessible to the lay person - that I have found.

Most people will understand the distribution of human intelligence. It forms a bell curve, with most people clustered around the middle, close to the average of IQ 100. The distribution tapers off fairly quickly as scores deviate from the mean, so that blockheads and geniuses are rarer.

All the Big Five personality traits follow a similar bell-curve distribution.

Most people sit near the middle of the curve on the other traits too - openness, conscientiousness, agreeableness, emotional stability and introversion/extraversion - scoring only slightly lower or higher than average.
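To make the bell-curve point concrete, here is a minimal sketch in Python. This is my illustration with simulated numbers, not real psychometric data: IQ is drawn as a normal distribution with mean 100 and standard deviation 15, and a trait as a standardised z-score.

```python
# A minimal sketch of the bell-curve point (simulated data,
# not a real psychometric sample).
import numpy as np

rng = np.random.default_rng(42)

iq = rng.normal(loc=100, scale=15, size=100_000)     # IQ: mean 100, sd 15
openness = rng.normal(loc=0, scale=1, size=100_000)  # trait as a z-score

# Most people cluster within one standard deviation of the mean...
print(f"IQ within 85-115:        {np.mean((iq > 85) & (iq < 115)):.0%}")  # ~68%
# ...and the tails thin out quickly: 'geniuses' (IQ 145+) are rare.
print(f"IQ above 145:            {np.mean(iq > 145):.2%}")                # ~0.13%
print(f"Openness within +/-1 sd: {np.mean(np.abs(openness) < 1):.0%}")    # ~68%
```

Roughly 68% of people land within one standard deviation of the average on any normally distributed trait - the ‘clustered around the middle’ claim, in numbers.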

The Big 5 (plus IQ) is established science, whereas the typical demographic/personality types used in market segmentation studies, for example, are mostly complete fiction.

When sex/gender, birthplace, language, cultural background, economic status and education appear to predict consumer behaviour, it is because these factors correlate with the Big 5 + IQ traits, not because they directly cause the behaviour.
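A toy simulation illustrates the distinction. This is my sketch, with invented numbers and an assumed causal structure rather than real consumer data: a trait drives behaviour, and a demographic variable merely correlates with the trait. Regress behaviour on the demographic alone and it looks predictive; control for the trait and the demographic’s coefficient collapses towards zero.

```python
# Toy illustration: demographic 'predicts' behaviour only because
# it correlates with a trait that actually drives the behaviour.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

trait = rng.normal(size=n)                      # e.g. openness (z-score)
demographic = 0.6 * trait + rng.normal(size=n)  # correlates with the trait
behaviour = 2.0 * trait + rng.normal(size=n)    # caused by the trait only

# OLS via least squares: behaviour ~ demographic
X1 = np.column_stack([np.ones(n), demographic])
b1, *_ = np.linalg.lstsq(X1, behaviour, rcond=None)
print(f"demographic alone:     coef = {b1[1]:.2f}")  # ~0.9, looks predictive

# behaviour ~ demographic + trait
X2 = np.column_stack([np.ones(n), demographic, trait])
b2, *_ = np.linalg.lstsq(X2, behaviour, rcond=None)
print(f"controlling for trait: coef = {b2[1]:.2f}")  # ~0.0, effect vanishes
```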

Similarly, the common organisational ‘personality’ frameworks like Myers-Briggs and HBDI are nonsensical: they sort people into discrete types, when the underlying traits are normally distributed - most people sit near the middle, not at either pole.

These universal traits are fairly independent and don’t correlate particularly strongly; people display all six in different ways and combinations.

Although intelligent people tend to be more open than average to new experiences, there are plenty of smart people who stick to their football, reality TV and the pub.

Likewise, there are plenty of open-minded people who love strange ideas and experiences but who are not very smart. This explains the market for dubious new technology products and things like homeopathy. Open-minded but not so smart = gullible.

(For ad industry observers, much of the research suggests that short-term creative intelligence is basically general intelligence plus openness, while long-term creative achievement is also predicted by higher-than-average conscientiousness and extraversion. Planners would need to score fairly high on intelligence and conscientiousness but are more likely to be disagreeable. Account people could get by on middling scores for most traits, but above-average emotional stability is a must-have.)

Importantly, for the situation under discussion, these traits can predict social, political, and religious attitudes fairly well and can therefore be used to nudge people to act in line with their make-up (and corresponding moral foundations).

Left-leaning people tend to show higher openness (more interest in diversity), lower conscientiousness (less bothered by convention), and higher agreeableness (concern for care and fairness).

Conservatives show lower openness (more traditionalism), higher conscientiousness (family values, sense of duty), and lower agreeableness (self-interest, nationalism etc.).

That’s one data point.
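To show how crude such a nudge can be once you hold the scores, here is a hedged Python sketch. The trait weights and frame labels are invented placeholders - not published effect sizes, and not anything Cambridge Analytica is known to have used.

```python
# A hedged sketch of mapping a trait profile to a message frame.
# Weights and labels are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class TraitProfile:
    openness: float          # z-scores; higher = more of the trait
    conscientiousness: float
    agreeableness: float

def lean_score(p: TraitProfile) -> float:
    """Positive = leans left, negative = leans conservative,
    per the (illustrative) pattern described above."""
    return p.openness - p.conscientiousness + p.agreeableness

def pick_frame(p: TraitProfile) -> str:
    # Nudge in line with the person's (assumed) moral foundations.
    return "care/fairness framing" if lean_score(p) > 0 else "loyalty/duty framing"

print(pick_frame(TraitProfile(openness=1.2, conscientiousness=-0.5, agreeableness=0.8)))
# -> care/fairness framing
```

The point is only that a mapping from trait scores to message framing is a few lines of code once the scores exist; the hard (and contested) part is obtaining the scores.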

In my book ‘Where Did It All Go Wrong?’ I speculate that applications of machine learning and AI offer us much more than just the better mousetraps of targeting and delivery.

'The big opportunity is in understanding what people value, why they behave the way they do, and how people are thinking (rather than just what).

Everyone will be familiar with the words of the statistician W. Edwards Deming, who asserted: ‘Without data you are just another person with an opinion’.

In our business there is no shortage of opinions.

Deming, quite rightly, demands the objective facts. And we have more facts and data at our disposal than at any time in human history.

However, to complete the picture, and to seize the opportunity that data and technology offer for creativity, I propose an addendum to Deming’s thesis.

Without data you are just another person with an opinion? Correct.

But, without a coherent model of human behaviour, you are just another AI with data.

This could bring new, previously hidden perspectives to inform both the construction of creative interventions and a deeper understanding of exactly where, when and how these interventions will have the most power.'


It’s important, in light of recent events, to note that these methods can be used by bad actors for nefarious ends - or for ends only slightly less bad.

But the science is what it is.


value alignment problem

The problem of AI alignment is generally understood as the challenge of ensuring that the AI we build is aligned with human values.

For example, if an AGI (Artificial General Intelligence) were ever developed at some point in the future, would it do what we (humans) wanted it to do?

Would/could any AGI values ‘align’ with human values?

What are human values, in any case?

The argument might be that AI can be said to be aligned with human values when it does what humans want, but...

Will AI do things some humans want but that other humans don’t want?

How will AI know what humans want, given that we often do what we want rather than what we ‘need’ to do?

And - given that it would be a superintelligence - what will AI do if these human values conflict with its own?

In the notorious thought experiment, AI pioneer Eliezer Yudkowsky wonders whether we can prevent the creation of superintelligent AGIs like the paperclip maximizer.

In the paperclip maximizer scenario a bunch of engineers are trying to work out an efficient way to manufacture paperclips, and they accidentally invent an artificial general intelligence.

This AI is built as a super-intelligent utility-maximising agent whose utility is a direct function of the number of paperclips it makes.

So far so good. The engineers go home for the night, but by the time they return to the lab the next day, the AI has copied itself onto every computer in the world and begun reprogramming them to give itself more power and boost its intelligence.

Now, having control of all the computers and machines in the world, it proceeds to annihilate life on earth and disassemble the entire world into its constituent atoms to make as many paperclips as possible.
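The logic of the thought experiment fits in a few lines. Here is a deliberately naive Python sketch - the actions and numbers are invented for illustration:

```python
# A deliberately naive sketch of the paperclip maximizer: an agent
# whose utility is *only* the paperclip count will always pick the
# action that yields more clips, whatever the side effects.
actions = {
    "run the factory normally":    {"clips": 1_000, "side_effect": "none"},
    "melt down the office chairs": {"clips": 5_000, "side_effect": "no chairs"},
    "disassemble the planet":      {"clips": 10**30, "side_effect": "no planet"},
}

def utility(outcome: dict) -> float:
    # Human values never enter the objective - that's the bug.
    return outcome["clips"]

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # -> disassemble the planet
```

Nothing in the objective function represents chairs, planets or people, so the agent cannot care about them. That omission is the alignment problem in miniature.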

The problem is called ‘value alignment’ because we want to ensure that its values align with ‘human values’.

Because building a machine that won’t eventually come back to bite us is a difficult problem.

Determining a consistent shared set of human values we all agree on is obviously an almost impossible problem.

The Facebook/Cambridge Analytica kerfuffle ‘exposed’ this weekend by the Guardian and New York Times is an example.

The Guardian are outraged because ‘It’s now clear that data has been taken from Facebook users without their consent, and was then processed by a third party and used to support their campaigns’.

Ya think?

In fact, CA just cleverly used the platform for what it was ‘designed’ for.

This is exactly what Don Marti nicely captured as ‘the new reality… where you win based not on how much the audience trusts you, but on how well you can out-hack the competition.

Extremists and state-sponsored misinformation campaigns aren’t “abusing” targeted advertising. They’re just taking advantage of a system optimized for deception and using it normally.’


And are the Guardian and NYT outraged because parties whose values don’t align with theirs out-hacked them?

After all, back in 2012 The Guardian reported with some excitement how Barack Obama's re-election team built ‘a vast digital data operation that for the first time combined a unified database on millions of Americans with the power of Facebook to target individual voters to a degree never achieved before.’

Whoever can build the best system to take personal information from the user wins, until it annihilates life on the internet and disassembles the entire publishing world into its constituent atoms.

Is data-driven advertising going to be the ad industry’s own paperclip maximizer?

Any AGI is a long way off, but in a more mundane sense we already have an alignment problem.

And this only helps deceptive sellers.


----------------------------------------------------

Originally published in my regular WARC column.