What has GDPR ever done for us?
Is GDPR the lesser of two evils?
I was challenged on Twitter to explain my stance that while GDPR and interrelated cookie policies are a bit of a pain in the arse… they are the lesser of two evils. The challenge explicitly requested that I use only rational, un-emotive arguments.
Here goes…
First - some problems with a free internet…
Caveat: I am a fierce proponent of net-neutrality and a free internet. That is not to say that I can’t identify some problems.
Let’s start from the beginning - considering two sides of the equation: humans and computers… and, more collectively, society and the internet.
Branding - the power of repetition.
Humans are pattern-engines. Our brains work by absorbing data, collating it according to patterns and eventually being able to respond through the complex assimilation of experienced patterns and new stimuli. That gives rise to our cognitive essence (scientifically speaking).
Brand marketing, in the Coca-Cola sense of the activity, works by reinforcing pattern recognition. Coca-Cola needs you to see their brand a lot, throughout your life. This instils recognition and therefore trust, which in turn makes you more likely to reach for their bottle. Taste preference (“I prefer this cola”) can be shown to be influenced by brand recognition in branded versus blind trials - before you tell me that Coke just tastes better.
The political brand.
Branding doesn’t just work for colas. Survey 1,000 people on which political policies they agree with and their answers are heavily influenced by the branding of a political party. Ask them simply to agree or disagree with unattributed policies and people are moderate - their opinions don’t align neatly with any political party. Ask another 1,000 people the same questions but mark each policy with its sponsoring party and people will side almost unanimously with their preferred party.
The underlying principle is the same here as with the cola - brand allegiance and taste for policies outweigh genuine personal belief at a rational level.
Intelligence and the intelligence of repetition.
Although the modern internet is vast, it still has only a few principal gatekeepers or entry points. In the UK and US, Facebook and Google act as the root of most online sessions. Both are advertising-led businesses.
Now, the challenge levelled at me on Twitter asked that I explain “what the actual problem” was with Cambridge Analytica, should I choose to mention them, so let’s look at them as our example.
Cambridge Analytica were using an indiscretion in Facebook’s API to syphon out demographic data on millions of people, particularly in the UK and US.
This data, when combined with Facebook’s ad platform, allowed them to do two important things: first, to build extremely rich demographic models and, second, to target specific messages to specific demographics at scale.
The “problem” here is that Facebook’s indiscretion in providing the data, and Cambridge Analytica’s accurate access to people at scale via the Facebook ad platform, provide a huge opportunity for the direct manipulation of what people believe.
This manipulation is possible because people’s beliefs can be manipulated. You might find that hard to accept but unfortunately it’s a consequence of how our brains work. Repetition is how branding and marketing works, it’s how tyrants work, it’s how politics works and it’s how learning to play the piano works.
But it also means that we are forever susceptible to misdirection through repetition. The more targeted the misdirection and the more effective the repetition, the easier the deceit.
Remarketing and the Like button.
Many years ago Facebook launched a “Like” button and organisations, in an effort to further their brand, went crazy for it. In just a few years it started appearing on almost every web page.
The power of the “Like” button, however, has nothing to do with furthering the intentions of the website’s owner. The power here is in allowing Facebook to track your browsing history across the internet using a device called a “cookie” - a small text marker that stores a unique identifier and is sent back and forth to Facebook whenever that “Like” button is loaded.
Google does the same with adverts, Twitter does the same, and there are more devices than cookies to track you: HTML5 stores like localStorage, user-agent/IP fingerprinting and so on.
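To make the mechanics concrete, here’s a rough sketch of what the server behind one of these embedded buttons sees every time a page loads it. It’s written in Python with Flask, and the endpoint, cookie name and log are all my own invention - not Facebook’s actual implementation - but the trick is the same.

```python
# A toy third-party "button" server, sketched with Flask.
# Endpoint, cookie name and storage are illustrative only.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
browsing_log = {}  # uid -> list of pages that embedded the widget

@app.route("/like-button.js")
def like_button():
    # The embedding page (news site, shop, blog...) is revealed by the
    # Referer header the browser sends when it fetches the widget.
    page = request.headers.get("Referer", "unknown")

    # Re-use the visitor's existing identifier, or mint a new one.
    uid = request.cookies.get("uid") or str(uuid.uuid4())
    browsing_log.setdefault(uid, []).append(page)

    resp = make_response("/* render the button */")
    # The cookie is scoped to the tracker's domain, so it comes back on
    # *every* site that embeds the widget - that is the whole trick.
    resp.set_cookie("uid", uid, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```

One cookie, set once on the tracker’s own domain, quietly turns every page that embeds the widget into an entry in a single browsing history.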
The point is that there is a secondary market here in tracking your behaviour and that market is growing. You may well find for instance that your parking app is selling your data into this market or that your ISP (if you live in the US) is profiling the background network-traffic noise of your smart house and selling that too.
This market then works the other way. All of that tracking intelligence gives Google and Facebook the ability to follow you around the internet. Not only does this provide accuracy to the savvy marketeer, as we’ve seen above… it also improves the capacity for repetition. We’ve all been there - you click a link to website A… and moments later, when you visit website B, you see an advert for website A. You might think website A is wasting their money, but they’re not… because remember, it’s repetition that gets your brain hooked on a brand or an idea.
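Continuing the same toy example, turning that browsing log into a remarketing audience is almost embarrassingly simple - which is rather the point. The site names and data below are invented.

```python
# Turn a browsing log (uid -> pages seen) into a remarketing audience.
# All names and data are invented for illustration.
browsing_log = {
    "uid-1": ["https://website-a.example/shoes", "https://news.example/story"],
    "uid-2": ["https://news.example/story"],
}

def build_audience(log, advertiser_site):
    """Every visitor who has loaded at least one page on the advertiser's site."""
    return {uid for uid, pages in log.items()
            if any(advertiser_site in page for page in pages)}

# uid-1 visited website A, so website A's adverts can now follow them
# around every other site that carries the ad platform's widgets.
print(build_audience(browsing_log, "website-a.example"))  # {'uid-1'}
```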
The unholy trinity.
So the dangers of the internet can be summed up as a trinity of problems as follows:
- Universal Data Collection - the ability of a few organisations to watch your behaviour and habits across the internet (and increasingly across your life: your health, travel, music tastes).
- Centralisation & Intelligence - the ability of a few huge organisations, heavily invested in AI, to collate and make sense of this data at scale and with alarming accuracy.
- Access & CI - the ability for these organisation to leverage that intelligence to target you wherever you are with repetitive messaging which they can tailor to you with iterative efficacy - referred to as continuous improvement or “CI”.
Did someone mention GDPR?
So the question was actually about GDPR:
Can you explain what “evil” the GDPR is trying to eliminate.
GDPR is an early attempt to put a dent in this trinity. It is not specifically aimed at Google and Facebook, mind you. Lots of people collect data, lots of people use it, lots of people drop it in the street and lots of people (consumers particularly) don’t care… GDPR is trying to address a whole range of problems… but the central evil - or opportunity for evil - is this:
If I can gain access to data, if I can make an intelligent decision about it and if I can then gain access to you, I have great power.
There are lots of scenarios that GDPR relates to… for example:
If I book a flight with your travel agency and you then lose my passport number, my name and my email (which usually, quite easily, gives rise to my photo from anywhere on the web), then I’m wide open to identity theft and having my bank accounts emptied through no direct fault of my own. GDPR provides for punitive fines for carelessness with data to begin to mitigate this risk: data is collected, collated and then provides access.
Similarly, if I casually mooch around a few news websites reading articles someone’s sent me - and unbeknownst to me a third party is collecting my reading habits (data), collating those behaviours (intelligence) and then building remarketing lists pointing me at targeted content to twist my specific beliefs (access) - then GDPR provides a means of ensuring that third party declares its intent to me.
Is GDPR effective?
I should say for the record I don’t think GDPR is effective.
This conversation started because some US (and generally non-EU) websites are throwing 4xx errors telling you that you can’t access their content because they aren’t, and don’t want to be, GDPR compliant. Perhaps, if they were doing something insidious before and they’re now taking themselves out of the running, you could argue that it is effective… but you could also argue that it’s damaging freedom of expression too.
That’s not the reason I think it’s ineffective. The reason I think it’s ineffective is that it doesn’t actually prohibit any behaviours (apart from abject carelessness on the part of the data-processors)… instead it places the onus on the user to understand what they’re being told by the organisation asking for permission.
Well, that’s total BS unfortunately. First, people are generally apathetic. Even if they do understand the mechanics, the risks, their rights and so forth, they still just want to read the bloody article. No-one is making an informed choice… and the amount of data collection and spread of data is obscene in many instances. Try it - open an incognito window, go to a major news outlet and, when you see the GDPR banner, look at the sheer number of organisations your data is being shared with. Often hundreds - from one click. The problem is so out of hand that GDPR is merely tickling the stable door after the horse has grown wings and flown away.
Is GDPR pointless?
No. It’s a start. It enshrines rights. That is a very important legislative step forward.
Looking back at our trinity - data, processing and access - I think most people aren’t aware of the exponential increase in the ability of organisations to process the data they collect in an intelligent way.
What often gets referred to as AI (which is really “machine learning”) is a process of pattern matching loosely modelled on the way your brain works (or the way ants’ nests work - which is easier to explain, another time). What’s significant here is the scale at which AI/ML can be effective, the acceleration in the capacity of the modelling frameworks, the general explosion in the industry and, ultimately, the accuracy of the insights these technologies can provide.
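To give a flavour of the kind of processing I mean, here’s a deliberately tiny sketch - invented data, with scikit-learn standing in for the far larger models the platforms actually run - that predicts an interest segment from nothing more than counts of page categories visited.

```python
# A deliberately tiny illustration of this kind of pattern matching:
# predict an interest segment from counts of page categories visited.
# The data, features and labels are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: visits to [news, sport, politics, shopping] pages for one user.
X = np.array([
    [12, 0, 9, 1],
    [2, 15, 0, 3],
    [8, 1, 14, 0],
    [1, 20, 1, 5],
])
# Toy label: did this user engage with politically targeted content?
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# A new user's browsing profile - the model yields a probability that
# a politically targeted message will land with them.
new_user = np.array([[10, 2, 11, 1]])
print(model.predict_proba(new_user)[0][1])
```

The individual prediction is unremarkable; the significance is that the same loop runs over billions of profiles and improves with every click.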
GDPR in and of itself isn’t designed to prohibit any kind of processing or generation of conclusions. It is designed to give people the rights to understand - and if required extract - their data from these models. These are important rights.
Where next?
The next step - if we care about these things - is to find a way to apply some standards as to what can be done with our data.
Is that an ethics question? Is that the role of government to decide? Do any individuals care enough? Does prohibition work?
I have no idea what I think about these questions - and I’ve worked in this industry for 20 years and have been tinkering with LSTMs (the bedrock of AI) and thinking about these questions for over 25 years.
What I do know is that just as much as global warming is a figurative (and potentially literal) tsunami heading our way - so too is the rise of AI. The first danger is not the loss of jobs to robots… the first danger is the loss of free will.
GDPR is aimed at protecting us from the “evil” of heavy, clandestine collection, processing and use of our personal data. This evil can and will be used to manipulate our beliefs at massive scale. Today it undermines our democratic process and our access to information and ideas.
It’s a horrible thing to say but I think the trajectory that we are on is pretty obvious…
In the end this evil trinity will erode and destroy our free will.
