Abstract
Back in June 2016, only days before the Brexit vote, I was flying from Aberdeen, Scotland, on a British Airways plane, heading back to Italy after the K-Drive Project had just wrapped. I was carrying a laptop containing a small NLP model I had developed over the previous weeks to predict the Brexit outcome. Together with the model, I had the infographics detailing the prediction itself, which was uncomfortably different from the mainstream one.
The K-Drive Project, under the umbrella of the Marie Curie actions, was a European-funded project based at the University of Aberdeen, aimed at bringing together industry and academia for mutual exchange and joint effort in the field of semantic technologies. I joined it as a MER (More Experienced Researcher) in January 2016, when the main paper was already being published, so I was assigned training and public-speaking tasks to showcase the NLP technology involved.
During the project's final month, we decided to employ a model we had initially developed for the Scottish Independence Referendum in order to monitor opinions about Brexit on social media, searching for insights and trends to predict the outcome.
After some consultation, we decided to use Twitter as a source, based on previous experience and know-how: tweets are short, relevant and to the point, written by a wide variety of users, often with reliable hashtags. Facebook posts, on the other hand, tend to be trivial, too long and written in broken English, while Reddit threads are sometimes confusing to follow and can be riddled with trolling. On Twitter, by contrast, comments are clearly distinguished from the original tweet and can easily be excluded from the analysis.
First, a software engineer developed a spider to download tweets posted in Scotland, England, Northern Ireland and Wales, so that they could be processed separately, while I, as a knowledge engineer, was in charge of adapting and further developing the linguistic engine. The spider was designed to download only tweets containing Brexit-related hashtags, drawn from a fine-tuned list I had prepared; this kind of filtering is supported by the Twitter API.
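As a rough illustration only (the project's actual spider, hashtag list and geo-resolution are not reproduced here; every name below is hypothetical), the filtering and bucketing step could be sketched like this in Python:

    # Hypothetical sketch of the hashtag filter and the per-nation bucketing.
    # The hashtag entries are an illustrative subset, not the real curated list.
    BREXIT_HASHTAGS = {"brexit", "euref", "voteleave", "strongerin"}
    REGIONS = {"Scotland", "England", "Northern Ireland", "Wales"}

    def matches_campaign(tweet: dict) -> bool:
        """Keep only tweets carrying at least one hashtag from the curated list."""
        tags = {h["text"].lower() for h in tweet.get("entities", {}).get("hashtags", [])}
        return bool(tags & BREXIT_HASHTAGS)

    def bucket_by_region(tweets: list[dict]) -> dict[str, list[dict]]:
        """Split matching tweets by UK nation so each can be processed separately."""
        buckets = {region: [] for region in REGIONS}
        for tweet in tweets:
            region = tweet.get("region")  # assumed to be resolved upstream from geo data
            if region in buckets and matches_campaign(tweet):
                buckets[region].append(tweet)
        return buckets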
After an initial analysis of a training corpus of about 400 random tweets, I decided to develop a fully symbolic engine, relying heavily on knowledge graphs to grasp the concepts contained in the tweets, in order to understand the opinions expressed as well as the related mood. I tried to cover the entire semantic field, including Scottish slang.
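In miniature, and with an entirely invented lexicon and rule set (the real engine and its knowledge graphs were far richer), the symbolic approach works along these lines:

    # Toy illustration of the symbolic idea: surface forms map to concepts,
    # hand-written rules map concept combinations to a stance. All entries invented.
    CONCEPTS = {
        "red tape": "EU_BUREAUCRACY",
        "brussels": "EU_BUREAUCRACY",
        "take back control": "SOVEREIGNTY",
        "scunnered": "DISCONTENT",  # Scots slang: fed up, disgusted
        "single market": "TRADE_ACCESS",
    }

    STANCE_RULES = [
        ({"EU_BUREAUCRACY", "DISCONTENT"}, "leave"),
        ({"SOVEREIGNTY"}, "leave"),
        ({"TRADE_ACCESS"}, "remain"),
    ]

    def classify(text: str) -> str:
        """Map surface forms to concepts, then fire the first matching rule."""
        lowered = text.lower()
        found = {concept for form, concept in CONCEPTS.items() if form in lowered}
        for required, stance in STANCE_RULES:
            if required <= found:
                return stance
        return "unknown"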
So every day we would crawl Twitter, get tweets, convert them and feed them to the linguistic engine. No machine learning was involved. The engine remained the same throughout the entire process, so that we could record the trends being expressed online, broken down by geographical area.
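Schematically, reusing the hypothetical helpers sketched above, each daily run amounted to something like:

    # One day's tally: bucket the day's tweets by nation, classify each tweet,
    # count stances per region. bucket_by_region and classify are the sketches above.
    from collections import Counter

    def daily_run(tweets: list[dict]) -> dict[str, Counter]:
        return {
            region: Counter(classify(t["text"]) for t in bucket)
            for region, bucket in bucket_by_region(tweets).items()
        }

Because the rules never change from one day to the next, any movement in the daily counts reflects the tweets, not the engine.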
But when the first results began to come in, there was some embarrassment: the output showed a solid, undisputed advantage for Leave. In fact, every single day ended with a clear Leave lead, and the trend held, while the entire world agreed that Remain was consistently ahead by a couple of percentage points.
Of course I was asked why my results were out of tune, so I gave a (self-)convincing speech about how referendum campaigners are always more vocal than the silent majority, so my results were to be taken as insights for the social sciences rather than actual predictions for political decision-making. Obviously I avoided mentioning the shy Tory factor.
But while I was completely absorbed in my coding, I could not help noticing that I was trapped between the narrative of the mainstream media and my anecdotal experience of actually meeting a significant number of Leavers on a daily basis: my landlady, fellow hillwalkers, fellow birdwatchers, oil & gas people at the pub, the cashier at the place down the road. They were young, they were from all sorts of places and backgrounds, some of them had PhDs, and they would ask questions.
There I was, a (southern) European citizen on a European-funded project, comfortably sitting in the venerable Meston Building, merrily coding my working day away, or walking the muddy paths at the weekend in search of eider ducks and razorbills and puffins and the occasional fulmar.
I was a controversial figure, so questions were asked.
"I’m on a business trip" was generally frowned upon. "I’m working at the University of Aberdeen, on a European-funded project" was met with some nervous embarrassment. "I work in IT" just sounded suspicious, since all expats in Aberdeen really work in oil & gas.
Leavers really were vocal. They would tell me about feeling trapped inside European bureaucracy and hoping to be able to join more large-scale projects. Or they would talk about the UK not wanting to pay European taxes, or not being able to sustain an increasing immigration flow.
Of course I could not tell them: oh, don’t worry! I have just invented a magic computer thingy that says Leave is going to win!
The AI hype had not properly started yet, so they would not believe me.
Anyway, press releases were written and our results were presented. The morning the referendum outcome was officially announced, I was at home in Italy. I was having breakfast and I thought to myself: well, I don’t want to become an alien in the UK, but wouldn’t it be cool if we were right and everybody else was wrong?


Well, it actually turned out that we were right and everybody else was wrong.
Apparently, a wise choice of source (Twitter, in this case) and the careful development of a symbolic engine were key to our success.
But then the mainstream analysis came out and I was completely taken aback. The general picture of Brexit UK it painted, in which only rural old folks with no schooling had voted Leave, as opposed to the dynamic young professionals in the cities, who had all voted Remain, felt completely surreal compared to my direct experience. Obviously, I had to brush off the suspicion that the mainstream media were simply framing the matter according to a political agenda. Maybe they had known all along about the strong Leave vote but had kept quiet for fear of driving even more people to vote Leave, and now they were manufacturing a stereotypical picture of the UK inspired by the radio drama The Archers, to make Leavers reconsider. But that would be a conspiracy theory, so we can rule it out.
So slowly doubt started creeping in. What if my code had been biased in the first place, and I had got the right result by mistake? But how? No machine learning was involved, so the engine could not have accidentally learned from an unbalanced training set. It was a fully symbolic engine, so any bias would have had to be coded in by hand, however unintentionally.
The point is that a symbolic engine is always theory-free. According to the scientific method, you form a hypothesis based on observations, then you make a prediction, you test it, and you iterate until the data are consistent enough to draw conclusions. A symbolic engine does not work like that. It is never developed according to a theory; there are no observations, hypotheses or predictions. There might be assumptions, I guess, but they should be regarded as such and not coded into the engine. A symbolic engine consists of a complex set of generalized, explainable conditions, developed on the basis of known requirements as well as the analysis of a related training set. Generalization requires intuition, and that is the human spark in the engine, which allows it to work autonomously. When development is finished, a new data set is processed and the code is executed depending on which conditions hold true, returning an output that mirrors the initial requirements and outlines information and correlations that may or may not be the expected ones.
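That explainability can be made concrete. Extending the toy sketch above (again with invented rules), every verdict can be traced back to the exact hand-written condition that produced it, in a way no learned model offers as directly:

    # Same toy classifier, but returning the rule that fired alongside the verdict,
    # so every output is auditable. CONCEPTS and STANCE_RULES as sketched earlier.
    def classify_with_trace(text: str) -> tuple[str, str]:
        lowered = text.lower()
        found = {concept for form, concept in CONCEPTS.items() if form in lowered}
        for required, stance in STANCE_RULES:
            if required <= found:
                return stance, f"matched {sorted(required)} -> {stance}"
        return "unknown", "no rule fired"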
In the end we do trust calculations over assumptions, just like Galileo.
I also reconsidered Twitter as a valid source. Could it be biased in itself? Are Twitter users representative of all British voters, or is the Leave side somehow over-represented? Still, I could not find any convincing evidence in social science research to support the theory that Leave voters would be found more often on Twitter than in real life.
Suddenly the revelation came. What if the everyday contact with Leave people had led me to develop code better geared to understanding Leave vocabulary, concepts, phrasing and wording? Or maybe, Leave people being more vocal in real life as well as on social media, they had ended up teaching me their language, and I had passed this knowledge on to the engine. Maybe I could not learn properly from Remain people, since they were much quieter. After all, it could be that in the half-hearted self-defense of my work, I had actually been describing the very sociolinguistic process underlying the success of my model.
So right now I’m worried about my code being unintentionally biased, i.e. suffering from the most dangerous disease in the AI world. Then again, there could be another element: the hype around bias in the current AI narrative is so strong that the hype itself might be affecting my judgement.
In conclusion, when dealing with the development of symbolic engines, I think it is useful to integrate sociolinguistic elements into our analysis. Understanding how our source works, how people communicate within it, and how we are able to interpret their messages can help us develop better-balanced code.
For sure, this experience has led me to experiment more with machine learning, not as a blunt replacement for symbolic engines, but in hybrid solutions. Training an ML model on the same set we are using for symbolic development can provide a useful comparison, to highlight and understand biases in our results.
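One possible shape for that check, as a hedged sketch (the vectorizer, classifier and function names are my illustrative choices, not the project’s actual setup):

    # Train a simple ML baseline on the same labelled set used for symbolic
    # development, then surface the tweets where the two approaches disagree.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def disagreement_report(texts, labels, symbolic_predict):
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),
            LogisticRegression(max_iter=1000),
        )
        model.fit(texts, labels)
        return [
            (text, sym, ml)
            for text, ml in zip(texts, model.predict(texts))
            if (sym := symbolic_predict(text)) != ml
        ]

Systematic disagreement on one side’s vocabulary would be exactly the kind of signal that flags an unbalanced rule set.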
I am more and more convinced that a multidisciplinary approach can be beneficial in this field, improving our ability to obtain automated, reliable insights from large data sets.