Isaac Oluoch
Towards Data Science
5 min read · Jun 15, 2017


The Brain and Network Distribution: the future lies in efficiency

At the intersection of neuroscience and machine learning are two words: training data. The more training data each of these task-based disciplines has to make use of, the better its predictions and analyses will be. But each discipline has a different distribution of data, storage, and energy efficiency to draw on.

The training data that neuroscientists have to make use of is the data produced by our brains. With 100 billion neurons firing at around 1 Hz, and some 10¹⁵ synapses transmitting information between those neurons, the brain manages to represent timescales ranging from milliseconds to years, with an energy output of only about 20 watts. With just those 20 watts, and the machinery assembled from the creativity they power, we’ve gone to the moon, are building autonomous vehicles, and have created civilisations across history and across the earth.

Neuroscientists produce models that help them better understand the mechanisms behind response activity between neighbouring neurons and between cross-network neurons in different brain regions. The accuracy of these models usually rests on using the right tools for the right measurement task, and different measurement tasks need different brains. To better understand the development of language and perception, children’s brains are the better subjects, since the growth of the cortical regions reveals how the neural circuitry for these tasks develops. To study the brain undergoing degeneration or recovering after trauma, aging adults or post-surgical patients are the better subjects, since the damaged cortical areas and their effects on other areas can be observed more directly.

Machine learning algorithms are likewise reliant on energy, tools, and task-based application. Algorithms trained on image recognition become better at associating labels with the features of each image, which allows them to generate their own labels when given unlabelled images (a minimal sketch of this follows below). Algorithms trained on conversation become better at understanding tone, inflection, context, and humour after numerous iterations and properly designed reinforcement learning. Algorithms trained on games become better once they are given the right cues about the game’s objective and rules, and enough repeated play to apply what they have learned proficiently.
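
To make the image-recognition case concrete, here is a minimal, hypothetical sketch (my own illustration, not from the original article): a model is fit on images paired with labels, then asked to produce labels for images it has not seen. It assumes scikit-learn’s small built-in digits dataset as a stand-in for a real labelled image collection.

```python
# A minimal sketch of supervised image classification: fit on labelled images,
# then predict labels for images the model has not seen before.
# Uses scikit-learn's tiny built-in digits dataset purely for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 grayscale digit images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)   # simple linear classifier over pixel features
model.fit(X_train, y_train)                 # "training data": images paired with labels

# Given previously unseen (effectively unlabelled) images, the model
# generates its own labels; more and better training data improves this.
predicted = model.predict(X_test)
print("held-out accuracy:", model.score(X_test, y_test))
```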

In both cases, training on data for each particular task and/or context becomes a determining factor in the accuracy and utility of the model or algorithm. But the biggest difference between the two is the computing energy used to operate in each particular task or context. While the brain is an organ that weighs about three pounds and has an energy output of only 20 watts, a machine learning system such as DeepMind’s AlphaGo was trained using 1,202 CPUs and 176 GPUs, with the assistance of more than 100 scientists.

And yet AlphaGo couldn’t write poetry or drive a car or do anything beyond the one task it was built to master. This is not to diminish the work done at DeepMind, nor any task-based machine learning system. It is merely to show the difference in capacity, as well as capability, between the brain and its artificial counterparts.

Google, Facebook, Amazon, Nvidia, and Intel have become large AI companies over the past couple of years because they have been able to make use of the vast amounts of data they have accumulated and train their systems on this data for particular tasks. Beyond the training data, they have also been able to rely on computational power and cloud storage servers, so that many machines are on hand to distribute the training load and therefore decrease the time it takes for learning to take place (see the sketch after this paragraph).
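
To illustrate why spreading the training load across many machines shortens training time, here is a small conceptual sketch, my own illustration and not any company’s actual pipeline: each “machine” computes a gradient on its shard of the data, and the averaged gradient drives a single shared update, so per-machine work per step shrinks as machines are added.

```python
# A conceptual sketch of data-parallel training on synthetic data:
# the dataset is sharded across several "machines", each computes a local
# gradient, and the averaged gradient is applied as one update step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8000, 10))             # synthetic features
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=8000)

w = np.zeros(10)                            # model parameters (linear regression)
n_machines = 4                              # stand-in for a cluster of workers

def grad_on_shard(X_shard, y_shard, w):
    """Mean-squared-error gradient computed locally on one worker's shard."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

for step in range(200):
    # Each machine handles 1/n of the data, so per-step work on any one
    # machine (and wall-clock time, given the hardware) drops with n.
    shards = zip(np.array_split(X, n_machines), np.array_split(y, n_machines))
    grads = [grad_on_shard(Xs, ys, w) for Xs, ys in shards]
    w -= 0.05 * np.mean(grads, axis=0)      # apply the averaged gradient

print("parameter error:", np.linalg.norm(w - true_w))
```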

To my mind, the only way to compete against these vast companies is not to rely on vast data, vast storage, or vast computing power. It is more likely that there needs to be a shift towards brain-based computing: a shift towards emulating the brain’s capacity and capabilities, not merely making analogies to it. This is because the brain is multi-task oriented by nature. It has evolved to learn, adapt, and inherit the traits of its ancestry, both historical and social. Some functions of the brain go back millennia (language, the visual system, and the reptilian brain), while others have come from social changes and paradigm shifts across centuries in how we interact with each other. This multi-task orientation has allowed our species to become the dominant species on the planet.

Our cognitive capacities have allowed us to build, shape, and mould the environment around us in a far more long-lasting manner than our machines have yet managed. All this from a 20-watt organ between our ears, showing just how efficient our natural, organic software is. The efficiency difference therefore goes two ways.

On one hand, while machine learning systems can be trained over hours and weeks, the brain has taken well over 200,000 years to improve itself to the point where we can build these systems at all. Machine learning systems are therefore more efficient at short-term, time-intensive practical tasks. On the other hand, the brain doesn’t need to run on servers or a distributed network to become proficient at advancing our place on earth and beyond. Newton invented calculus largely by himself, Einstein revolutionised our understanding of gravity, space-time, light, and atomic structure through his inquisitiveness and mathematical prowess, and Elon Musk’s audacity will take us to Mars (with the help of SpaceX and his spaceships, of course).

The difference between the brain and machine learning systems therefore comes down to costly computing power and short-term time efficiency on one side, against multi-task efficiency and evolutionary software development on the other. Considering both, it is clear to me that a symbiosis needs to be reached. Can we make learning algorithms that do not need vast amounts of data, storage, and computing power, yet are still as accurate and predictive as algorithms that do? Can we make learning algorithms that become multi-task oriented as they learn, so that they can be deployed in a variety of tasks and contexts? Only the future of machine learning and neuroscience will be able to answer these two questions.


I spend my days learning Spanish, coding, and how to make music, with the singular goal of becoming a philosopher engineer.