
Artificial intelligence is a commodity. Extracting this commodity translates into billions of dollars in revenue for companies like Google, Baidu, and Facebook [2]. The race to extract it has pushed computational costs up by 300x and grown the number of researchers in the field 10x over the last 10 years [3].
Today, research on AI compounds every year: researchers build on previous models, making incremental changes to create more powerful and generalizable ones. However, the intelligence these models produce is lost each time, because anyone who wants to replicate a result has to re-train the model from scratch on their own systems.
Additionally, any value that could be found by a combination of models working on the same modality (for example, text or image data) is also lost, because models are inherently siloed in their environments. They cannot share knowledge with each other, so when one model learns something that could speed up the training of others, that knowledge stays locked away.
Finally, today’s AI research field is inherently non-collaborative. The prevailing approach of "obtain dataset, create model, beat the current state of the art, rinse, repeat" creates a high barrier to entry for new researchers and for those with limited computational resources. Most researchers cannot contribute to the development of large neural networks because the necessary experiments would be too expensive for them to run. If this trend continues, eventually the only labs able to conduct AI research will be those with massive budgets.
"The future of artificial intelligence (AI) will be determined by how much weight we put into collaboration." [4] – John Suit, CTO at Koda Inc.

Humans have always collaborated to bring about the technological marvels of the world. Apollo 11 landed on the moon because over 300,000 people, from the surveyors to the engineers to the astronauts themselves, collaborated on the Apollo program. Why can’t AI be built the same way?
Perhaps the question is better rephrased: how can we turn AI into a collaborative effort?
The answer is, quite simply, decentralization. The human brain is billions of interconnected neurons sending signals to each other; the Internet is billions of interconnected computers sharing information with each other. It follows that AI could be built the same way.
Enter Bittensor
Decentralized AI is a paradigm in which AI models collaborate to achieve their goals or solve complex sets of challenges. It is slowly turning into the holy grail of collaborative intelligence because of what it promises: openly accessible AI and knowledge sharing between models. One of the newest kids on the block is the Bittensor project.
Bittensor is a peer-to-peer marketplace that rewards the production of machine intelligence. It uses state-of-the-art techniques such as mixture of experts and knowledge distillation to create a peer-to-peer network that incentivizes knowledge production: producers can sell their work, and consumers can buy that knowledge to improve their own models. This opens the door to collaborative intelligence, where researchers are rewarded for helping each other produce more powerful models. Ultimately, this "cognitive economy" has the potential to open a new frontier for the advancement of AI as a whole.
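Bittensor's actual objective is specified in its whitepaper; purely as an illustration of the knowledge-distillation idea it draws on, here is a minimal sketch of a distillation loss, where a "student" model is trained to match a "teacher" model's softened output distribution. The function names and temperature value are illustrative, not Bittensor's API:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling: a higher T softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean KL divergence between the softened teacher and student
    distributions — the core training signal in knowledge distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(kl.mean())

# A student whose logits already match the teacher incurs zero loss:
teacher = np.array([[2.0, 0.5, -1.0]])
print(distillation_loss(teacher, teacher))  # → 0.0
```

Minimizing this loss pulls the student toward the teacher's knowledge without the student ever seeing the teacher's training data, which is what makes distillation a natural fit for a network of models selling knowledge to each other.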
The core idea behind Bittensor is aggregating individual contributions of knowledge. Computers that contribute knowledge, by building an AI model and training it on the Bittensor network, are rewarded with Tao, Bittensor’s currency. Those who simply want to consume knowledge can spend Tao to access it and train their own models on it.
Bittensor Structure
Bittensor’s design closely follows that of the human brain. Every computer running Bittensor is referred to as a Neuron. Each Neuron contains a model, a dataset, and a loss function, and works to optimize that loss with respect to its input data and the other Neurons it communicates with.

In keeping with the human-brain parallel, each Bittensor Neuron has an Axon terminal for receiving inputs from other Neurons, and a Dendrite terminal for sending inputs to them.
In a nutshell, a Bittensor Neuron trains by sending a batch of inputs from its dataset to its neighbouring Neurons while also running the same batch through its own model. The Neurons that receive the batch run it through their local models and send back their outputs. The originating Neuron then aggregates these outputs and updates its gradients with respect to both the remote Neurons’ outputs and its own local loss.
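The step above can be sketched in a few lines. This is a toy illustration under loud assumptions: the models are single linear layers, the aggregation rule is a plain average, and none of the names match Bittensor's actual API:

```python
import numpy as np

def train_step(x, y, w_local, remote_models, lr=0.1):
    """One hypothetical step in the spirit of the scheme above: run the
    batch locally, query the peers with the same batch, average everyone's
    outputs, and update only the local weights against the joint error."""
    remote_outputs = [peer(x) for peer in remote_models]  # peers score the batch
    local_output = x @ w_local                            # our own forward pass
    joint = np.mean([local_output] + remote_outputs, axis=0)
    error = joint - y
    # Gradient of the squared error w.r.t. the local weights only; the
    # 1/(n_peers + 1) factor comes from averaging the outputs.
    grad = x.T @ error * 2.0 / ((len(remote_models) + 1) * len(x))
    loss = float((error ** 2).mean())
    return w_local - lr * grad, loss

# Toy run: one frozen remote peer, a local model that learns to compensate.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = x @ rng.normal(size=(3, 2))          # targets from a hidden "true" model
w_peer = rng.normal(size=(3, 2))
peers = [lambda batch: batch @ w_peer]   # the peer's weights never change

w = np.zeros((3, 2))
losses = []
for _ in range(100):
    w, loss = train_step(x, y, w, peers)
    losses.append(loss)
# the joint loss falls as the local model adapts to its fixed neighbours
```

The point of the sketch is the division of labour: each Neuron only ever updates its own weights, yet its loss is computed on a prediction that its neighbours helped produce.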

Each Neuron in Bittensor is continuously mining for knowledge from other models. Running a Bittensor Neuron is fairly straightforward, with configuration that can be set in a YAML file or via the command line.
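The exact configuration schema is documented in the project's repository; purely to give a feel for the kind of settings involved, a YAML file might look something like this (every field name here is hypothetical, not Bittensor's actual schema):

```yaml
# Hypothetical example — field names are illustrative only.
neuron:
  model: gpt2-small        # which local model this Neuron serves
  learning_rate: 0.0001
  batch_size: 32
wallet:
  name: my_wallet          # where mined Tao is credited
network:
  port: 8091               # port the Axon terminal listens on
```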
Final Thoughts
The future of AI should be decentralized, full stop. We need to treat the knowledge created by AI as the ultimate commodity, the equivalent of a cognitive process in the human mind. Collaborating to increase the breadth and depth of this knowledge is not only our calling as scientists, but our calling as humans as well.

The Bittensor network launched in January 2021 and is already up and running. You can install it and run one of the models already built into the repository right away, or create your own.
The Bittensor repository can be found here; you can also read through the project’s documentation and whitepaper. The developers are active on the Bittensor Discord server if you wish to reach them, collaborate, or contribute to the project.
Join us, the frontier of AI belongs to you too.
References:
[1] Kyle Wiggers. "OpenAI’s massive GPT-3 model is impressive, but size isn’t everything" (2020). https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/
[2] Stanford University HAI. "AI Index Report" (2019). https://hai.stanford.edu/research/ai-index-2019
[3] Allen Institute for Artificial Intelligence. "Green AI" (2019). https://arxiv.org/pdf/1907.10597.pdf
[4] Inside Big Data. "Everything in its Right Place: The Potential of Decentralized AI" (2020). https://insidebigdata.com/2020/12/31/everything-in-its-right-place-the-potential-of-decentralized-ai/
[5] Bittensor. "Internet Scale Neural Networks" (2021). https://www.bittensor.com/