Back to the Future: what AI has in common with time travel

Every fictional depiction of time travel raises two questions: how to travel to a different time, and, once there, how to avoid changing the future and potentially eliminating the time traveller in the process. AI predicts (or sees) possible futures, but reacting to its predictions can change that future, potentially ruining the tool in the process.

Tim Gordon
Towards Data Science

--

Photo by Greg Rakozy on Unsplash

Time travel into the past is a popular fictional concept, with two recurring issues. Firstly, how to get there (usually the cue for a lot of sub-scientific prose). Secondly, a common theme is that the greatest risk in returning to the past is that you modify the future and, in so doing, create circumstances where the time traveller may not exist. This is sometimes known as the Grandfather paradox, based on the premise of the time traveller killing their own progenitor.

Leaving aside the increasing complexities of the Terminator series of films, what does this have to do with AI?

One of the key use cases for AI, or machine learning, is predictive maintenance. A model is built that, based on myriad sensor and historic data points, calculates the likelihood of a given piece of machinery failing. The future can be predicted, and something can then be done about it before it happens. This is a major use case well on its way to saving billions of dollars through increased machine uptime everywhere from factory floors to military jet engines. The challenge, though, is simple: once you change things based on your understanding of the future, the ability of your model to predict that future craters.
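To make the idea concrete, here is a minimal sketch of what such a failure-likelihood model looks like. Everything here is illustrative: the sensor names, weights, and thresholds are invented for the example, not taken from any real predictive-maintenance system.

```python
import math

# Hypothetical weights for a toy logistic "failure risk" score.
# In practice these would be learned from historic sensor and failure data.
WEIGHTS = {
    "vibration_mm_s": 0.8,        # higher vibration -> higher risk
    "temp_c_over_baseline": 0.05, # running hot -> higher risk
    "hours_since_service": 0.002, # longer since maintenance -> higher risk
}
BIAS = -4.0


def failure_probability(sensors: dict) -> float:
    """Return an estimated probability of failure from sensor readings."""
    z = BIAS + sum(w * sensors.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))


healthy = {"vibration_mm_s": 1.0, "temp_c_over_baseline": 2.0, "hours_since_service": 100}
worn = {"vibration_mm_s": 6.0, "temp_c_over_baseline": 25.0, "hours_since_service": 1500}

print(failure_probability(healthy))  # low risk
print(failure_probability(worn))     # much higher risk
```

The paradox in the text follows directly: if every machine that scores high gets its fuel pump replaced, the training data stops containing the failures the model was built to predict.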

Moreover, the ripple effect can create second- and third-order impacts across the complex systems that AI typically monitors and ingests data from.

In timeline terms, altering the present to change the future (by replacing that fuel pump) has a not dissimilar impact to changing something in the past that will affect the present. So we enter the world of recurring causal loops, and of ways to limit the risk.

Ironically, one of the ways to handle the AI risk is essentially to manage multiple pathways with the data: tracking what happens both to machines where predictive maintenance happened and to those where it did not. The potential futures fracture and expand.
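One common way to manage those pathways is to keep a small holdout cohort where the model's predictions are deliberately not acted on, so the data keeps containing examples of the unaltered future. The sketch below simulates this; the cohort split, failure rates, and fleet size are all assumptions for illustration.

```python
import random

random.seed(0)

# Assumed failure rates for the illustration (not real figures):
BASE_FAILURE_RATE = 0.30        # no intervention
MAINTAINED_FAILURE_RATE = 0.05  # predictive maintenance performed


def simulate_fleet(n_machines: int, holdout_fraction: float = 0.1) -> dict:
    """Split a fleet into a maintained group and an untouched holdout,
    then return the observed failure rate in each cohort."""
    failures = {"maintained": [], "holdout": []}
    for _ in range(n_machines):
        if random.random() < holdout_fraction:
            # Holdout: prediction recorded but no action taken.
            failures["holdout"].append(random.random() < BASE_FAILURE_RATE)
        else:
            # Acted on: maintenance performed, failure risk drops.
            failures["maintained"].append(random.random() < MAINTAINED_FAILURE_RATE)
    return {cohort: sum(f) / len(f) for cohort, f in failures.items() if f}


rates = simulate_fleet(10_000)
# The holdout cohort preserves the "untreated" failures the model
# needs to see if it is to keep predicting them accurately.
```

The design trade-off is the one the article names: the holdout machines are deliberately left to fail more often, which is the price of keeping the model's view of the future intact.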

Much the same usually happens in the second film or book: we enter the world of parallel worlds and, ultimately, infinite universes. A task that, if transferred to AI, is beyond all but the most dedicated DevOps practitioners.

--

A little bit of politics, a little bit of AI. Co-Founder of Best Practice AI (bestpractice.ai), ex-various things