Seven trends from World Summit AI

Alessandro Mozzato
Towards Data Science
6 min read · Oct 16, 2018


Last week Amsterdam hosted the World Summit AI. Over two days, AI experts and members of industry, academia and institutions gathered to share and discuss the hottest news and ideas in artificial intelligence. What was immediately clear is that the field is more exciting than ever and still growing at an impressive rate. Here I present seven of the main topics that were discussed.

The World Summit AI introduction
  • Fair AI: with the spread of machine learning and its application in seemingly every industry, it is becoming increasingly important to ensure fairness in machine learning algorithms. We know biases can exist in the data we collect, whether intrinsic or introduced by data collection and generation, yet our machine learning models should be as unbiased as possible. These concerns have existed for many years, particularly in industries such as insurance and banking, but problems are beginning to surface everywhere. Only last week Amazon announced it was scrapping its new CV-screening engine because of the strong gender bias it had picked up. Several solutions have been suggested to control model bias. A big focus is currently being put on model interpretability, which we expand on in the next point. Another strong topic, particularly from academia, is regulation. Such regulations would focus on transparency of models and on fairness constraints, both on the incoming data and on the outgoing predictions. In practice this could work as a new machine learning model sitting on top of the biased one, debiasing the incoming data and the outgoing predictions. Some of these techniques are currently being explored, for example using particular architectures of Generative Adversarial Networks (a minimal sketch of the adversarial idea follows the figure below).
An example of the structure for a fair ML model by Prof. Virginia Dignum
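To make the debiasing idea above more concrete, here is a minimal sketch of adversarial debiasing, written for illustration only: it is not the architecture shown in the talk, and the network sizes, the feature count and the fairness weight lam are assumptions. A small adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalised whenever it succeeds, pushing its predictions towards independence from that attribute.

    # Hedged sketch of adversarial debiasing (illustrative only, not the
    # architecture from the summit). The predictor learns the main task while
    # an adversary tries to recover the protected attribute z from its output;
    # the predictor is rewarded for fooling the adversary.
    import torch
    import torch.nn as nn

    n_features = 20  # assumed feature count for this toy example

    predictor = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
    adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def training_step(x, y, z, lam=1.0):
        """x: features, y: task labels, z: protected attribute; y and z are
        float tensors of shape (batch, 1); lam weights the fairness penalty."""
        # 1) Update the adversary: predict z from the (detached) predictions.
        y_hat = predictor(x).detach()
        adv_loss = bce(adversary(y_hat), z)
        opt_adv.zero_grad()
        adv_loss.backward()
        opt_adv.step()

        # 2) Update the predictor: fit the task while fooling the adversary.
        y_hat = predictor(x)
        task_loss = bce(y_hat, y)
        fairness_penalty = -bce(adversary(y_hat), z)  # maximise adversary error
        loss = task_loss + lam * fairness_penalty
        opt_pred.zero_grad()
        loss.backward()
        opt_pred.step()
        return task_loss.item(), adv_loss.item()

The same pattern generalises to GAN-style architectures in which the adversary looks at richer representations rather than just the final prediction.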
  • Model interpretability: The importance of being able to explain decisions and predictions from machine learning models was a recurring topic throughout the event. The idea is to make models feel less like a black box. The primary reason for interpretability is model accountability: if fairness in AI is a concern, the immediate consequence is that models need to be accountable for the decisions they take, and there needs to be an explanation of why a certain prediction is made. The two main routes to interpretability are technical and regulatory. On the technical side, multiple frameworks have been developed to explain model predictions, such as LIME and ELI5 (a short LIME example follows this list). Many other companies, such as Alpha, are also working on new techniques that will hopefully lead to more transparent models. The other path towards model interpretability, discussed at length, is regulation. This is an especially active topic of discussion and study in academia. Some regulations are already planned and will soon be in place as part of the GDPR provisions on explainable machine learning. Other approaches suggest requiring models to be open-sourced so that they can be examined. However, this seems quite a lot to ask and, moreover, without the data used for training, the model code might not mean much. The other reason for model interpretability is the possibility of using the model for more than just predictions. For example, in the case of predictive maintenance, predictions are made to know the state of the system and whether there is a need to intervene. With model explanations, however, we can also gain more detailed information on the components that need review or the particular processes that are at risk. The development of new techniques as explained above will therefore also greatly benefit these types of applications.
  • From research to application: over the past few years research in AI has skyrocketed, fueled by powerhouses like Google and Facebook. Powerful new techniques, or revamped old ones, coupled with an increase in computational power, have made it possible to reach great results in fields like image classification and speech recognition, often surpassing human accuracy. Now, the general feeling from World Summit AI is that it’s time to shift the focus in AI from research to application, allowing these new techniques to be applied to generate value. Many factors suggest this. First of all, the drive toward democratization of AI, which is discussed in the next point. Secondly, the great focus on practical applications, highlighted in many of the talks: great accomplishments were shown in finance, autonomous driving, drone fleets and satellite imagery. The healthcare industry is still perceived as barely touched by the AI revolution, yet at the same time it is considered one of the industries where AI applications could thrive most.
  • AI democratization: the spread of AI skills was one of the biggest themes of the conference. Cassie Kozyrkov from Google explained how they trained 17k (!!!) employees in Decision Intelligence, i.e. applied AI to solve business problems. This gives basically every team the capability to implement machine-learning-driven features in its products. Democratization can also be achieved thanks to cloud services. Cloud providers such as IBM and Google make it possible to apply complex and powerful algorithms via a simple API call (see the sketch after this list). This gives a much larger audience the opportunity to apply new techniques quickly and cheaply, without deep ML knowledge or time spent crafting and training deep neural networks. AI democratization also goes hand in hand with a new, broader way of planning data science projects and forming teams to apply machine learning.
  • A multidisciplinary approach for teams: while the industry focuses more and more on applied ML and democratizing AI, it is also important to focus on how to get successful results, i.e. how to structure teams so that they can drive innovation. A key point is the need for a wide variety of skills. In particular, machine learning researchers need to be embedded in teams with business-oriented experts, software engineers, data engineers and so on. Multidisciplinarity is the key to success in data science projects, because selecting and applying the best ML algorithm is only part of the problem. Data pipelines, focusing on the right business problem and product integration are equally important, and they require a wide variety of skills to be present in a project.
The future of AI, from Cassie Kozyrkov
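As a concrete illustration of the technical route to interpretability mentioned above, here is a minimal sketch of explaining a single prediction with LIME. The random-forest model and the scikit-learn breast-cancer dataset are placeholders chosen for the example, not anything presented at the summit.

    # Hedged sketch: explain one prediction of a placeholder model with LIME.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    X, y = data.data, data.target
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    # Which features pushed this particular prediction up or down?
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

The output is a list of human-readable feature conditions with signed weights, which is exactly the kind of per-prediction explanation that accountability calls for.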
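And to illustrate the "powerful algorithms via a simple API call" point from the democratization item, here is a hedged sketch using the Google Cloud Vision Python client. The file name is a placeholder, credentials are assumed to be configured, and the exact import style can vary between client-library versions.

    # Hedged sketch: image labelling through a managed cloud API.
    # Assumes google-cloud-vision is installed and credentials are set up.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:  # placeholder file name
        image = vision.Image(content=f.read())

    # A single call returns labels from a model we never had to train ourselves.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 2))

No model training, no GPUs, no neural-network design: the heavy lifting happens on the provider's side.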
  • AI-oriented decision makers: with the great diffusion of AI in seemingly every industry and in more and more parts of our lives, we also need leaders and decision makers to be AI oriented. First of all, this is key to the success of machine learning projects and to a successful diffusion of AI. Secondly, AI offers tools to analyze and gain insights from huge amounts of data, allowing decision makers to make faster and more informed decisions. For example, ING presented their AI-driven investment tool, which can scan millions of assets and investment strategies, presenting a portfolio manager with an overview of a previously unthinkable number of trading opportunities.
  • AI for good: Finally, a lot of attention was devoted to the importance of being able to use AI to tackle huge societal problems. Telefonica founded a moonshot company, Alpha, to focus on solving health issues, particularly helping people regain conscious control over their everyday behavior. Professor Luciano Floridi from Oxford focused on the risks AI can create for humanity and the planet, but also on the huge potential AI could unlock for everyone: making our lives better, freeing us from boring, tedious and even dangerous work, enhancing human capabilities and bringing us closer together.
AI dangers and opportunities, from Luciano Floridi's presentation

Conclusions

World Summit AI has been really inspiring. The industry is clearly in great shape and seems to only be growing, and the future of AI looks bright. There are nonetheless many challenges, uncertainties and problems to tackle in giving everybody the right access to this amazing technology while protecting individuals and their privacy.

Acknowledgments: I want to thank booking.com for giving me the opportunity to attend this great event.
