Learning Transfer Learning

Transfer learning is the process of applying skills and knowledge learned in one situation to solve a different, related problem.

Dr. Barak Or
MetaOr Artificial Intelligence

--

Introduction

This concept is widely studied in machine learning, where it refers to the practice of storing knowledge gained from solving one problem and applying it to a different, related problem. Transfer learning is often viewed as a design methodology: previously learned information is applied to new situations to improve the efficiency and effectiveness of the learning process. In other words, transfer learning allows individuals or machine learning algorithms to build on their existing knowledge and skills to solve new problems.

Transfer learning involves taking knowledge and skills acquired in one context and applying them to a different but related situation. For example, if you have learned how to recognize cars, that knowledge could be useful in learning how to recognize trucks. Similarly, if you have learned how to ride a motorbike, that knowledge may transfer to learning how to ride an e-scooter. Transfer learning can also involve applying knowledge and skills from one domain to another, such as using math and statistics knowledge to learn deep learning.

In the picture below, a classical learning process is depicted on the left side, while the right side shows a model that has been trained for task A and transfers its knowledge to improve the learning process for task B, even when only a small amount of data is available for task B.

Take, for example, the task of classifying dogs versus lions with only a limited number of examples. If we have already built a strong model that classifies dogs versus cats, we can use it as the basis for our new task.

Image by author

The model transfers and leverages the knowledge from what it has learned in the past!
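As a quick sketch of this idea, here is what reusing a dogs-vs-cats classifier for the dogs-vs-lions task might look like in Keras. The saved-model path and dataset names are hypothetical placeholders, not artifacts from this article.

```python
from tensorflow import keras

# Hypothetical: a dogs-vs-cats classifier trained and saved earlier.
source = keras.models.load_model("dogs_vs_cats.h5")

# Keep the learned feature layers; drop the old dog-vs-cat output layer.
base = keras.Model(source.input, source.layers[-2].output)
base.trainable = False  # preserve the transferred knowledge at first

# Attach a fresh output layer for the new dog-vs-lion task.
model = keras.Sequential([base, keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(small_dogs_lions_dataset, epochs=5)  # only a little data is needed
```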

Advantages

In the context of machine learning, the benefit of transfer learning shows up in three main elements: the initial skill of the source model on the new task, the rate of improvement during training, and the converged skill of the trained model.

  1. The initial skill of the source model refers to the level of proficiency that the model has on the new task before any further training or refinement. With transfer learning, the model starts at a higher level of performance (a higher start), which makes it easier to learn the new task and achieve better results.

  2. The rate of improvement during training refers to the speed at which the model improves its skill over time. A model that starts from transferred knowledge typically improves more quickly during training (a higher slope), so it learns new tasks more efficiently and effectively.

  3. Finally, the converged skill of the trained model refers to the level of proficiency the model has after it has reached a stable state of learning. A model trained with transfer learning often converges to better performance (a higher asymptote), so it performs the new task more effectively and with higher accuracy. Overall, by focusing on these three elements, it is possible to improve the efficiency and effectiveness of machine learning algorithms and achieve better results.

Image by author

When to use transfer learning

There are several scenarios in which transfer learning may be particularly useful:

  1. If a pre-trained model already exists that is based on a similar task and has been trained on a large amount of data, it may be possible to use that model as a starting point for a new task. By starting with a model that has already learned some relevant knowledge, it may be possible to learn the new task more quickly and effectively. Taking an existing model trained on a dataset like ImageNet can be a good starting point.
  2. If there is not enough data available to train a model from scratch, transfer learning can be used to leverage the knowledge and skills learned by a pre-trained model in order to improve the performance of the new model. This is particularly useful for tasks that would otherwise require a large amount of data to learn effectively, as it allows the new model to build upon the knowledge and skills learned by the pre-trained model. The limited-data classification task mentioned earlier (dogs/lions) is just one example that can be solved with this approach.

Breaking a myth: “you can’t do deep learning unless you have a million labeled examples for your task.” Transfer learning is useful precisely because it reduces the amount of labeled data needed for a particular task.

Common practice

Develop a model approach

  1. Select source task: The first step in transfer learning is to select a source task that is related to the task of interest and has a large amount of data available. It is important to ensure that there is some relationship between the input and output data in the source task and the task of interest, in order to make the transfer of knowledge and skills more effective.
  2. Develop source model: The next step is to develop a skillful model for the source task. The model must perform better than a naïve baseline, so that it has learned relevant features that can be transferred to the new task.
  3. Reuse model: Once the source model has been developed, it can be used as a starting point for a model on the new task of interest. This allows the new model to build upon the knowledge and skills learned by the source model, which can improve the efficiency and effectiveness of the learning process.
  4. Tune model (optional): In some cases, it may be necessary to adapt or refine the model to better fit the data available for the new task. This can involve adjusting the model’s parameters or adding layers to the model. A minimal code sketch of these four steps follows this list.
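Here is a minimal sketch of the develop-a-model workflow in Keras, under the assumption of a data-rich dogs-vs-cats source task and a small dogs-vs-lions target task; all dataset names and layer sizes are illustrative placeholders.

```python
from tensorflow import keras

# Steps 1-2: build and train a skillful model on the data-rich source task.
source = keras.Sequential([
    keras.Input(shape=(128, 128, 3)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(64, activation="relu", name="features"),
    keras.layers.Dense(1, activation="sigmoid"),  # source label: dog vs. cat
])
source.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# source.fit(large_source_dataset, epochs=20)  # hypothetical large dataset

# Step 3: reuse everything up to the feature layer as the new starting point.
base = keras.Model(source.input, source.get_layer("features").output)
base.trainable = False  # freeze the transferred layers at first

# Step 4 (optional): add a new head and tune it on the small target dataset.
target = keras.Sequential([base, keras.layers.Dense(1, activation="sigmoid")])
target.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# target.fit(small_target_dataset, epochs=5)
```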

Pre-trained model approach

  1. Select source model: The first step in this process is to choose a pre-trained source model that is related to the new task of interest. There are many pre-trained models available that can be used for transfer learning, so it is important to choose a model that is well-suited to the task at hand.
  2. Reuse model: As in the develop-a-model approach, the pre-trained model serves as the starting point for the model on the new task.
  3. Tune model (optional): As in the develop-a-model approach, the model can be adapted or refined to better fit the new data. A short sketch of selecting a pre-trained source model follows this list.
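As a brief sketch of step 1, one might pick a backbone from keras.applications; MobileNetV2 pre-trained on ImageNet is used here purely as an illustration.

```python
from tensorflow import keras

# Step 1: select a pre-trained source model related to the new task.
# MobileNetV2 trained on ImageNet is one common, lightweight choice.
base = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,   # drop the ImageNet-specific classification head
    weights="imagenet",
)

# Steps 2-3 (reuse and optional tuning) then proceed exactly as in the
# develop-a-model sketch above.
```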

Some common uses for pre-trained models include:

  1. Standalone feature extractor: Pre-trained models can be used as standalone feature extractors, which means they can be used to pre-process images and extract relevant features that can be used as input for another model. This approach allows the pre-trained model to serve as an “expert” in identifying important features in the data, which can improve the performance of the downstream model.
  2. Integrated feature extractor: Pre-trained models can also be integrated into new models as feature extractors, with the layers of the pre-trained model being “frozen” during training. This means that the weights of the pre-trained model are not updated during training, and the model is used only to extract features from the data.
  3. Weight initialization: Pre-trained models can also be integrated into new models as a means of initializing the weights of the new model. In this case, the layers of the pre-trained model are trained in concert with the new model, allowing the new model to build upon the knowledge and skills learned by the pre-trained model. All three patterns are sketched in the code below.
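The sketch below contrasts the three patterns in Keras, again using an ImageNet-pre-trained MobileNetV2 as an assumed backbone and a random batch as placeholder input.

```python
import numpy as np
from tensorflow import keras

def make_base():
    # ImageNet-pre-trained backbone; pooling="avg" yields one feature
    # vector per image instead of a spatial feature map.
    return keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False,
        weights="imagenet", pooling="avg")

# 1. Standalone feature extractor: pre-compute features for another model.
extractor = make_base()
images = np.random.rand(8, 160, 160, 3).astype("float32")  # placeholder batch
features = extractor.predict(images)  # shape (8, 1280)

# 2. Integrated feature extractor: the backbone is frozen inside the new
#    model, so its weights are not updated during training.
frozen = make_base()
frozen.trainable = False
frozen_model = keras.Sequential([frozen, keras.layers.Dense(1, activation="sigmoid")])

# 3. Weight initialization: same wiring, but the backbone keeps training
#    alongside the new head (usually with a small learning rate).
warm = make_base()  # pre-trained weights are only a starting point
warm_model = keras.Sequential([warm, keras.layers.Dense(1, activation="sigmoid")])
```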

Overall, pre-trained models can be a valuable resource in machine learning, as they allow models to build upon the knowledge and skills learned by other models and improve their performance on new tasks.

Image by author

Summary

Transfer learning is a machine learning technique that allows models to build upon their existing knowledge and skills to solve new problems. It involves transferring knowledge from one context or domain to another. In machine learning, transfer learning can improve the initial skill of the model on a new task, the rate of improvement during training, and the converged skill of the trained model. Transfer learning is useful when there is not enough data to train a network from scratch, when a pre-trained network based on a similar task already exists, or when dealing with tasks that would otherwise require a large amount of data. It can improve the efficiency and effectiveness of machine learning algorithms and allows one to leverage past knowledge to learn new things more quickly and effectively.

About the Author

Dr. Barak Or is a professional in the field of artificial intelligence and sensor fusion. He is a researcher, lecturer, and entrepreneur who has published numerous patents and articles in professional journals. Dr. Or leads the MetaOr Artificial Intelligence firm. He founded ALMA Tech. LTD, which holds patents in the fields of AI and navigation. He has worked with Qualcomm as a DSP and machine learning algorithms expert. He completed his Ph.D. in machine learning for sensor fusion at the University of Haifa, Israel. He holds M.Sc. (2018) and B.Sc. (2016) degrees in Aerospace Engineering and a B.A. in Economics and Management (2016, Cum Laude) from the Technion, Israel Institute of Technology. He has received several prizes and research grants from the Israel Innovation Authority, the Israeli Ministry of Defense, and the Israeli Ministry of Economy and Industry. In 2021, he was nominated by the Technion for “graduate achievements” in the field of High-tech.

Website: www.metaor.ai | LinkedIn: www.linkedin.com/in/barakor/ | YouTube: www.youtube.com/channel/UCYDidZ8GUzUy_tYtxvVjRiQ

More resources:

[1] A Gentle Introduction to Transfer Learning for Deep Learning by Jason Brownlee: Link

[2] Transfer learning and fine-tuning by the TensorFlow team: Link

[3] NIPS 2016 tutorial: “Nuts and bolts of building AI applications using Deep Learning” by Andrew Ng. Link

[4] A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning by Dipanjan Sarkar. Link

[5] Oquab, M., Bottou, L., Laptev, I., & Sivic, J. (2014). Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1717–1724).

[6] Rozantsev, A., Salzmann, M., & Fua, P. (2018). Beyond sharing weights for deep domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
