Transfer learning is a machine learning technique that involves reusing a pre-trained model on a new task or domain.
The new task is different from the one the model was originally trained on.
The pre-trained model acts as a starting point, or a foundation, for the new model.
Transfer learning is useful when the new task has limited labeled data, and training a new model from scratch is not feasible.
Transfer learning is widely used in fields such as natural language processing, computer vision, and speech recognition.
There are two main types of transfer learning: fine-tuning and feature extraction.
In fine-tuning, the pre-trained model is further trained on the new task, typically with a small learning rate and for relatively few epochs, so that the pre-trained weights are gently adapted rather than overwritten.
In feature extraction, the pre-trained layers are frozen and used as a fixed feature extractor; only the replaced last layer(s) are trained on the new task.
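To make the feature-extraction idea concrete, here is a minimal sketch using only NumPy, with no real pre-trained network: a randomly initialized weight matrix stands in for a frozen pre-trained backbone, and only a new linear head is trained on the new task. All names and data here are illustrative assumptions, not part of any actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone's weights. In feature extraction
# these are frozen: we never update W_frozen during training.
W_frozen = rng.normal(size=(4, 8))  # maps 4 raw inputs -> 8 features

def extract_features(x):
    """Frozen 'backbone': a fixed nonlinear transform of the input."""
    return np.tanh(x @ W_frozen)

# Toy labeled data for the *new* task (deliberately small, matching the
# limited-data setting where transfer learning helps).
X = rng.normal(size=(64, 4))
true_w = rng.normal(size=8)
y = (extract_features(X) @ true_w > 0).astype(float)  # binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The new head is the only trainable part: a logistic-regression layer
# fitted on the frozen features with plain gradient descent.
w_head = np.zeros(8)
feats = extract_features(X)
for _ in range(500):
    p = sigmoid(feats @ w_head)
    grad = feats.T @ (p - y) / len(y)
    w_head -= 0.5 * grad

acc = ((sigmoid(feats @ w_head) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

In a real framework the pattern is the same: freeze the backbone's parameters, replace the final layer(s), and optimize only the new head. Fine-tuning differs only in that the backbone's weights are also updated, usually with a much smaller learning rate.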