Related to multi-task learning, transfer learning refers to applying knowledge gained in one domain to other, related domains. Humans excel at this: we recognize similarities between new problems and past experiences with ease. When children learn to walk, for instance, they combine their experience of crawling with what they observe when watching their parents or other children walk. Before you know it, your kid is running at the speed of light (or at least it feels that way).
Take image classification as an example. Imagine that you would like to classify lynx. Because the lynx is an incredibly shy felid, only a limited number of pictures of it actually exist. On the other hand, there are numerous cat pictures online (haven't we all been watching cute cat videos in the dead of night?). Why not use an image classifier trained on cats as a starting point for classifying lynx? Transfer learning is a long-standing problem in the machine learning community, with initial work dating back to the 1980s (Silver 2008). In recent years the focus on knowledge transfer has grown, with Andrew Ng predicting at NIPS 2016 that transfer learning and multi-task learning will be the next driver of value in machine learning (Ng 2016).
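The cat-to-lynx idea can be sketched in a few lines. The snippet below is a minimal, illustrative example, not a real classifier: the "pretrained" feature extractor stands in for a network trained on the abundant cat data, its weights are frozen, and only a small task-specific head is fitted to the scarce lynx examples. All data and weights here are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: in practice these weights
# would come from a network trained on the abundant cat data. Here they
# are random placeholders so the sketch runs standalone.
W_pretrained = rng.normal(scale=0.1, size=(64, 16))

def extract_features(images):
    """Frozen feature extractor: the transferred knowledge."""
    return np.tanh(images @ W_pretrained)

# The scarce target data: 20 labelled (fake) lynx/not-lynx examples.
X_lynx = rng.normal(size=(20, 64))
y_lynx = rng.integers(0, 2, size=20).astype(float)

# Transfer learning step: keep W_pretrained frozen and fit only a small
# task-specific head with a few steps of gradient descent.
features = extract_features(X_lynx)
w_head = np.zeros(16)
for _ in range(200):
    residual = features @ w_head - y_lynx
    w_head -= 0.1 * features.T @ residual / len(y_lynx)
```

Only the 16 head parameters are trained on the lynx data; the bulk of the model is reused as-is, which is exactly what makes the approach attractive when target data is scarce.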
There are several ways to address knowledge sharing, and they all attempt to make the most of the data used to train “data-hungry” machine learning models (Adadi 2021). Multi-task learning (Caruana 1997) differs from transfer learning in that it learns multiple related tasks in parallel, using a shared model representation to transfer knowledge between tasks. It has achieved significant results in various domains, including speech (Chen 2015), language (Collobert 2008), and image analysis (Wang 2009).
A diverse set of model architectures has been explored for multi-task learning (Zhang 2021), ranging from simple linear models to large convolutional networks. The strength of the approach lies in architectures that capture universal aspects shared across problems while still performing well on each individual task.
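One of the simplest such architectures is hard parameter sharing: a shared trunk learns a common representation, while lightweight task-specific heads produce each task's output. The toy sketch below (synthetic data, a single shared tanh layer, and two hypothetical regression tasks) illustrates the key mechanism, namely that gradients from every task update the shared weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hard parameter sharing: one shared layer, plus one head per task.
W_shared = rng.normal(scale=0.1, size=(8, 4))           # shared representation
heads = {"task_a": np.zeros(4), "task_b": np.zeros(4)}  # task-specific heads

# Synthetic data for two related regression tasks (placeholders).
X = rng.normal(size=(50, 8))
y = {"task_a": X[:, 0], "task_b": X[:, 0] + 0.1 * X[:, 1]}

lr = 0.05
for _ in range(500):
    for task, w_head in heads.items():
        h = np.tanh(X @ W_shared)    # shared representation
        err = h @ w_head - y[task]   # this task's residual
        # Gradients flow into both the head and the shared layer, so
        # every task's data helps shape the shared representation.
        heads[task] = w_head - lr * h.T @ err / len(X)
        W_shared -= lr * X.T @ ((err[:, None] * w_head) * (1 - h**2)) / len(X)
```

Because both tasks depend on the same underlying signal, each task effectively augments the other's training data through `W_shared`, which is the intuition behind the data-efficiency gains of multi-task learning.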
In an upcoming series of articles (stay tuned!) we will present multi-task learning for virtual flow metering (VFM). We will highlight how the method addresses challenges faced by conventional data-driven VFMs: adhering more closely to physical expectations, improving predictive performance, and reducing model maintenance requirements.