Abstract:
Transfer learning is a deep learning approach that reuses an existing model to classify a new task. The approach involves fine-tuning the model on target data; fine-tuning freezes some layers of the network and retrains others to mitigate the model's poor adaptation to the target task. Poor domain adaptation is a significant issue in transfer learning because the source and target domains have different distributions. Various fine-tuning approaches have been proposed to tackle the problem, the most common being instance-based and feature-based approaches. Despite the many documented fine-tuning approaches, achieving high accuracy under poor domain adaptation remains challenging. This research presents a feature-instance-based approach in which a subset of data objects with similar low-level characteristics is selected as the target dataset. Signed instance weights are then used as a routing decision network to determine which filtering layers must be frozen during training. The approach yields a model that uses textural features to select target samples and fine-tunes the convolutional layers with the largest number of positive feature-map elements during transfer learning. The study uses nine image datasets: ChestX-ray8, CIFAR-10, MNIST, CIFAR-100, Fashion-MNIST, Stanford Dogs 120, Caltech 256, ISIC 2016, and MIT Indoor Scenes, and five pre-trained models: VGG16, DenseNet169, MobileNetV2, InceptionV3, and ResNet50. The experiments produce better convolutional networks in the transfer learning process, with accuracy improvements of 3.12% to 7.69% from selecting quality data points and 0.24% to 13.04% from selecting suitable layers. These improvements compare favourably with the baseline accuracies and with some previous studies.
Keywords: Transfer Learning, Domain Adaptation, Distance Measures, Feature Conflation, Layer Selection.
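
As an illustration of the layer-selection idea summarised above, the following is a minimal sketch, not the paper's implementation: it assumes a Keras pre-trained model, and the helper name positive_fraction_per_conv_layer, the median threshold, and the random stand-in target batch are all assumptions. It freezes convolutional layers whose feature maps contain few positive elements on target data and leaves the rest trainable.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

def positive_fraction_per_conv_layer(model, images):
    """Fraction of positive feature-map elements per conv layer on a batch."""
    conv_layers = [l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D)]
    # Probe model that exposes every convolutional layer's output.
    probe = tf.keras.Model(model.input, [l.output for l in conv_layers])
    outputs = probe(images, training=False)
    return {layer.name: float(np.mean(out.numpy() > 0))
            for layer, out in zip(conv_layers, outputs)}

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
# Stand-in for a batch of target-domain images (an assumption for this sketch).
target_batch = np.random.rand(8, 224, 224, 3).astype("float32")

fractions = positive_fraction_per_conv_layer(base, target_batch)
threshold = np.median(list(fractions.values()))  # illustrative cut-off
for layer in base.layers:
    if layer.name in fractions:
        # Keep trainable only the conv layers with the most positive
        # feature-map elements; freeze the rest before fine-tuning.
        layer.trainable = fractions[layer.name] >= threshold
```

After this selection step, the model would be compiled and fine-tuned on the selected target samples in the usual way; the median cut-off is only one plausible way to split "most positive" layers from the rest.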