Feature-Instance Based Fine-Tuning In Transfer Learning Model

dc.contributor.author Wanjiku, Raphael Ngigi
dc.date.accessioned 2024-10-15T08:47:15Z
dc.date.available 2024-10-15T08:47:15Z
dc.date.issued 2024-10-15
dc.identifier.citation WanjikuRN2024 en_US
dc.identifier.uri http://localhost/xmlui/handle/123456789/6497
dc.description PhD in Information Technology en_US
dc.description.abstract Transfer learning is a deep learning approach that reuses an existing model to classify a new task. The approach involves fine-tuning a pre-trained model on target data. Fine-tuning involves freezing and tuning layers within the network to reduce the model's poor adaptation to the target task. Poor domain adaptation is a significant issue in transfer learning due to differences between the distributions of the source and target domains. Various fine-tuning approaches have been proposed to tackle the problem, the most common being instance-based and feature-based approaches. Despite the many documented fine-tuning approaches in transfer learning, achieving higher accuracy in the presence of poor domain adaptation remains challenging. This research examines a feature-instance-based approach in which a subset of data objects with similar low-level characteristics is selected as the target dataset. Signed weight instances are used as a routing decision network to determine which filtering layers must be frozen during training. The approach leads to a model that conflates textural features when selecting target samples and selects the convolutional layers with the largest number of positive feature-map elements in the transfer learning process. The study uses nine image datasets: ChestX-ray8, CIFAR-10, MNIST, CIFAR-100, Fashion-MNIST, Stanford Dogs 120, Caltech 256, ISIC 2016 and MIT Indoor Scenes, on five pre-trained models: VGG16, DenseNet169, MobileNetV2, InceptionV3 and ResNet50. The experimental approach yields better convolutional networks in the transfer learning process, with improvements of between 3.12% and 7.69% from selecting quality data points and between 0.24% and 13.04% from selecting suitable layers. These improvements compare well with the standard accuracies and some previous studies. Keywords: Transfer Learning, Domain Adaptation, Distance Measures, Features Conflation, Layer Selection. en_US
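The layer-selection idea in the abstract (keeping trainable only the convolutional layers with the most positive feature-map elements, and freezing the rest) can be sketched in plain Python. This is a minimal illustration, not the thesis's actual implementation: the function names, the `k` parameter, and the toy feature maps are all assumptions.

```python
# Hedged sketch of positive-element-based layer selection for fine-tuning.
# Each layer is represented by its flattened feature-map activations.

def positive_counts(feature_maps):
    """Count the positive elements in each layer's flattened feature map."""
    return [sum(1 for v in fmap if v > 0) for fmap in feature_maps]

def select_trainable_layers(feature_maps, k):
    """Return indices of the k layers with the most positive activations;
    the remaining layers would be frozen during fine-tuning."""
    counts = positive_counts(feature_maps)
    ranked = sorted(range(len(counts)), key=lambda i: counts[i], reverse=True)
    return sorted(ranked[:k])

# Toy example: three layers' flattened feature maps (illustrative values).
maps = [[0.5, -0.2, 0.1],   # layer 0: 2 positive elements
        [-0.3, -0.1, 0.2],  # layer 1: 1 positive element
        [0.4, 0.9, 0.7]]    # layer 2: 3 positive elements
trainable = select_trainable_layers(maps, k=2)  # -> [0, 2]; layer 1 frozen
```

In a real transfer-learning pipeline, the frozen layers would have their weights held fixed (e.g. `requires_grad = False` in PyTorch or `layer.trainable = False` in Keras) while the selected layers are fine-tuned on the target data.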
dc.description.sponsorship Dr. Lawrence Nderu, PhD, JKUAT, Kenya; Dr. Michael Kimwele, PhD, JKUAT, Kenya en_US
dc.language.iso en en_US
dc.publisher JKUAT-COPAS en_US
dc.subject Feature-Instance en_US
dc.subject Transfer Learning Model en_US
dc.subject Fine-Tuned Model en_US
dc.subject Learning Approach en_US
dc.title Feature-Instance Based Fine-Tuning In Transfer Learning Model en_US
dc.type Thesis en_US

