Keywords: Visual Concept Pruning, Task-Specific Pruning, Dynamic Pruning, Transfer Learning
TL;DR: This paper introduces visual concept pruning to improve transfer learning efficiency and interpretability, achieving higher accuracy and reduced computational costs on ImageNet-V2, VTAB, and CIFAR-10/100.
Abstract: In this paper, we propose a novel methodology that combines task-specific pruning using concept bottleneck models with dynamic pruning during training via regularization. Our approach ensures that only task-relevant visual concepts are retained, yielding compact models that achieve superior performance while reducing computational costs. We evaluate our methodology on three widely used datasets: ImageNet-V2, VTAB (Visual Task Adaptation Benchmark), and CIFAR-10/100. Experimental results demonstrate significant improvements in accuracy, model size, FLOPs, and inference time compared to baseline models and traditional global pruning methods. For instance, our methodology achieves a 56.2% reduction in model size and a 54.3% reduction in FLOPs while outperforming alternative approaches in accuracy across all datasets. By focusing on task-specific visual concepts and integrating pruning into the training process, our methodology offers a scalable and efficient solution for transfer learning across diverse domains. These findings underscore the potential of visual concept pruning as a cornerstone for developing interpretable and resource-efficient deep learning models.
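To make the mechanism concrete, the following is a minimal sketch (not the authors' implementation) of the general idea described in the abstract: an L1-style sparsity penalty on per-concept gate weights in a concept-bottleneck layer drives task-irrelevant concepts toward zero during training, after which low-magnitude concepts are pruned. All names, thresholds, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative, not from the paper): a concept-bottleneck
# layer maps backbone features to K concept activations, each scaled by a
# learnable gate. An L1 penalty on the gates encourages task-irrelevant
# concepts to shrink toward zero during training.
n_features, n_concepts = 16, 8
W = rng.normal(size=(n_features, n_concepts))  # feature-to-concept weights
gates = np.ones(n_concepts)                    # per-concept gates (learned)

def l1_penalty(gates, lam=0.01):
    """Sparsity-inducing regularizer added to the task loss."""
    return lam * np.sum(np.abs(gates))

def prune_concepts(W, gates, threshold=0.05):
    """Drop concepts whose learned gate magnitude fell below the threshold."""
    keep = np.abs(gates) >= threshold
    return W[:, keep], gates[keep], keep

# Simulated gate values after training: L1 drove half the gates near zero.
gates = np.array([0.9, 0.01, 0.7, 0.0, 0.8, 0.02, 0.6, 0.03])
W_pruned, gates_pruned, keep = prune_concepts(W, gates)
print(W_pruned.shape)  # -> (16, 4): only task-relevant concepts remain
```

The compact pruned layer is then fine-tuned on the downstream task, which is where the reported reductions in model size and FLOPs would come from in a full pipeline.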
Submission Number: 20