Comparative Evaluation of Parameter-Efficient Fine-Tuning Strategies for Continual Image Classification

Authors

  • Nancy Agarwal, Galgotias University, India
  • Alok Singh Chauhan, Galgotias University, India
  • Patrick Bours, Norwegian University of Science and Technology, Norway

DOI:

https://doi.org/10.26877/asset.v8i2.2787

Keywords:

Catastrophic forgetting, Parameter-Efficient Fine-Tuning (PEFT), Continual Learning, Transfer Learning, Adapters, LoRA, ResNet-18, ResNet-50, CIFAR-100

Abstract

Catastrophic forgetting remains a major challenge in continual transfer learning, where performance on earlier tasks degrades after sequential adaptation. While full fine-tuning updates all parameters and achieves strong performance on new tasks, it is computationally expensive and prone to forgetting. This study compares parameter-efficient fine-tuning (PEFT) methods—adapters, additive learning, side-tuning, LoRA, and zero-initialized layers—against full fine-tuning on CIFAR-100 using a two-stage protocol: task-A (classes 0–49) followed by task-B (classes 50–99), evaluated on ResNet-18 and ResNet-50. Results are reported as mean ± standard deviation over three runs (n = 3), with retention measured using a Swapback-based recall method that isolates true forgetting (Δ). Across both architectures, all PEFT methods maintain task-A knowledge (Δ = 0.00), while full fine-tuning exhibits forgetting (Δ = 0.31 on ResNet-18; Δ = 0.20 on ResNet-50). PEFT methods achieve competitive task-B performance while updating only 0.22–4.49% of parameters. Notably, LoRA on ResNet-50 achieves the highest task-B accuracy (0.82) with only 0.93% parameter updates and no forgetting, slightly outperforming full fine-tuning (0.81). These findings highlight PEFT as an efficient and stable alternative for scalable continual transfer learning.
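To make the described setup concrete, the sketch below shows one way to realize a LoRA-style adaptation of a frozen ResNet-18 head together with the two-stage CIFAR-100 split (task-A: classes 0–49, task-B: classes 50–99). It is a minimal illustration assuming a PyTorch/torchvision environment; the rank, adapter placement, and training details are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch: LoRA-style low-rank update on a frozen linear layer,
    # plus the two-stage CIFAR-100 task split (classes 0-49, then 50-99).
    # Rank, adapter placement, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import datasets, transforms, models

    class LoRALinear(nn.Module):
        """Frozen base linear layer plus a trainable low-rank residual (W + B A)."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False            # base weights stay fixed
            self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

    # Backbone (assumed pretrained on task-A) with a LoRA-adapted 50-way head.
    backbone = models.resnet18(weights=None)
    backbone.fc = LoRALinear(nn.Linear(backbone.fc.in_features, 50), rank=8)
    for name, p in backbone.named_parameters():
        if "lora_" not in name:
            p.requires_grad = False                # only LoRA parameters train

    # Two-stage protocol: task-A uses classes 0-49, task-B uses classes 50-99
    # (task-B labels would be remapped to 0-49 before training the 50-way head).
    tfm = transforms.Compose([transforms.ToTensor()])
    cifar = datasets.CIFAR100(root="./data", train=True, download=True, transform=tfm)
    task_a = torch.utils.data.Subset(cifar, [i for i, y in enumerate(cifar.targets) if y < 50])
    task_b = torch.utils.data.Subset(cifar, [i for i, y in enumerate(cifar.targets) if y >= 50])

    # Report the fraction of trainable parameters, analogous to the 0.22-4.49% range reported.
    trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
    total = sum(p.numel() for p in backbone.parameters())
    print(f"Trainable fraction: {100 * trainable / total:.2f}%")

Because the base weights are frozen and only the low-rank factors receive gradients, task-A knowledge stored in the backbone is left untouched during task-B adaptation, which is the mechanism behind the Δ = 0.00 retention reported for the PEFT methods.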

Author Biographies

  • Nancy Agarwal, Galgotias University

    School of Computer Applications and Technology, Galgotias University, Greater Noida 203201, India

  • Alok Singh Chauhan, Galgotias University

    School of Computer Applications and Technology, Galgotias University, Greater Noida 203201, India

  • Patrick Bours, Norwegian University of Science and Technology

    Department of Information Security and Communication Technology, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway


Published

2026-03-18