Interpretable Image Classification Using Attribute-Based KNN with Handcrafted Visual and Spatial Features
Muhammad Ismail, Zulfiqar Ali
Pages - 44 - 60     |    Revised - 01-10-2025     |    Published - 31-10-2025
Published in International Journal of Image Processing (IJIP)
Volume - 18   Issue - 3    |    Publication Date - October 2025
KEYWORDS
Image Classification, Attribute-Based KNN, Handcrafted Features, Spatial Attributes, Interpretable Machine Learning.
ABSTRACT
Image classification remains a fundamental challenge in computer vision, with applications in retrieval, recognition, and scene understanding. This study introduces a transparent and interpretable framework for image classification using the K-Nearest Neighbors (KNN) algorithm. The approach leverages handcrafted visual features (color, pattern, shape, and texture) together with spatial attributes derived from bounding box coordinates. These features are encoded in a ternary scheme representing presence, absence, or uncertainty, enabling consistent similarity comparisons. The proposed model was systematically evaluated under varying k values, multiple distance metrics (Euclidean, Cityblock, Cosine, and Correlation), and alternative decision rules (Nearest, Consensus, Random). Experimental results demonstrate that the choice of distance metric and neighborhood size significantly affects performance, with the Cityblock metric and k = 1 yielding the highest accuracy. Importantly, the framework scales effectively to larger datasets while maintaining strong interpretability, offering a balanced alternative to opaque deep learning models. These findings highlight the potential of attribute-based KNN as a lightweight, human-understandable solution for image classification in both research and resource-constrained practical applications.
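
The scheme described in the abstract can be summarized in a few lines of code. Below is a minimal Python sketch of attribute-based KNN with ternary encoding: the attribute vocabulary, the toy vectors, the class labels, and the helper names (cityblock, knn_predict) are illustrative assumptions for exposition, not the authors' implementation, which the abstract does not specify.

import numpy as np
from collections import Counter

# Ternary attribute encoding: +1 = present, -1 = absent, 0 = uncertain.
# Each image is a fixed-length vector of handcrafted visual attributes
# (color, pattern, shape, texture) plus spatial attributes derived from
# bounding box coordinates; the vectors below are toy values.
train_X = np.array([
    [ 1, -1,  1,  0,  1],
    [-1,  1,  0,  1, -1],
    [ 1,  1, -1, -1,  1],
])
train_y = np.array(["class_a", "class_b", "class_c"])

def cityblock(a, b):
    # Cityblock (L1) distance, reported in the abstract as the best-performing metric.
    return np.sum(np.abs(a - b))

def knn_predict(x, X, y, k=1, metric=cityblock, rule="nearest"):
    # Classify x from its k nearest training vectors under the given metric.
    dists = np.array([metric(x, xi) for xi in X])
    idx = np.argsort(dists)[:k]
    if rule == "nearest":      # label of the single closest neighbor
        return y[idx[0]]
    if rule == "consensus":    # majority vote among the k neighbors
        return Counter(y[idx]).most_common(1)[0][0]
    if rule == "random":       # random choice among the k neighbors
        return np.random.choice(y[idx])
    raise ValueError(f"unknown rule: {rule}")

query = np.array([1, -1, 1, 1, 0])
print(knn_predict(query, train_X, train_y, k=1, rule="nearest"))  # -> class_a

On this toy data, the query's single closest training vector under the Cityblock metric determines the label when k = 1; the Consensus rule generalizes this to a majority vote over the k neighbors.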
MANUSCRIPT AUTHORS
Mr. Muhammad Ismail
Department of Computer Science, NFC Institute of Engineering and Fertilizer Research (NFC-IEFR), Faisalabad, 38000 - Pakistan
muhammad.ismail@iefr.edu.pk
Mr. Zulfiqar Ali
Department of Computer Science, NFC Institute of Engineering and Fertilizer Research (NFC-IEFR), Faisalabad, 38000 - Pakistan

