Extended Density-aware Cross-scale Transformer for Multimodal Atmospheric Degradation in Robust Object Classification
Fiston Oshasha Oshasha, Francklin Mwamba Kande, Saint Jean Djungu, Muka Kabeya Arsene, Jacques Ilolo Ipan, Ruben Mfunyi Kabongo
Pages - 210 - 233     |    Revised - 15-12-2025     |    Published - 31-12-2025
Published in International Journal of Computer Science and Security (IJCSS)
Volume - 19   Issue - 5    |    Publication Date - December 2025
KEYWORDS
Computer Vision, Atmospheric Degradation, Transformer Architecture, Multi-modal Learning, Robust Classification, Weather Conditions, Density-aware Networks.
ABSTRACT
Real-world computer vision systems suffer significant performance degradation under adverse conditions. Building on our previous EDCST framework for fog-degraded imagery, this work introduces EDCST-MM (Multi-Modal), an extended architecture that handles 16 atmospheric and visual degradation conditions simultaneously. Unlike traditional vision systems that require condition-specific models, EDCST-MM leverages unified density-aware encoding, cross-scale feature fusion, and adaptive transformer blocks to achieve robust classification across fog, rain, darkness, blur, and noise scenarios.
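
To make the architectural vocabulary concrete, the following is a minimal PyTorch sketch of what density-aware encoding and cross-scale fusion could look like. The module names, tensor shapes, and the density-modulation scheme are illustrative assumptions on our part, not the authors' published implementation.

import torch
import torch.nn as nn

class DensityAwareEncoder(nn.Module):
    # Estimates a per-pixel degradation-density map and uses it to modulate
    # patch embeddings -- one plausible reading of "density-aware encoding".
    def __init__(self, dim=256):
        super().__init__()
        self.density_head = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.GELU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())   # density in [0, 1]
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, x):                                   # x: (B, 3, H, W)
        density = self.density_head(x)                      # (B, 1, H, W)
        tokens = self.patch_embed(x * (1.0 + density))      # emphasize degraded regions
        return tokens.flatten(2).transpose(1, 2), density   # (B, N, dim), density map

class CrossScaleFusion(nn.Module):
    # Cross-attention from fine-scale tokens to coarse-scale tokens, a common
    # way to realize "cross-scale feature fusion".
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fine, coarse):                        # (B, Nf, dim), (B, Nc, dim)
        fused, _ = self.attn(self.norm(fine), coarse, coarse)
        return fine + fused                                 # residual fusion

enc = DensityAwareEncoder()
tokens, density = enc(torch.randn(2, 3, 224, 224))          # tokens: (2, 196, 256)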

This work addresses the fundamental research question: Can a unified deep learning architecture handle diverse atmospheric and visual degradations without requiring condition-specific models or pre-processing restoration pipelines, while maintaining both robustness and computational efficiency for real-world deployment?

Evaluated on the CODaN dataset, the model reaches an average accuracy of 92.78%, an 18.6-percentage-point improvement over the best baseline (DeiT-S: 74.2%). The framework demonstrates exceptional robustness on atmospheric degradations (fog: 98.24%, rain: 97.73%, darkness: 97.64%) and strong performance under visual degradations (blur: 95.22%, structured noise: 85.90%). Accuracy remains above 95% on 13 of 16 conditions, though Gaussian noise remains challenging (47.80%).

These results validate the effectiveness of our multi-condition density encoding and condition-aware attention mechanisms while maintaining computational efficiency (21.3M parameters, 12ms GPU inference). EDCST-MM thus establishes a clear advance over existing approaches and represents a practical step toward deploying robust vision systems in real-world multi-degraded environments.
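
As a companion sketch, "condition-aware attention" can be read as gating a standard attention block with an embedding of the predicted degradation condition. Everything below (the soft condition mixture, the sigmoid gate, all module names) is an assumption for illustration, and the parameter count of this toy module will not match the paper's reported 21.3M.

import torch
import torch.nn as nn

NUM_CONDITIONS = 16  # the paper evaluates 16 degradation conditions

class ConditionAwareAttention(nn.Module):
    # Self-attention whose residual branch is gated by a soft mixture of
    # learned condition embeddings -- an assumed reading of "condition-aware".
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cond_head = nn.Linear(dim, NUM_CONDITIONS)      # condition logits
        self.cond_embed = nn.Embedding(NUM_CONDITIONS, dim)  # one vector per condition
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, tokens):                               # tokens: (B, N, dim)
        probs = self.cond_head(tokens.mean(dim=1)).softmax(dim=-1)  # (B, 16)
        cond_vec = probs @ self.cond_embed.weight            # (B, dim) soft embedding
        g = self.gate(cond_vec).unsqueeze(1)                 # (B, 1, dim) channel gate
        out, _ = self.attn(tokens, tokens, tokens)
        return tokens + g * out                              # condition-gated residual

block = ConditionAwareAttention()
print(sum(p.numel() for p in block.parameters()))            # toy-scale parameter count
print(block(torch.randn(2, 196, 256)).shape)                 # torch.Size([2, 196, 256])

The soft mixture keeps the gate differentiable, so under this reading the condition classifier and the backbone could be trained jointly end to end rather than in separate stages.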
REFERENCES
Ancuti, C. O., Ancuti, C., Timofte, R., & De Vleeschouwer, C. (2018). O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 754-762). https://doi.org/10.1109/CVPRW.2018.00119
Ancuti, C. O., Ancuti, C., Timofte, R., & De Vleeschouwer, C. (2020). NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 444-445). https://doi.org/10.1109/CVPRW50498.2020.00230
Anwar, S., & Barnes, N. (2020). Densely residual Laplacian super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3), 1192-1204. https://doi.org/10.1109/TPAMI.2020.3021732
Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization (arXiv:1907.02893). arXiv. https://arxiv.org/abs/1907.02893
Atrey, P. K., Hossain, M. A., El Saddik, A., & Kankanhalli, M. S. (2010). Multimodal fusion for multimedia analysis: A survey. Multimedia Systems, 16(6), 345-379. https://doi.org/10.1007/s00530-010-0182-0
Bai, Y., Mei, J., Yuille, A. L., & Xie, C. (2021). Are transformers more robust than CNNs? In Advances in Neural Information Processing Systems 34 (pp. 26831-26843). Curran Associates, Inc.
Baltrusaitis, T., Ahuja, C., & Morency, L. P. (2019). Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423-443. https://doi.org/10.1109/TPAMI.2018.2798607
Berman, D., Treibitz, T., & Avidan, S. (2016). Non-local image dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1674-1682). https://doi.org/10.1109/CVPR.2016.185
Bhojanapalli, S., Chakrabarti, A., Glasner, D., Li, D., Unterthiner, T., & Veit, A. (2021). Understanding robustness of transformers for image classification. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 10231-10241). https://doi.org/10.1109/ICCV48922.2021.01007
Bijelic, M., Gruber, T., Mannan, F., Kraus, F., Ritter, W., Dietmayer, K., & Heide, F. (2020). Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11682-11692). https://doi.org/10.1109/CVPR42600.2020.01170
Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., & Wang, M. (2022). Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the 2022 European Conference on Computer Vision Workshops (pp. 205-218). Springer. https://doi.org/10.1007/978-3-031-25066-8_9
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-end object detection with transformers. In Proceedings of the 2020 European Conference on Computer Vision (pp. 213-229). Springer. https://doi.org/10.1007/978-3-030-58452-8_13
Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018). Learning to see in the dark. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3291-3300). https://doi.org/10.1109/CVPR.2018.00347
Chen, L., Chu, X., Zhang, X., & Sun, J. (2022). Simple baselines for image restoration. In Proceedings of the 2022 European Conference on Computer Vision (pp. 17-33). Springer. https://doi.org/10.1007/978-3-031-20071-7_2
Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., & Le, Q. V. (2019). AutoAugment: Learning augmentation strategies from data. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 113-123). https://doi.org/10.1109/CVPR.2019.00020
Dai, D., Sakaridis, C., Hecker, S., & Van Gool, L. (2020). Curriculum model adaptation with synthetic and real data for semantic foggy scene understanding. International Journal of Computer Vision, 128(5), 1182-1204. https://doi.org/10.1007/s11263-019-01182-4
Dai, Z., Liu, H., Le, Q. V., & Tan, M. (2021). CoAtNet: Marrying convolution and attention for all data sizes. In Advances in Neural Information Processing Systems 34 (pp. 3965-3977). Curran Associates, Inc.
Dodge, S., & Karam, L. (2017). A study and comparison of human and deep learning recognition performance under visual distortions. In Proceedings of the 2017 26th International Conference on Computer Communication and Networks (pp. 1-7). https://doi.org/10.1109/ICCCN.2017.8038465
Dong, X., Bao, J., Chen, D., Zhang, W., Yu, N., Yuan, L., Chen, D., & Guo, B. (2022). CSWin transformer: A general vision transformer backbone with cross-shaped windows. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12124-12134). https://doi.org/10.1109/CVPR52688.2022.01181
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the 9th International Conference on Learning Representations. https://openreview.net/forum?id=YicbFdNTTy
Fattal, R. (2014). Dehazing using color-lines. ACM Transactions on Graphics, 34(1), Article 13. https://doi.org/10.1145/2651362
Geirhos, R., Jacobsen, J. H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11), 665-673. https://doi.org/10.1038/s42256-020-00257-z
Geirhos, R., Temme, C. R., Rauber, J., Schütt, H. H., Bethge, M., & Wichmann, F. A. (2018). Generalisation in humans and deep neural networks. In Advances in Neural Information Processing Systems 31 (pp. 7549-7561). Curran Associates, Inc.
He, K., Sun, J., & Tang, X. (2011). Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2341-2353. https://doi.org/10.1109/TPAMI.2010.168
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778). https://doi.org/10.1109/CVPR.2016.90
Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. In Proceedings of the 7th International Conference on Learning Representations. https://openreview.net/forum?id=HJz6tiCqYm
Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V., & Adam, H. (2019). Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (pp. 1314-1324). https://doi.org/10.1109/ICCV.2019.00140
Jaegle, A., Borgeaud, S., Alayrac, J.-B., Doersch, C., Ionescu, C., Ding, D., Koppula, S., Zoran, D., Brock, A., Shelhamer, E., Hénaff, O., Botvinick, M. M., Zisserman, A., Vinyals, O., & Carreira, J. (2022). Perceiver IO: A general architecture for structured inputs & outputs. In Proceedings of the 10th International Conference on Learning Representations. https://openreview.net/forum?id=fILj7WpI-g
Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., & Carreira, J. (2021). Perceiver: General perception with iterative attention. In Proceedings of the 38th International Conference on Machine Learning (pp. 4651-4664). PMLR. https://proceedings.mlr.press/v139/jaegle21a.html
Kamann, C., & Rother, C. (2020). Benchmarking the robustness of semantic segmentation models with respect to common corruptions. International Journal of Computer Vision, 129(2), 462-483. https://doi.org/10.1007/s11263-020-01383-2
Kar, A., Prakash, A., Liu, M.-Y., Cameracci, E., Yuan, J., Rusiniak, M., Acuna, D., Torralba, A., & Fidler, S. (2019). Meta-Sim: Learning to generate synthetic datasets. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (pp. 4551-4560). https://doi.org/10.1109/ICCV.2019.00465
Kar, O. F., Yeo, T., Atanov, A., & Zamir, A. (2022). 3D common corruptions and data augmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 18631-18641). https://doi.org/10.1109/CVPR52688.2022.01808
Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning (pp. 5156-5165). PMLR. https://proceedings.mlr.press/v119/katharopoulos20a.html
Kenk, M. A., & Hassaballah, M. (2020). DAWN: Vehicle detection in adverse weather nature dataset (arXiv:2008.05402). arXiv. https://arxiv.org/abs/2008.05402
Koh, P. W., Sagawa, S., Marklund, H., Xie, S. M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R. L., Gao, I., Lee, T., David, E., Stavness, I., Guo, W., Earnshaw, B., Haque, I., Beery, S. M., Leskovec, J., Kundaje, A., ... Liang, P. (2021). WILDS: A benchmark of in-the-wild distribution shifts. In Proceedings of the 38th International Conference on Machine Learning (pp. 5637-5664). PMLR. https://proceedings.mlr.press/v139/koh21a.html
Kupyn, O., Martyniuk, T., Wu, J., & Wang, Z. (2019). DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (pp. 8878-8887). https://doi.org/10.1109/ICCV.2019.00897
Li, B., Peng, X., Wang, Z., Xu, J., & Feng, D. (2017). AOD-Net: All-in-one dehazing network. In Proceedings of the 2017 IEEE International Conference on Computer Vision (pp. 4770-4778). https://doi.org/10.1109/ICCV.2017.511
Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., & Wang, Z. (2019). Benchmarking single-image dehazing and beyond. IEEE Transactions on Image Processing, 28(1), 492-505. https://doi.org/10.1109/TIP.2018.2867951
Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., & Hoi, S. C. H. (2021). Align before fuse: Vision and language representation learning with momentum distillation. In Advances in Neural Information Processing Systems 34 (pp. 9694-9705). Curran Associates, Inc.
Li, R., Pan, J., Li, Z., & Tang, J. (2020). Single image deblurring via implicit motion estimation. IEEE Transactions on Image Processing, 29, 6452-6463. https://doi.org/10.1109/TIP.2020.2994399
Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., & Timofte, R. (2021). SwinIR: Image restoration using Swin transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (pp. 1833-1844). https://doi.org/10.1109/ICCVW54120.2021.00210
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In Proceedings of the 2014 European Conference on Computer Vision (pp. 740-755). Springer. https://doi.org/10.1007/978-3-319-10602-1_48
Liu, X., Ma, Y., Shi, Z., & Chen, J. (2019). GridDehazeNet: Attention-based multi-scale network for image dehazing. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (pp. 7314-7323). https://doi.org/10.1109/ICCV.2019.00741
Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., & Guo, B. (2022). Swin transformer V2: Scaling up capacity and resolution. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12009-12019). https://doi.org/10.1109/CVPR52688.2022.01170
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 10012-10022). https://doi.org/10.1109/ICCV48922.2021.00986
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. In Proceedings of the 6th International Conference on Learning Representations. https://openreview.net/forum?id=rJzIBfZAb
Michaelis, C., Mitzkus, B., Geirhos, R., Rusak, E., Bringmann, O., Ecker, A. S., Bethge, M., & Brendel, W. (2019). Benchmarking robustness in object detection: Autonomous driving when winter is coming (arXiv:1907.07484). arXiv. https://arxiv.org/abs/1907.07484
Mintun, E., Kirillov, A., & Xie, S. (2021). On interaction between augmentations and corruptions in natural corruption robustness. In Advances in Neural Information Processing Systems 34 (pp. 3571-3583). Curran Associates, Inc.
Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., & Ng, A. Y. (2011). Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (pp. 689-696). Omnipress.
Oshasha, F., Mwamba, F., Djungu, S. J., & Mulenda, N. K. (2025). EDCST: Enhanced density-aware cross-scale transformer for robust object classification under atmospheric fog conditions. SSRN Electronic Journal. Advance online publication. https://doi.org/10.2139/ssrn.5773267
Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020). FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (pp. 11908-11915). AAAI Press. https://doi.org/10.1609/aaai.v34i07.6865
Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (2009). Dataset shift in machine learning. MIT Press.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. (2021). Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning (pp. 8748-8763). PMLR. https://proceedings.mlr.press/v139/radford21a.html
Recht, B., Roelofs, R., Schmidt, L., & Shankar, V. (2019). Do ImageNet classifiers generalize to ImageNet? In Proceedings of the 36th International Conference on Machine Learning (pp. 5389-5400). PMLR. https://proceedings.mlr.press/v97/recht19a.html
Rosenfeld, E., Ravikumar, P., & Risteski, A. (2021). The risks of invariant risk minimization. In Proceedings of the 9th International Conference on Learning Representations. https://openreview.net/forum?id=BbNIbVPJ-42
Sagawa, S., Koh, P. W., Hashimoto, T. B., & Liang, P. (2020). Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In Proceedings of the 8th International Conference on Learning Representations. https://openreview.net/forum?id=ryxGuJrFvS
Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9), 973-992. https://doi.org/10.1007/s11263-018-1072-8
Sakaridis, C., Dai, D., & Van Gool, L. (2021). ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 10765-10775). https://doi.org/10.1109/ICCV48922.2021.01059
Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., & Beyer, L. (2021). How to train your ViT? Data, augmentation, and regularization in vision transformers (arXiv:2106.10270). arXiv. https://arxiv.org/abs/2106.10270
Tan, M., & Le, Q. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (pp. 6105-6114). PMLR. https://proceedings.mlr.press/v97/tan19a.html
Taori, R., Dave, A., Shankar, V., Carlini, N., Recht, B., & Schmidt, L. (2020). Measuring robustness to natural distribution shifts in image classification. In Advances in Neural Information Processing Systems 33 (pp. 18583-18599). Curran Associates, Inc.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., & Jégou, H. (2021). Training data-efficient image transformers & distillation through attention. In Proceedings of the 38th International Conference on Machine Learning (pp. 10347-10357). PMLR. https://proceedings.mlr.press/v139/touvron21a.html
Tremblay, J., Prakash, A., Acuna, D., Brophy, M., Jampani, V., Anil, C., To, T., Cameracci, E., Boochoon, S., & Birchfield, S. (2018). Training deep networks with synthetic data: Bridging the reality gap by domain randomization. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 969-977). https://doi.org/10.1109/CVPRW.2018.00143
Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., & Madry, A. (2019). Robustness may be at odds with accuracy. In Proceedings of the 7th International Conference on Learning Representations. https://openreview.net/forum?id=SyxAb30cY7
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30 (pp. 5998-6008). Curran Associates, Inc.
Wang, W., Xie, E., Li, X., Fan, D.-P., Song, K., Liang, D., Lu, T., Luo, P., & Shao, L. (2021). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 568-578). https://doi.org/10.1109/ICCV48922.2021.00061
Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., & Li, H. (2022). Uformer: A general U-shaped transformer for image restoration. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 17683-17693). https://doi.org/10.1109/CVPR52688.2022.01716
Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., & Zhang, L. (2021). CvT: Introducing convolutions to vision transformers. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 22-31). https://doi.org/10.1109/ICCV48922.2021.00009
Xie, C., Wu, Y., van der Maaten, L., Yuille, A. L., & He, K. (2019). Feature denoising for improving adversarial robustness. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 501-509). https://doi.org/10.1109/CVPR.2019.00059
Xu, Z., Liu, D., Yang, J., Raffel, C., & Niethammer, M. (2021). Robust and generalizable visual representation learning via random convolutions. In Proceedings of the 9th International Conference on Learning Representations. https://openreview.net/forum?id=BVSM0x3EDK6
Yuan, K., Guo, S., Liu, Z., Zhou, A., Yu, F., & Wu, W. (2021). Incorporating convolution designs into visual transformers. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (pp. 579-588). https://doi.org/10.1109/ICCV48922.2021.00062
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., & Yoo, Y. (2019). CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (pp. 6023-6032). https://doi.org/10.1109/ICCV.2019.00612
Zamir, S. W., Arora, A., Gupta, S., Khan, S., Sun, G., Khan, F. S., Zhu, F., Shao, L., Xia, G.-S., & Yang, M.-H. (2022). Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5728-5739). https://doi.org/10.1109/CVPR52688.2022.00564
Zhang, H., & Patel, V. M. (2021). Density-aware single image de-raining using a multi-stream dense network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9), 3080-3095. https://doi.org/10.1109/TPAMI.2018.2869722
Zhang, J., Niu, Y., Zhang, J., Gu, S., Timofte, R., & Zuo, W. (2020). NTIRE 2020 challenge on perceptual extreme super-resolution: Methods and results. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 492-493). https://doi.org/10.1109/CVPRW50498.2020.00061
Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7), 3142-3155. https://doi.org/10.1109/TIP.2017.2662206
Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2017). Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 3(1), 47-57. https://doi.org/10.1109/TCI.2016.2644865
Zhu, Q., Mai, J., & Shao, L. (2015). A fast single image haze removal algorithm using color attenuation prior. IEEE Transactions on Image Processing, 24(11), 3522-3533. https://doi.org/10.1109/TIP.2015.2446191
MANUSCRIPT AUTHORS
Dr. Fiston Oshasha Oshasha
General Commissariat for Atomic Energy, Regional Center for Nuclear Studies of Kinshasa, Kinshasa - Democratic Republic of the Congo
fiston.oshasha.oshasha@cgea-rdc.org
Mr. Francklin Mwamba Kande
Health Sciences Research Institute, Kinshasa - Democratic Republic of the Congo
Mr. Saint Jean Djungu
Center for Research in Applied Computing, Kinshasa - Democratic Republic of the Congo
Mr. Muka Kabeya Arsene
General Commissariat for Atomic Energy, Regional Center for Nuclear Studies of Kinshasa, Kinshasa - Democratic Republic of the Congo
Mr. Jacques Ilolo Ipan
Faculty of Science and Technology, University of Kinshasa, Kinshasa - Democratic Republic of the Congo
Mr. Ruben Mfunyi Kabongo
Faculty of Science and Technology, University of Kinshasa, Kinshasa - Democratic Republic of the Congo