[1] Holzinger A. The next frontier:AI we can really trust[C]. Bilbao:Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2021:427-440.
[2] 左斌, 李菲菲. 基于注意力机制和Inf-Net的新冠肺炎图像分割方法[J]. 电子科技, 2023, 36(2):22-28.
Zuo Bin, Li Feifei. An effective segmentation method for COVID-19 CT image based on attention mechanism and Inf-Net[J]. Electronic Science and Technology, 2023, 36(2):22-28.
[3] 赵晋, 李菲菲. 一种基于GAN的轻量级水墨画风格迁移模型[J]. 电子科技, 2023, 36(2):81-86.
Zhao Jin, Li Feifei. A GAN-based lightweight style transfer model for ink painting[J]. Electronic Science and Technology, 2023, 36(2):81-86.
[4] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6):84-90.
[5] Ren S, He K, Girshick R, et al. Faster R-CNN:Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6):1137-1149.
[6] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2):386-397.
[7] Zhang C, Bengio S, Hardt M, et al. Understanding deep learning (still) requires rethinking generalization[J]. Communications of the ACM, 2021, 64(3):107-115.
[8] Patrini G, Rozza A, Menon A K, et al. Making deep neural networks robust to label noise:A loss correction approach[C]. Honolulu:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:1944-1952.
[9] Tanaka D, Ikami D, Yamasaki T, et al. Joint optimization framework for learning with noisy labels[C]. Salt Lake City:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:5552-5560.
[10] Han B, Yao Q, Yu X, et al. Co-teaching:Robust training of deep neural networks with extremely noisy labels[C]. Montreal:Proceedings of the Conference on Neural Information Processing Systems, 2018:8527-8537.
[11] Wei H, Feng L, Chen X, et al. Combating noisy labels by agreement:A joint training method with co-regularization[C]. Seattle:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020:13726-13735.
[12] Li J, Socher R, Hoi S C H. DivideMix:Learning with noisy labels as semi-supervised learning[C]. Addis Ababa:Proceedings of the International Conference on Learning Representations, 2020:146-167.
[13] Nishi K, Ding Y, Rich A, et al. Augmentation strategies for learning with noisy labels[C]. Online:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021:8022-8031.
[14] Karim N, Rizve M N, Rahnavard N, et al. UniCon:Combating label noise through uniform selection and contrastive learning[C]. New Orleans:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022:9666-9676.
[15] Jiang L, Zhou Z, Leung T, et al. MentorNet:Learning data-driven curriculum for very deep neural networks on corrupted labels[C]. Stockholm:Proceedings of the International Conference on Machine Learning, 2018:2304-2313.
[16] Malach E, Shalev-Shwartz S. Decoupling "when to update" from "how to update"[C]. Long Beach:Proceedings of the Conference on Neural Information Processing Systems, 2017:961-971.
[17] Arpit D, Jastrzębski S, Ballas N, et al. A closer look at memorization in deep networks[C]. Sydney:Proceedings of the International Conference on Machine Learning, 2017:233-242.
[18] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11):2278-2324.
[19] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images[R]. Toronto:University of Toronto, 2009.
[20] Song H, Kim M, Lee J G. SELFIE:Refurbishing unclean samples for robust deep learning[C]. Long Beach:Proceedings of the International Conference on Machine Learning, 2019:5907-5915.
[21] Reed S, Lee H, Anguelov D, et al. Training deep neural networks on noisy labels with bootstrapping[C]. San Diego:Proceedings of the International Conference on Learning Representations, 2015:1171-1189.
[22] Goldberger J, Ben-Reuven E. Training deep neural networks using a noise adaptation layer[C]. Toulon:Proceedings of the International Conference on Learning Representations, 2017:478-489.