Published December 21, 2021 | Version v1
Publication | Open

Fruit-classification model resilience under adversarial attack

  • Bahria University

Description

Abstract: An accurate and robust fruit image classifier can have a variety of real-life and industrial applications, including automated pricing, intelligent sorting, and information extraction. This paper demonstrates how adversarial training can enhance the robustness of fruit image classifiers. Past research in deep-learning-based fruit image classification has focused solely on attaining the highest possible model accuracy. However, even the highest-accuracy models remain susceptible to adversarial attacks, which pose serious problems for such systems in practice. Because a robust fruit classifier can only be developed with the aid of a dataset of fruit images photographed in realistic settings (rather than in controlled laboratory settings), a new dataset of over three thousand fruit images belonging to seven fruit classes is presented. Each image is carefully selected so that its classification poses a significant challenge for the proposed classifiers. Three Convolutional Neural Network (CNN)-based classifiers are proposed: 1) IndusNet, 2) fine-tuned VGG16, and 3) fine-tuned MobileNet. Fine-tuned VGG16 produced the best test-set accuracy of 94.82%, compared to 92.32% and 94.28% for the other two models, respectively. Fine-tuned MobileNet proved to be the most efficient model, with a test time of 9 ms/step compared to 28 ms/step and 29 ms/step for the other two models. The empirical evidence presented demonstrates that adversarial training enables fruit image classifiers to resist attacks crafted with the Fast Gradient Sign Method (FGSM), while simultaneously improving their robustness against other noise forms, including Gaussian, salt-and-pepper, and speckle noise.
For example, when the amplitude of the FGSM perturbations was kept at 0.1, adversarial training improved the fine-tuned VGG16's accuracy on adversarial images by around 18% (from 76.6% to 94.82%), while simultaneously improving its accuracy on fruit images corrupted with salt-and-pepper noise by around 8% (from 69.82% to 77.85%). The other reported results follow the same pattern and demonstrate the effectiveness of adversarial training as a means of enhancing the robustness of fruit image classifiers.
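The two perturbation types discussed in the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's implementation: it applies FGSM to a tiny hand-written logistic model (names `w`, `b`, `fgsm`, and `salt_and_pepper` are hypothetical), whereas the paper attacks full CNN classifiers. The FGSM step itself (a move of amplitude `eps` in the direction of the sign of the input gradient) and the salt-and-pepper corruption follow the standard definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny logistic "classifier" standing in for a CNN: p(y=1|x) = sigmoid(w.x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y):
    # Binary cross-entropy loss of the tiny model on input x with label y.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(x, y):
    # Gradient of the BCE loss w.r.t. the INPUT x; for this model it is
    # simply (p - y) * w, which is all FGSM needs.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps=0.1):
    # FGSM: step of amplitude eps in the direction of the sign of the
    # input gradient, i.e. the direction that (to first order) maximizes the loss.
    return x + eps * np.sign(input_gradient(x, y))

def salt_and_pepper(img, amount=0.05):
    # Flip a random fraction `amount` of pixels to 0 ("pepper") or 1 ("salt").
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(img.dtype)
    return out
```

Note that the FGSM perturbation is bounded: every component of `fgsm(x, y, eps) - x` has magnitude exactly `eps`, which is why the abstract reports results at a fixed amplitude (0.1). Adversarial training then simply mixes such perturbed examples into the training set.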


Files

s42452-021-04917-6.pdf

Files (13.0 MB)

md5:25bfe5cd682ab5099d063ddc2d3ddcd0
13.0 MB

Additional details


Identifiers

Other
https://openalex.org/W4200611412
DOI
10.1007/s42452-021-04917-6

GreSIS Basics Section

Is Global South Knowledge
Yes
Country
Pakistan

References

  • https://openalex.org/W1912982817
  • https://openalex.org/W1949503316
  • https://openalex.org/W1958968158
  • https://openalex.org/W1963939054
  • https://openalex.org/W1991847244
  • https://openalex.org/W2060117429
  • https://openalex.org/W2091548424
  • https://openalex.org/W2108598243
  • https://openalex.org/W2166828762
  • https://openalex.org/W2180612164
  • https://openalex.org/W2499334499
  • https://openalex.org/W2535873859
  • https://openalex.org/W2543927648
  • https://openalex.org/W2758007480
  • https://openalex.org/W2798302089
  • https://openalex.org/W2887037551
  • https://openalex.org/W2888630264
  • https://openalex.org/W2889601771
  • https://openalex.org/W2891621551
  • https://openalex.org/W2897350321
  • https://openalex.org/W2919115771
  • https://openalex.org/W2960224263
  • https://openalex.org/W2962700793
  • https://openalex.org/W2962761044
  • https://openalex.org/W2963108767
  • https://openalex.org/W2963912358
  • https://openalex.org/W2975191341
  • https://openalex.org/W2976984878
  • https://openalex.org/W2977084122
  • https://openalex.org/W2979544394
  • https://openalex.org/W2980620524
  • https://openalex.org/W2980931795
  • https://openalex.org/W2996480162
  • https://openalex.org/W2998310459
  • https://openalex.org/W3014697957
  • https://openalex.org/W3017390625
  • https://openalex.org/W3020238878
  • https://openalex.org/W3024956434
  • https://openalex.org/W3026708784
  • https://openalex.org/W3036664904
  • https://openalex.org/W3040455971
  • https://openalex.org/W3048391978
  • https://openalex.org/W3094344483
  • https://openalex.org/W3103557498
  • https://openalex.org/W3111114947
  • https://openalex.org/W3111818035
  • https://openalex.org/W3120820701
  • https://openalex.org/W3187289500