Conferences

JNIC 2024

Cybersecurity
AI

Sevilla, Spain

From May 27 to May 29, 2024

At JNIC 2024 in Sevilla, I presented the paper "Evaluating Robustness of Machine Learning Models against Adversarial Attacks: Techniques, Countermeasures, and Performance Analysis", which focuses on the security and resilience of machine learning systems in adversarial environments.

The work explores how ML models deployed in critical domains can be manipulated by adversarial attacks: carefully crafted inputs designed to mislead model predictions. In this study, I carried out a comprehensive evaluation of model robustness by analysing the impact of multiple attack techniques, including FGSM, HopSkipJump, and Carlini & Wagner (C&W), against a baseline deep neural network.
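To make the idea of a gradient-based attack concrete, here is a minimal FGSM sketch on a toy logistic model (the paper's experiments target a deep neural network; the model, weights, and epsilon here are illustrative assumptions, not the paper's setup). FGSM perturbs each input feature by a small step in the direction of the sign of the loss gradient with respect to the input.

```python
import math

# Toy FGSM sketch (illustrative only; the paper attacks a deep neural
# network, not this hand-rolled logistic model).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that x belongs to class 1 under a logistic model.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    # For binary cross-entropy loss, the gradient w.r.t. the input x is
    # (p - y) * w, so FGSM adds eps * sign((p - y) * w_i) per feature.
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

# Hypothetical example: a point the model classifies correctly as class 1.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.2], 1
x_adv = fgsm(w, b, x, y, eps=1.0)
print(predict(w, b, x))      # confidently class 1 before the attack
print(predict(w, b, x_adv))  # pushed below 0.5 by the perturbation
```

Even on this two-feature toy, a single signed-gradient step flips the prediction, which is the core vulnerability the paper's evaluation quantifies at scale.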

A key contribution of the research is the introduction of novel robustness metrics such as Accuracy under Attack, Attack Success Rate, Robustness Margin, and Confidence Score Stability. These metrics provide a multidimensional framework for assessing model resilience. The study also evaluates the effectiveness of several defence strategies, including Adversarial Training, Feature Squeezing, and Defensive Distillation, highlighting their strengths and limitations in mitigating adversarial threats.
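Two of these metrics can be sketched under commonly used definitions (the paper's exact formulas may differ, so treat these as assumptions): Accuracy under Attack is plain accuracy measured on adversarial inputs, and Attack Success Rate is the fraction of originally correct predictions the attack manages to flip.

```python
# Hedged sketch of two robustness metrics; definitions assumed, labels toy.

def accuracy_under_attack(y_true, y_pred_adv):
    # Accuracy computed on adversarially perturbed inputs.
    correct = sum(1 for t, p in zip(y_true, y_pred_adv) if t == p)
    return correct / len(y_true)

def attack_success_rate(y_true, y_pred_clean, y_pred_adv):
    # Fraction of originally correct predictions flipped by the attack;
    # only samples the model got right before the attack are counted.
    flipped = correct = 0
    for t, pc, pa in zip(y_true, y_pred_clean, y_pred_adv):
        if t == pc:
            correct += 1
            if pa != t:
                flipped += 1
    return flipped / correct if correct else 0.0

# Hypothetical label vectors for illustration.
y_true       = [1, 0, 1, 1, 0]
y_pred_clean = [1, 0, 1, 0, 0]   # 4 of 5 correct before the attack
y_pred_adv   = [0, 0, 1, 0, 1]   # the attack flips two of those four
print(accuracy_under_attack(y_true, y_pred_adv))              # 0.4
print(attack_success_rate(y_true, y_pred_clean, y_pred_adv))  # 0.5
```

Reporting both numbers side by side is what gives the multidimensional picture: a model can retain moderate accuracy under attack while still conceding a high success rate on the samples it previously handled well.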

The results provide valuable insights into the vulnerabilities of current machine learning systems and reinforce the need for more robust, secure, and trustworthy AI models, especially as their adoption continues to grow across critical applications.