[Submitted on 7 Apr 2024]

Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory
B Kereopa-Yorke


Abstract: The rapid integration of Artificial Intelligence (AI) systems across critical domains necessitates robust security evaluation frameworks. We propose a novel approach that introduces three metrics: the System Complexity Index (SCI), the Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER). SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation. Through comparative analysis, we demonstrate the advantages of our framework over existing techniques. We discuss the theoretical and practical implications, potential applications, limitations, and future research directions. Our work contributes to the development of secure and trustworthy AI technologies by providing a holistic, theoretically grounded approach to AI security evaluation. As AI continues to advance, prioritising AI security through interdisciplinary collaboration is crucial to ensuring its responsible deployment for the benefit of society.
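The abstract does not define LEAIS concretely, but the underlying idea of a Lyapunov exponent is standard: it measures the average exponential rate at which nearby trajectories of a dynamical system diverge, with a positive value indicating chaotic sensitivity to perturbations and a negative value indicating stability. As a minimal, hypothetical sketch (the function name and the choice of the logistic map as a stand-in system are ours, not the paper's), the exponent for a one-dimensional map can be estimated by averaging log|f'(x)| along a trajectory:

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.1,
                      n_transient: int = 1_000, n_iter: int = 100_000) -> float:
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r * x * (1 - x) by averaging log|f'(x)| = log|r * (1 - 2x)|
    along a single trajectory (an illustrative sketch, not the paper's LEAIS)."""
    x = x0
    for _ in range(n_transient):      # discard the transient so the orbit
        x = r * x * (1.0 - x)         # settles onto its attractor first
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # local stretching rate
        x = r * x * (1.0 - x)
    return acc / n_iter
```

For r = 4 the map is fully chaotic and the estimate approaches ln 2 ≈ 0.693 (positive: perturbations grow), while for r = 2.5 the orbit converges to a fixed point and the estimate is negative (perturbations decay) — the qualitative distinction a stability metric like LEAIS would presumably draw for an AI system's input-output dynamics.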

Submission history

From: Benjamin Kereopa-Yorke
[v1]
Sun, 7 Apr 2024 07:05:59 UTC (412 KB)
