Abstract
The integration of artificial intelligence (AI) into aviation maintenance has transformed fault detection and predictive maintenance (PdM) and improved operational efficiency. However, AI adoption introduces critical challenges related to algorithmic transparency, accountability, and the displacement of human expertise. This study examines AI's impact on aviation maintenance beyond its efficiency gains, focusing on the systemic risks arising from automation, potential security loopholes, and gaps in existing regulatory oversight. Drawing on newly available industry reports, regulatory guidelines, and empirical findings, it systematically categorizes tangible and intangible harms, distinguishing realized AI failures (harm events) from potential risks (harm issues), particularly in predictive maintenance, cybersecurity, and regulatory compliance. The study investigates how AI shapes maintenance decision-making from an ethical perspective, assesses the security vulnerabilities inherent in AI-driven maintenance, and evaluates the adequacy of current regulatory frameworks for addressing AI-related risks. In addressing these gaps, it expands the discussion of AI-related ethical risks, broadens the discourse on security risks through the CSET AI Harm Framework, and proposes a structured governance framework for AI adoption in high-risk aviation environments that integrates ethical, security, and regulatory considerations to strengthen accountability and risk mitigation. The findings indicate that the successful implementation of AI in aviation maintenance requires a fundamental shift in how the industry understands, manages, and controls risk, necessitating updated certification methodologies, enhanced risk assessment protocols, and AI-specific aviation safety standards.