Abstract
Generative artificial intelligence (GenAI) is rapidly being embedded into corporate reporting workflows, yet its implications for financial reporting quality and auditability remain insufficiently understood. This paper examines how GenAI models can be used to automate financial reporting narratives, such as Management's Discussion and Analysis (MD&A) and risk disclosures, and evaluates the effects of this automation on disclosure quality, transparency, and assurance. The study employs an experimental mixed-methods design, comparing human-authored, GenAI-generated, and human-edited GenAI narratives drawn from a large sample of corporate reports. Text-analytic techniques (readability indices, sentiment and topic analysis, and red-flag indicators) are combined with explainable AI methods to assess both the content GenAI produces and the traceability of the decision processes behind it. The findings indicate that GenAI can substantially improve readability and linguistic consistency while reducing boilerplate, but it also introduces new risks: hallucinated details, optimistic bias, and the potential masking of earnings-management signals. Explainability tools partially mitigate these concerns by providing auditable evidence of how inputs shape outputs, yet they do not fully resolve questions of accountability and professional scepticism. Overall, the paper contributes empirical evidence and a governance framework for responsibly integrating GenAI into financial reporting and auditing, offering practical guidance for preparers, auditors, and regulators seeking to harness automation without compromising reliability or trust.

