Abstract
This paper examines gendered voice in virtual assistant responses and the gender biases these systems display, focusing on popular assistants such as Siri, Alexa, Cortana, and Google Assistant, which are known for defaulting to female voices. Using both quantitative and qualitative methods, the study explores how gender bias in virtual assistant (VA) design affects user experience and considers how virtual assistants can be programmed and designed for inclusive interaction that challenges, rather than reinforces, societal gender norms. Drawing on pragmatics, sociolinguistics, and gender studies, the analysis shows how these assistants encode and reproduce gender bias, and it highlights the importance of linguistic equity in digital interaction.