Abstract
This study examines the ethical implications of the algorithms behind LinkedIn’s profile view-count notifications, with particular emphasis on variations observed during and after the purchase of a Premium subscription. Grounded in the trust model of Mayer et al. (1995) and dark patterns theory, the study employs longitudinal surveys, qualitative interviews, and comparative analysis to assess LinkedIn’s practices against those of Facebook, TikTok, Netflix, and Spotify. The findings indicate serious ethical concerns across these platforms, stemming from algorithmic manipulation and its erosion of user trust. This research advocates for greater user awareness of algorithmic behavior, the adoption of ethically designed AI, and compliance with regulations such as the GDPR. The study proposes algorithmic audits, transparency dashboards, and user empowerment tools as concrete remedies. Finally, it outlines future research directions aimed at increasing fairness and trust on digital platforms.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.