In the datafied world we inhabit, information has emerged as the foundational layer for governance, economic activity, healthcare, education and much else. With the proliferation of sensors, algorithms, and connected devices, the capacity to collect, process, and apply data at scale has increased dramatically. This shift has triggered profound ethical debates about surveillance, bias and manipulation, igniting fears that too much power is being ceded to data itself.
This discussion centres on the assertion that such concerns, while valid, often misplace the locus of ethical responsibility. Data is not inherently good or bad; it is best understood as a neutral resource. Its ethical valence emerges from how it is collected, structured, and applied: by whom, under what conditions, and to what ends. This position is explored by examining global examples where data has either empowered societies or been used to reinforce inequality and control, and by interrogating the conditions under which its neutrality is compromised. A "devil's advocate" lens is adopted to probe the limits of data neutrality: Can any dataset be truly objective? Can infrastructure built on inherently biased social systems ever be considered neutral in impact? These questions are evaluated through multiple lenses, including algorithmic design, data governance, power asymmetries, and public accountability.
To move from theoretical framing to applied contexts, this section presents a series of international case examples illustrating how data systems can both serve the public good and reinforce systemic control. The discussion includes examples of successful public-good-oriented data systems, including Türkiye's e-government platform, and calls for data ethics frameworks rooted in transparency, human rights and inclusivity. The paper ultimately argues that data must be governed, not feared, if it is to serve the public good effectively.