Analyses from the many global civil society organisations that contributed to the Spotlight on Sustainable Development 2019 make it clear that meaningfully tackling the obstacles and contradictions in the implementation of the 2030 Agenda and the Sustainable Development Goals requires more sweeping, holistic shifts in how and where power is vested.
The current state of Artificial Intelligence governance must be reshaped or it will contribute to more people being left behind. The risks, and the shifts needed, are outlined by Cecilia Alemany of Development Alternatives with Women for a New Era (DAWN) and Anita Gurumurthy of IT for Change (ITFC) in a chapter on the governance of data and Artificial Intelligence.
They propose the United Nations as the forum in which AI must be understood and governed, as a crucial condition for human rights, democracy, peace and sustainable development. But the process has to be led by governments with broader participation, ensuring that it is not directed by platform companies’ interests and that AI is not regulated solely as a matter of e-commerce or trade, as currently seems to be the case. Digital intelligence, generated for profit from data on the social interactions of people and things in a networked environment, marks a shift in the foundational structures of society and the economy, requiring a new governance model.
Concerns about the inherent biases in AI and their consequences for fundamental rights, including the right to equality and non-discrimination, are being widely flagged today by civil rights groups. Employees of big digital corporations have also raised their voices against the weaponization of cyberspace through a state-corporate nexus. Alemany and Gurumurthy argue that data and AI governance needs a more comprehensive approach, one that addresses both the individual and the structural underpinnings of equality and justice.
Many public policy decisions that shape citizens’ everyday experience are found not in legislative norms but in software code and AI built by scientists and innovators in private (and monopolistic) settings. Policy-makers have not yet grasped the risks of delegating public and private decision-making to AI and machine learning. All countries need to understand the impact of deep learning and intelligent prediction models on public policy design and response in order to reap the benefits and reduce the risks. Good policy can ensure that this becomes the beginning of a ‘golden age’ of the social sciences, a coming together of contextual complexities and statistical interpretation at a new level.
IT for Change calls for “an agile legal and policy framework to curb platform excess” to govern the new economy and its effects on society, citizens, institutions and politics in this digital era. States should mandate “that private platforms in key sectors share critical data they collect with state agencies, with safeguards for protecting user and citizen privacy. To support essential public services, like city transport or health care, companies must be obligated to open up such public data for the public interest”.
The UN High-Level Panel on Digital Cooperation is led by two chairs, which is common practice; uncommonly, however, both come from two of the most important transnational corporations of the digital economy, so public leadership has not been ensured. If the international community continues merely to observe how monopolies appropriate people’s data and deploy AI without any correction of their abusive practices and biases, existing structural asymmetries will be reproduced in the way data and AI are governed, or left ungoverned.
The international community needs to work towards an overall paradigm shift in which the liberal paradigm (open AI, open internet, etc.) converges with a more progressive paradigm (communitization of the digital world) based on human rights and clear norm-setting on digital rights and obligations.