
Governance of Data and Artificial Intelligence


With the use of AI, machine learning, and deep learning, digital spaces produce and reproduce structural discrimination


Platform corporations, with their machine learning models, reproduce structural gender and racial discrimination. There are very few options to stop them unless we update regulatory frameworks to protect our rights.

Public awareness and detailed knowledge about how algorithms are shaped by biases and political or economic interests and how this impacts women’s opportunities and policy making are currently missing.

In this article, Cecelia Alemany explores how growing digitalization is shaping public policy spaces and shifting the ways in which we understand our civic, political, economic, social and cultural rights.


The "Data Revolution" has been at the centre of the Sustainable Development Goals' negotiation process. Having access to massive amounts of data is transforming our capacity to plan, design, and implement development and public policies in general. The digital revolution and its economic implications are affecting the ways we live, work, study, and socialize, as well as our capacity to vote and own our democracies. It is also affecting the way we trade, invest, and finance development.

Data governance is still a challenge in itself, and it is increasingly bound up with the control and regulation of Artificial Intelligence (AI).

Today, a very short list of platform companies, with their monopolies, is in control globally. Among transnational corporations, platform corporations are among the most powerful, and they are making profits based on their algorithms, machine learning, and deep learning (a subfield of machine learning) models, and on our data. As Pedro Domingos points out in The Master Algorithm, these corporations "can do things with our data and their models that are not in your interest, and you have no way to stop it."

Our data is "their" new asset, and they profit from our permanent data production, taking advantage of the lack of international rules governing this new economy: the digital economy that impacts all productive sectors. Data regulation, algorithmic transparency, AI, and machine learning should be better understood by all of us working on civic, political, economic, social, and cultural rights, since they are affecting our recognized rights. The international system, highly influenced by corporate interests, is looking for tricky solutions that may bypass our rights and affect developing countries' right to development and policy space. Public awareness and detailed knowledge about how algorithms are shaped by biases and political or economic interests are missing.

There is growing evidence that machine learning uses existing data, search results, and users' experience, and reproduces structural discrimination through discriminatory results. Today, marginalized groups face discrimination not only in the real economy and society but also in the digital economy and society, where it is exacerbated by machine learning and the lack of regulation.
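To make this mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data, of how a model fit on biased historical decisions reproduces that bias in its own predictions. The groups, approval rates, and "training" rule are invented for illustration and describe no real platform's system.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical "historical" approvals: groups A and B have identical
# qualification rates, but past decisions approved qualified applicants
# from group B far less often.
history = []
for _ in range(20_000):
    group = random.choice("AB")
    qualified = random.random() < 0.5            # same rate in both groups
    rate = 0.0 if not qualified else (0.9 if group == "A" else 0.5)
    history.append((qualified, group, random.random() < rate))

# "Training": estimate the approval frequency of each (qualified, group)
# cell -- a stand-in for whatever pattern a real learner would extract.
stats = defaultdict(lambda: [0, 0])              # cell -> [approvals, total]
for qualified, group, approved in history:
    stats[(qualified, group)][0] += approved
    stats[(qualified, group)][1] += 1

def predicted_approval_rate(qualified, group):
    approvals, total = stats[(qualified, group)]
    return approvals / total

# Equally qualified applicants receive systematically different predictions:
print("qualified, group A:", round(predicted_approval_rate(True, "A"), 2))
print("qualified, group B:", round(predicted_approval_rate(True, "B"), 2))
# ~0.90 vs ~0.50: the structural discrimination in the training data
# survives, unchanged, inside the model.
```

Nothing in this sketch corrects for the historical bias; the model simply learns it, which is exactly the pattern the studies below document.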

Numerous platform companies, such as Airbnb, Etsy, and CustomMade, induce discrimination. For example, racial discrimination on Airbnb has been studied by Luca and others using identical profiles with different names: guests with names typically associated with the African American community had 16 percent fewer opportunities to rent. The same approach was applied by Farajallah and others on BlaBlaCar, where drivers with names signalling Muslim or Arab origins saw 20 percent lower demand than drivers with typically French names, and also received lower payments.
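The audit methodology behind these studies can be sketched in a few lines: send identical requests that differ only in the perceived ethnicity of the name, and compare acceptance rates. In the hypothetical version below, the platform's response is simulated by a biased function calibrated to echo the 16 percent gap reported for Airbnb; a real audit would, of course, query the platform itself.

```python
import random

random.seed(1)

def simulated_host_response(name_group):
    # Hypothetical bias: a ~16% relative gap, echoing the Airbnb finding.
    base = 0.50
    rate = base if name_group == "white-sounding" else base * (1 - 0.16)
    return random.random() < rate

def audit(name_group, trials=5_000):
    # Send many identical requests that differ only in the name group.
    accepted = sum(simulated_host_response(name_group) for _ in range(trials))
    return accepted / trials

rate_a = audit("white-sounding")
rate_b = audit("African-American-sounding")
print(f"acceptance, white-sounding names:            {rate_a:.3f}")
print(f"acceptance, African-American-sounding names: {rate_b:.3f}")
print(f"relative gap: {(rate_a - rate_b) / rate_a:.1%}")  # roughly 16%
```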

A study of Google searches in the US by Sweeney showed that searches for African American names triggered ads suggestive of arrest records, while searches for typical white American names did not. This phenomenon was confirmed on many other platforms. In sum, according to Fisman and Luca, at least two factors enable discrimination: ethnic markers (pictures or names) and users' ability to decide whom they will select or deal with. These discriminatory factors are often built into the platform design.

There is also growing evidence that women who work through online platforms face multiple forms of discrimination solely on account of their gender. A few companies are making efforts to study whether men have better scores or receive better offers and payments than women on the same platform.

Designers and platform companies have to acknowledge that this discrimination is happening through their models. They should be compelled to analyze whether they are inducing new forms of discrimination online and to correct it on a case-by-case basis. It is becoming publicly accepted that machine learning and deep learning incorporate discrimination and reproduce it.

Governments have to increase efforts to regulate data, AI, and discriminatory machine learning practices, and to ensure legal frameworks that prohibit the use of individual gender, ethnic, or religious information in data collection and treatment. But this should also be a question of regulation from a rights perspective at the multilateral level. We cannot yield to the interests of big platforms and developed countries that are lobbying for unfair e-commerce rules at the World Trade Organisation.

Even assuming that all public-sector negotiators from the Global South understand structural inequalities and risks, and that they negotiate to put the public interest before corporate interests, it is hard to accept that the outcomes of this negotiation will be reached on equal terms, that they will correct the monopolies of transnational platform corporations, or that they will consider how human rights, and women's human rights in particular, may be under threat from the non-regulation of AI.

It is almost impossible for an algorithm to ensure equality by default. Algorithms should be tested and corrected, and all forms of discrimination that violate basic rights enshrined in most countries' constitutions should be eliminated by the respective states. We need to better understand how existing international human rights standards and obligations and national legal frameworks can protect us from digital discrimination, which is spreading as AI is incorporated into business and public policy.
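As one illustration of what such testing could look like in practice, the sketch below computes a simple disparate impact ratio across groups and flags outcomes that fall under the "four-fifths rule" used in US employment discrimination law. The outcome data are hypothetical, and this single metric is only one of several fairness measures a real audit would combine.

```python
def selection_rate(decisions):
    # Share of favourable outcomes (1 = favourable) in a group.
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group):
    # Ratio of the lowest group selection rate to the highest.
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model outputs for two groups:
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 80% favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],   # 40% favourable
}

ratio, rates = disparate_impact_ratio(outcomes)
print("selection rates:", rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: fails the four-fifths rule; correction required")
```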

Indeed, the "policy space" for owned public policies and the accountability of decision-making are also being shifted by AI and its biases, on the basis of arguments about productivity, effectiveness, and value for money. We still need to better understand how discrimination operates in public policy that is designed on the basis of AI, machine learning, and deep learning.

When policies or decisions are "data based", they are more easily accepted or respected as "the best" solution based on a reality represented by data. But in fact they are algorithm based, and "the best" solution relies not only on the data but also, and mainly, on the subjectivity of the algorithm.

We are already in an era of algorithmic and deep learning-based public policies, even in developing countries, which has broader implications than algorithmic marketing. With the same data and the same development or social outcome goals, we may end up with very different policy responses and results, depending on the design of the algorithm used and the effects of machine learning.
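A toy example makes the point. Below, the same hypothetical household data and the same goal, allocating a fixed number of benefit slots to needy households, produce different lists of beneficiaries under two different scoring rules; the records and weights are invented for illustration.

```python
households = [
    # (id, monthly_income, dependents)
    ("h1", 120, 5), ("h2", 90, 1), ("h3", 200, 6),
    ("h4", 60, 0), ("h5", 150, 4), ("h6", 80, 2),
]
SLOTS = 3

def rule_income(h):
    _, income, _ = h
    return -income                      # poorest households first

def rule_weighted(h):
    _, income, dependents = h
    return -income + 40 * dependents    # poverty plus household size

for name, rule in [("income-only", rule_income), ("weighted", rule_weighted)]:
    chosen = sorted(households, key=rule, reverse=True)[:SLOTS]
    print(name, "->", [h[0] for h in chosen])

# income-only -> ['h4', 'h6', 'h2']
# weighted    -> ['h1', 'h3', 'h5']
# Same data, same target, different "best" policy: the outcome depends
# on the algorithm's embedded assumptions, not on the data alone.
```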

This article is an abridged version of a chapter penned by the author along with Anita Gurumurthy (of IT for Change) for the Spotlight on Sustainable Development 2019 report.