Artificial intelligence and frontier technologies are not new to research, but they are certainly experiencing one of their biggest springs, amplified by the datafication of the private and public sectors at both the international and national levels. Data is now considered ‘the lifeblood of decision-making and the raw material for accountability [because] without high-quality data providing the right information on the right things at the right time, designing, monitoring and evaluating effective policies becomes almost impossible’ (UN Secretary-General’s Independent Expert Advisory Group on a Data Revolution for Sustainable Development, 2014).
Datafication (Cukier and Mayer-Schönberger, 2013) is the concept according to which every aspect of human interaction and of our lives can be transformed into data, leading to better apprehension, understanding and analysis, and therefore to better-managed issues. Datafication has supported and reinforced the results-based management reform of the international community, improving its speed, impact and efficiency. It also enables different agencies to coordinate their actions better and to better include local populations.
For instance, UNHCR and WFP have developed an IT programme that maximizes food assistance for refugees. DPPA has also tested three-hour real-time live interactions with large groups of the Yemeni population in their own dialects and languages, enabling the population to participate in the peace process.
With the increased complexity and ramification of issues in international development, humanitarian assistance and conflicts, data and AI applications can also highlight patterns that were previously invisible or underestimated, and can thus redirect political actions and programmes.
As noted above, these highs of innovation also come with attached myths, pitfalls and risks, particularly for the unheard, the unrepresented and the misrepresented.
Behind datafication and the information society lies a political and social construct that should be examined. First, the objectivity and rationality of the data system are relative. Second, data creates new segregations between people and new norms that should not automatically be accepted as the truth, especially given the opacity of algorithms’ parameters (the black box). Third, some groups are excluded from datafication by default: because they do not have access to the digital world; because their dialect is insufficiently represented in the data; because their culture and social artefacts are transmitted orally and therefore never computerized into data; or because the computerization of some of the data reflects the pre-conceived visions, cultures and backgrounds of the humans behind the machines, from coders to chief executive officers.
One example is the case of gender-neutral language in some cultures. When such language is run through some AI translation tools, e.g., Google Translate, the neutrality is lost or the gender attribution is reversed, as the sketch below illustrates.
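A minimal sketch of this phenomenon, using the open-source Helsinki-NLP Turkish-to-English model from the Hugging Face Hub as a stand-in (the model choice and the exact outputs are assumptions for illustration, not a claim about how Google Translate or any other product works internally). Turkish has a single gender-neutral pronoun, ‘o’, so the model must guess a gender when producing English:

```python
# Sketch: probing gender attribution in machine translation.
# Turkish "o" means he/she/it, so the English output forces a gender choice.
# Model and outputs are illustrative assumptions, not a vendor's documented behavior.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

sentences = [
    "O bir doktor.",    # "He/She is a doctor."
    "O bir hemşire.",   # "He/She is a nurse."
    "O bir mühendis.",  # "He/She is an engineer."
]

for s in sentences:
    result = translator(s)[0]["translation_text"]
    print(f"{s!r} -> {result!r}")

# A biased model typically resolves the neutral pronoun along stereotyped
# lines, e.g. "He is a doctor." but "She is a nurse." - the neutrality of
# the source language is silently lost in translation.
```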
Another example is the representation of indigenous communities. Indigenous peoples represent more than 6% of the world’s population across 90 countries and are estimated to protect 80% of the world’s biodiversity (ITU News, 2020). They also represent 7,000 languages, 40% of which are endangered. Yet, globally, 86% of them work in the informal economy and 47% of those in employment have limited or no digital skills, leaving no substantial digital footprint. Consequently, their ancestral knowledge, their culture and their concerns and challenges are very likely absent from the very data that is creating new social constructs and is supposed to reflect the diversity of the world.
Simply put, innovation is not inclusive by nature. It is inclusive by design.
Inclusive innovation – whether gender-sensitive or minority-sensitive – is a political choice, and it is economically justified. Policies, programmes and funding should mirror this reality and insist on a more inclusive approach.
Reducing digital divides to a question of accessibility is, however, an over-simplification of a multi-layered problem that also involves the integrity of the data ecosystem; ethics and law; environmental, social and societal implications; design thinking; measurement; and procurement.
In a forthcoming collective book entitled ‘Impact of Women’s Empowerment on SDGs in the Digital Era’, I develop further the issues around giving a voice to minorities, women and vulnerable groups throughout the whole cycle of artificial intelligence and innovation – from the data ecosystem to international law and leadership – providing an in-depth, systemic view of the challenges these groups face (see Chapter 8, ‘A diverse (AI) world: How to make sure that the digital world reflects the richness and diversity of our world’).
Virginie MARTINS de NOBREGA
Virginie MARTINS de NOBREGA is a multilingual and multicultural professional with more than 12 years’ experience in multiple international contexts, working at the intersection of law and human rights, the social and ethical implications of innovation, and business management and strategy. She decided to refocus her career completely on improving people’s lives through work in the Humanitarian-Development-Peace (HDP) nexus. To that end, during her EMPA at the SDA Bocconi School of Management, she conducted research on how artificial intelligence and frontier technologies can increase the speed, efficiency and impact of international organizations in the HDP nexus. For the last four years, she has been advising on AI, tech and data issues and advocating for an open, diverse and multicultural AI with strong ethical and international law benchmarks, promoting human rights and fundamental freedoms and leaving no one behind.