Data ecosystems are fluid, constantly changing and evolving, and the place data holds in different spheres of society also changes over time. Over the past 15 years, studies and debates around the impact of digitalization and datafication on everyday life have grown around the world as practices themselves evolved. Along with these changes and initiatives, new demands on governments arose. Some focused on increasing the benefits of data created inside governmental agencies and spreading those benefits to encourage wider participation through data openness initiatives. Other demands centred on the question: how can citizens' privacy be protected in this new digitalized era? This question led to updates of data protection laws.
In recent years, a new wave of innovations swept across the data ecosystem, bringing with it new solutions as well as novel concerns. The spread of artificial intelligence (AI) systems promises to make them as ubiquitous as datafication itself, and data is key to training and feeding them. AI has therefore gained momentum in data debates, alongside Open Government Data, together with calls to update data protection laws to keep pace with the age of algorithms.
Taking these new concerns into consideration, along with many others, the Global Data Barometer (GDB) was built from its predecessor, the Open Data Barometer, widening its core focus from Open Data to Data for Public Good. It aims to explore the different ways in which data can be released and used for the benefit of all, while also looking at how countries are protecting their citizens from potential harms in datafied societies. The close relationship between data and AI made it natural to include variables related to this area in the GDB survey. This was done mainly in the governance pillar, which explores data-related frameworks, and in the use pillar, which collected cases of data use for the public good. Within the modules, the governance pillar also explores the presence of AI or algorithmic decision-making provisions in data protection laws and data sharing frameworks, while the use pillar lets us track algorithmic uses of certain datasets for the public good. In the following section we focus on the first pillar.
AI in data protection and data sharing laws
Of the 109 countries surveyed, 85 (78%) have data protection (DP) regulations with the force of law, and 13 (11.9%) have some framework that lacks the force of law. These numbers are similar to the results of a recent UNCTAD survey, which found that 71% of 194 countries had data protection legislation and 9% had draft legislation. It is worth noting that while data protection frameworks are widespread around the world, this coverage hides inequalities between the regions and clusters of countries analysed in the Barometer data. For example, in the Middle East and North Africa only 38% of surveyed countries have a DP framework with the force of law, compared with 68% in Sub-Saharan Africa. In Europe, Central Asia, and the so-called Global North, more than 90% of countries have a current data protection framework with the force of law.
Despite being unevenly distributed around the world and varying in quality, data protection frameworks exist in most countries. Among the provisions measured by the GDB, the most widespread are granting data subjects the rights to access and correct data about themselves, rights of choice and consent, and setting out clear responsibilities for data holders.
Some countries with data protection regulations are prepared to face the new challenges that recent innovations have created or intensified. For example, more than 50% of countries require notification of data breaches. However, only a small percentage have provisions regarding location-related data or algorithmic decision making. In the following paragraphs, we focus on this last characteristic.
GDB findings show that in only 31 countries do data protection frameworks address algorithmic decision making, most of them in Europe and North America. In answering this question, most independent country researchers in Europe cited Article 22 of the GDPR, or a related national law or article reflecting the same principles, which declares that the “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”, and states some exceptions, for example, if the decision “is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or is based on the data subject’s explicit consent”.
After the Global North cluster of countries, the best performing region was Africa. Despite the very low share of its countries that mention algorithmic decision making (13%), the region shows some interesting bright spots. For example, Uganda has one of the most complete data protection frameworks in the world according to GDB results. Its recently approved data protection law indicates that where a decision which significantly affects a data subject is based solely on automated processing, “a data controller shall as soon as reasonably practicable notify the data subject that the decision was taken on that basis and the data subject is entitled, by notice in writing to require the data controller to reconsider the decision within 21 days after receipt of the notification from the data controller; The data controller shall within 21 days after receipt of the notice, inform the data subject in writing of the steps that a data controller has taken to take to comply with the notice”. That section of the law, however, does not apply to decisions made in the course of considering whether to enter into a contract with the data subject and other similar situations, or for a purpose authorised or required by law.
Ghana is another African country that performs well on the data protection framework indicator and also covers algorithmic decision making with similar mandates. Burkina Faso's framework covers these issues in Article 19 of Chapter 2 (Rights of the Data Subject), which states: “Everyone has the right to know and to challenge the information and reasoning used in any processing operation, whether or not automated, the results of which are held against him or her. Where such processing is based on artificial intelligence, the criteria and nature of the personal data on which the processing is based shall be indicated to him or her as soon as the personal data concerning him or her are collected”.
As we have seen, many countries are updating their frameworks to face new challenges such as algorithmic decision making, but most still have a long way to go in protecting their citizens' data in these new scenarios. If you are interested in these topics, we recommend keeping an eye on future GDB editions, the recordings of ILDA's AI week and Empatia project (in Spanish), as well as the Global Index on Responsible AI, a new project by D4D.net and Research ICT Africa that will measure progress on the responsible use of AI in over 120 countries around the world.