Our world is undergoing an information Big Bang, in which the universe of data doubles every two years and quintillions of bytes of data are generated every day. The impact of big data is commonly described in terms of three “Vs”: volume, variety, and velocity.
More data makes analysis more powerful and more granular, and variety adds to this power by enabling new and unanticipated inferences and predictions – a significant driver of business and national economic growth.
Artificial intelligence (AI) will likely accelerate this trend. Much of the most privacy-sensitive data analysis today – such as search algorithms, recommendation engines, and adtech networks – is driven by machine learning and algorithmic decision-making.
As AI evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests, raising the analysis of such information to new levels of power and speed.
Privacy – Personal privacy has attracted growing attention lately due to the rise in identity theft, data breaches, ransomware, and hacking incidents. The question of privacy is central to the transparency of AI software and hardware: what information is collected, about whom, and how it is stored, used, and shared are things the public deserves to know.
The ability to train a deep learning (DL) system on large amounts of data has increased the speed of analysis and results, but the demand for ever more data heightens the risk to privacy. Moreover, the discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as predictive policing that could disproportionately affect certain communities.
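One common mitigation for this tension – offered here only as an illustrative sketch, since it is not discussed above – is differential privacy, which adds calibrated random noise to results computed from personal data so that no single individual’s record can be confidently inferred. A minimal Python example of the Laplace mechanism, with invented data and parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensitive attribute: ages of 1,000 individuals.
ages = rng.integers(18, 90, size=1000)

def dp_count_over(data, threshold, epsilon):
    """Differentially private count of records above a threshold.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy.
    """
    true_count = int(np.sum(data > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, less accuracy.
print(dp_count_over(ages, threshold=65, epsilon=0.1))
print(dp_count_over(ages, threshold=65, epsilon=1.0))
```

The trade-off mirrors the one described above: stronger privacy protection (a smaller epsilon) means noisier, less precise results from the same data.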
The momentum of AI development further complicates matters because current privacy and security practices and standards might not account for AI capabilities. For example, an article in the AMA Journal of Ethics (in the US) explains that current methods for de-identifying data are ineffective “in the context of large, complex data sets when machine learning algorithms can re-identify a record from as few as 3 data points.” The authors also note that AI algorithms can be susceptible to cyberattacks, which could pose threats to individual safety and data integrity.
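To make the re-identification point concrete, the sketch below shows a classic linkage attack: joining a “de-identified” dataset to a public record on just three quasi-identifiers (ZIP code, birth date, and sex). The datasets, names, and records are entirely hypothetical; the pattern follows Latanya Sweeney’s well-known finding that these three attributes alone uniquely identify a large share of the US population.

```python
import pandas as pd

# Hypothetical "de-identified" medical records: direct identifiers removed,
# but quasi-identifiers (zip, birth_date, sex) retained.
deidentified = pd.DataFrame({
    "zip": ["00501", "00501", "10004"],
    "birth_date": ["1961-07-28", "1974-03-02", "1961-07-28"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# Hypothetical public dataset (e.g. a voter roll) carrying names
# alongside the same three attributes.
public_roll = pd.DataFrame({
    "name": ["A. Mwangi", "B. Okafor", "C. Dlamini"],
    "zip": ["00501", "00501", "10004"],
    "birth_date": ["1961-07-28", "1974-03-02", "1961-07-28"],
    "sex": ["F", "M", "F"],
})

# Joining on just three data points re-attaches names to diagnoses.
reidentified = deidentified.merge(public_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

When the three attributes single out one person in both tables, the join recovers identities exactly, with no hacking or special access required.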
In particular, with regard to health information technologies such as electronic health records and patient portals, AI raises concerns about data privacy and security – especially in an era where cyberattacks are rampant and patients’ protected health information (PHI) is highly valuable to identity thieves and cybercriminals.
As such, protecting patient privacy and securing digital data will continue to be a fundamental risk issue as AI becomes more mainstream in healthcare, raising numerous legal and ethical questions. Thus, it will be incumbent on healthcare leaders, AI developers, policymakers, data scientists, and other experts to identify vulnerabilities and consider innovative and proactive strategies to address them.
AI has made the discrimination challenge both more addressable and more of a risk
The debate here centres in particular on algorithmic bias and the potential for algorithms to produce unlawful or undesired discrimination in the decisions to which they relate. These are major concerns for civil rights and consumer organisations that represent populations that suffer undue discrimination. A regulatory model focused on data collection and processing may affect AI and algorithmic discrimination in several ways, as discussed below:
1. Data stewardship requirements, such as duties of fairness or loyalty, could militate against uses of personal information that are adverse or unfair to the individuals to whom the data relates.
2. Data transparency or disclosure rules, as well as rights of individuals to access information relating to them, could illuminate uses of algorithmic decision-making.
3. Data governance rules that prescribe the appointment of privacy officers, the conduct of privacy impact assessments, or product planning through “privacy by design” may surface issues concerning the use of algorithms.
4. Rules on data collection and sharing could reduce the aggregation of data that enables inferences and predictions, but may involve some trade-offs with the benefits of large and diverse datasets.
Regulation of AI usage in Sub-Saharan Africa
Consequently, the challenge for governments in Africa is to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets. It is fair to say that most of these countries are already looking at how best to deal with these issues. For example:
1. In February 2018, the Kenyan government announced that it had formed an eleven-member blockchain and AI task force, composed of representatives from academia, research institutions, and the local technology sector, and directly accountable to the Cabinet Secretary for Information, Communications and Technology. The immediate goal of the task force is to “make recommendations on how the government can leverage on the emerging technologies ethically.”
2. For its part, Nigeria reportedly approved a robotics and AI agency in August 2018. Sources at the country’s Ministry of Science and Technology stated that the new agency “would leverage collaborations with international research bodies on robotics and AI” and enable “research and teachings in more complex technology skills to thousands of young people.” Recent reporting indicates that investment in AI will be directed through this agency.
As AI use permeates industry, governments are updating laws to keep pace with the technology. In this regard, some argue that AI transparency will be essential to protecting private data, ensuring individual privacy, and avoiding risks to civil liberties.
The AI transparency paradox
· Trade secret and patent law
The problem with transparency concerning AI usage and patented technology is that the underlying details may fall under trade secret law, and trade secrets are protected and, in most countries, exempt from public records disclosure. Secondly, many law enforcement agencies stress the need for confidentiality when using AI in the field, in order to prevent criminals from finding ways to circumvent the technology.
Another problem facing transparency with AI is that, if the specifics of a particular device or complex algorithm are made public, hackers could potentially find ways to take over and control it. Hence the paradox: there is a fine line between protecting citizens’ rights and being held accountable.
· Current scenario in Europe
Currently, with the way public records laws are structured, most documents and references relating to AI algorithms and devices are not publicly available. The public is therefore left in the dark about how government agencies, law enforcement, and the private sector use AI to assess situations and make policy decisions that potentially affect human lives.
In Conclusion
The question of how far to go with transparency laws concerning AI technologies is not a simple one. As such, the challenge for governments across Africa remains to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets.
In this regard, Delta3 International stands ready to offer its knowledge, experience, and expertise in this specialist area to the various stakeholders, including governments and private-sector organisations, in Africa.