
Necessity of Openness in Normative Technology

This essay is written in response to the United Nations Office of the Tech Envoy’s Call for Papers on Global AI Governance.

Problems of Closed Systems

The attraction of the closed approach is certainly clear: if these critical components reside within only a small number of organizations, one may expect that they will be easier to audit, and that a thorough understanding of the technology can be reached before it is deployed to the public in an orderly manner.

It also mirrors the opinion of the leading legal scholar Lawrence Lessig, who proposed this in his seminal 1999 book “Code and Other Laws of Cyberspace”. While the book was remarkably prescient in many of its theses, it is clear in retrospect that the assumption that technology would be easier to regulate when centralized within a few companies was highly flawed.

The reasons are varied, but it is important to note that technology companies wield different forms of power than regulators do: they have the knowledge required to fully understand how systems are developed and how they work, they have the capacity to implement the rules that govern how people operate, and they control the infrastructure onto which others deploy their systems, thereby constraining those actors’ possible actions as well. This power is illegitimate, yet it often exceeds the power of nation states, resulting in a loss of regulability.

This has been the de facto technological landscape up to now, and AI technologies are unlikely to change these fundamental problems; if anything, they may aggravate them.

Audits of closed systems are unlikely to succeed unless the auditee is perfectly honest and fully committed to transparency, as the knowledge imbalance will be significant and hiding malicious behavior is easy. Auditing a perfectly honest actor is pointless; that is advice, not an audit. Transparency requirements are equally difficult to enforce for the same reason: there is no way to know whether everything that is relevant has been made available.

If we define safety as the absence of harm, there are myriad ways that harm could arise, not all of which can be predicted, and therefore safety is practically impossible to prove. While certain harms can be identified and mitigated in advance, estimating all risks is impossible; some can only be mitigated after the harmful effects start to manifest themselves. In closed systems, understanding such effects is harder, since it relies on information that is available only to the entity causing the harm.

Closed systems are likely to benefit only hegemons, which in a globalized world means a handful of global companies, and they will not result in a safer deployment of AI technologies.

Problems of Open Systems

We also acknowledge significant risks with open systems. As open systems lower barriers to entry for all, they also lower barriers for actors with nefarious purposes.

Researchers have created information pollution machines in part with open source tools, something that would have required a much larger investment had those tools not been available.

Open Source also discourages business models that rely on exclusive access to code and models, making such business strategies more difficult to develop and maintain.

Moreover, access to computing power and ethical access to training data pose further problems, which must be addressed to avoid further consolidation of power.

While Open Source is very widespread and many projects have been highly successful, governance models are often not well adapted to a highly contested space. This has led, and will continue to lead, to attempts at capture; indeed, there are examples of business strategies advocating just that.

Public Knowledge – Public Safety

Generally, the public needs knowledge to respond thoughtfully to adversity and crisis, and this includes both citizens and the authorities tasked with protecting them. With closed systems, this knowledge will be restricted to a select few, and not necessarily those who can ensure public safety.

Irene Solaiman provides a compelling gradient framework for considering the closed-open continuum. While we generally align with the discussion there, as argued above, we do not agree that risk control is tied to this axis: at the closed end it depends very much on the honesty of the organization, and at the open end it depends on the quality of the governance system and on access to resources. Risk control, and therefore public safety, is a separate issue from the closed-open axis.

However, the historical record makes it very clear that risk control in very powerful private corporations is difficult. The current situation is worse than similar situations in the past because the knowledge imbalance is much larger than in any other industry; social media companies are an example of how previous attempts at regulation have been largely unsuccessful.

Governance of open systems must urgently be improved, but this can be achieved through novel public institutions that develop systems under a public mandate in open ecosystems, alongside corporations, academia and other actors. There are many examples of healthy ecosystems; in fact, Digital Public Goods are generally developed that way today.

However, as future AI systems are likely to have a different risk profile than today’s Digital Public Goods, efforts to design good governance systems must be redoubled. Given the history of the Digital Commons, this direction is far more likely to succeed than attempting to control risk in closed systems.


Nuances in Governance Design

Once you start thinking about how to govern technology, you realize there is no one-size-fits-all answer.

Even if one may not be able to articulate exactly how, it is obvious to most people that the codebase of an open-source UI component should be governed differently from a large Digital Identity system. Projects with a large number of contributors and many users need to be governed differently from projects with few contributors and few users. There are also projects where deep technical expertise resides with a few, yet the project affects many stakeholders.

These are obvious reasons why normative technology needs a multiplicity of governance models, but there are subtler reasons too.

Normative technologies do not all exist in the same societal context. There are societies where trust in government, fellow citizens and the rule of law is high, e.g. Norway. Such societies would benefit from a model where a government-appointed ombudsman or oversight body can be trusted to do its job independently and fairly. However, in societies where trust in government and the rule of law is less established, such as India, these models would fail.

This underlines the focus of these discussions: governance models do not have an inherent “goodness”. The exercise here is to build trust amongst the stakeholders that they have both a voice and a choice. They should have a say in shaping the normative technology they participate in, and where necessary, they should also have the choice to reject normative technology that they believe conflicts with their values.

These are hard problems, because ultimately we are not just governing code, but governing ourselves. The problems of governing normative technologies intersect with the problems of governing society itself. They are, however, not impossible. We need to study examples of success in different contexts and document them, until we learn what works where, when it works and, ultimately, why. Only then will we begin to understand how to build governance models that help us build technology which upholds the norms we desire.