
Necessity of Openness in Normative Technology

This essay is written in response to the United Nations Office of the Tech Envoy’s Call for Papers on Global AI Governance.

Problems of Closed Systems

The attraction of the closed approach is clear: if these critical components reside only within a small number of organizations, one may think they will be easier to audit, and that the technology can be thoroughly understood before it is deployed to the public in an orderly manner.

This also mirrors the opinion of the leading legal scholar Lawrence Lessig, who proposed as much in his seminal 1999 book “Code and Other Laws of Cyberspace”. While the book was remarkably prescient in many of its theses, it is clear in retrospect that the assumption that technology would be easier to regulate when centralized within a few companies was deeply flawed.

The reasons are varied, but it is important to note that technology companies possess forms of power that regulators do not: they have the knowledge required to fully understand how systems are developed and how they work, they have the capacity to implement the rules that govern how people operate, and they control the infrastructure onto which others deploy their systems, thereby constraining their possible actions as well. This power is illegitimate, yet it often exceeds the power of nation states, resulting in a loss of regulability.

This has been the de facto technological landscape up to now, and AI technologies are unlikely to change these fundamental problems; if anything, they may aggravate them.

Audits of closed systems are unlikely to succeed unless the auditee is perfectly honest and fully committed to transparency: the knowledge power imbalance will be significant, and hiding malicious behavior is easy. But auditing a perfectly honest actor is pointless; that is advisory work, not an audit. Transparency requirements are difficult for the same reason: there is no way to know whether everything relevant has been made available.

If we define safety as the absence of harm, there is a myriad of ways harm could arise, not all of which can be predicted, and therefore safety is practically impossible to prove. While certain harms can be identified and mitigated in advance, estimating all risks is impossible, and some can only be mitigated after their harmful effects start to manifest. In closed systems, understanding such effects is harder, since it relies on information available only to the entity causing the harm.

Closed systems are therefore likely to benefit only hegemons, which in a globalized world means a handful of global companies, and they will not result in a safer deployment of AI technologies.

Problems of Open Systems

We also acknowledge significant risks in open systems. As open systems lower barriers to entry for all, they also lower them for actors with nefarious purposes.

Researchers have created information pollution machines, in part with open source tools; without those tools, this would have required a much larger investment.

Open Source also discourages business models that rely on exclusivity of code and models, making business strategies more difficult to develop and maintain.

Moreover, access to computing power and ethical access to training data pose further problems, which must be addressed to avoid further consolidation of power.

While Open Source is very widespread and many projects have been highly successful, governance models are often not well adapted to a highly contested space. This has led, and will continue to lead, to capture attempts; indeed, there are examples of business strategies advocating just that.

Public Knowledge – Public Safety

Generally, the public needs to have knowledge to respond thoughtfully to adversity and crisis, and this includes both citizens and the authorities that are tasked with protecting them. With closed systems, this knowledge will be restricted to a select few, and not necessarily those who can ensure public safety.

Irene Solaiman provides a compelling gradient framework for considering the closed-open continuum. While I generally align with the discussion there, I do not agree, for the reasons discussed above, that risk control is tied to this axis: at the closed end, it depends very much on the honesty of the organization, and at the open end, on the quality of the governance system and access to resources. Risk control, and therefore public safety, is a separate issue from the closed-open axis.

However, the historical record makes it very clear that risk control in very powerful private corporations is very difficult. The current situation is worse than similar situations in the past because the knowledge imbalance is much larger than in any other industry; social media companies exemplify how previous attempts at regulation have been largely unsuccessful.

Governance of open systems must be improved urgently, but this can be achieved through novel public institutions that develop systems with a public mandate in open ecosystems, alongside corporations, academia and other actors. There are many examples of healthy ecosystems; in fact, Digital Public Goods are generally developed that way today.

However, as future AI systems are likely to have a different risk profile than today’s Digital Public Goods, efforts to design good governance systems must be redoubled. Given the history of development of Digital Commons, this direction is far more likely to succeed than an approach to control risk in closed systems.


Institution Innovation

Some types of innovation are well understood and celebrated, like technology innovation and business model innovation. If I write an app that does a new thing, that’s innovation, and if I find a new way of selling solar panels, that could well be important innovation.

We’ve seen novel institutions emerge in societies throughout history. Usually they have emerged to wield power or to regulate it, whether to wield economic power, to establish a monopoly on violence, or to balance powers. Establishing the division of power into legislative, executive and judicial branches was an innovation with profound implications for society.

Now, the emergence of companies that are at least as powerful as nation states should prompt us to consider how they derive their power and whether we can balance that power through new institutions. This would be institution innovation, and I believe such innovation is instrumental to balancing certain forms of power.

For example, I think that the technology industry derives its power to a great extent from controlling essential knowledge about how the systems we rely on operate, and from its implementation capacity. An institution that develops Normative Technology with a democratic mandate would balance both of these forms of power, and forming one is currently very urgent.

In detail, such an institution would be focused on creation. It would thereby ensure that technology innovations are made in lockstep with moral and political thinking, and it would have broader outreach, since the work would happen in collaboration with commercial and civil society actors. It would first establish infrastructure (for example, identity and data management systems), and then develop technologies that address the reasons citizens are tied to platforms, creating things that are immediately useful to them so that they can be set free from those platforms.

Beyond technical innovations, new institutions need to make innovations in fields such as

  • governance of digital commons,
  • methods of interaction between political authorities and technological implementers,
  • development in cross-disciplinary teams,
  • project management given more complex teams and stakeholder views,
  • oversight to ensure that development is in citizens’ best interest and power remains balanced,
  • legislation based on needs identified by technological progress.

Setting institution innovation in motion is hard. If I were writing an app, I could just start doing it. If I had an idea for business innovation, I could just start selling something. Institution innovation requires a consensus to form within societies that it is needed and helpful. It is absolutely dependent on political support.