Categories
Governance

Why Technology Companies End Up Promoting Ignorance

Let me first acknowledge three mutually exclusive schools of thought:

  1. Regulators do not understand technology, and therefore they should leave innovation and industrial development to those who do, namely the technologists themselves.
  2. Regulators understand technology sufficiently, and are able to enact meaningful legislation to advance society.
  3. Regulators do not understand technology, and we have to fix it!

I understand that it is a hard sell to politicians that point 2 is not the case, but bear with me: I believe we have much more in common than what separates us, namely the end goal of democratic control over the progress of technology. Those who argue point 1 have very little to show for it, as argued in the first paragraph.

I believe we are in the situation of point 3, and it is not difficult to understand how we ended up there. Even many of us who haven't been to business school (myself included) are aware of the fundamentals of a business strategy framework known as VRIN/O. In this framework, you should understand what value you provide to customers, whether it is rare enough for you to be a chosen provider, whether someone else can easily imitate you and take your business, whether it can be substituted with something else, and whether you have the organization to pull it off.

Several of these rely on a certain exclusivity; that is one of your key competitive advantages. Technology companies, especially software companies, are in the knowledge industry. The software itself is, in my experience, fairly easy to imitate if the ideas and understanding behind it are sufficiently clear. Thus, these companies are obviously very strategic about what knowledge they keep in house and what they share, and also careful about building internal knowledge bases. Understandably, right? That's really their business.

Often, this is not a problem. Often, it drives competition. If the knowledge you hold lets you make your system 10% faster than a competitor's, then you are likely to win in the marketplace, and your competitor has to do their own research to keep up, or find some other value proposition to compete on. This provides better value for customers, so everybody wins!

So, this is not a problem in general, but it is a disaster for society when the technology has normative effects. When that happens, society loses the ability to influence its own direction because it lacks the knowledge to understand what has happened. We end up with privatized governance. We end up living in a company town, without ever having opted into doing so.

And that’s where we’re living right now. It has to end. This is our town, not theirs.

We know what the alternative looks like; in fact, we have it all around us. Even big, closed systems consist mostly of Open Source Software. Open Source Software establishes non-exclusivity and non-subtractability. That is, anyone can use it, study it, make modifications and distribute it. Large businesses sometimes build Open Source because there is no strategic value in controlling the knowledge behind it, or because it confers other strategic advantages, like attracting developers to their products. Usually, you see a little closed software on top of these open systems; as I said, companies are very strategic about it. This can happen for reasons of healthy competition, but often it is a ruse. Buyer beware!

This is where democracies need to step in: these strategic decisions must not be left to the companies alone. We must protect people from falling for the ruse, and even more importantly, we must not allow exclusive knowledge for normative technology. Since we already know how to develop software without exclusivity, we should do exactly that.

However, even though governance has always been part of Open Source projects, it is not defined as part of Open Source itself. Moreover, we haven't been terribly good at it beyond meritocracies that produce excellent products. There's a big jump from a meritocracy to a governance model that is part of democratic infrastructure, and this is the jump we have to make now. This conversation is happening within what is now called Digital Commons, as governance has been part of Commons for thousands of years, and humanity has been pretty good at it.

So, that's how we ended up in this sorry situation: a common business strategy went unchecked, and the means to keep it in balance went under the radar. For normative technology, this can be fixed.


Necessity of Openness in Normative Technology

This essay is written in response to the United Nations Office of the Tech Envoy’s Call for Papers on Global AI Governance.

Problems of Closed Systems

The attraction of the closed approach is certainly clear: if these critical components reside only within a small number of organizations, one might think that they will be easier to audit, and that a thorough understanding of the technology can be reached before it is deployed to the public in an orderly manner.

It also mirrors the opinion of the leading legal scholar Lawrence Lessig, who proposed this in his seminal 1999 book "Code and Other Laws of Cyberspace". While the book was remarkably prescient in many of its theses, it is clear in retrospect that the assumption that technology would be easier to regulate when centralized within a few companies was highly flawed.

The reasons are varied, but it is important to note that technology companies possess forms of power that regulators do not: they have the knowledge required to fully understand how systems are developed and how they work, they have the capacity to implement the rules that govern how people operate, and they control the infrastructure onto which others deploy their systems, thereby constraining their possible actions too. This power is illegitimate, but it nonetheless often exceeds the power of nation states, resulting in a loss of regulability.

This has been the de facto technological landscape up to now, and AI technologies are unlikely to change these fundamental problems; if anything, they may aggravate them.

Audits of closed systems are unlikely to be successful unless the auditee is perfectly honest and fully committed to transparency, as the knowledge-power imbalance will be significant and hiding malicious behavior is easy. Auditing a perfectly honest actor is pointless; that is an advisory engagement, not an audit. Transparency requirements are equally difficult for the same reason: there is no way to know whether everything that is relevant has been made available.

If we define safety as the absence of harm, there is a myriad of ways harm could arise, not all of which can be predicted, and therefore safety is practically impossible to prove. While certain harms can be identified and mitigated, estimating all risks is impossible, and some can only be mitigated after the harmful effects start to manifest themselves. In closed systems, understanding such effects is harder, since it relies on information available only to the entity that is causing the harm.

Closed systems are likely to benefit only hegemons, which in a globalized world implies only a handful of global companies, and they will not result in a safer deployment of AI technologies.

Problems of Open Systems

We also acknowledge significant risks with open systems. As open systems lower barriers to entry for all, they also lower barriers for actors with nefarious purposes.

Researchers have created information pollution machines, in part with open source tools; such machines would have required a much larger investment had those tools not been available.

Open Source also discourages business models that rely on exclusivity of code and models, making business strategies more difficult to develop and maintain.

Moreover, access to computing power and ethical access to training data pose further problems, which must be addressed to avoid further consolidation of power.

While Open Source is very widespread and many projects have been highly successful, governance models are often not well adapted to a highly contested space. This has led, and will lead, to attempts at capture; indeed, there are examples of business strategies advocating just that.

Public Knowledge – Public Safety

Generally, the public needs to have knowledge to respond thoughtfully to adversity and crisis, and this includes both citizens and the authorities that are tasked with protecting them. With closed systems, this knowledge will be restricted to a select few, and not necessarily those who can ensure public safety.

Irene Solaiman provides a compelling gradient framework for considering the closed-open continuum. While I generally align with the discussion there, as argued above, I do not agree that risk control is tied to this axis: at the closed end, it depends very much on the honesty of the organization, and at the open end, it depends on the quality of the governance system and on access to resources. Risk control, and therefore public safety, is a separate issue from the closed-open axis.

However, the historical record makes it very clear that risk control in very powerful private corporations is very difficult. The current situation is worse than similar situations in the past because the knowledge imbalance is much larger than in any other industry; social media companies are examples of how previous attempts at regulation have been largely unsuccessful.

Governance of open systems must be improved urgently, but this can be achieved through novel public institutions that develop systems with a public mandate in open ecosystems, alongside corporations, academia and other actors. There are many examples of healthy ecosystems; in fact, Digital Public Goods are generally developed that way today.

However, as future AI systems are likely to have a different risk profile than today’s Digital Public Goods, efforts to design good governance systems must be redoubled. Given the history of development of Digital Commons, this direction is far more likely to succeed than an approach to control risk in closed systems.


Nuances in Governance Design

Once you start thinking about how to govern technology, you realize there is no one-size-fits-all answer.

Even if one may not be able to articulate exactly how, it is obvious to people that the way the code-base of an open-source UI component is governed should differ from how a large Digital Identity system is governed. Projects with a large number of contributors that are used by many need to be governed differently from projects with a few contributors that are used by a few. There are also projects where deep technical expertise resides with a few, but which affect many stakeholders.

These are the obvious reasons that normative technology needs a multiplicity of governance models, but there are some subtle reasons too.

Normative technologies do not all exist in the same societal context. There are societies where trust in government, fellow citizens and the rule of law is high, e.g. Norway. These societies would benefit from a model where a government-appointed ombudsman or oversight body could be trusted to do its job independently and fairly. However, in other societies, like India, where trust in government and the rule of law is less established, these models would fail.

This underlines the focus of these discussions: governance models do not have an inherent "goodness". The exercise here is to build trust among the stakeholders that they have both a voice and a choice. They should have a say in shaping the normative technology they participate in. Where necessary, they should also be able to reject normative technology that they believe does not agree with their values.

These are hard problems, because ultimately, we are not just governing code, but governing ourselves. The problems of governing normative technologies intersect with the problems we have in governing society itself. However, they are not impossible. We need to study examples of success in various contexts and document them, until we learn what works where, when it works, and ultimately why. Only then will we start to understand how to build governance models that help us build technology that upholds the norms we desire.