Why Technology Companies End Up Promoting Ignorance

Now, let me first acknowledge three mutually exclusive schools of thought:

  1. Regulators do not understand technology, and therefore they should leave innovation and industrial development to those who do, namely the technologists themselves.
  2. Regulators understand technology sufficiently, and are able to enact meaningful legislation to advance society.
  3. Regulators do not understand technology, and we have to fix it!

I understand that it is hard to sell politicians on the idea that point 2 is not the case, but bear with me: I believe we have much more in common than what separates us, namely the end goal of democratic control over the progress of technology. Those who argue point 1 have very little to show for it, as argued in the first paragraph.

I believe we are in the situation of point 3, and that it is not difficult to understand why we ended up there. Even many who haven’t been to business school (like myself) are aware of the fundamentals of a business strategy framework known as VRIN/O. In this framework, you should understand what value you provide to customers, whether it is rare enough for you to be a chosen provider, whether someone else can easily imitate you and take your business, whether it can be substituted with something else, and whether you have the organization to pull it off.
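To make the checklist concrete, here is a minimal sketch in Python. The code and its names are my own hypothetical illustration, not an established formulation, but the classification follows a common reading of the VRIN/O questions as described above.

    from dataclasses import dataclass

    @dataclass
    class Resource:
        """A capability or piece of knowledge, scored on the VRIN/O questions."""
        valuable: bool           # does it provide value to customers?
        rare: bool               # is it rare enough that you'd be a chosen provider?
        inimitable: bool         # is it hard for others to imitate?
        non_substitutable: bool  # is there no ready substitute for it?
        organized: bool          # do you have the organization to exploit it?

    def competitive_position(r: Resource) -> str:
        """Classify a resource the way the VRIN/O checklist suggests."""
        if not r.valuable:
            return "competitive disadvantage"
        if not r.rare:
            return "competitive parity"
        if not (r.inimitable and r.non_substitutable):
            return "temporary competitive advantage"
        if not r.organized:
            return "unexploited competitive advantage"
        return "sustained competitive advantage"

    # Example: exclusive in-house knowledge about how a software system works.
    knowledge = Resource(valuable=True, rare=True, inimitable=True,
                         non_substitutable=True, organized=True)
    print(competitive_position(knowledge))  # sustained competitive advantage

This is exactly why exclusive knowledge is so attractive as a strategic asset: as long as it stays in house, it scores high on every question.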

Several of these criteria rely on a certain exclusivity; that is one of your key competitive advantages. Technology companies, especially software companies, are in the knowledge industry. The software itself is, in my experience, pretty easy to imitate if the ideas and understanding behind it are sufficiently clear. Thus, these companies are obviously very strategic about what knowledge they keep in house and what they share, and they are careful about building internal knowledge bases. Understandably, right? That’s really their business.

Often, this is not a problem. Often, it drives competition. If you have a system that you’ve made 10% faster than a competitor’s system based on the knowledge you have, then you are likely to win in the marketplace, and your competitor has to do their own research to keep up, or find some other value proposition to compete on. This provides better value for customers, so everybody wins!

So, this is not a problem in general, but it is a disaster for society when the technology has normative effects. When that happens, society ends up without the ability to influence the direction it is taking, because it lacks the knowledge to understand what has happened. We end up with privatized governance. We end up living in a company town, without ever having opted into doing so.

And that’s where we’re living right now. It has to end. This is our town, not theirs.

We know what the alternative looks like; in fact, we have it all around us. Even big, closed systems consist mostly of Open Source Software. Open Source Software establishes non-exclusivity and non-subtractability: anyone can use it, study it, make modifications and distribute it. Large businesses sometimes build Open Source because there is no strategic value in controlling the knowledge behind it, or because it confers other strategic advantages, like attracting developers to their products. Usually, you see a little closed software on top of these systems; as I said, companies are very strategic about it. This could happen for reasons of healthy competition, but often it is a ruse. Buyer beware!

This is where democracies need to step in; these strategic decisions must not be left to the companies alone. We must protect people from falling for the ruse, and even more importantly, we must not allow exclusive knowledge for normative technology. Since we already know how to develop software that doesn’t have exclusivity, we should do that.

However, even though governance has always been a part of Open Source projects, it is not part of their definition. Moreover, we haven’t been terribly good at it beyond meritocracies that produce excellent products. There’s a big jump from a meritocracy to a governance model that is part of democratic infrastructure, and this is the jump we have to make now. This is a conversation that happens within what is now called Digital Commons, as governance has been part of Commons for thousands of years, and humanity has been pretty good at it.

So, that’s how we ended up in this sorry situation: a common business strategy went unchecked, and the means to keep it in balance flew under the radar. This can be fixed with normative technology.

Necessity of Openness in Normative Technology

This essay is written in response to the United Nations Office of the Tech Envoy’s Call for Papers on Global AI Governance.

Problems of Closed Systems

The attraction of the closed approach is certainly clear: if these critical components reside only within a small number of organizations, one may think that they will be easier to audit, and that a thorough understanding of the technology can be reached before the components are deployed to the public in an orderly manner.

It also mirrors the opinion of the leading legal scholar Lawrence Lessig, who proposed this in his seminal 1999 book “Code and Other Laws of Cyberspace”. While the book was remarkably prescient in many of its theses, it is clear in retrospect that the assumption that technology would be easier to regulate when centralized within a few companies was highly flawed.

The reasons are varied, but it is important to note that technology companies possess different forms of power than regulators do: they have the knowledge required to fully understand how systems are developed and how they work, they have the capacity to implement the rules that govern how people operate, and they control the infrastructure onto which others deploy their systems, thereby constraining those actors’ possible actions too. This power is illegitimate, but it nonetheless often exceeds the power of nation states, resulting in a loss of regulability.

This has been the de facto technological landscape up to now, and AI technologies are unlikely to change these fundamental problems; if anything, they may aggravate them.

Audits of closed systems are unlikely to be successful unless the auditee is perfectly honest and fully committed to transparency, as the knowledge-power imbalance will be significant and hiding malicious behavior is easy. Auditing a perfectly honest actor is pointless; that is advisory work, not an audit. Transparency requirements are equally difficult for the same reason: there is no way to know whether everything that is relevant has been made available.

If we define safety as an absence of harm, there is a myriad of ways that harm could arise, not all of which can be predicted, and therefore safety is practically impossible to prove. While certain harms can be identified and mitigated in advance, estimating all risks is impossible, and some harms can only be mitigated after their effects start to manifest themselves. In closed systems, understanding such effects is harder, since it relies on information that is only available to the entity that is the cause of the harm.

Closed systems are likely to benefit only hegemons, which in a globalized world means only a handful of global companies, and they will not result in a safer deployment of AI technologies.

Problems of Open Systems

We also acknowledge significant risks with open systems. As open systems lower barriers to entry for all, they also lower barriers for actors with nefarious purposes.

Researchers have created information pollution machines in part with open source tools, something that would have required a much larger investment had those tools not been available.

Open Source also discourages business models that rely on the exclusivity of code and models, so business strategies are more difficult to develop and maintain.

Moreover, access to computing power and ethical access to training data pose further problems, which must be addressed to avoid further consolidation of power.

While Open Source is very widespread and many projects have been highly successful, governance models are often not well adapted to a highly contested space. This has led, and will lead, to attempts at capture; indeed, there are examples of business strategies advocating just that.

Public Knowledge – Public Safety

Generally, the public needs to have knowledge to respond thoughtfully to adversity and crisis, and this includes both citizens and the authorities that are tasked with protecting them. With closed systems, this knowledge will be restricted to a select few, and not necessarily those who can ensure public safety.

Irene Solaiman provides a compelling gradient framework for considering a closed-open continuum. While we generally align with the discussion there, as discussed above, we do not agree that risk control is tied to this axis: at the closed end, it very much depends on the honesty of the organization, and at the open end, it depends on the quality of the governance system and access to resources. Risk control, and therefore public safety, is a separate issue from the closed-open axis.

However, the historical record makes it very clear that risk control in very powerful private corporations is difficult. The current situation is worse than similar situations in the past because the knowledge imbalance is much larger than in any other industry; social media companies are examples of how previous attempts at regulation have been largely unsuccessful.

Governance of open systems must be improved urgently, but this can be achieved through novel public institutions that develop systems with a public mandate in open ecosystems, alongside corporations, academia and other actors. There are many examples of healthy ecosystems; in fact, Digital Public Goods are generally developed that way today.

However, as future AI systems are likely to have a different risk profile than today’s Digital Public Goods, efforts to design good governance systems must be redoubled. Given the history of the development of Digital Commons, this direction is far more likely to succeed than attempts to control risk in closed systems.

Institution Innovation

Some types of innovation are well understood and celebrated, like technology innovation and business model innovation. If I write an app that does a new thing, that’s innovation, and if I find a new way of selling solar panels, that could well be important innovation.

We’ve seen novel institutions emerge in societies throughout history. Usually they have emerged to wield or regulate power, whether by accumulating economic power, establishing a monopoly on violence, or balancing other powers. Establishing the division of power into legislative, executive and judicial branches was an innovation with profound implications for society.

Now, the emergence of companies that are at least as powerful as nation states should prompt us to consider how they derive their power and whether we can balance that power through new institutions. This would be institution innovation, and I believe such innovation is instrumental in balancing certain forms of power.

For example, I think that the technology industry to a great extent derives its power from controlling essential knowledge about how the systems we rely on operate, and from its implementation capacity. By forming an institution that develops Normative Technology with a democratic mandate, we can balance these two forms of power, and doing so is currently very urgent.

In detail, such an institution would be focused on creation. With that, it would ensure that technology innovations are made in lockstep with moral and political thinking. It would also have broader outreach, since it would work in collaboration with commercial and civil society actors. It would first establish infrastructures (for example identity and data management systems), and then develop technologies that address the reasons why citizens are tied to platforms, creating things that are immediately useful to them so that they can be set free from those platforms.

Beyond technical innovations, new institutions need to innovate in fields such as:

  • governance of digital commons,
  • methods of interaction between political authorities and technological implementers,
  • development in cross-disciplinary teams,
  • project management given more complex teams and stakeholder views,
  • oversight to ensure that development is in citizens’ best interest and power remains balanced,
  • legislation based on needs identified by technological progress.

Setting institution innovation in motion is hard. If I were writing an app, I could just start doing it. If I had an idea for business innovation, I could just start selling something. Institution innovation requires a consensus to form within societies that it is needed and helpful. It is absolutely dependent on political support.