AI Governance: A Matryoshka Suitcase

Lorenn Ruster
May 31, 2019

Suitcase words:

No… not ‘carry-on’, ‘hardcase’ or Samsonite.

A term coined by Marvin Minsky to describe words like ‘consciousness’, ‘thinking’, ‘intelligence’.

Words that mean nothing by themselves but hold a variety of meanings inside once you start unpacking. To engage deeply with these words, you need to unpack numerous other concepts and layers.

‘Governance’ is a bit like that. It’s a suitcase word.

We know we need ‘good governance’. We know it’s important. We know it’s core to making decisions and holding people accountable.

We know the basic function of the suitcase. But once we start unpacking what that really means, how it works in practice, who makes the judgement call on what it is and what ‘good’ really looks like, we find a range of things stored in the suitcase. Some things we knew were hiding somewhere inside, some we can’t find despite our best efforts, some are damaged in transit, some we forgot we packed, and others we never included in the first place but might if we travelled again.

When we speak about the governance of AI, I like to visualise what I’ve termed a Matryoshka Suitcase — like Matryoshka dolls, unpacking one suitcase reveals another, slightly smaller suitcase, and another, and another, and another. All require unpacking to engage with them meaningfully.

All of a sudden you’ve got more suitcases than hands and fitting them all back together is never as simple as it seems.

It can be paralysing.

In this post, I begin to unpack one of the Matryoshka Suitcases, focusing on the linkage between AI governance and power.

Matryoshka Dolls

First, a few questions that jump out of this power suitcase are unpacked.

Then three other suitcases found within are explored:

Matryoshka Suitcase #1: Indigenous Intelligence

Matryoshka Suitcase #2: Human Rights

Matryoshka Suitcase #3: Collective Impact

Spoiler alert: there are no answers in this post; rather, each suitcase reveals a series of questions ripe for further exploration, research, consideration and collaboration.

AI — its development, implementation and impacts — is a driving force for the future of our humanity. Shaping the governance of AI means tackling how to create institutions, organisations, bodies and processes of the future that are able to engage with, design and steward change in power dynamics in new and different ways.

Analysing the power dynamics at play right now:

- What actions are we taking now, consciously or not, which are defining the power dynamics inherent in the development and use of AI?

- Can we pinpoint the decision-making that is taking place?

- Who is making decisions that impact the development of AI for the future?

- What norms are already in place regarding AI development? What principles do they subscribe to? Who is scrutinising these principles on behalf of humanity?

Intentionally designing a future that consciously tackles power distribution:

- How do we ensure that those lacking power in the evolving systems surrounding AI are not only accounted for, heard and included but can meaningfully be a part of determining their own futures?

- What mechanisms can be designed to distribute power and safeguard humanity from a future catering to only some? What learnings from other fields can we leverage?

- Is representation of our diverse society in decision-making a mechanism for power redistribution? If so, how do we define true representation and how do we define diversity — of culture, thought, mindset, geography, worldview? Are there other models to consider?

- Who do we trust to steward a process of design for a future that distributes power, recognising that this will ultimately lead to the realities of power transfer, upsetting the status quo and ultimately, for some, an acceptance of loss?

In an effort to unpack this further, below are a few adjacent perspectives on power, providing some clues to the power dynamics we need to foster, create, control and let emerge.

Indigenous Intelligence

“All technology can be viewed as a cultural artefact; a technological representation of society at that time, that reflects the cultures and values of the society that created it”. (PwC’s Indigenous Consulting submission on Human Rights and Technology to the Australian Human Rights Commission, 2018)

As much as AI is driving a large shift in our society, this is not the first time humanity has dealt with such transformation. Our Indigenous peoples hold great wisdom on this, yet today they are among the most marginalised peoples on earth, often without a seat at the table and without power to influence where it counts.

In business, learning from those who have walked the path before is critical to success and progress: we do market and competitive analyses; we interview stakeholders widely to learn, test and iterate; we hire people with experience onto our leadership teams to transform us from the inside; and we look to have them in coveted positions on our boards to steward the company into the future.

Why would this approach be any different in how we design governance of AI in the future?

Some questions for further consideration:

What are we doing to learn with our oldest cultures on earth about how they have designed, and adapted to, technology development and its integration into their lives over thousands of years? I deliberately use the term ‘learn with’ and not ‘learn from’: this requires a fundamental shift in how we engage in learning, co-creation and ownership of our shared humanity. Central to this is how we design systems and mechanisms that actively and intentionally consider and shape power dynamics.

How can Indigenous wisdom be at the core of AI governance now and in the future? What practical conditions would need to be in place for this to occur in a way that moves beyond inclusion and enables self-determination?

Human Rights

When looking at how we might govern AI into the future, a question around humanity’s commonly held values emerges quickly as we explore questions of accountability: accountability to whom and to what standards.

The Universal Declaration of Human Rights is an expression of fundamental values shared by members of the international community. Its preamble begins by recognising that ‘the inherent dignity and... the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world’.

This declaration is therefore a good guide to humanity’s commonly held values.

Despite this, there is already significant concern regarding the potential impacts of AI on core articles in the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR) and the EU Charter of Fundamental Rights. An indicative summary demonstrating this is outlined below.

Source: summarised from Access Now, Human Rights in the Age of Artificial Intelligence, November 2018.

Describing the impacts is important, and so is making recommendations regarding due diligence, impact assessments, grievance mechanisms and the like.

But what else do we do here? The stakes are high: the continued development of AI has the potential to erode many of our human rights and to destabilise the commonly held values that have previously guided our behaviour and, in many ways, shaped and guided power structures.

Some questions for further consideration:

Do we, as an international community, need to fundamentally revisit these Human Rights in the context of AI and its future impacts?

How can we use these rights to drive the design of AI into the future and be part of the mindset of those working in and shaping the field (rather than an afterthought or a way to do some sort of compliance-based check)?

How can human rights dynamically contribute to the governance of AI development as it happens?

Collective Impact

Over the past 5–10 years there has been increasing traction for and demonstration of “Collective Impact” — a way of tackling complex social problems.

At its core is a notion that siloed ways of working are not conducive to tackling complex social change, and a collective approach is required.

“The Collective Impact approach is premised on the belief that no single policy, government department, organisation or program can tackle or solve the increasingly complex social problems we face as a society. The approach calls for multiple organisations or entities from different sectors to abandon their own agenda in favour of a common agenda, shared measurement and alignment of effort. Unlike collaboration or partnership, Collective Impact initiatives have centralised infrastructure — known as a backbone organisation — with dedicated staff whose role is to help participating organisations shift from acting alone to acting in concert.” (Kerry Graham and Dawn O’Neil, Pro Bono Australia, 2013)

There are five conditions of collective impact: a common agenda, shared measurement, mutually reinforcing activities, continuous communication and backbone support.

Source: John Kania, Fay Hanleybrown and Jennifer Splansky Juster, Essential Mindset Shifts for Collective Impact, Stanford Social Innovation Review, Fall 2014.

Power dynamics are also an essential consideration in implementing a collective impact approach, particularly as governance structures (e.g. the backbone organisation or team) are often not in place at the outset. The emergence of these governance mechanisms, and the role they play in stewarding the process, holding people to account and making decisions, is entangled in the power dynamics of a particular community.

Further work is required to unpack the relevance of this approach in addressing AI as an emerging complex social challenge.

Some questions for further consideration:

To what extent is the place-based nature of many collective impact initiatives a crucial element to the model? Could the model be applied to a more networked, nebulous community as would potentially be the case when considering governance of AI?

Are there already backbone type organisations in the field of AI governance? Are they oriented to collective impact?

Who would be the stakeholders involved in a collective impact approach to tackling AI governance in the future? To what extent is there already a common agenda around what we are trying to achieve?

Ultimately, what can we learn from collective impact in terms of the establishment of governance structures and navigating and negotiating power?

And so it begins… Matryoshka Suitcase after Matryoshka Suitcase.

In an era of exponential change, unpacking these suitcases is one approach to identifying ways to build an AI governance infrastructure that values an equitable world for all.

For context: I am an aspiring tri-sector collaborator, born in Sydney, Australia, and a citizen of the globe. A strategy consultant by trade, I have worked across many industries and seek to combine business acumen, social impact and human compassion. An alumnus of Singularity University, I have a particular interest in the intersection of social impact, technology, ethics and innovation. In 2016, I completed the Acumen Global Fellowship and spent a year building a marketing / innovation / customer insight capability at a fast-growing solar energy social enterprise in Uganda. I am currently a director focused on systems change, social business innovation, Indigenous Affairs and cross-sector collaboration with PwC’s Indigenous Consulting. I like to reflect on my experiences as a way of making sense of them, and with the hope that my sharing may spark something in others too. Thank you for reading!
