Mission accepted: my PhD research topic on dignity-centred AI development in 5 need-to-know points
This week marks 1 year since I began my PhD at the Australian National University’s School of Cybernetics. It’s been an amazing adventure so far.
Lots of people wonder what I’m actually up to. What does a PhD student do in their first year (in Australia, at least)?
Well essentially, we work out where to focus and why, and we present a proposal on what we’re going to do for the rest of our candidature. I did this a couple of weeks ago, and wanted to share a bit about where my research is at, why it’s important, why you should care and some ways you may be able to help!
My research topic (like most PhDs) looks like a bit of a mouthful:
to investigate leverage points that enable and thwart dignity-centred Artificial Intelligence development in entrepreneurial ecosystems.
Here’s what you need to know in 5 points.
(Spoiler alert: if you’re more of a video type, you can scroll to the end for a 12-minute version of my thesis proposal presentation.)
1. Entrepreneurs are a core group of humans shaping our technologies and thereby our futures, yet are understudied and underserved.
Humans shape technologies and technologies, in turn, shape humans; this is referred to as an interactional stance. Entrepreneurs envisioning, designing and building Artificial Intelligence (AI)-enabled systems hold power in shaping our collective futures. Their values, consciously or unconsciously, drive decision-making. And the ecosystem — societal, technological, environmental — within which entrepreneurs sit, shapes their values.
All AI embeds values, consciously or not. And we, as humans, have the power to shape our futures through the design and implementation of technologies. The field of AI ethics assists technology designers and developers to make informed decisions about what values to put at the centre. Yet it’s not particularly tailored to entrepreneurial contexts (often assuming large corporate or government agendas and resources!) and it generally offers little guidance on how to put those values into practice. I hope this work will generate fresh insight into what dignity-centred AI development can look like and make an important contribution to the practical implementation of AI ethics, particularly in entrepreneurial contexts.
2. Centring dignity in artificial intelligence development is important if we hope to build technologies that tackle inequality
There are many examples of AI systems that discriminate, disenfranchise and disempower humans; however, there is also immense potential to use AI in ways that empower and contribute to changing systems of inequality, not further entrenching them. When I was a Global Fellow with Acumen back in 2015/2016, we spoke a lot about how the opposite of poverty (in every sense of the word) is dignity (read more in Acumen’s manifesto here). This research posits that if technologies are to tackle inequality, then entrepreneurial ecosystems must value dignity.
There are lots of ways of thinking about dignity. Some initial thoughts are captured in this whitepaper, jointly published with Centre for Public Impact.
An interdisciplinary review of concepts of dignity will form the first part of my research. Stay tuned for more on this soon!
3. It’s applied research, infused with cybernetic perspectives
This research is applied. In academic terms, it’s called ‘intervention research’. As the researcher, I am not passively observing; I am actively part of the system, intervening with ideas and models within it, testing them and updating them over time.
I will be using a variety of methods (also known as mixed methods). The methods are likely to include a combination of:
- Interviews
- Workshops
- A survey
It’s infused with cybernetic perspectives. This probably needs a post all of its own, but in a nutshell what I mean by this is that there’s a focus on:
- Seeing things in (eco)systems
- Viewing things in terms of their feedback loops
- Integrating views across disciplines (transdisciplinary practice)
- Seeing myself as part of the system (also known as second-order cybernetics)
I’ll also likely be using different frameworks from management cybernetics to frame and understand my research.
4. I’ll be working with early-stage startups using (or planning to use) AI models
This research zeroes in on entrepreneurial ecosystems because it is here where hotbeds of innovation form, where the DNA of the next unicorn company is created, and where decisions are made about the nature of new AI-enabled products and services that will potentially impact our lives and shape our futures at scale.
In terms of the next year, my activities will look roughly like this:
- Phase 1: Exploring questions on what dignity-centred AI development looks like in entrepreneurial ecosystems, what mechanisms currently exist to enable a dignity ecosystem and where there are gaps.
- Phase 2: Working with early-stage start-ups to develop a prototype designed to enable dignity-centred AI development.
- Phase 3: This prototype will be further tested and evaluated in at least one other context, for example, an accelerator context or in the context of venture capital post-investment services.
5. There are ways you can get involved. Please reach out!
As a former consultant, I’m very aware of the risks of doing work that sits in a report (or a thesis in this case) and never realises its full potential for practical impact. To guard against this, I am focused on ensuring my research is ‘in the world’. You may be able to help!
If you’re someone who’s implementing AI ethics in practice in an organisation of any size (even better if you’re a startup!), I’d love to have a chat and, if it’s a good fit, interview you about your experiences.
If you’re someone who works in (or has founded) an organisation that has dignity as a value, principle or guiding light, I’d love to chat to understand what dignity means for you in your context!
If you’re an entrepreneur who’s interested in AI ethics and not sure what to do next, do reach out! I’m currently working with three different AI startups and hoping that the learnings we create together will have applicability for much wider audiences.
If you’re an academic, practitioner or pracademic reading this and thinking it would be great if she read <insert author> or attended <insert conference> or collaborated with <insert amazing human>, please reach out! I’m particularly interested in resources and people with things to share on reflexive practice, management cybernetics and non-Western views on dignity.
And if you’re someone who I haven’t envisioned above but you have something to share, ask or suggest, I’d absolutely love to hear from you.
Surprising encounters generally lead to incredibly interesting places. I can’t wait to see where the research may take me from here.
For a short version video (12 minutes!) of my Thesis Proposal Review, please see below:
Lorenn Ruster is a social-justice-driven professional and systems change consultant. Currently, Lorenn is a PhD candidate at the Australian National University’s School of Cybernetics and a Responsible Tech Collaborator at Centre for Public Impact. Previously, Lorenn was a Director at PwC’s Indigenous Consulting and a Director of Marketing & Innovation at a Ugandan solar energy company whilst a Global Fellow with the impact investor Acumen. She also co-founded a social enterprise leveraging sensor technology for community-led landmine detection whilst part of Singularity University’s Global Solutions Program. Her research investigates conditions for dignity-centred AI development, with a particular focus on entrepreneurial ecosystems.