How do governments fit into the AI brew? Australian-French perspectives from a recent Café Scientifique

Lorenn Ruster
May 25, 2024

Originally published on the ANU School of Cybernetics blog; prepared with the assistance of ChatGPT.

In the ever-evolving landscape of technological innovation, Artificial Intelligence (AI) stands as a beacon of both promise and concern. At a recent panel event titled “Byte-sized AI: How do governments fit into the AI brew,” experts gathered to discuss the intricate dance between harnessing AI’s potential and mitigating its risks.

Here are some of the key takeaways from the insightful discussion featuring Dr. Zena Assaad, Sarah Vallee, Dr. Ahmed Imran, and Dr. Emma Burns, and moderated by Dr. Rim El Kadi.

AI Ethics and AI Governance

Dr. Zena Assaad, Senior Lecturer at the Australian National University’s School of Engineering, emphasized that despite the seemingly critical need for ethical considerations in AI governance, she has yet to see real clarity around what ethical AI actually is, and dared to add that she has never seen ethics embedded within the governance of any organization. While AI’s rise has sparked excitement, defining ethical boundaries remains murky. Even the Australian Government’s most recent interim response on safe and responsible AI mentions ethics only three times.

Zena highlighted the problems with ascribing human traits, such as hallucination, to AI. In her view, this framing detracts from the fact that humans are the ones making the decisions. Ethics conversations, she argued, need to focus on the humans and the systems within which the technology sits, not on the technology in isolation.

She pointed out that while ethical frameworks exist, practical implementation is often lacking; without benchmarks that describe what ethical AI actually looks like in practice, governments and practitioners will continue to “suffocate under the avalanche of frameworks”.

For more information on Zena’s contribution to the panel, see her blog post.

The EU Perspective: Balancing Innovation and Regulation

Sarah Vallee, AFRAN AI Community Lead and Secondee to the ANU School of Cybernetics, shed light on the European Union’s approach to AI, particularly the EU AI Act adopted in March 2024. She gave an overview of the legislative environment in the EU, noting that AI systems were already regulated before the AI Act through other instruments such as consumer protection laws and the GDPR, but these were deemed insufficient for dealing with the complexity of AI systems.

The EU AI Act classifies AI systems based on risk, aiming to protect fundamental rights while promoting innovation. There are four levels of risk:

  • Unacceptable risk: e.g. emotion recognition or social scoring systems.
  • High risk: e.g. systems that affect health, safety or fundamental rights, such as medical or recruitment systems. These are the main focus of the EU AI Act.
  • Limited risk: e.g. systems involving chatbots, where it must be made transparent that you are interacting with technology and not a human.
  • Minimal risk: e.g. spam filters or Netflix’s recommender algorithm.

By incorporating ethical guidelines into binding legislation, the EU seeks to strike a delicate balance between fostering AI uptake and safeguarding against potential harm. Sarah clarified that the EU AI Act mostly regulates high-risk systems, and that most systems today are considered limited or minimal risk. However, this might change with the development of large language models (LLMs) like ChatGPT.
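To make the tiering concrete, here is a minimal sketch in Python of how the four categories relate to example systems and obligations. This is purely illustrative, not from the panel or the Act itself; the example systems and the obligations gloss are assumptions based on Sarah’s summary above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, as summarised above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Rough gloss of what each tier implies, per the summary above
# (illustrative only, not legal advice).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited (e.g. social scoring, emotion recognition)",
    RiskTier.HIGH: "strict requirements; the Act's main focus",
    RiskTier.LIMITED: "transparency: disclose that users are talking to a machine",
    RiskTier.MINIMAL: "largely left alone (e.g. spam filters, recommenders)",
}

# Hypothetical example systems mapped to tiers, following the panel's examples.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "recruitment screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```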

Sarah shared that Member States such as France and Germany are concerned that regulating too heavily will hinder European innovation and make it challenging for their AI champions (like French start-up Mistral AI) to raise capital.

She also highlighted that Europe is a leading voice on global AI governance. For example, France will host the next global “Summit on Artificial Intelligence” and is advocating for the creation of a World AI Organization that would evaluate and supervise AI systems worldwide.

Addressing Digital Inequality: A Global Concern

Dr. Ahmed Imran, Associate Professor at the University of Canberra and Founder of the Research Cluster of Digital Inequality and Social Change, delved into the pressing issue of digital inequality, stressing its implications for societal well-being.

With nearly three billion people lacking internet access, bridging this divide is paramount. But digital inclusion is important in all societies, even in Australia, as the latest Australian Digital Inclusion Index report highlights. Digital inclusion impacts identity, status, dignity, rights, empowerment, power imbalances, opportunities, health, income and wellbeing.

Ahmed challenged the audience to think carefully about who the beneficiaries of AI really are, and whether everything digital is really good. He discussed how the world is becoming captive to a handful of giant tech companies, creating new forms of techno-feudalism and data colonisation.

He highlighted the need for a philosophical shift towards a people-first approach, advocating for inclusive tech solutions and policy reforms to address digital inequities. This shift would mean a research and innovation approach that:

  • Is holistic (a departure from silo-based approaches of today).
  • Has an adequate focus on contextual and cultural issues.
  • Involves the collaboration of multiple cross-disciplinary lenses.
  • Is inclusive and accepts diversity.
  • Is based on social justice and takes a beneficiary perspective.
  • Favours solutions and outcomes that combine the social and the technical.

To read Ahmed’s research:

Industry’s Role in Ethical AI Implementation

Dr. Emma Burns, Data and AI Specialist at Microsoft Australia and New Zealand, provided insights from an industry perspective, emphasizing the importance of responsible AI deployment. She spoke about her work with government and the education sector on understanding and accessing Microsoft data and AI technologies, on the view that you cannot use a technology ethically or responsibly if you don’t understand it.

She reflected on how, in her role, she is often approached about the technology first. Her approach is always the same: “What’s the problem you’re trying to solve?”

She described how it takes diving deep into a problem together to understand the opportunities and implications of using technology in a particular context. Although there are repeatable patterns for impactful and responsible adoption of technology, it is never simple or easy; every use case requires careful thought.

Microsoft’s partnership with OpenAI since 2019 has driven technological leaps in AI, as evidenced by the release of ChatGPT in November 2022. But Emma was keen to emphasize that there are many forms of AI beyond generative AI, with some AI solutions applied to drive positive change in areas such as environmental monitoring and management.

No matter which form of AI is used, Emma highlighted the necessity of trusted partnerships to enable effective and responsible use: “if Microsoft is going to bring a technology to the market, they should also be present in partnerships deeply enough to ensure it’s applied in the right context”.

Emma discussed how AI use is already pervasive, but noted that there has been a lot of fear and confusion since the release of ChatGPT. She emphasized that we need to be careful not to stifle all AI innovation and positive use whilst the world seeks ways to regulate generative AI. She was very clear that despite the hype, AI is not sentient; it is not magic, it is math. Humans need to be in the loop, and that is where the responsibility lies: with the human. In this context, critical thinking and supervision skills have become more important than ever.

Group photo of Dr. Rim El Kadi, Sarah Vallee, Dr. Emma Burns, Dr. Ahmed Imran, Dr. Zena Assaad and Prof Katherine Daniell

The panel concluded with Q&A from the audience, who were interested in the role of AI in education and the implications of AI regulation in the Australian context.

The event was organised by the Australian-French Association for Research and Innovation (AFRAN) and the Alliance Française Canberra.

Lorenn is a PhD candidate at the Australian National University’s School of Cybernetics, a Responsible Tech Collaborator at the Centre for Public Impact, and undertakes freelance consulting on responsible AI, governance, stakeholder engagement and strategy. Previously, Lorenn was a Director at PwC’s Indigenous Consulting and a Director of Marketing & Innovation at a Ugandan solar energy company whilst a Global Fellow with impact investor Acumen. She also co-founded a social enterprise leveraging sensor technology for community-led landmine detection whilst part of Singularity University’s Global Solutions Program. Her research investigates conditions for dignity-centred AI development, with a particular focus on entrepreneurial ecosystems.
