Accountable AI: Deeper Implications
A conversation with David Weinberger and Professor Kevin Werbach
Werbach’s[1] podcast series focuses on artificial intelligence and its promise to transform our world. That promise requires scrutiny: AI’s potential cannot be reached without accountability and guardrail mechanisms, including governance, that ensure it is deployed responsibly. Developing AI is only half the challenge; ensuring its safe and trustworthy deployment is the other, as we are already witnessing the shadows these remarkable epistemic technologies can cast.
It was no surprise to see David Weinberger as the latest guest in this podcast series.
David is an American author, technologist and speaker, with quite a few other strings to his bow. His influential work explores how our ideas about knowledge, business and society are reshaped by technology, in particular the Internet and AI[2].
His well-known line, “The smartest person in the room is the room,” encapsulates a central idea in David’s work: knowledge is no longer an individual attribute or privilege but resides in the collective intelligence of connected people and systems. His “room” represents the group, the internet, or collaborative environments that nurture and host diverse ideas and perspectives. With the rise of AI, the room has grown exponentially larger and continues to expand; its shifting walls are becoming increasingly indefinable, blurred and elusive.
The Discussion: Starting with a Question
The podcast opened with a fundamental question: why does considering the deep impacts of AI matter in the first place? Isn’t it premature to ask?
David highlighted that although his current writing considers whether and how AI might be shaping our thinking about fundamental things, it may be too early to assess that impact. Yet he feels it is a valid question to explore and a worthwhile way of thinking, especially since he doesn’t know the answer. His approach of exploring without knowing, or needing, the outcome gives us a sense of what it means to ask such profound questions. My ears were pricked.
Technology has deeply influenced our thinking across a vast spectrum of domains, disciplines and societal transformations by changing how we view and deal with information, and this influence has spread very quickly. Now, in the early days of the Age of AI, exploring our relationship with information and its effects on our thinking is, to say the least, intriguing. The exploration becomes even more fascinating when we admit we do not know the answer to how AI shapes our thinking, even while speculating that AI is the next dominant technology. Current developments suggest AI’s technological domination is likely, yet the answer to its impact on our thinking remains elusive.
A New Machine Paradigm: Beyond Human Generalization
In this fascinating exchange, David puts forward that large language models already permeate our interactions with AI. More significantly, they have been designed to sound like humans and to respond to us like very smart people; in short, they are technological anthropomorphisms. This quality differentiates them from other machine learning developments. Yet we are not interacting with a person; we are interacting with quite the opposite: the same world of bits and bytes found in other computer technologies, presented behind a human-like veneer.
For centuries humans have sought and aspired to generalizations as the foundation of knowledge, scientific endeavour and discovery. In our knowledge world, principles and general truths prevail; they are what humanity relies on to build knowledge and to inform research and discovery.
It may not be that outlandish to propose that humanity needs generalizations to make sense of a world that would otherwise be impossible to fathom.
Without disputing the validity and importance of generalizations such as the laws of physics of Newton or Einstein, David highlights that machine learning is leading us to a very different view of the world: namely, that of the particular.
To illustrate his argument, the podcast’s guest refers to the use of machine learning in retinal scan research[3]. Contrary to the generalizations that emerge from research (such as the link between smoking and cancer), what emerged here was a completely different way of viewing the world, in this case the retinal scan and the eyeball. Using patterns and predictions, the machine discovered anomalies, variations and differences that human specialists could not detect. In other words, David argues that while humans typically look for what we have in common, the generalization or universal, the machine reveals the opposite: what is particular, different, and therefore hard to detect.
The Capacity of the Machine
Already emerging in this new dawn of organic and inorganic intelligence is the capacity of machines to reveal particularities and variations that humans couldn’t detect on their own.
We may have become masters at creating epistemic tools, or agents, that expose new truths about particulars, identifying and predicting on the basis of tremendous amounts of information. These tools, however, operate at orders of magnitude that our limited brains cannot fathom, never mind hold.
Our generalizations have served us well to date; the machine’s nous for revealing particulars relies on a new multi-dimensional capability that uncovers novel correlations through prediction. At times these predictions lead to “hallucinations”[4]; at others they reveal new and valid truths. Interestingly, hallucinations can be the direct result of generalizations in the training data, which tend to favour and reproduce our human biases, too often stemming from the WEIRDness of the data: the Western, Educated, Industrialized, Rich and Democratic populations that generate it. The result is bias made manifest.
David argues that the machine is a black box because our world, too, is a black box. This presents us with new challenges and opportunities.
The Power of the One versus the Many
No surprise that the dialogue eventually moves toward power dynamics: relatively few people are involved in a technology that affects the vast majority, typically representing a very thin white layer on a multi-coloured cake. Rich and increasingly powerful corporations and autocratic governments dominate the AI landscape with their gigantic and expensive models. To counter this dominance, David calls for smaller models and for new companies entering the field, together with more regulatory control and governance over what companies are allowed to do. Neither is by any means certain.
In Summary
Needless to say, the discussion in this podcast reached well beyond what I’m trying to highlight in this post. This reflection merely encapsulates what stood out for me personally. There is no doubt that my discussion sells the podcast and its presenter and guest short. To discover the full depth of the conversation, I recommend tuning in[5].
What struck me most in the discussion was David’s contrast between the generalizations (universals) humanity relies on and the machine’s prowess at detecting particulars. Somewhat paradoxically, these machines are fed our generalizations yet reveal the particulars we are unable to observe by ourselves.
Despite the human brain being the inspiration for AI development, it seems that our brains and those of the machine have developed distinct minds of their own, with unique capacities and limitations that complement rather than mirror one another. Where these minds will take one another, and how the shape of our thinking will change as a result of this new intelligence relationship, remains the elusive answer to a very profound question.
[1] Prof Kevin Werbach: Professor of Legal Studies and Business Ethics at the Wharton School of the University of Pennsylvania
[2] Books include: The Cluetrain Manifesto, Everything is Miscellaneous, Too Big to Know and Everyday Chaos.
[3] The research project in question was performed at Leeds University, using deep learning techniques to analyze retinal scans, both to identify several eye conditions and to investigate correlations between retinal anomalies and cardiac disease.
[4] Term used in connection with AI, referring to a response that contains false or misleading information while appearing plausible.
[5] Podcast available on Spotify, Apple, YouTube and through several websites, including https://accountableai.net, the webpage of Prof. Kevin Werbach’s The Road to Accountable AI: AI You Can Trust.