
Dr. Dympna O’Sullivan advocates AI literacy

‘AI is too important to be left to the technology people’

by Marco van der Hoeven

Dr. Dympna O’Sullivan is Head of Research at the Faculty of Computing, Digital and Data at the Technological University Dublin. She has a PhD in AI, and her current research focuses on how AI systems align with human values. AI and data science are among the key research topics at the university, which cooperates closely with Workday, a company working on AI at its EMEA headquarters in Dublin. Rocking Robots sat down with her to discuss the way AI impacts society.

“One of the things that I’m interested in is human-computer interaction for AI,” says Dr. Dympna O’Sullivan. “And within that field I am very interested in how machines can explain what they are doing to humans: explainable AI. There are different ways of looking at that. One is opening the black box and trying to figure out which neurons in a deep neural net are really contributing to a decision.”

“What I am trying to do is develop a visual language for explaining AI. We have a visual language when we are interacting with a computer, like icons, and we are all used to that. But now we need a different type of visual language for AI to explain decisions.”


There are many things to consider. “The first thing is the level of risk. Is this a decision about your smart heating system, or a system that is giving advice to a clinician? The level of risk there is different. Then there is the user and their level of expertise. With a high-risk system, like a clinical decision support system, you could have an expert consultant or a very junior doctor, so they need different types of explanations. And then there is the actual device: how do you present it?”

“This type of explanation falls more into the realm of cognitive science. Should it be a factual explanation? Should it be case-based? The type of explanation is important, as is the presentation format. Should it be text? Should it be graphic? Should it be an image that resonates with users? So what I am trying to do is create a design pattern library for explainable AI.”

“An important part of my work on human-AI collaboration is talking to users and learning whether they understand the explanation. Another part of this is collaborating with social scientists and psychologists on what is important in these explanations. Is it comprehensibility? Is it trust? Is it adoption? Is it safety? We talk a lot about trust in AI systems, but safety in AI systems underpins trust.”

General public

There is a lot of discussion about AI among the general public. “Right now, if you ask the public about AI, the two most common examples that always come up are Netflix recommendations and killer robots. But people do not realize how many decisions are being made about them by algorithms: by banking software, by recruitment software, by devices in your home. The public are mainly getting the bad use cases. And now the world is going into an election cycle: half the world’s population is going to experience an election. There is much emphasis now on misinformation and generative AI, and on top of that you have regulation. But regulation brings distrust at first. Eventually regulation leads to more trustworthy AI, but we must go through this conversation for people to move forward.”

Generative AI is an important part of this discussion. “Generative AI has taken us all by surprise. These enormously large language models are the ultimate black boxes. And GPT-4 was trained on the internet. So when you talk about asking developers to open up their training data, how do you do that if it is the whole internet? Even in the academic community we were caught out by how fast this was progressing. So we really need to start thinking about AI literacy.”

Navigate the world

“AI literacy is a key competency for navigating the world. So how do you empower citizens to understand AI? There is a responsibility for academia, and there is a responsibility for government, to upskill citizens in terms of AI. If you think an algorithm has made a decision about you and it is wrong, where do you go? The product was pushed out, and it has bolted. And now regulation is trying to catch up.”

“This is the year that AI has really caught the public’s attention. Now we are all going to have AI in our pocket: our phones are going to be driven by AI, we are going to have copilots in there, and generative AI is going to underlie so many features. And that is huge. In a way, it democratizes things. AI is in the hands of consumers to do what they want, but it ends up in the hands of bad actors as well. That is where the dangers are.”


She says the risk-based approach the EU is taking in the AI Act is the right one. “The emphasis on fundamental rights is important. AI models are consuming vast amounts of personal data, and personal data can be used in many ways. The big risks are misinformation and disinformation. That is very destabilizing for society. And the other significant risk is bias and discrimination. AI systems can be biased for many reasons.”

The lesson about AI she gives her students is to treat it with caution. “It is certainly a tool for good. But without applying the appropriate guardrails you can lose control of it quickly. We should have AI ethics in every curriculum, and part of AI ethics is teaching students empathy. Who are the end users of your system going to be? Think about users other than 21-year-old males. Think about older people, think about people with disabilities, think about people from different countries and people from diverse cultures.”

“People get excluded because they were never in the data set. Or people are outliers in their own data, and developers just delete outlier data. AI is too important to be left to the technology people. We need to bring the citizen voice into it. For that, citizens need the vocabulary, so they can ask the right questions and understand the risks and benefits, and society can decide what levels of risk we are willing to tolerate.”
