Hey there! Welcome to the Marketer Of The Month blog!
We recently interviewed Michael O’Flaherty for our monthly podcast – ‘Marketer of the Month’! We had some amazing, insightful conversations with Michael, and here’s what we discussed:
1. Government’s Role in Protecting Individual Rights in the Face of AI Advancements
2. Research-Based Regulatory Models: Navigating the Complexities of Fast-Evolving AI Technologies
3. Amplifying Citizen Voices: The Vital Role of Civil Society in AI Governance
4. Inclusion and Collaboration: Creating Spaces for Meaningful Exchanges between Governments and Civil Society in AI Decision-Making
5. Empowering the Future Generation: Providing Platforms for Youth Engagement in AI Policy
6. Human Rights as the Foundation: Ensuring Dignity and Respect in the Development and Deployment of AI Technologies
About our host:
Dr. Saksham Sharda is the Chief Information Officer at Outgrow.co. He specializes in data collection, analysis, filtering, and transfer by means of widgets and applets. Interactive, cultural, and trending widgets designed by him have been featured on TrendHunter, Alibaba, ProductHunt, New York Marketing Association, FactoryBerlin, Digimarcon Silicon Valley, and at The European Affiliate Summit.
About our guest:
In this episode, Michael O’Flaherty, an Irish human rights lawyer and academic, and Director of the EU Fundamental Rights Agency, shares insights on the intersection of AI and human rights, discussing the role of governments, research-based regulatory models, citizen engagement and a lot more!
EPISODE 112: A Race Against Time: European Union’s FRA Director Michael O’Flaherty on the Need for Urgent AI Regulation
Saksham Sharda: Hi, everyone. Welcome to another episode of Outgrow’s Marketer of the Month. I’m your host, Dr. Saksham Sharda, and I’m the creative director at Outgrow.co. And for this month we are going to interview Michael O’Flaherty, who is the Director of the EU Fundamental Rights Agency. Thanks for joining us, Michael.
Michael O’Flaherty: Great to be here. Thank you.
The Rapid Fire Round!
Saksham Sharda: Let’s start with the rapid-fire round. The first one: describe what your organization does in one sentence.
Michael O’Flaherty: The Fundamental Rights Agency advises the EU so that it is human rights compliant in its work, its laws, and its policies.
Saksham Sharda: How long does it take you to get ready in the mornings?
Michael O’Flaherty: Oh, it takes me a good half an hour on a good morning or a lot longer on the bad days.
Saksham Sharda: The most valuable skill you have learned in life?
Michael O’Flaherty: Patience.
Saksham Sharda: The city in which the best kiss of your life happened?
Michael O’Flaherty: Dublin.
Saksham Sharda: How many speakers can you name in this summit?
Michael O’Flaherty: About 20
Saksham Sharda: Name them.
Michael O’Flaherty: No.
Saksham Sharda: In one sentence, describe one problem that your organization is facing.
Michael O’Flaherty: Resources. We never have enough resources to do the job that we need to do for European citizens.
Saksham Sharda: How do you relax?
Michael O’Flaherty: I relax by visiting art galleries.
Saksham Sharda: A habit of yours that you hate?
Michael O’Flaherty: Talking over other people.
Saksham Sharda: Work from home or work from office?
Michael O’Flaherty: A bit of both.
Saksham Sharda: The most embarrassing moment of your life?
Michael O’Flaherty: I embarrass myself every day.
Saksham Sharda: How many hours of sleep can you survive on?
Michael O’Flaherty: I can survive well on seven.
Saksham Sharda: Your favorite app?
Michael O’Flaherty: My favorite apps are news apps: the Irish Times and BBC.
Saksham Sharda: Biggest mistake of your career?
Michael O’Flaherty: Not doing adequate pension planning.
Saksham Sharda: First movie that comes to your mind when I say the word “technology”?
Michael O’Flaherty: The Social Network.
Saksham Sharda: How many cups of coffee do you drink in a day?
Michael O’Flaherty: Far too many. About five.
Saksham Sharda: Your favorite Netflix show?
Michael O’Flaherty: Derry Girls.
The Big Questions!
Saksham Sharda: All right. So that’s the end of the rapid-fire round; we’re gonna move on to the bigger questions. And you can answer these with as much ease and length as you’d like. The first one is: are our governments acting fast enough to guarantee individual rights in the face of rapidly evolving artificial intelligence technologies, or do they need support from non-governmental organizations?
Michael O’Flaherty: No, they’re not moving fast enough. We’ve got to speed up our game, but we’ve got to do it very carefully, based on solid research, and know what we’re doing. We have to come up with regulatory models that work for a very complex, fast-evolving context, where we have to look at not just tech in general, but its application and its uses. And only on that basis can we put in place the right regulatory frame. Civil society plays a critically important role. Civil society is the voice of rights holders, the voice of citizens, and we need that voice to be fed into the discussions. Governments don’t automatically know what the issues are, what the challenges are, and where the problems are lying hidden. That’s what we gather from civil society, which is out there, incredibly well-skilled, right across all of the sectors. And without them, we won’t get it right.
Saksham Sharda: So what are some of the ways civil society can be involved in this?
Michael O’Flaherty: It’s a very good question. We have to make sure to maintain organized spaces where there can be exchanges of views with civil society. They tell us that when they’re consulted, often they’re not listened to: they’re asked for their views, but they aren’t given enough time or opportunity to express those views, and they don’t often get the sense that they’ve been listened to. So be it in the tech world, the AI world, or any other, we have to invest far more in creating these spaces where they can express their views and have a meaningful impact. And by the way, one group that’s shouting very loudly about this is young people. Young civil society feels particularly excluded from decisions that have massive implications for their future.
Saksham Sharda: How would you define a human rights-based approach to technology in the first place?
Michael O’Flaherty: Human rights have no value in and of themselves; human rights are just a set of tools to honor human dignity. And so the answer to your question is that technology, and the regulation of technology, will be human rights-based when its goal is honoring human dignity in all its wonderful diversity, making sure that every single one of us is respected in these settings.
Saksham Sharda: What according to you are the main points of contention in the debate between privacy advocates and AI developers?
Michael O’Flaherty: There are many points of contention. The first is that they misunderstand each other, because it’s about so much more than privacy; human rights are often reduced to just an issue of privacy, but it’s about every human right. Look at the way AI works in reality in our lives, and you see the extent to which everything about who we are is engaged. It’s about our free speech, it’s about our free movement and our association. It’s about our social welfare entitlements, when a tech app is making a decision about whether I get social welfare or not. It’s about my job, it’s about my access to school and to housing, and the list goes on. Even as we move into the next generation of technology, the issue of human identity itself is going to arise: what does it take to be a human? At what point have we created artificially something equivalent to a human? We’re not there yet, thank goodness. But my point is that it’s every imaginable issue. And in engaging all of those issues, the biggest challenge right now is transparency. It is imperative that the industry open up and make clear what it’s doing: the content of algorithms, the content of training materials. They’ll tell you that it’s not possible in many cases; we challenge that. This black box has a lid, and we need to prise open the lid. Now, we can have discussions about what transparency looks like in practice, I can see that. But nevertheless, we have to have those conversations. I’m afraid of a future in which we will not have achieved transparency for these extraordinary technologies, which have such huge implications for human well-being.
Saksham Sharda: So what are the arguments on the side of people who are saying that it’s hard to be transparent?
Michael O’Flaherty: The arguments on their side are not convincing. I’m not a scientist, I’m a lawyer. But I refuse to concede that this is magic. This is technology invented by people and controlled by people, for purposes that they consider useful for people. So how come people cannot understand what’s going on? Now, I am told that there’s a dimension of certain types of AI that operate in such an opaque way that it’s actually difficult even for the designer to understand quite why it works. Let’s at least have a conversation with those people around these technologies, and see to what extent we can draw out adequate information so that we can, for example, regulate in an appropriate way.
Saksham Sharda: How can we make sure that AI tools do not discriminate against certain groups?
Michael O’Flaherty: In the first place, we have to say that AI tools are capable of astonishing levels of discrimination, and we can attempt to remove inappropriate bias. By the way, all tech is biased; it is intended in a certain direction. So not all bias is bad, but the bias that distinguishes between people on what we call protected characteristics (your gender, your age, your race, your ethnicity, your religion, your sexual orientation, your gender identity), that’s the bad bias. And we see from our research, time and time again, that despite genuine efforts to remove the bad bias, it’s still in there, for instance through proxies, and it grows ever more distinctive through feedback loops. It’s very, very important to tackle these; they constitute one of the biggest concerns we have for the application of artificial intelligence today.
Saksham Sharda: And this game of catch-up, is it ever going to end, or is it going to exacerbate into worse and worse situations?
Michael O’Flaherty: I am not going to take the question in those terms. It doesn’t have to be worse and worse; it can be better and better. AI is the most astonishing potential tool for human thriving that we’ve seen in our history, I would argue, if it’s marshaled and channeled in the right direction, with the right levels of care. So we can have an astonishingly good future for human beings through a smart application of artificial intelligence, but we will always be catching up. That’s the nature of technology. Is it ever different in any other sector? Things are constantly in evolution, and so the law and the regulator and the oversight will never stop moving along as technology moves. The key here is to avoid any sense of complacency. We have, for example, in the EU, put in place an AI regulation that’s smart and anticipatory for the future, already poised to address future applications that we can’t even think of today.
Saksham Sharda: So the EU has begun to play the global technology game. Where would you say the EU is in this game in terms of sophistication, strategy, and resources, especially in comparison to the US, Russia, and China?
Michael O’Flaherty: The EU believes in trustworthy AI. And that, by the way, is the only AI that’s compatible with respect for human rights. So I’m very proud to play my own small role, and my agency is proud to play its role, in the development of this regulatory framework. And by the way, you’ve mentioned other parts of the world. I believe that in the longer term, the trustworthy AI that the EU approach is championing will ultimately be the more successful one, because, frankly, consumers, users, people, citizens are only going to tolerate an AI that they can trust, that they see is oriented to human wellbeing.
Saksham Sharda: What do you think about the evolving role of algorithms in society and how it affects our ability to make informed decisions?
Michael O’Flaherty: I think that society isn’t adequately aware of the extent to which algorithms are playing vital roles in people’s lives. For example, I think there’s a low level of awareness of the extent to which the state is using algorithms. There was a scandal last year in one country, the Netherlands, where social welfare payments were taken back from individuals on the basis, it was ultimately learned, of the discriminatory application of an algorithm, an essentially racist action of an algorithm. I don’t doubt the whole thing was unintended. But it was a state service that was implicated here. So we need our citizens to wake up to the extent to which algorithms are in play, we need public debates about the extent to which that is or is not acceptable to people on the European streets, and we need exposure of that reality, to allow people to make complaints if they feel they’ve been wronged. You won’t complain about a decision made by an algorithm when you’re not even aware that such a decision was taken that impacted you negatively.
Saksham Sharda: How successful do you think the EU has been in shaping global standards of privacy and data protection?
Michael O’Flaherty: I think it’s been the world leader. The GDPR, the General Data Protection Regulation, was a global first. It may not be perfect; nothing human-made is perfect. But nevertheless, it’s a very serious effort to protect this very important human right of privacy. It’s not an absolute right, but nevertheless, we have to protect it very carefully, to avoid the abuses of privacy that led to such horrors as, to take the most extreme example, the genocide of the Holocaust here in Europe. So European leadership there has been very important. Even if it were copied nowhere else, I would applaud it, but it has been copied. It has been taken up and adopted locally in many countries around the world, proof, if any were needed, that it was not an act of folly, but rather a commitment to global values and a pathway towards honoring them.
Saksham Sharda: So the last question is of a personal kind: what would you be doing in your life right now, if not this?
Michael O’Flaherty: Yeah. I’ve spent so much of my life working in the area of human rights, I would find it really hard to know what I would do otherwise. I think I’d like to be an art dealer. I’d be a very bad art dealer, but I think I’d like that.
Saksham Sharda: Thanks, everyone, for joining us for this month’s episode of Outgrow’s Marketer of the Month. That was Michael O’Flaherty, the Director of the EU Fundamental Rights Agency. Thanks for joining us, Michael.
Michael O’Flaherty: Pleasure. Thanks for having me.
Saksham Sharda: Check out the website for more details and we’ll see you once again next month with another marketer of the month.