
Hey there! Welcome to the Marketer Of The Month blog!

We recently interviewed Matthew Blakemore for our monthly podcast – ‘Marketer of the Month’! We had some amazing, insightful conversations with Matthew, and here’s what we discussed:

1. AI’s creative prowess within various industries.

2. The pivotal role of international AI standards.

3. Best practices for the ethical use of AI.

4. Bootstrapping vs Funding: Need for external investment in a startup’s journey.

5. AI’s potential impact on the Hollywood industry.

6. Startup Voice Podcast: Echoes of Innovation.

About our host:

Dr. Saksham Sharda is the Chief Information Officer at Outgrow.co. He specializes in data collection, analysis, filtering, and transfer by means of widgets and applets. Interactive, cultural, and trending widgets designed by him have been featured on TrendHunter, Alibaba, ProductHunt, New York Marketing Association, FactoryBerlin, Digimarcon Silicon Valley, and at The European Affiliate Summit.

About our guest:

Matthew Blakemore, Chief AI Strategist at AI Caramba and Member of ISO, is an acclaimed entrepreneur and AI strategist with over a decade of experience in bringing product concepts to commercial success. His entrepreneurial journey, highlighted by his sustainability-focused tech startup, demonstrates his exceptional leadership in AI and business development.

EPISODE 140: Rated ‘E’ for Ethical: British Board of Film Classification’s Matthew Blakemore on International AI Standards

The Intro!

Saksham Sharda: Hi, everyone. Welcome to another episode of Outgrow’s Marketer of the Month. I’m your host, Dr. Saksham Sharda, and I’m the creative director at Outgrow.co. And for this month we are going to interview Matthew Blakemore, who is the Chief AI Strategist at AI Caramba.

Matthew Blakemore: Great to be here. Thank you.

Don’t have time to read? No problem, just watch the Podcast!


Or you can just listen to it on Spotify!

The Rapid Fire Round!


Saksham Sharda: Okay, let’s start with the rapid-fire round. The first one is, at what age do you want to retire?

Matthew Blakemore: That’s a good question. Maybe 50.

Saksham Sharda:  How long does it take you to get ready in the morning?

Matthew Blakemore: 20 minutes.

Saksham Sharda: Most embarrassing moment of your life?

Matthew Blakemore: In Stockholm. When I lost my key, I had to run from a shower on a boat to a big reception office, which was up a hill. So I was just wrapped in a towel in minus 10 degrees Celsius. It was crazy.

Saksham Sharda: Favorite color?

Matthew Blakemore: Red.

Saksham Sharda: What time of day are you most inspired?

Matthew Blakemore:  Probably in the evenings.

Saksham Sharda: How many hours of sleep can you survive on?

Matthew Blakemore: Probably four or five.

Saksham Sharda: Fill in the blank. An upcoming technology trend is _____.

Matthew Blakemore: I think artificial general intelligence.

Saksham Sharda: The city in which the best kiss of your life happened?

Matthew Blakemore:  Probably London.

Saksham Sharda: Big one, Elon Musk or Mark Zuckerberg?

Matthew Blakemore: Mark Zuckerberg.

Saksham Sharda: The biggest mistake of your career?

Matthew Blakemore: The biggest mistake of my career was thinking when I started my startup that it would be really easy.

Saksham Sharda: How do you relax?

Matthew Blakemore: I don’t relax. Normally, Netflix or swimming.

Saksham Sharda: A habit of yours that you hate?

Matthew Blakemore: Sometimes I stress too much.

Saksham Sharda: The most valuable skill learned in life.

Matthew Blakemore:  Persistence.

Saksham Sharda: Your favorite Netflix show?

Matthew Blakemore: Favorite Netflix show. It’s probably one of the South Korean dramas, to be honest, I love them.

Saksham Sharda: One-word description of your leadership style, and the top priority in your daily schedule?

Matthew Blakemore: Top priority. Achieving as much as possible with the time available.

Saksham Sharda: Ideal vacation spot?

Matthew Blakemore: Ideal vacation, probably in the Nordics; I like the cold weather. So yeah.

Saksham Sharda: Key factor for maintaining a work-life balance?

Matthew Blakemore: You know what, I don’t have a work-life balance. So I continually work but the key factor is having a schedule and sticking to it.

Saksham Sharda: Memorable career milestone?

Matthew Blakemore: I think achieving Innovate UK funding in my previous role so that we could proceed with a big artificial intelligence project.

Saksham Sharda: A recent business innovation that caught your attention.

Matthew Blakemore: Oh, Bill Bryant, in Sweden, with CamStudio launched a generative AI model that can generate cool background music for a song or a film. And it uses high-quality loops. So it’s much better than a lot of the other music-generating tools out there.

The Big Questions!


Saksham Sharda: Okay, well, that was the end of the rapid fire. Fantastic. Now we can go into the longer questions, which you can answer with as much ease and time as you like. The first one is: can you tell us more about your role as an AI strategist and your journey in the field of AI?

Matthew Blakemore: Of course. So I’ve been working with AI for over a decade now. I started working with AI at my own company, Looks Good On Me, in the fashion sector. We were using visual recognition tools, which back then were few and far between, and they were nowhere near as good as they are today. Interestingly enough, we were using those tools to recognize details of fashion items so that we could suggest other items in the store that might go with them. And I think I met at least three people here today who are using the new technology to do similar things, so it’s really interesting to see that it’s still ongoing. Then, more recently, in my last role, the UK regulator was using AI for a couple of things. The first was generating international, multi-territory age ratings from one viewing of video content, so instead of having to go to every regulator around the world to get age ratings, you could do it all in one sitting. The second, more complex project was analyzing video content from a compliance perspective and recognizing not only where violence occurs, but also attributing some sort of severity level to it, using a combination of visual, speech, and sound recognition tools. It was really interesting to build that multimodal fusion model to do that. Taking all the learnings from those projects has put me in a good place, certainly from a creative-industry perspective, to advise on AI strategy, and that’s really where my role as a strategist comes from.

Saksham Sharda: And do you plan to return to any of these projects anytime soon?

Matthew Blakemore: Really good question. I mean, certainly, the media and entertainment industry is one where I think there’s a lot of growth and a lot of opportunity. So, you know, maybe there are opportunities there. But also, with the work that we were doing in terms of recognizing issues in content, I think there’s a lot of opportunity in the user-generated content space to provide better tools for platforms like YouTube. So yeah, absolutely, really keen to carry on that work.

Saksham Sharda: Are there any interesting lessons that you learned now that you look back at everything you’ve done that you bring with you today?

Matthew Blakemore: Yeah, I mean, as I think I mentioned earlier, persistence is so important, and so is having that drive to continue when things go against you. With securing funding, that was a long process, and if you weren’t persistent and didn’t believe in your vision, there was no way you could have got over that hurdle and achieved that result. So I think that sort of drive and persistence is a key thing that I’ll carry with me throughout my career.

Saksham Sharda: So you’re actively involved in developing AI standards, including ISO/IEC 8183. Could you explain what the standard is?

Matthew Blakemore: So I’ve been part of the BSI and ISO AI committees for quite some time now. The BSI is the British Standards Institution; the ISO is the International Organization for Standardization. ISO/IEC 8183 is an international, foundational standard for the AI data lifecycle. So it’s about how we use data in AI projects, how we can optimize that and make sure it’s ethically used, and how we can mitigate things like bias. It’s all there in that foundational standard as a guide for businesses that are either working on AI projects currently or are thinking about doing so. And all of the other standards around data quality and risk that are present within the ISO AI standards refer back to the AI data lifecycle. That’s why we wanted to get that standard out there, so that any business approaching these projects can have it as a kind of tool they can use. We also envisage that it can be used by policymakers and politicians to think about how we want to regulate this area, too.

Saksham Sharda: And what has the progress on regulation looked like? We all know about the letter that Elon Musk and others signed calling for AI to be regulated. What has been the update on that?

Matthew Blakemore: So I was very skeptical about that letter; I don’t know if it came from a good place. And what I mean by that is, obviously, OpenAI has not necessarily been entirely open about the kind of data sources they’ve used to train their models, and they are in an advantageous position because of that. And I felt that some of the people signing those letters calling for regulation were doing so because they were in an advantageous position to build something before regulation, and they were potentially looking to restrict others from building tools of a similar nature and competing with them. So I don’t know whether that came from a good place. In terms of the actual regulation, though, it’s really interesting to see the EU AI Act coming into force soon. The UK government has a more relaxed approach to AI for the time being, but we’ve got our AI Safety Summit coming up in November, where all of these core issues are going to be discussed. And I know the US is in a similar position, where they’re looking at whether they should bring in regulation or not, because they tend to be more hands-off than, say, the European Union. The one challenge the European Union has, from what I can tell, with the EU AI Act is that it’s very clever in terms of breaking down AI projects into different risk categories. But the issue is that several projects already exist, such as GPT-4, which has been widely used as ChatGPT. They’ve already been trained and they’re already in public consumption, and the problem is they probably fall into the high-risk categories. So what are we going to do with tools that are already being used by the public? Are they going to be withdrawn from the market? That would seem to be quite an unpopular thing, I would think, because people have gotten used to using these tools. So how are we going to work with tools that are already out there? I think that’s something that will be a challenge for the EU AI Act.

Saksham Sharda: And how does one navigate the challenge of having an international standard in the first place, which may or may not be respected by countries at all?

Matthew Blakemore: This is a really good question. And, you know, when we got the AI data lifecycle standard approved, I think 24 countries approved it. But you’re right that, when it’s a standard, it doesn’t have to be adhered to. It’s not a regulation, it’s not in law, and it’s up to the governments of those countries whether they want to use it to form a legal basis. But I do think it’s tricky if you have countries that are under tough regulation with the EU AI Act and countries that are not, because what you could see is innovative businesses opting to develop their tools in less restricted markets, because then they may be able to use practices that would not be allowed in the restricted market, and then bring their products, which have had the advantage of maybe more flexibility, to a market like the European Union without being entirely honest about how they’ve trained them. And the issue there is, you know, you’ve got black-box AI. With neural networks, it’s very difficult to break into those and find out what data has been used to train them. So I think that, again, is an intrinsic problem. If there’s not a globally accepted regulation around it, then we can have all these standards and we can have all this advice, but really it comes down to the ethics of the company and the ethics of the individuals more than anything else.

Saksham Sharda: What motivated you to become a member of the British and international standards organizations? And of course, what do you do within them?

Matthew Blakemore: Sure. So in terms of motivation, it was soon after the Cambridge Analytica scandal broke. Obviously, since then, there’s been that book by Brittany Kaiser called Targeted, which I recommend everyone reads. It is fascinating how AI can be used in a bad way, to influence politics, for example. And I was quite concerned, and I wanted to see what was being done in standards and to try and help governments to form policies that would stop that sort of thing happening in the future. That’s why I got involved in the British Standards Institution in the first place. And then the opportunity to work with experts like Julian Padgett, Colin Crone, and Jeremy Swim from Green on the AI data lifecycle standard got me excited about working in the International Organization for Standardization as well.

Saksham Sharda: So does the Cambridge Analytica scandal pale in comparison to what has happened since in technology?

Matthew Blakemore: I’m not sure it pales into insignificance. I think, when you’re looking at uses of AI that are bad, it’s certainly still up there with the scandals. I mean, deepfakes are going to be very interesting. There’s a British election coming up, and there’s already been one deepfake online of Sir Keir Starmer saying things that he didn’t say, trying to make him look bad. And I think that could be the start of something very bad. If we start to have loads of deepfakes before that election to try and influence the result, that’s going to be very problematic. It has, however, been a positive thing to see both parties putting out messages making clear that it was a deepfake. So hopefully, if they carry on in that way, with both key parties standing up for democracy, I think we can avoid any issues. But certainly, it’s not a good look that this far out from the election, which must be at least six months, deepfakes are already emerging.

Saksham Sharda: And so we mostly hear about the negative impacts of AI on politics. Are there any positive impacts on politics?

Matthew Blakemore: You know what, I think in the future AI can be a really useful tool to predict how certain policies will impact the public. And I think we might even be able to get to a point where we can rely on AI tools to shape strategy from an economic perspective. So I think there are benefits that we could see from AI in politics. But certainly, there are negatives with everything; it depends on the use case.

Saksham Sharda: In one of your posts, you mentioned the importance of ethical and responsible AI development. How do you advocate these principles in your work?

Matthew Blakemore: So certainly, in the projects I’ve worked on, we’ve always had an AI ethics committee. I think that’s important. We’ve always had independent voices come in as well, because it’s very difficult for a company to look at its products and analyze them from an ethical perspective without having an independent voice in there to look at them and feed into that. So we worked very closely with the University of Bath and had their representation there, and we had other external parties feeding into that as well. I think that’s a really important thing for companies to take away from this: I would recommend an AI ethics committee, but do invest in bringing in outside voices to feed into your ethics program.

Saksham Sharda: And besides these two, are there any other best practices or frameworks you recommend for organizations looking to prioritize ethical AI development?

Matthew Blakemore: Yeah. I mean, the ISO standards, the International Organization for Standardization standards. Also, sometimes the government publishes recommendations and best-practice papers as well, so it’s always worth looking out for those too. But as a starting point, the ISO standards are a good thing.

Saksham Sharda: Since you’re going to various assignments around the world, could you share some insights from your presentations, especially on the topic of demystifying AI?

Matthew Blakemore: Yeah. So I think, you know, in the latest presentation, I talked about the challenges, certainly with the C-level sometimes not understanding that when you bring in an AI team, 80% of their time is going to be spent on sorting out the data: on the data collection and the data preparation. And I don’t think there’s always that realization. Sometimes these companies bring people in and expect them to deliver an AI product in six months, and, you know, the data cleaning could take two years. It depends on how bad the data is, whether it’s structured or unstructured, whether it’s in silos; there are lots of things to consider. I would say that is a big consideration for the C-level. In terms of demystifying AI, I think we need to be clear as an industry and explain that, and maybe there needs to be more training available, because the only way we’re going to see real progress is if the people at the top of these companies feel confident and trust in the AI process.

Saksham Sharda: So have you had any particularly tough or interesting questions during one of these speaking engagements that really made you stop and think?

Matthew Blakemore: Oh, that’s a good question. Yeah. I mean, I’ve been asked a lot of questions about bias, and I think bias mitigation is incredibly important when it comes to video content, and it’s very complex. I had a question, actually, in my last session today regarding the most challenging issue with bias that we’ve encountered. And I would say the most challenging issue with bias is when you’ve got an AI product that can scan video content, for example, to identify things. It can pick up biases from the training data, and it can pick up biases that are actually in the content itself. Using Hollywood content as an example, Hollywood content may have built-in biases, just from the way that it’s been shot, the race of most of the actors, the way they portray different genders, all that sort of stuff. But if you’re building a tool to work on Hollywood content, it may be that the tool analyzes that content better with those biases, and performs worse if you take those biases away. That makes those ethical decisions around unwanted biases incredibly difficult. So I would say that is the most challenging thing, and that’s why it’s so important to have an AI ethics committee to be able to discuss those sorts of important issues.

Saksham Sharda: Speaking of Hollywood, what’s your opinion on the Hollywood anti-AI fight that’s happening in the news right now?

Matthew Blakemore: That’s interesting. I was talking about that in my presentation. I mean, I completely understand where the actors and the writers are coming from. Generative AI is very scary, and certainly it has the potential to put a lot of people out of work, I would say. But looking at it from the studios’ perspective, a lot of the media and entertainment industry has been struggling for a long time. The streaming wars have taken their toll: Disney has lost a lot of money on Disney+, Netflix has spent a load of money, Prime Video has spent a lot of money. It’s got to the point where, to produce the amount of content that needs to be produced to keep people engaged, something’s got to give. And I think realistically they’re going to have to start utilizing generative AI, whether that’s to generate scenes so they don’t have to fly production teams around the world, which can reduce their carbon footprint but also reduce the cost for them, or whether it’s having some AI-generated actors, which will not be a popular thing to say at this point. But I think there are going to be things that have to be done to cut costs for a lot of productions, to keep up with consumer demand and to try and maintain their position as one of the leading media and entertainment companies. So I think there is going to be a to and fro on this. And I touched on the fact that, when we’ve seen these sorts of significant shifts and changes in the media and entertainment space, it’s quite common for there to be pushback and skepticism. Even when we were moving from films without sound to films with sound, some of the actors and actresses were pushing back on it, saying it’s a terrible thing, no one should be adding sound to movies, it’s like putting lipstick on a famous Greek statue. There was a lot of pushback. And equally, when we moved from radio to television, there was the same type of pushback. So I think it’s normal to have pushback. I sympathize with the actors and the writers; there are going to be shifts in the industry, and I think human in the loop is always the best, so a combination of AI and humans. And I do think that, if they adopt generative AI, it will create AI-related jobs. So maybe it’s more a question of upskilling the people currently doing roles that may be impacted, so they can take some of those roles as well.

Saksham Sharda: So what do you think of the statement that we always thought AI would come first for blue-collar workers, then white-collar workers, and finally the artists, but we’re seeing the opposite, with it coming for white-collar workers, and maybe creative workers, first?

Matthew Blakemore: Yeah. I mean, it’s an interesting one. Is it coming for artists? That’s a good question. Is it coming for us? Because the current form of AI can generate imagery and things, but you have to be the creative one to create the prompt, and you have to be the one who has the idea in the first place. The AI is not great at coming up with ideas. If you ask GPT to come up with a script, it won’t be great if you don’t give it a good prompt and maybe a template of the script you’re looking for in the first place. So I think, yeah, it’s a very tricky question. Has it come for the artists? That is my question. Or has it come for the copywriters? Has it come for the people who are not the creatives, but who are, you know, building on what the creative people have done? That’s the question.

Saksham Sharda: So let’s talk about your involvement with the Swedish Chamber of Commerce. As the chairperson of its young professionals group, how do you support and mentor young professionals in the field of AI?

Matthew Blakemore: Of course. So I’ve always contributed quite heavily to the Swedish Chamber of Commerce over the years. It’s a fantastic organization. And I’m not Swedish myself, but they embraced me. I worked for a Swedish company back in the day called Foundation, and I love hanging out in the Nordics. So it’s been a really good opportunity for me to network with some amazing people. And in terms of, you know, feeding back, they always have this annual tech forum in London, which is a well-attended event, and it’s great to put people from my network forward as speakers each year and give them the opportunity to speak about the amazing stuff they’re doing. On top of that, we’ve run a mentoring scheme in the past as well, for people who are just leaving school at 18 and looking to go to university, and we’ve been able to mentor them personally to help them on that stage of their journey. And I’m always open to talking to others who are at the start of their careers and looking to embark on a career in technology. I recently did a podcast, which was distributed to the universities in London, for example, that was all about the fact that you don’t necessarily need to come from a computer science background to get involved in the AI industry.

Saksham Sharda: So in total, you have raised over 1 million pounds in grant capital. Could you share some strategies and insights for securing funding in the tech startup ecosystem?

Matthew Blakemore: Thank you. I think a lot of that comes from networking, so understanding where the opportunities are. And then it’s really hard work. I mean, to secure funding from Innovate UK in the UK, for example, you have to secure an academic partner most of the time, you have to have a realistic vision that can be achieved within the budget, you have to be able to put it all in a very detailed Gantt chart, and you have to be able to talk about how you’re going to commercialize the proposition. There’s a lot of work that has to be done. So what I would say is that grant funding can sometimes seem to be the easy way to get funding, and it isn’t: they expect just as much as an investor would, if not more, when they scrutinize these projects, and the competition for grant funding is extremely high. It’s a welcome thing that the UK has now once again joined the Horizon program, which it left during Brexit. Under the Horizon program, anyone can apply for funding again. It’s a very long process to secure that funding, but it really will help science and technology companies.

Saksham Sharda: So what’s your opinion on bootstrapping, on the other hand?

Matthew Blakemore: So I bootstrapped my startup for 12 months. And, you know, I had to do a part-time job alongside leading the startup, so I wasn’t getting very much sleep at all. I think bootstrapping is a good thing up to a point, but it can start to affect your health, just from experience, when you’re spending so much time trying to bring in enough money to keep it ticking over, to pay people, and to keep your vision coming to life. So yeah, bootstrapping can be a challenge, but I respect anyone who takes that on. There does always seem to come a point, though, where you do need external investment in one way or another.

Saksham Sharda: So what are your best tips for dealing with stress as a startup founder, which you said you don’t deal with?

Matthew Blakemore: Well, I would say that, you know, I always like to try and get as much done in the time I have available as possible. So I will always put myself under a lot of pressure to get things done and to achieve certain goals, which are normally quite tough. So maybe to avoid stress, you should be more realistic about how you can spend your time and make sure you structure your day to have some downtime as well. I don’t normally do that, to be fair. But I think for most people that would help. I kind of like to feel a little bit of pressure because it reassures me that things are going in the right direction and that a lot is going on. It makes me quite unnerved if there’s, like, nothing going on, or if I feel very relaxed, and I wonder why I’m so chilled at the moment.

Saksham Sharda: What role do you think pressure has to play in the entire startup ecosystem? Do you think competition makes it thrive in the first place?

Matthew Blakemore: I think so. Yeah. Absolutely. I mean, if you’re working on something that you think is groundbreaking, but there is no competition, that either raises alarm bells or means you’re working on something incredible. So I think most of the time investors like to see that there are competitive products because it shows that there’s a market and there’s an opportunity. And I think it takes a lot more to convince an investor to put money into a project if there is no obvious competition. 

Saksham Sharda: So you were shortlisted for the Complex Global Goals Impact Awards. Could you tell us about that?

Matthew Blakemore: Sure. So a lot of the stuff we’ve touched on already, around the ISO artificial intelligence data lifecycle standard, played a really important role there; it covers a lot of the UN sustainability goals as well. So I think that was one of the reasons why I was nominated. Also, the work I’ve been doing in my previous role at the UK regulator was all about helping an organization that’s over 100 years old to channel AI to remain relevant and to cover online content with its classifications, to help people in the UK and beyond. So I think those were the kinds of things that they saw as positive goals when nominating me for the award.

Saksham Sharda: And how has this recognition motivated you to continue your work and innovation?

Matthew Blakemore: Oh, it has. I mean, certainly, I would say that, from my perspective, social good projects are really important to me; they motivate me. And, you know, whether the award, or the nomination for that award, is going to have any lasting impact on that, I would say it’s more the fact that that’s just what motivates me: I’d like to see projects that have a positive impact on society. So I think that’s been a key thing throughout my whole career.

Saksham Sharda: So lastly, you’ve launched a podcast titled Startup Voice. What can listeners expect from this podcast? And what inspired you to start it?

Matthew Blakemore: Sure. So the truth is, I was at Slush last year. Slush is a big event in Helsinki; I think it’s the world’s largest startup gathering. I recommend it to anyone, it’s great fun, but you learn a lot as well. And I would say that Startup Voice is really there to shine a light on startups that are working on really exciting projects, whether they’re pre-seed, seed-funded, or Series A, and to shine a light on the founders’ journeys and what inspired them to take on this task. Because having run a startup myself, I know how difficult and challenging it can be; it’s not an easy thing to run your own company. So I would say it’s about allowing them to talk about the exciting stuff they’re doing and raising their profile so that investors can find out about their projects. But I’ve also got 10 other fantastic individuals in the tech sector, including, for example, the CEO of the British Design Fund, who are going to present the podcast with me, which will enable us to bring multiple different views, ideas, and perspectives to the podcast. That will bring value to the podcast and hopefully make it resonate with a lot of young people and a wider audience as well.

Let’s Conclude!

Saksham Sharda: Thanks, everyone, for joining us for this month’s episode of Outgrow’s Marketer of the Month. That was Matthew Blakemore, who is the Chief AI Strategist at AI Caramba.

Matthew Blakemore: Pleasure. Thanks for having me.

Saksham Sharda: Check out the website for more details and we’ll see you once again next month with another marketer of the month.
