Hey there! Welcome to the Marketer Of The Month blog!

We recently interviewed Keith Sonderling for our monthly podcast – ‘Marketer of the Month’! We had some amazingly insightful conversations with Keith, and here’s what we discussed:

1. Global Data Exchange: Enhancing Technology Regulations and Ethics

2. Data-Driven Equality: Addressing Discrimination through Collection and Analysis

3. Striking a Balance for Sensitive Information

4. The Rise of Data Collection: Navigating Sensitivity and Security

5. Collaborative Governance: Private-Public Cooperation for Innovative Regulations

6. Digital Democracy: Managing Censorship and Empowering Responsible Tech Development

About our host:

Dr. Saksham Sharda is the Chief Information Officer at Outgrow.co. He specializes in data collection, analysis, filtering, and transfer by means of widgets and applets. Interactive, cultural, and trending widgets designed by him have been featured on TrendHunter, Alibaba, ProductHunt, New York Marketing Association, FactoryBerlin, Digimarcon Silicon Valley, and at The European Affiliate Summit.

About our guest:

Introducing Keith E. Sonderling, a Commissioner at the U.S. Equal Employment Opportunity Commission (EEOC). In this episode of Marketer of the Month, Keith discusses responsible AI utilization, civil rights, and the evolving technology landscape.

EPISODE 111: Eyes Wide Open: United States EEOC Commissioner Keith Sonderling Exposes the Dark Side of AI in Corporate HR

The Intro!

Saksham Sharda: Hi, everyone. Welcome to another episode of Outgrow’s Marketer of the Month. I’m your host, Dr. Saksham Sharda, and I’m the creative director at Outgrow.co. And for this month we are going to interview Keith Sonderling, who is the Commissioner at the U.S. Equal Employment Opportunity Commission (EEOC). Thanks for joining us, Keith.

Keith Sonderling: Great to be here. Thank you.

Don’t have time to read? No problem, just watch the Podcast!

Or you can just listen to it on Spotify!

The Rapid Fire Round!

Saksham Sharda: So, starting with the rapid-fire round, the first question: describe what your organization does in one sentence.

Keith Sonderling: The premier civil rights organization for the world.

Saksham Sharda: How long does it take you to get ready in the mornings?

Keith Sonderling: Under half an hour.

Saksham Sharda: The most valuable skill you have learned in life?

Keith Sonderling: Smile.

Saksham Sharda: The city in which the Best Kiss of your life happened?

Keith Sonderling: Where I am with my wife.

Saksham Sharda: In one sentence, describe one problem that your organization is facing.

Keith Sonderling: Bias in the workforce.

Saksham Sharda: How many speakers can you name at this conference?

Keith Sonderling: Myself.

Saksham Sharda: How do you relax?

Keith Sonderling: I don’t.

Saksham Sharda: A habit of yours that you hate?

Keith Sonderling: Pass

Saksham Sharda: Work from home or office?

Keith Sonderling: Office, not even close.

Saksham Sharda: Most embarrassing moment of your life?

Keith Sonderling: Pass

Saksham Sharda: How many hours of sleep can you survive on?

Keith Sonderling: Six to seven.

Saksham Sharda: Your favorite app?

Keith Sonderling: Instagram.

Saksham Sharda: Biggest mistake of your career?

Keith Sonderling: Pass.

Saksham Sharda: The movie that comes to mind when I say the word technology?

Keith Sonderling: WALL-E.

Saksham Sharda: How many cups of coffee do you drink in a day?

Keith Sonderling: Should be illegal. Too many.

Saksham Sharda: Your favorite Netflix show?

Keith Sonderling: I don’t watch a lot of TV. My wife does.

The Big Questions!

Saksham Sharda: So, moving on to the long-form questions. Do you agree with this statement: “countries must routinely collect and exchange data on the success or failure of various technological policy measures across jurisdictions to bring regulations and ethics up to date”?

Keith Sonderling: Yes and no. I do think that, you know, the collection of data can be important. And it can be useful for civil law enforcement agencies like mine in the United States to root out discrimination, specifically pay discrimination, which my agency is responsible for enforcing. So having some level of transparency, to be able to see where the issues are, does require that collection of data. At the same time, I do understand that a lot of the data that tech companies or businesses have is proprietary, it’s their secret sauce, or it contains information that the employees or the users don’t want the government seeing. So I do think there has to be a balance. But when people don’t know their rights, data can be very helpful in uncovering that later on.

Saksham Sharda: So how do you think different countries approach this if you could give an example of this?

Keith Sonderling: Well, in the United States, my agency, for the first time in 2016, asked private employers for information related to pay data. It was very controversial; a lot of employers did not want to give that information, and they were very fearful about whether the government could handle the privacy of that data collection, or questioned the utility of just giving the government broad categories of pay data. There was, of course, litigation in the United States, and ultimately my agency was required to collect that data. But I do think that when governments collect data, there should be pilot programs first, and the public should be involved in deciding how much data should be collected. Because collecting data, or having access to data, is one thing; having the utility of that data to make an impactful change, like, again, in my example, rooting out pay discrimination in the workplace between men and women, is what matters. We can’t just have, you know, unlimited amounts of data without it being very specified. And I think we do need to work with the private sector to be able to do that properly.

Saksham Sharda: And does the future look like there’s going to be more data collection of this kind?

Keith Sonderling: Absolutely. I think, both in the United States and here in Europe, whether it’s with GDPR or some of the efforts in the United States, that as private companies get more data, governments are going to want it too. But I think it’s important that we remember the sensitivity of that data, and the potential information about people’s health and their finances that needs to be protected. So it’s a very complicated topic that we all need to work together to get right.

Saksham Sharda: So to what extent are private companies leading the game of data more than government organizations?

Keith Sonderling: Well, private companies obviously have a lot of data, not only on their workforce but on their customers as well. And a lot of them are now using artificial intelligence to cull and correlate that data to make meaningful impacts. And I think that’s a very good use of data collection, if you’re going to do something with it. Now, are you going to just use it for a commercial purpose, which is okay for a lot of companies in that industry, or how are you going to use it to benefit your workforce? How are you going to use it to benefit your public? And that’s the question, especially in my space, which deals with the workforce, and especially with diversity and inclusion in the workplace: how are you going to use the data you collect in a meaningful way? Because you own the data, you have access to the data, whether or not the government will ever see it. And you can use it, largely through technology, to improve the conditions of your workers, whether it’s through finding them the right jobs, finding what their skills are, or paying them properly. A lot of that can be done without government intervention.

Saksham Sharda: What do you think about trying to maintain democracy in the digital age while balancing censorship and passivity?

Keith Sonderling: Well, I think this is a key topic that has been discussed a lot at this conference. There has to be a balance here: ensuring that there is a regulatory framework when it comes to technology, but one that doesn’t stifle innovation. And that’s what I’m trying hard to do when it comes to artificial intelligence in the workplace: to make sure that the tech vendors who are creating this technology, and the businesses using it, know that there are existing laws out there, and an existing legal framework, regarding the use of data and decisions made through technology. I think a lot of that has been lost as new technology comes on the market so quickly. Because artificial intelligence is being developed so rapidly, people assume there’s no framework, that we need new laws and new regulations. And I’ve been trying to slow everyone down. For instance, the laws I enforce at my agency are from the 1960s, but they apply equal weight to decisions employers are making with technology as they do to decisions people were making with pen and paper. That is getting lost in the regulation-of-technology debate, because the debate gets into tech censorship or the black box of algorithms and the decisions they make. But I believe it’s important for regulators like myself, in the United States, and here in Europe, or the UK, wherever you are, to help tech companies understand the long-standing existing frameworks and distill them into a language that they understand. Tech companies, computer engineers, and software engineers are very smart, but they’re not regulators. So we need to make sure that they understand the regulations so they can build them into their technology. That’s what I’m trying to do, and I think that needs to happen globally. Because if it does, products can then be developed with the existing regulatory framework in mind, and we don’t need to rush to put in a new framework that may hamper innovation.

Saksham Sharda: But to what extent is technology evolving at such a fast pace that every three years there’s a kind of paradigm shift? And do you think at some point governments will have to bring in AI to keep pace with regulation?

Keith Sonderling: There’s new technology coming onto the market every single day, and whether existing laws can cover it within those frameworks is a question to be asked. But you raise a great point about how the government keeps up with rapidly developing technology, and we’re not going to be able to, because we have to deal with issues across the board that don’t relate to technology. This is just one industry that we have to regulate. So something that I’m trying to drive home to tech companies and users of the technology is self-governance: you’re the ones developing these products so rapidly that the government and other people are not going to be able to keep up, so it’s on you to make sure that you’re developing and deploying products that comply with the law and are used ethically. Because ultimately, at the end of the day, that’s where the liability is going to be. And the future regulations, which a lot of companies don’t want because they can harm their development, we can avoid all of that if these products are self-regulated and the users of them are building teams around them to make sure they’re used properly.

Saksham Sharda: To what extent, then, will tech companies get intertwined with the world of politics? We’re already seeing tech giants like Elon Musk making waves. These used to be two separate spheres, but now we see politics and tech combining, even at this conference. What do you think that’s leading to?

Keith Sonderling: Well, I’m a government official, and I’m here at this conference, so I think it’s really important. From my seat in Washington, DC, the tech industry is just one of the thousands of industries that we regulate, and having a relationship with industries, whether it’s technology, manufacturing, retail, travel and leisure, you name it, some laws apply to all of them. Having that relationship with the key stakeholders from these different areas is how government in Washington, DC has worked for a long time. Tech does need to be at the table, just like all industries, because we need to hear from them about what they want to do, how they can do it the right way, and how they can work with us on passing reasonable regulations that protect consumers but also allow tech to flourish. That’s the way other industries have done it for years and years, and it’s no different for the tech industry. So I welcome that interaction, and becoming part of this community, because that’s how things can happen that everyone is happy with, and we can move forward, as opposed to governments, whether in DC, London, or Brussels, just making the rules without talking to the industries they’re regulating. That’s where the friction comes from; that’s the problem. So I welcome this conversation.

Saksham Sharda: Would you say that it is possible to envisage a future in which AI can help enable privacy?

Keith Sonderling: Absolutely. I think that artificial intelligence, when designed properly and implemented carefully, can help with a lot of the problems we’re dealing with. Whether it’s in my space, which is eliminating bias in the workforce by using artificial intelligence and neutral data characteristics and skills, as opposed to the bias that has been built into a lot of decision-making processes, or in other areas, the benefit of AI is that it has no mind of its own; it can make decisions based upon neutral characteristics. But then, of course, it has to be built on quality data, and people can’t interfere with the algorithm. So across the board, artificial intelligence can help prevent a lot of the problems that have been occurring, and that may occur in the future. But at the same time, if it’s not properly designed, it could scale those issues far larger than we’ve ever seen, because it can make decisions faster than any human. So that’s the balance that I’m trying to talk about now.

Saksham Sharda: And to what extent does AI destabilize the theory of equal employment?

Keith Sonderling: So, if it’s improperly designed or carelessly implemented, AI can discriminate on a scale far greater than any human being, because AI is just going to look at the data that it’s correlating. Say you want to diversify your workforce and you’re using artificial intelligence to do that; a lot of these AI HR programs are built and sold to advance diversity, equity, and inclusion. But if you’re telling the AI, “these are my best employees, go find me more of them,” and those employees were made up of one race, gender, national origin, or religion, the computer is just going to look at those characteristics and replicate the existing bias that you’re trying to eliminate. So a lot of it comes down to the data being used, and whether it’s being audited properly before it’s ever used to decide on someone’s livelihood. Because AI in HR is different than AI in business: when you’re using AI in HR, you’re deciding on someone’s livelihood, their ability to enter the workforce and thrive in it. That’s a much different equation than using it to make deliveries faster or to build products faster. So it takes very careful requirements to make sure that your data set is not going to be biased because you only have the same kinds of people in the data, and that the algorithm doesn’t let you screen out older workers or pregnant women. So a lot of it is not just the dataset; it’s the design as well. It can potentially cause discrimination if it’s not properly used.
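The dynamic Sonderling describes, a system trained to “find me more of my best employees” reproducing the homogeneity of that historical set, can be illustrated with a minimal sketch. All data, field names, and the scoring logic below are hypothetical, invented purely for illustration:

```python
from collections import Counter

# Hypothetical historical "best employees", skewed toward one group.
# "school" acts as an innocuous-looking proxy correlated with group membership.
past_top_performers = [
    {"school": "State U", "group": "A"},
    {"school": "State U", "group": "A"},
    {"school": "State U", "group": "A"},
    {"school": "Tech College", "group": "B"},
]

def score_candidate(candidate, history):
    """Naive 'more like my best employees' scorer: rate a candidate by how
    often their school appears among past stars. No protected attribute is
    used directly, yet the proxy still replicates the historical skew."""
    school_counts = Counter(record["school"] for record in history)
    return school_counts[candidate["school"]] / len(history)

# Two candidates differing only in the proxy feature:
print(score_candidate({"school": "State U", "group": "A"}, past_top_performers))       # 0.75
print(score_candidate({"school": "Tech College", "group": "B"}, past_top_performers))  # 0.25
```

The scorer never sees race, gender, or religion, yet it systematically favors the group that dominated the training set, which is exactly why auditing the data before deployment matters.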

Saksham Sharda: Are there applications of AI in HR that are already common knowledge, or startups around them, that you’re looking to regulate?

Keith Sonderling: Well, it’s very common; a lot of large companies are using AI to recruit employees, to screen applicants, or to conduct job interviews. So it’s not uncommon, if you apply for a job at a large company, for your first interview to be with a chatbot in an app on your phone. And that can be beneficial, because the interviewer can’t see you. The interviewer doesn’t see the color of your skin, your gender, or whether you’re disabled, when somebody else would see that and could potentially make an unlawful decision based on it. An example I like to talk about is somebody’s name. What does somebody’s name tell you about their ability to perform the job? Nothing. It tells you their sex, their potential national origin, their race, or their religion, all things which, once you know them, you can’t unsee, right? So being able to remove some of those indicators associated with who you are, and get to the skills on your resume, is a really solid use of AI in HR, and a lot of companies are taking that skills-based approach.
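The name-blinding idea mentioned above can be sketched in a few lines. This is not any vendor’s actual product, just a toy illustration with a hypothetical resume format and a made-up `redact_fields` helper:

```python
import re

# Hypothetical resume snippet; the name is a proxy for protected characteristics.
resume = "Name: Maria Gonzalez\nSkills: Python, SQL, project management"

def redact_fields(text, fields=("Name",)):
    """Blind the listed header fields so a reviewer sees skills, not
    identity cues like name (a proxy for sex, national origin, etc.)."""
    for field in fields:
        # Replace the whole line for each field, e.g. "Name: ..." -> redacted.
        text = re.sub(rf"(?m)^{field}:.*$", f"{field}: [REDACTED]", text)
    return text

print(redact_fields(resume))
# Name: [REDACTED]
# Skills: Python, SQL, project management
```

The skills line survives untouched, which mirrors the skills-based screening approach Sonderling describes: strip the identity indicators, keep the qualifications.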

Saksham Sharda: Do you think that social media as a whole is doing more harm than good? Do you think the problems far outweigh the rewards?

Keith Sonderling: I mean, it’s sort of out of my wheelhouse; I can’t comment on that. I just don’t have any regulatory authority over it. My personal opinion doesn’t matter as much as my professional opinion.

Saksham Sharda: What do you have to say about the upcoming recession and predicted job losses?

Keith Sonderling: So, after the recession in 2008, when the housing and job markets collapsed, we saw an increase in claims of discrimination. For instance, last year at the Equal Employment Opportunity Commission, nationwide, we received around 62,000 charges of discrimination. Now, in the United States, you can’t sue your employer without coming to the EEOC first, so we see all the trends. Compare that relatively low number of 62,000 to the period after the recession: from around 2010 through 2012, that number was around 100,000 cases of discrimination every year. So we’ve had almost a 40% decrease since the last recession. And why is that? Maybe because, the way the job market was, if you were discriminated against you could just go get another job, often remote and often paying more, because those opportunities were available. So instead of bringing a case against your former employer, if you feel like you were fired because you’re disabled, or fired because you’re a woman, you can just go find another job and move on with your life. But when there’s a recession, and other jobs aren’t available, and you were just terminated for an unlawful reason, because of your race, religion, national origin, etc., then you may have nowhere else to go. So I do expect, if there is a recession and the job market shrinks, that we will start seeing more claims of discrimination. Whether those people were laid off because of a true reduction in the workforce, or laid off for an illegal reason, because they were older, we have to sift through that in our cases. But typically, following the trends, we’ll see a large spike in discrimination claims with a recession.

Saksham Sharda: Okay, so the next question is one sentence on Elon Musk’s acquisition of Twitter.

Keith Sonderling: No comment.

Saksham Sharda: Okay, the last question for you is what would you be doing in your life if not this?

Keith Sonderling: I think I would enjoy being a morning TV anchor, somebody who does the 4 am or 5 am shift, being real peppy early in the morning when people are groggy and waking up, just a smiling face on TV welcoming everyone to the new day when they’re half asleep. That’s probably what I would enjoy doing; it’s, you know, the opposite end of the spectrum from being a government regulator in labor and employment. Back in college, I majored in broadcast journalism, so I think I would go back to that.

Saksham Sharda: That’s great.

Let’s Conclude!

Saksham Sharda: Thanks, everyone for joining us for this month’s episode of Outgrow’s Marketer of the Month. That was Keith Sonderling, Commissioner at the U.S. Equal Employment Opportunity Commission (EEOC). Thanks for joining us, Keith.

Keith Sonderling: Pleasure. Thanks for having me.

Saksham Sharda: Check out the website for more details, and we’ll see you once again next month with another Marketer of the Month.
