
EPISODE 197: Marketer of the Month Podcast with Joshua Tucker

Hey there! Welcome to the Marketer Of The Month blog!

We recently interviewed Joshua Tucker for our monthly podcast, ‘Marketer of the Month’! We had an amazing, insightful conversation with Joshua, and here’s what we discussed:

1. Generative AI reduces content production costs, influencing election campaigns and foreign interference.

2. Importance of cross-disciplinary partnerships in digital democracy research.

3. Difficulties in changing entrenched political attitudes and affective polarization.

4. Changes in Twitter data access for academic research under new ownership.

5. A shift from social graphs to engagement-maximizing algorithms in content delivery.

6. Implications and concerns of banning TikTok in the US from a democratic perspective.

7. Zelensky’s efforts to garner international support through diplomacy and technology.

About our host:

Dr. Saksham Sharda is the Chief Information Officer at Outgrow. He specializes in data collection, analysis, filtering, and transfer by means of widgets and applets. Interactive, cultural, and trending widgets designed by him have been featured on TrendHunter, Alibaba, ProductHunt, New York Marketing Association, FactoryBerlin, Digimarcon Silicon Valley, and at The European Affiliate Summit.

About our guest:

Joshua Tucker is a Professor of Politics at New York University, where he is also an affiliated Professor of Data Science, Russian, and Slavic Studies. He serves as the Director of NYU’s Jordan Center for the Advanced Study of Russia and co-director of the NYU Center for Social Media and Politics. His research focuses on comparative politics, mass politics, and the impact of social media on political behavior. 

Bots on the Ballot: NYU’s Professor Joshua Tucker on How Tech & Marketing are Throwing Democracies Into a Spin

The Intro!

Saksham Sharda: Hi, everyone. Welcome to another episode of Outgrow’s Marketer of the Month. I’m your host, Dr. Saksham Sharda, and I’m the creative director at Outgrow.co. And for this month we are going to interview Joshua Tucker, who is a Professor of Politics at New York University.

Joshua Tucker: Great to be here. Thank you.

Don’t have time to read? No problem, just watch the Podcast!

Challenge yourself with this trivia about the exciting topics Joshua Tucker covered in the podcast.


Or you can just listen to it on Spotify!

The Rapid Fire Round!


Saksham Sharda: So we’ll start with the rapid-fire questions. You can only answer with one word or one sentence.

Joshua Tucker: Okay. 

Saksham Sharda: At what age do you want to retire?

Joshua Tucker: I don’t know.

Saksham Sharda: How long does it take you to get ready in the mornings?

Joshua Tucker: An hour.

Saksham Sharda: Favorite color?

Joshua Tucker: Green.

Saksham Sharda: What time of day are you most inspired?

Joshua Tucker: Late afternoon.

Saksham Sharda: How many hours of sleep can you survive on?

Joshua Tucker:  Six.

Saksham Sharda: The city in which the best kiss of your life happened.

Joshua Tucker: New York.

Saksham Sharda: How do you relax?

Joshua Tucker: Exercising.

Saksham Sharda: How many cups of coffee do you drink per day?

Joshua Tucker: A lot, but they’re all decaf.

Saksham Sharda: A habit of yours that you hate.

Joshua Tucker: Fidgeting.

Saksham Sharda: The most valuable skill you’ve learned in life.

Joshua Tucker: Writing.

Saksham Sharda: One-word description of your leadership style.

Joshua Tucker: Good question. Consultative.

Saksham Sharda: Ideal vacation spot for relaxation.

Joshua Tucker: The mountains.

Saksham Sharda: Key factor for maintaining a work-life balance.

Joshua Tucker: Both.

The Big Questions!


Saksham Sharda: Alright. That’s the end of the rapid fire. Let’s go into the longer questions. What do you see the role of generative AI being in 2024, the year of elections generally, and the US 2024 election specifically?

Joshua Tucker: So, when we think about generative AI, I tend to think about it in conjunction with social media. Social media dramatically lowered the cost of sharing information. If we think back to the traditional media era, if you wanted to share information widely, you needed a network that you had built; maybe that was a digital network, but even before digital networks, you would’ve needed an analog network. Or you needed access to mainstream media: a television station, a newspaper, or someone in one of those organizations to cover you. Social media dramatically decreased the cost of sharing information. Ostensibly, anyone can put information out for a very low cost, and that information can spread widely. Most information doesn’t spread widely, but it can. However, you still had the cost of producing content. What generative AI has the potential to do is dramatically reduce the cost of producing content. So, when I think of the role of generative AI in elections, I think about actors for whom producing content was previously more of a barrier, who had access to this decreased cost of sharing content, but for whom the cost of producing content was still a real barrier. If we think through what those cases are, we can begin to understand which types of actors might benefit more than others. For example, producing content is costly if you’re trying to produce it in a language that’s not your native language. That leads us to things like foreign influence campaigns. If you want to run an influence campaign in a country whose language is not your native language, you have this big barrier. Well, generative AI has just reduced that barrier tremendously. So from this lens, we might see an uptick in people trying to interfere in other countries’ elections because it’s less costly to do so.
On the other hand, if you think about big national campaigns in the United States, like presidential campaigns, these campaigns have hundreds of millions of dollars, if not more. So producing content is not particularly costly for them. But if we think about local campaigns that have much lower resources, then being able to produce content more cheaply may mean we’ll see a rise of more targeted advertising campaigns in the context of local elections. If you combine these things, you might think, well, previously local campaigns would’ve had no way to do outreach and targeting toward groups that didn’t speak the main language in their district or area, but maybe now things like that are gonna be possible. So I think that’s the first big piece of thinking about generative AI in the context of elections: lowering the cost of production, and what flows from that. The other thing we have to really think about right now is the concerns that arise around generative AI and the harms people worry about. A lot of what this conference is gonna be about is all the incredible things generative AI can do in the business community, but in the political community, there’s a lot of worry about what it’s gonna do. Most people are particularly focused on what I would call first-order worries about generative AI, which is that people will see things that are not real and will believe them to be true. So the things we hear about all the time are an audio fake or a video fake of a candidate late in an election that goes viral, everyone thinks it’s true, and then that has an impact on what’s going on in the election. That’s what I would call a first-order effect.
However, we’ve been worrying about first-order effects around false information in elections forever, but particularly in the last eight years, with the rise of social media, we’ve seen the rise of the ability to share false information, or what people will call false news. There is, however, another concern here, which is a second-order effect. One of the things that we scholars have learned is that, to date anyway, false news or false information shared on social media is a surprisingly small portion of most people’s media diets. It’s not a small portion of everybody’s media diet, but for most people, it’s fairly small. In a sense, it gets much more attention than its actual share of people’s feeds on social media would suggest. You can do a little experiment: scroll through your last 30 posts on any feed you like and try to see how many of them contain false information. It’s usually quite a low number, but we talk about it a lot. And that leads to this second-order concern, which is not that people are going to believe things that aren’t true, but actually that it becomes harder to convince people that true things are true. Right? And this is a huge long-term worry for democracy. Democracy is based on the idea of accountability: that we know what politicians are doing, and that we can cast our votes accordingly. As we get more and more into a world where it becomes easier and easier to dismiss anything true that you don’t like and claim that it’s false information, a deepfake, or generative AI, it becomes harder for the norms that underpin democracy to hold. So actually, when I look at elections right now, we have examples of people trying to use generative AI to interfere with elections in 2024, the year of elections.
There was an example of an audio tape in the Slovak elections earlier this year, but I’m much more concerned about these second-order effects: that we begin to lose this accountability link because people begin not to know what’s true. By the way, this is also a huge concern for businesses, because when businesses face rumors and attacks online that flare up, it becomes harder to convince people of what’s true. What might once have been an easy denial, “No, there’s this rumor floating around, it’s not true, here’s the evidence,” becomes harder if people increasingly don’t trust evidence because of all they hear about generative AI. Yeah, I mean, I think that kind of depends on the scale of your constituency. In a country like mine, the United States, which is just a huge country, candidates love to do grassroots activism and get out there, especially earlier in the campaign in the primaries, and we’ll see loads of examples of that: candidates hitting the pavement and shaking the hands of voters, and of course doing rallies and larger events. But the reality of the scale is that the way you’re gonna reach most voters is going to be through the media. In terms of backlash, I think we are in an interesting period here, because the people who ultimately come up with the regulation are the politicians. These are the people who get to at least put regulation in place. And so there is an interesting balance right now between politicians who want to use generative AI in the course of their campaigns, politicians who are concerned about generative AI being used against them in their campaigns, and politicians who are concerned about their constituents’ concerns about generative AI. It’s not to say that generative AI is not gonna cause all kinds of problems in politics.
It may, but we’re at a particular moment in time, here in 2024, where the worry about generative AI outstrips the actual harms that generative AI has yet caused in the political sphere. Again, I don’t wanna say what’s gonna happen in the future, but this is gonna be an interesting tension to navigate. And it goes back to what I was talking about in my first answer, about these second-order concerns. When you have large numbers of constituents who are concerned about the potential harms of AI, what does that lead politicians to wanna do? What kind of campaign promises are they gonna make?

Saksham Sharda: So let’s talk a bit about your research then. Can you discuss the importance of interdisciplinary collaboration in these times in your research? And how do you market these partnerships to potential collaborators and funders?

Joshua Tucker: Yeah, so one of the most exciting things about this field that I find myself in right now, which can loosely be called digital democracy, or digital autocracy, or digital politics, is that, as you say, it sits at the intersection of a lot of established fields in academia. So I’m a political scientist, but I work with people who are communication scholars, computer scientists, psychologists, sociologists, network scientists, and data scientists. It really is quite exciting. Now, in academia that presents certain challenges, because a lot of the profession is organized around disciplines. Hiring works by discipline. Promotion tends to work through disciplines. The main output of academics is peer-reviewed publications, the scientific findings we put into the public domain, and a lot of our journals also work through disciplinary boundaries. So it is challenging, but at the same time incredibly exciting. As for the collaborations that have come about, it’s kind of interesting. Some of them came about early on, when this field was emerging, because potential funders tried to bring people together from different fields. Universities are also well aware that a lot of exciting work is being done across interdisciplinary boundaries. There is this tension between universities and funders, who wanna support interdisciplinary research, and the professional norms of the field, which tend to reward research a little more within disciplinary boundaries. But a lot of people have been entrepreneurial about trying to get people together. And it becomes easier over time because, ironically, a lot of this is about network effects. It’s about who you know and who you’re in contact with.
And as you begin to grow interdisciplinary networks, your students meet the students of people you’re working with, and these networks kind of strengthen themselves. Ironically, social networks and online networks also become an interesting place to meet people who are doing similar research but who you might not come into contact with through the normal disciplinary channels. One of the postdocs in my lab right now became aware of the work of our lab because one of my former students found him on Twitter during the pandemic, talking about a paper that had to do with TikTok. They connected online because they found each other on a social network, and now, a few years later, he’s working in our lab.

Saksham Sharda: Speaking of network effects, then, could you elaborate on your findings regarding the effects of network diversity on political tolerance?

Joshua Tucker: Yeah, this is a super interesting question. So this was a study that we did, led by Alex Siegel, who at the time was a PhD student at NYU in our lab but is now an assistant professor of politics at the University of Colorado Boulder. Alex is a specialist in Middle Eastern politics, so we were looking at Twitter networks in Egypt. What we were able to do in Egypt was take a sample of Twitter users and look at the extent to which they followed more secular elites versus more religious elites. Interestingly enough, it turned out that people who followed exclusively secular elites or exclusively religious elites tended to use more intolerant language in their posts on Twitter, while people who followed elites from both of these camps tended to use less intolerant language. And we actually looked at this effect over time, because you might imagine this would be a selection effect: the type of people who would follow both secular and religious elites might be more open-minded and more tolerant. But we found over-time effects too. As people followed more of these elites, these effects followed. So it’s an interesting question, but I will say it’s a complicated one, because it leads to a big question in the study of filter bubbles. There’s been a lot of interest in what happens if we move people out of filter bubbles, or echo chambers as you might’ve heard them called, this idea that you go online and kind of only hear from people who have political views similar to yours. And so there’s been a lot of speculation and a lot of talk in the policy world: oh, if only we can get people out of echo chambers, they’re gonna like each other more. They’re going to be, not necessarily more tolerant, but maybe less politically polarized.
But there’s been research in the United States by Chris Bail and colleagues at Duke and the University of North Carolina, where they exposed people to counter-partisan news sources by having them follow bots on Twitter that shared counter-partisan news. And they didn’t find any improvement. Among some people, they found a backlash, where they started liking people less. More recently, I’m one of the co-directors of the US 2020 Facebook and Instagram election study and one of the co-leads of the external academic team. On this, we ran an experiment with participants during the campaign leading up to the US 2020 elections, where we altered their Facebook feeds and depressed the amount of content coming from politically like-minded sources. The idea was that this might make people less polarized. Unfortunately, it didn’t: it affected neither their ideological polarization on issues nor what we call affective polarization, which is this sense of how much you dislike supporters of the other party. At the Center for Social Media and Politics, we’re working on a study now where we’re gonna have people, again with their Twitter feeds, get more exposure to counter-attitudinal partisan news. So if you’re liberal, you’re gonna get more exposure to conservative news, and if you’re conservative, you’re gonna get more exposure to liberal news. The main study is going into the field soon. The pilot study, though, wasn’t particularly encouraging in this regard, but we’ll see what happens with the main study. This remains a really important open question: whether exposing people to more divergent attitudes is going to lead toward moderation, and take some of this edge off of politics and this growing polarization we’ve seen in the United States, but also throughout Europe and globally as well.

Saksham Sharda: In that case, what implications do these findings have for fostering a more inclusive political discourse?

Joshua Tucker: I mean, you know, in this US 2020 Facebook and Instagram election study, we tried a lot of different things that have been postulated as causes of polarization. In one study, we looked at the effect of engagement-maximizing algorithms: we had a randomized subsample who, throughout the campaign, didn’t see the engagement-maximizing algorithm. Instead, their content was delivered in chronological feeds, the old way you used to get content, just the most recent content in your feed. We also thought about virality, since there have been concerns about viral content contributing to polarization, and so in another study we reduced, actually eliminated, people’s exposure to reshared content on Facebook. In neither of these cases did it make any difference in polarization. These are all little bits and pieces of the problem, and there are lots of different studies trying to tease out portions of the information environment that are said to be contributing to polarization. I think what we’re beginning to learn in the United States is that things like affective polarization, how much you dislike supporters of other parties, are really kind of hard to dislodge once these attitudes get entrenched. You could do some lab experiments where you dislodge them for a short period of time, but those effects decay over longer periods; it’s a really tricky problem. And I think one thing we have to really come to grips with is that some of these big-picture questions, like polarization in society, are multifaceted, even though we have a desire to find a silver bullet: if only we could change people’s feeds to chronological feeds, things would get better.
These are complicated interplays among the rhetoric of elites, the incentives facing politicians in the system, and large-scale demographic changes. I mean, we’re sitting here in Europe, right? There are big implications from immigration and migration and the way politicians engage with these issues. And of course, technology is part of the puzzle. But I think one important lesson is this: we should continue to look at technology, and we should continue to advance our scientific understanding of how technology contributes to these more polarized politics, but we have to temper our expectations of how much just playing around with technology is gonna improve things.

Saksham Sharda: So what opinion would you have on the way Elon Musk has shaped Twitter since taking it over if you would like to comment on that?

Joshua Tucker: Yeah, I mean, I think I’ll comment on it from the perspective of an academic. One of the great things about Twitter was that Twitter data was readily accessible to academic researchers. That ebbed and flowed a little bit, but since the new ownership took over at Twitter, as they have streamlined and followed their best understanding of the economic imperatives of running the platform, unfortunately, one of the casualties is that it’s become much harder for academic researchers to get access to Twitter data. Twitter data used to be made available to academics for free or at very reasonable prices that could be funded from the kinds of grants academics can get. And again, I don’t think Twitter, or X, was aiming to do this with its new policies; they were trying to deal with large-scale economic questions around the platform. I don’t know if it was intended or unintended, but it’s been collateral damage: it’s much, much harder to do academic research. The pricing to get access to Twitter data these days is prohibitively expensive for academics. And what this means is that it’s harder for us, the mass public, to know what’s happening on social media. All these questions you’re asking me right now, part of the reason we have answers to them is that academic researchers have been able to access data, and Twitter was disproportionately the leader in making its data available. Now, there are larger questions there. One of the things that makes Twitter great for research is that the data is all public; it’s a platform where almost everybody posts publicly, and all the research, and what we’ve learned from it, has been based on publicly available data. But Twitter taught us a lot about social media platforms.
It allows researchers like me to inform the media and policymakers about what’s actually happening on the platform. So it has a real benefit for society. The other thing that was nice about Twitter was that it was so easy to get access to this data that you didn’t need a lot of resources or a big lab to do this kind of research. So you had lots of PhD students around the world doing dissertations and adding scientific knowledge about what was happening on these platforms. So I would have one message for the new leadership of Twitter: academic research has never been a big part of the economic model of any of these platforms. If, while going about the economic changes they feel are necessary for the platform, they can rethink some of these policies and think about making it easier for researchers to access the publicly available data, I think that has a net benefit for society, and it’s something X can do to help the public better understand social media. So that’s been my biggest concern today.

Saksham Sharda: So from a market perspective, what emerging trends in social media do you believe will shape the future of political behavior and public opinion research?

Joshua Tucker: Great question. I mean, I think the major thing happening right now on these social media platforms is the transition from content being delivered via the social graph to these engagement-maximizing algorithms, which are very individually focused. So let me explain what I mean by that. Think about the way social media started: on Twitter, you follow people; on YouTube, you subscribe to channels; on Facebook, your content always came through your friends or the pages and groups you subscribed to. We call this a social graph. People were going onto these social media platforms to get content from networks they had curated. Some of those were more your friends in real life, and some were more accounts that you found interesting, but this was based on you, the individual, making decisions about what kind of content you wanted to see and from whom. And that’s a crucial point: from whom. Now, to be clear, algorithms always played a role in this. Social media platforms recommend people for you to befriend or follow, so I don’t wanna overstate this, but the information came through a curated social graph that you put yourself at the center of, and you figured out who was in it. The big change we’ve seen in the last couple of years is the rise of TikTok, and we can even call this the TikTokification of all social media: TikTok leaned into an algorithm that no longer relies on the social graph. It took advantage of the fact that people were looking at videos: it had data on how long they would spend on videos and which videos they would look at. It had lots of metadata on the content of these videos, but it also had very strong data about users’ engagement with them. In fact, we can think about this as kind of like the swiping effect.
You only have one thing on the screen at any one time, so the algorithm can learn what you like to have on your screen. TikTok kind of revolutionized the delivery of content by coming up with algorithms focused on delivering people content not from accounts they had followed, but content similar to the kinds of things they like to watch. We now see this spreading throughout social media platforms: think about the “For You” feed on X or Twitter, or Facebook now putting all sorts of things into your feed that are not from people you follow. I think this is the major question. As researchers, we want to begin to think very seriously about what’s going to happen as your feeds around politics bring you content based on engagement-maximizing algorithms as opposed to the social graph. So let’s take the question we’ve been talking a lot about, echo chambers and filter bubbles: are engagement-maximizing algorithms gonna enhance that? We had a lot of worry 10 years ago that by giving people the ability to select into their social networks, they would end up in these echo chambers and filter bubbles. Then a lot of groups, including our group at the Center for Social Media and Politics at NYU, did a lot of research on this and found that maybe these filter bubbles weren’t as hermetically sealed as we thought they were, right? You always hear the description of the crazy uncle. You become friends with people on Facebook for a lot of different reasons, some of which are political, but not all. And so the social graph method of delivering content did have ways of exposing people to cross-cutting content.
So for example, in this US 2020 Facebook and Instagram study that I’ve been talking about, in one of these studies we looked at the amount of exposure people had on Facebook to content from politically like-minded sources, moderate sources, and politically cross-cutting sources. What we found is that, as expected, people saw a lot more information from politically like-minded sources: among adult (over-18) monthly active Facebook users in the United States at the time of the 2020 election, the median user saw about 50% of their content coming from politically like-minded sources and about 15% from cross-cutting sources. So that’s a lot more coming from like-minded people, but it’s not entirely coming from like-minded people; the other 35% was from more moderate political sources. So the question is: is that gonna change with these engagement-maximizing algorithms? If your algorithm finds out that you’re liberal, are you no longer gonna get that 15% of cross-cutting content that comes from more conservative sources? Are we gonna get into these tighter, more hermetically sealed bubbles? I think the other big concern people have in politics is whether these engagement-maximizing algorithms are going to lead people who are looking for extremist content to get even more of it. Now, to be clear, that’s very doable through current channels of social media. There are all sorts of platforms that cater to this kind of content, and even within the mainstream platforms you can curate networks that are gonna deliver extremist content. But is the on-ramp to this kind of extremist content gonna get faster? Is it gonna get more slippery? Is it going to become easier for people to go online and only see extremist content?
So I think that’s a big question people are asking: what role are these new types of platforms gonna play in leading people toward more extremist content online? And of course there’s the topic we were talking about before: what is social media gonna look like in a generative AI world? If you combine these two things, the lowering cost of producing content and engagement-maximizing algorithms, it leads to this interesting question of whether certain people’s feeds in the future are gonna be largely filled with content created not by humans but by generative AI. Whether it’s gonna be possible to have almost entirely bot-generated ecosystems on social media platforms, where you go online, the algorithm knows what kind of content you like, producers know how to produce that content at very low cost and high scale using generative AI, and it’s delivered by social media accounts that are essentially run as generative AI. It kind of portends a potentially different future for these online platforms.

Saksham Sharda: So considering the extent to which TikTok can read data on video viewership and interaction, how valid is the threat that would justify banning it from the US, for instance, or requiring it to keep its data serviced within the country where it’s operating? 

Joshua Tucker: Yeah, that is a big question right now. So the question at hand is the question of the TikTok ban in the United States. My background is studying post-communist politics. Before I got into social media and politics, my work was all on post-communist Russian and East European politics. Which means that I’ve spent a lot of time as a political scientist thinking about not just democratic regimes, but authoritarian regimes, competitive authoritarian regimes, and transitions between these different states. I have to say, as an American citizen, it makes me very uncomfortable to be thinking about my government banning social media platforms, as someone who’s long studied more authoritarian regimes. You know, there’s incredibly exciting work in this field that’s been done about what happens when authoritarian regimes like China have banned platforms like Facebook and banned access to Google. And so, before we get into the question of what exactly the algorithm is doing and what the concern is here, there’s the idea that we’re now going to be in a world where authoritarian regimes, when they wanna ban platforms that their citizens want access to, are gonna be able to say, oh no, this has nothing to do with being authoritarian, this is just normal behavior in the digital information environment. Look, the United States banned TikTok, which was a Chinese platform. So it’s fine for China to ban Instagram, even if people in China wanna get access to it. So I’m quite worried about the precedent; I’m quite worried about the world we get into with a US ban of TikTok. Because before this, we had a real delineation between open societies, liberal democracies with flourishing market economies, which for the most part let these kinds of platforms flourish, especially ones that were in widespread usage, right? 
And let citizens’ choices decide which platforms were gonna be successful. And authoritarian regimes, where the state weighed in: in cases where the state couldn’t control information, the state would then ban these platforms. That being said, on your question about the TikTok algorithm, I don’t think it’s the algorithm that is the source of the ban. I think it’s important to get this distinction, because there has been discussion in policy circles in Washington DC about whether platforms should be required to use chronological feeds instead of engagement-maximizing feeds. Those kinds of arguments have been made in terms of politics, in terms of wellbeing, in terms of the addictiveness of the platforms and children’s health, right? So those arguments get made for a lot of reasons. The concern about TikTok in the United States has to do with its ownership. If the ownership was transferred to an American company, then the United States wouldn’t ban TikTok. And so the concern is about what data this platform is providing about American citizens to the Chinese Communist Party. That’s a little bit more complicated of a question given the fact that in the United States, we have restrictions on foreign ownership of news broadcasters. So the question is, what does that link look like from news broadcasters to social media platforms? Well, we know that young people especially get the vast majority of their news from social media platforms. There’s been a lot of talk in the United States about whether the algorithm on TikTok has been dialed up to show content that’s potentially more divisive. As we talk about this, we’ve had lots of protests throughout the United States around the Middle Eastern conflict. 
There’s a discussion of whether TikTok has dialed up the amount of such content and has tried to show more provocative content to people in the US to sort of stir up these political divisions. I would say, again, if I was the one in charge of the regulation, what I would do is focus on making access to TikTok data, as we talked about for other platforms, available to outside researchers. So instead of speculating about whether these things are happening on the platforms, our policymakers would have good hard data, good rigorous scientific research on what’s happening on the platforms. When I think about regulation, I always think about data access and transparency as the prime mover of regulation. The thing about social media platforms is they’re enormous and they’re optimized for search, which makes it really easy to find examples of anything on a social media platform. But doing rigorous scientific research to understand things like denominators, like what proportion of content things are, or trends over time, or actual causal relationships, is really hard, precisely because the platforms are huge and because the data comes in new and complex forms, as we sit here at a technology conference thinking about all the new sorts of data that are available. So my personal inclination is, before you take these more draconian steps in terms of regulation, you wanna take this primary first step: make the data available so that we can do scientific research on it and understand what’s happening on these platforms. So it’s a complex topic, this question of the TikTok ban, and it’s gonna be very interesting to see how it plays out in the United States because of the First Amendment provisions. I don’t know when this interview is gonna be broadcast, but I think we’re gonna see a lot in the period before this goes into effect.
There are a lot of twists and turns still to come in this story.

Saksham Sharda: So to combine everything related to tensions, technology, politics, and your background, how would you comment on the way Ukraine is handling the war with Russia, with Zelensky going to different countries and trying to get technological and monetary benefits?

Joshua Tucker: Yeah, it’s a great question, and I think military historians, but also communication scholars and international relations scholars, will look back on this war as a kind of turning point in the way we think about some of these intersections between technology and military conflict. I’m not a specialist in military technology, but clearly, there are just big changes and big advances being made in the way this war is being fought on both the Russian and Ukrainian sides. Originally, I think there was an incredible change in the way that Ukrainians were able to harness drones in the early days of this war, when they did not have access to more advanced military equipment, to gain informational advantages in the war, but also to deliver munitions. I think the Russians have made huge use of drones too. Like everything in terms of technology and military conflict, these things are always cat-and-mouse battles, right? You have one side that advances in one way, and the other side comes in and advances in another way. But I think wars on the battlefield are going to be fought differently because of what’s been learned in the context of this conflict, which has coincided with these rapid technological developments in terms of machine learning and automation, the ability to field automated vehicles and these kinds of things. So I think there’s that aspect of it. There’s another aspect of it, which was Zelensky as president in the early days of the war. Right now we’re used to this, and everyone who’s not paying a tremendous amount of attention to this war is used to thinking of it as a large-scale ongoing conflict. We hear about big shifts on the battlefield, and it’s in terms of hundreds of square kilometers of territory being gained, right? 
We’re not talking about massive changes; at the moment we record this interview, the Russians seem to be doing better, but only by kilometres, and everyone thinks of this as kind of a long slog. But if you remember the beginning of the war, the previous wars that Russia had been involved in with former Soviet republics, like with Georgia, had been over in a matter of days, right? Famously, it’s claimed that part of the problem originally was the Russians were so overconfident that Russian officers were packing their dress uniforms for the parade in Kiev after the war would be over in a week, right? And so it wasn’t a foregone conclusion that the West was going to rally to the Ukrainians the way that the West did. In fact, as someone who spent a lot of time watching what was happening in the early days of the war and speaking about this, to me, this was one of the great surprises of the war: the way the West came to Ukraine’s defense, how unified the West was, and how dramatic some of the steps taken in those early days were. Zelensky, who, interestingly enough, for those who are not familiar with this, is the president of Ukraine, has a background as an actor and a comedian, and actually, he was in a hit television show about a teacher who inadvertently becomes president of Ukraine. And when he was elected, a lot of people questioned his ability to deal with this geopolitical tension with this much larger Russian neighbor. But, in a sense, his acting and presentation skills proved very useful in the early days of this war. And this was still a period of COVID travel restrictions, but Zelensky was a master at using social media, at having every sort of one-liner that could go viral. 
And most famously, during this period, when he was offered evacuation, he said, you know, I don’t need a plane, I need weapons, or something like that. I’m paraphrasing a little bit, but if you think about that from a virality standpoint, that’s kind of a dream line, right? And Zelensky would be, you know, not walking around wearing a jacket and a tie in hotels. He’d be in fatigues, he’d be in bunkers, he’d be shooting these videos for social media platforms. But then he also mastered this sort of coming and talking to all of these Western governments and trying to shore up support. Now that tends to be done more in person, but in the earlier days of the war, when there was real concern about Ukraine falling, he did this using Zoom, and he would Zoom in and talk to whole parliaments and national legislatures. And so his communication skills, I think, will also be looked back on. Now, unfortunately, there’s a question at this point about the durability of these kinds of appeals in politics. Unfortunately, in the modern world, politics moves on and people get focused on other questions. I think he’s still trying to do this, and he’s very good at this, but when you look back at the early days of the war, I think it was instrumental in building up the coalition that supported Ukraine and maintaining that support. He continues to be able to use his interpersonal communication skills, both digitally and in person, to advance the goals of Ukraine. And it’s a crucial moment right now. The US has finally brought in this larger package of support for Ukraine, which is going to make a big difference in the long run. But getting through this short-term period is challenging for Ukraine on many levels. I mean, they are fighting a war against a much, much larger country that’s right on its border. 
And I think, with technology, but also this sort of intersection of communication skills with technology, people will look back on Zelensky as a model of this, and will draw lessons on how to do this effectively in the future.

Saksham Sharda: The last question for you is of a personal kind, what would you be doing in your life if not this?

Joshua Tucker: That’s a great question. I mean, honestly, I think if I hadn’t gone into academia, what I would’ve liked to have done was lead hiking trips for at-risk youth. I have a whole other part of my personality that involves being a camp counselor. I met my wife at summer camp. We’re a big camp family, and I love that aspect of teaching kids outdoors. But one of the great things about being a university professor is that I get to be a counselor, researcher, and teacher. And especially working in this field, where so much of the exciting work is being done by younger scholars, I’ve been fortunate to be able to pursue this exciting new research agenda, to have conversations with policymakers and with all sorts of different people in these fields, but at the same time to have this aspect of my career that involves teaching and mentoring younger scholars. So I find it a very fulfilling career.

Let’s Conclude!

Saksham Sharda: Thanks, everyone for joining us for this month’s episode of Outgrow’s Marketer of the Month. That was Joshua Tucker, a Professor of Politics at New York University.

Joshua Tucker: Pleasure. Thanks for having me.

Saksham Sharda: Check out the website for more details and we’ll see you once again next month with another marketer of the month.
