Jack Dorsey, the CEO of Twitter, sat down for a one-on-one interview with Fox News Radio's Guy Benson of Benson and Harf. Dorsey reacts to the shadow ban controversy and acknowledges conservative concerns about the insular, left-wing culture of big tech.

Jack: First and foremost, I do understand the concern. It is something that we're aware of. I'm from St. Louis originally. My father's fairly conservative, a Republican. So I come from an area where I do have a lot of ideological diversity. And we even have it in our company, and we even see some of this dynamic in play in our company. We have folks at various points on the political spectrum, and they don't feel comfortable today bringing up certain issues, or their viewpoints on certain issues. And I don't believe that is acceptable. It's not acceptable for us to create a culture like that, especially when we're creating a service where we are trying to enable people to hear from every perspective, to bring people together across the spectrum, to look for different ideologies and encourage them to talk, because we think that debate, that critical thinking, that critical questioning, is vital and important. I am not stating that we know exactly how to do that today, but we are resolute and committed to figuring it out.


FULL INTERVIEW TRANSCRIPT

00:00

Guy: Joined now by Jack Dorsey, who is the founder and CEO of Twitter. And right out of the gate thank you so much for making some time.

00:07

Jack: Thank you.

00:08

Guy: So obviously, your company has been in the news recently. There were concerns about possible user erosion, the stock fell, and everyone has their pet theory about why that may have occurred. I'm sure you have analyzed it very closely. What's your take on what happened with the market last week?

00:25

Jack: Well, you always have to consider these moves in a broader context. Oftentimes it's not just pointed in our direction; it is a broader context within the market itself. We have to focus on what we can control, which is how we're driving more daily utility, with people all over the world getting more out of Twitter every single day, while at the same time balancing a bunch of the concerns that we've seen rise to the surface, including how we're going after the health of the platform.

00:56

Guy: So conversational health is a point of emphasis, and I know a lot of people are wondering, okay, what exactly does that mean? What does it look like? It seems to some people like an ethereal concept; some people say it almost sounds Orwellian. What do you mean when you think of conversational health on the platform? And how are you seeking to achieve that, or at least enhance it?

1:17

Jack: Let me take a moment and make it super tangible, or try to make it tangible. We all have these indicators of health; the human body has a number of indicators. Temperature is one such indicator. When your temperature is above or below 98.6, it's an indication that your system is out of balance in some way, and we only know that because we built measurement tools to understand where the indicator is at. Again, it's an indicator of potential imbalance, potential unhealthiness. Once we understand where the indicator is at, we might put solutions against it. So, for instance, based on all of our experience, based on everything that we've seen, we know that if you drink hot water with lemon it will likely bring your system into balance faster. We know that if you drink this glass of wine it might keep your system out of balance longer. What that means is that we can start experimenting with what things might bring our system back into health. So what we're trying to do right now is simply understand the indicators of what conversational health is. It may seem conceptual, it may seem a little bit abstract, but we've all had this feeling where we're in a conversation that is empowering, that is inspiring, that we learn from. It might be challenging, but we take something away from it. We also know when we are in a conversation that might feel a little bit toxic, that might not feel receptive to any sort of change. It might feel closed off. If we can feel it, we believe that we can measure it. And that's where we are: we need to understand what these indicators are so that we can measure them, and then we can put solutions against them. Now, we also know that not everyone is going to choose health, and that's okay. Our job is to make sure that we're looking out for the broader collective.

3:13

Guy: So how would you respond? For example, my co-host quit Twitter because she thought it was just too toxic and the harassment was just too much. Another good friend of mine on the right end of the spectrum walked away from Twitter for about a week because she was getting a deluge of nastiness. But there's the other side of this, which is the free exchange of ideas, and not limiting ideas beyond outright threats and that sort of thing. How do you try to balance that? Not driving people away because it can get so ugly, versus not putting your thumb on the scale and potentially silencing speech, even if that speech is unpleasant?

3:49

Jack: I mean, you hit on exactly the most important balance we need to consider, which is that we believe fully in free expression, but people don't necessarily feel safe to express themselves if they're constantly getting harassed, or if they feel like they're getting abused, and that may or may not be the case. But we need to make sure that people feel safe to participate, feel safe to express themselves, feel that they're walking into a conversation that they're actually going to learn from. And again, those conversations might be challenging. They might be upsetting. They might show some truths that people find uncomfortable. Those are things that we should run towards, but we need to make sure that people have the right tools to manage their own experience. And I focus on the conversational health indicators and the measurement of such because if we can't measure this thing, we don't know if we're helping or if we're hurting.

4:47

Guy: I guess on the continuum, though, with conversational health on one end and just unfettered free expression on the other, do you value one more than the other?

4:58

Jack: I don't think you can rip the two apart. I think you have to understand that people are only going to want to participate if they feel the safety to do so, and again, there will always be a wide spectrum of what it takes to feel that safety, but we need to make sure that we are providing people with simple tools to fully participate in a conversation. Ideally everyone feels safe to express themselves. Ideally everyone is focused on free expression. We see so many of the issues of the day, we are able to acknowledge the problems we have as a society, we are able to acknowledge some of the things we need to address, because people feel free to share their perspective. We need to make sure that that comes first, before everything else.

5:50

Guy: There was a flap a few days ago about shadow banning, and there were allegations that prominent Republicans and conservatives were being shadow banned. I know that you guys have pushed back. There's a blog post. There was a dispute over what that term even means; there's a particular definition for shadow banning. But in the blog post that Twitter put out, it said, we aren't shadow banning anyone, certainly not based on politics, but certain people might be harder to find; you have to work harder to track them down. And I think some people said that sounds like shadow banning light: some sort of algorithm that is making users on the platform, including in some cases members of Congress, harder to find, less accessible to the broader using public. How are you guys pushing back on this notion of shadow banning, and are you concerned about weaponizing of the algorithm, where people can mute or block or report through a sort of AstroTurf campaign to try to silence, at least temporarily, people they disagree with?

6:55

Jack: I mean, first and foremost, we don't start with a pushback. We start with learning: what are people trying to tell us when they use this term and when they use this concept? And I think there are a few things happening. One, we shifted to a ranked timeline two years ago. We started ranking based on what our algorithm believes you will find interesting, what you will find matters. It's based on who you follow. It's based on who you engage with. It's not based on decisions that we make. So a lot of people have had an experience where they're following someone and they don't see that person in their timeline. They go to their profile, they see a recent tweet, but they don't see that tweet immediately in their timeline.

7:40

Guy: Or they're searching for that person and they don't auto-populate.

7:42

Jack: So that's another issue. Speaking of the timeline first, because that's where people spend the majority of their time: what the algorithm is doing is bubbling up what it thinks is most relevant based on the signals that we're getting, based on your own interactions. It's not always going to be right. We're going to get better and better over time. We have an escape hatch here, which is that you can turn that whole ranking off. You go into settings and you turn off the ranking of the timeline. We need to earn the right for that to stay on. We need to make sure that we are delivering something that is immediate, timely, and relevant. And if it's not today and you turn it off, we need to learn from you turning it off. There's another issue with the common areas of the service, like replies, like trends, like search, where one can inject oneself into the conversation. It's not based purely on the mechanical, so we have a number more signals on that. So that's all focused on relevance, and again, sometimes we get those wrong. In this particular case we got it wrong with the type-ahead search. And we weren't looking at the particular content, but we were looking at clusters of behavior that might have surrounded the account in the past. Which goes to your last question, which is: how do we make sure that we're protecting against gaming, intentional gaming, where people look at our algorithms, understand how they might work, and then do a coordinated block or coordinated mute? The answer to that is that we can't just look at one dimension or behavior. We have to look at everything, and the algorithms have to take in all this information at once to really understand. And again, these things change hourly. They learn every single minute. So if for some reason a tweet or an account is down-ranked, that is not permanent. That is not forever, and it may not be based on behavior at all. It may just be based on the relevance of the content to the person searching for it. So there are a number of factors. The net of this is that we need to do a much better job of explaining how our algorithms work, ideally opening them up so that people can actually see how they work. This is not easy for anyone to do. In fact, there's a whole field of research in AI called 'explainability' that is trying to understand how to make algorithms explain how they make decisions. We subscribe to that research. We're making sure that we can help lead it and fund it, but we're a long way off. So in the meantime we just need to make sure that we're pushing ourselves to explain exactly how these things work, how we're making decisions, and where we need to make decisions as humans versus where the algorithms make decisions based on behaviors and signals.

10:38

Guy: Last question, about the humans. Can you understand where a lot of conservatives are coming from when they are suspicious, on some level, of big tech? I mean, here we are in San Francisco, which is a very liberal place, and there have been events at Google, Facebook, and elsewhere, even Twitter, where conservatives have felt targeted, or on some level disrespected, or not treated the same way because of their ideas. Whether you agree that's happening or not, do you understand the concern? And there's a huge room, a ballroom, that we're sitting right next to with close to 4,000 people in it. I won't ask you to guess how many of them would lean to the right, but does having a significant critical mass of conservative-thinking people, intellectual diversity, matter to a company like Twitter, in terms of the people that you hire and the decisions that are made at the human level?

11:29

Jack: Absolutely. First and foremost, I do understand the concern. It is something that we're aware of. I'm from St. Louis originally. My father's fairly conservative, a Republican. So I come from an area where I do have a lot of ideological diversity. And we even have it in our company, and we even see some of this dynamic in play in our company. We have folks at various points on the political spectrum, and they don't feel comfortable today bringing up certain issues, or their viewpoints on certain issues. And I don't believe that is acceptable. It's not acceptable for us to create a culture like that, especially when we're creating a service where we are trying to enable people to hear from every perspective, to bring people together across the spectrum, to look for different ideologies and encourage them to talk, because we think that debate, that critical thinking, that critical questioning, is vital and important. I am not stating that we know exactly how to do that today, but we are resolute and committed to figuring it out.

12:39

Guy: Jack Dorsey, I think a way to foster this environment that you are talking about, and to have these conversations, includes transparent exchanges like this one. So I really appreciate you stopping by. Hopefully we'll be able to do it again and continue the conversation in the future.

12:53

Jack: We will. Thank you so much.

12:55

Guy: Jack Dorsey, CEO and founder of Twitter, our guest on Benson and Harf. More after this.