How Trust and Safety at Match Group Became a Growth Engine, Not a Cost Center, with Yoel Roth
Fresh out of the studio, Yoel Roth, Senior Vice President and Head of Trust and Safety at Match Group, joins Bernard Leong to trace how trust and safety has evolved from a behind-the-scenes function into a board-level discipline. Drawing on his earlier work at Twitter (now known as X) and his current work across Match Group's portfolio companies – Tinder, Hinge, and OkCupid – Yoel reframes online fraud as an economics problem: why a new face costs scammers more than a new SIM card. He unpacks why anonymity does not cause online abuse, compares the American, European, and Chinese regulatory models, and argues that trust and safety belongs alongside customer acquisition cost (CAC) as a growth lever. He closes with what great would look like for the field: a future defined by governance, with AI shifting practitioners from moderators to auditors.
"The word I always come back to when I think of the future of trust and safety is governance. We are no longer just making moderation decisions. We're no longer just banning people or deleting posts or removing accounts. We are ultimately responsible for overseeing the health of the platforms and communities that we've created. Especially in an age of AI, I think there's a huge role for the field of trust and safety in overseeing the decisions that AI makes. In a world where we automate more of our decisions, that doesn't mean that we don't have jobs anymore. It means that our jobs change. And our jobs change to being auditors and overseers of automated decisions. And then when we find problems, we're the people who are now responsible for engineering solutions to them." - Yoel Roth
Profile: Yoel Roth, Senior Vice President, Trust and Safety, Match Group (LinkedIn, Personal Website, Match Group Profile)
Here is the edited transcript of our conversation:
Bernard Leong: Welcome to Analyse Podcast, the premier podcast dedicated to dissecting the pulse of business, technology, and media globally. Over the past decade, we have seen digital platforms reshape how we communicate, form relationships, consume information, and even how societies function. But this transformation has brought a set of challenges – online fraud, digital harm, misinformation, platform manipulation, and now, increasingly, AI-generated risk.
Trust and safety, once a behind-the-scenes function, has become central to technology companies and, increasingly, to traditional companies as well. With me today is Yoel Roth, Senior Vice President and Head of Trust and Safety at Match Group, the parent company of Tinder, Hinge, and many of the world's largest online dating platforms, used by millions globally. Today we will explore trust and safety, how platforms are tackling online fraud and digital harm, and how AI is going to change the future of platform governance and online safety. Yoel, I'm a longtime fan, first-time caller. Welcome to the show.
Yoel Roth: Thank you so much for having me.
Bernard Leong: I'll start off with your career journey. You started off doing research, studying social media, online communities, and platform governance, then moved into operational roles in trust and safety at Twitter, now known as X, and now Match Group. How did that journey actually unfold for you?
Yoel Roth: I started studying trust and safety because I think the internet is one of the most amazing things humanity has ever created – maybe that's a big and controversial statement, but I've seen and lived firsthand the rise of the internet, and I've seen how it's changed people's lives, in my view, mostly for the better. I've also seen firsthand some of its darker sides. What I want to do with my life, what I get out of bed to do every day, is try to bend the arc of the internet a little bit more toward the positive and a little bit further away from the negative.
As an academic, I thought I could do that best by researching it. I wrote my PhD dissertation on some of the earliest mobile social networks – how, in the early days of app stores and smartphones, they were approaching safety, privacy, and platform governance. I published some articles. I was really proud of them. They were placed in some pretty good peer-reviewed journals, and nobody read them. So I realized that if I wanted to have the kind of impact on these problems that I was hoping to have, I needed to get my hands dirty.
I had the opportunity while I was in graduate school to intern at Twitter, and started doing content moderation work very early on at the company, long before trust and safety was the type of professionalized field that it is today. Almost immediately I realized that the decisions we were making, the product choices that I was empowered to shape, could influence the experiences that millions of people had. I never wanted to do anything else, ever again.
Bernard Leong: You spent more than seven years at Twitter, building and leading the teams that later became responsible for content moderation, spam, misinformation, and election security. What were the most important lessons from your career at that point that you can share with my audience today?
Yoel Roth: I would say there were many takeaways, but two that I would call out as the most important. The first one – this seems obvious now, but it was not at the time – is that what happens on the internet doesn't just stay on the internet. We used to talk a lot about the difference between what happens IRL and what happens online. Especially on a platform like Twitter, people would say, well, why is content moderation important? It's just people saying words on the internet.
In the years since I started working in this field, what we've all come to understand is that online harms don't just happen online. They can very much affect people's mental health and state of mind. They can radicalize people, lead them to extremism and terrorism, and create massive harms that impact the offline world as well. The first key lesson for me at Twitter was that the conversations we have online and the connections we make are deeply impactful on the offline world, and we have to take that responsibility very seriously.
The second lesson for me was the importance of good governance. When I started working in the field of trust and safety, content moderation was just something we had to do because people showed up on platforms like Twitter and did bad stuff, and we had to figure out what to do with it. We didn't view it as a governance question. We didn't look at it like the task of a government serving its citizens – but I actually think that's what content moderation is. It's about a constituency, your users, and the way that you serve them. Whether it's the rules that you write, how you enforce them, how you communicate, the transparency you provide – you have to approach it with the same kind of rigor and discipline that a good government does. I don't think we always succeeded in doing that at Twitter. I don't think any platform does. But we have to strive for that. You can't be content with treating it as, oh, we have to delete some posts, we have to moderate some stuff. It's not that simple. These are really consequential decisions.
Bernard Leong: You had to make difficult decisions about how far freedom of speech goes – in the U.S., how far people can actually exercise their First Amendment rights, right?
Yoel Roth: That's right. Especially for a platform like Twitter, but it's true of all social media. This isn't just words on a screen. These are people communicating the most important ideas to them. This is a place where heads of state go to talk to each other. That was one of the really crazy moments at Twitter – when we started seeing not just some politicians and elected officials, but the leaders of the world talking to each other in public in real time. I remember thinking – I first signed up to Twitter to post what I was eating for lunch and talk to some of my friends in school. How far this has come. How much that reinforces the responsibility to treat this not just as something on the internet, but as a key part of our political and social lives.
Bernard Leong: I first encountered you through a podcast appearance and at a conference. You had a pretty big front-row seat to one of the most consequential debates around platform governance and trust and safety. Looking back over that period of your career, what has it taught you about leadership under fire, platform accountability, and the human cost of making high-stakes safety decisions at scale? I'm sure you have gone through pretty tough times during that period.
Yoel Roth: One of the lessons I learned about Twitter was how much people care about the platform. It's a little bit crazy to think of it in those terms, but I loved Twitter. I was one of the earliest users of it, and I cared very deeply about the product. But every decision we made, every action we took, ended up being headline news.
I remember the first time a decision my team made resulted in a New York Times push alert to my phone, and I remember looking at my phone and thinking – oh, I did that. That's headline news about me and my team. I remember a time when the police in India raided Twitter's offices because of a decision that my team made. This happened during COVID, so they raided the offices and no one was there. That ended up not really mattering. But the point is – these are massively consequential decisions that can get people arrested. In my experience, the degree to which people cared about what was happening at Twitter led people to make threats against me when they disagreed with decisions that my team made. It led to a series of congressional hearings in the United States that I testified at, and all of this hammered home that the personal stakes of governing a platform as significant as Twitter are enormous.
It was certainly a scary period in my personal life after leaving Twitter. It's not an experience I would wish on anyone else. For other business leaders out there, my advice would be – think proactively about safety for your employees, especially your public-facing employees. Think about the risks people have when they are a public spokesperson. Think about what you need to do to make clear that decisions companies make are not one individual's choice; they are the company's choice. But ultimately, above everything else, think about what you as a leader need to do to keep your staff safe. Those were lessons we learned a little bit too late at Twitter, and I experienced that firsthand.
I'm fortunate now, several years later, to get to keep doing what I love. I get to keep working on online safety, but in an environment where it's a little bit less in the political spotlight. I have been a fan of the online dating industry for my entire career. That was what I started off studying. I'm a dating app success story myself โ I met my husband on a dating app. I love that I get to work on this now. The positive outcome at the end of a very scary journey after Twitter was finding a company where I can continue to do this work that I care so much about.
Bernard Leong: Now, as you said, the trust and safety role has evolved. It's not a very traditional career path that most people plan for. How has the field evolved based on your time in the industry, from those days into today when you're at Match Group?
Yoel Roth: I would say there are two really big shifts that I would call out. The first is regulation. When I first started working in trust and safety, platforms moderated because they chose to moderate. It was, in my view, an important business decision. We learned early on at Twitter that financial success and growth for the business required content moderation. We did it because we knew that not only was it morally the correct thing to do, but it was also the correct business outcome.
That's certainly true today, but increasingly – whether you think moderation is good or bad, important or unimportant – you don't have a choice. Under the laws of many countries – here in Singapore there are strict online safety laws; in Europe there's the Digital Services Act; in the UK there's the Online Safety Act – pretty much everywhere other than the United States, there are clear regulations that hammer home to companies just how critical engaging in content moderation is. That's a huge shift.
The second big change is the move from reactivity to proactivity. When I first started working in trust and safety, we were primarily focused on reactively reviewing reports that users submitted. Something bad would happen to you, you would submit a report and tell us about it. We would review that report and say – yes, something bad happened here, we're now going to take action on it. We learned over time that that wasn't what consumers actually wanted. People want platforms to protect them. If all you're doing is cleaning up a mess after it has already happened, you have fundamentally failed in your duty to protect people. We started building technology at Twitter to proactively identify harmful content, and we have extensive AI tools today that help us protect users on our dating apps. The whole field has reoriented from merely reviewing reports to proactively preventing harmful activity.
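To make that shift concrete, here is a minimal sketch – my illustration, not Twitter's or Match Group's actual system – contrasting the two postures: a reactive pipeline only evaluates content after a victim reports it, while a proactive one runs the same screening on every new piece of content at creation time. The keyword classifier is a deliberately simplistic stand-in.

```python
def looks_harmful(content: str) -> bool:
    """Stand-in for a trained classifier; the keyword check is illustrative only."""
    scam_phrases = ("wire me money", "guaranteed crypto returns")
    return any(phrase in content.lower() for phrase in scam_phrases)

# Reactive posture: nothing is evaluated until a user files a report,
# i.e. after the harm has already reached someone.
def handle_user_report(reported_content: str) -> str:
    return "actioned" if looks_harmful(reported_content) else "dismissed"

# Proactive posture: the same screening runs on every piece of content at
# creation time, so harm can be stopped before anyone ever reports it.
def on_content_created(content: str) -> str:
    return "blocked_before_delivery" if looks_harmful(content) else "delivered"

print(on_content_created("I can get you guaranteed crypto returns!"))  # blocked_before_delivery
```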
Bernard Leong: Does that also involve engaging with governments and working together to figure out how to deal with trust and safety on a scale that affects a population?
Yoel Roth: Trust and safety has to be a collaboration between the private sector and the public sector. I'd also throw in there – it needs to be a collaborative effort across different companies in the industry. It needs to involve academics and civil society, but governments ultimately can help anchor what the industry does in the laws of their jurisdictions and in the values that they were elected to uphold.
We see the most successful forms of online safety regulation as being principles-focused. They say – here are the standards we expect platforms to uphold – and it gives tech companies the space to do what we do best, which is innovate on the technologies to achieve those goals. I think regulation is great. The most successful types of it are regulations that say, here's the bar we want you to achieve, and then my team and I get to figure out how we build the technology to get there.
Bernard Leong: That brings us to the main subject of the day – talking about trust and safety from the point of view of the business. You already gave a pretty good overview. Why is it a critical function for digital platforms to think about trust and safety? How should CEOs and boards think about trust and safety today? Is it just a risk management function or a policy function, or is it a competitive advantage from your point of view?
Yoel Roth: I think it's both. Without a doubt, trust and safety is a risk management function. In my role today, I regularly report my team's activities to Match Group's audit committee in the same way that we report on our progress on privacy and cybersecurity. So yes, I would encourage CEOs and boards to view trust and safety as a part of their corporate risk management strategy and to build in governance the same way that you would for something like cybersecurity.
But there's more to it than that. If you look at the research, in most consumer-facing fields, consumers want to feel safe and protected. That's true on platforms that are focused on free speech, where we see that when people don't feel safe, they don't participate. If people don't participate, there's not a conversation. So there's an imperative to engage in moderation. But it's even more true on dating platforms.
In the context of Match Group's apps, we offer a freemium service. You can sign up for Tinder or Hinge completely for free, and you don't have to pay to use our apps. But 98% of our revenue as a company comes from people who voluntarily choose to pay for our services or engage in in-app purchases because they like the experience and want more of it. We know from our research that they won't do that if they feel unsafe. Critically, they won't drive the flywheel of growth by recommending our services to friends and family if they don't feel safe. There is absolutely a business imperative to engage in safety. It's not just about morality and human rights – of course that's important too. I'm a believer in those. But from a purely self-interested capitalist perspective, all of the research I've seen, all of the work I've done, indicates you cannot be successful as a consumer-facing internet service without engaging in trust and safety.
Bernard Leong: That comes to this point. Most platforms historically tend to optimize for growth and engagement. But trust and safety always requires some form of friction – verification, moderation, or enforcement. How do tech companies balance that growth with safety today?
Yoel Roth: We've reached a point where consumers are willing to accept a little bit of friction in service of a safer experience. The other side of that coin is – we've now seen a decade-plus of the disastrous consequences of hyperscale growth without constraints. We can learn some really important lessons from what's gone wrong in the history of social media and the internet.
On the side of what consumers will accept – one of the reasons I'm in Singapore this week is to introduce a new feature on Tinder called Face Check. It's mandatory biometric verification when users are signing up for Tinder. It's an extra step in the onboarding process. Ten years ago, if you said "extra step in onboarding" in a product review meeting at a tech company, you would be laughed out of the room. Nobody would accept it. But we've now rolled this feature out to cover a majority of our users around the world because it has such significant positive safety benefits. The end result is that we've added a little bit more friction to onboarding in order to get a massive positive safety outcome – and nobody objects. We have not seen meaningful backlash from consumers. We haven't seen people stop signing up for our apps. But we do see that people like it when they feel safer and more respected, and when the accounts on Tinder are more authentic. The lesson for me is – maybe a little bit of friction is a good thing, and we can actually design better and safer products by adding the right forms of friction at the right times.
Bernard Leong: I agree with you. Specifically for something like dating, where everyone wants to be safe and everybody wants to know that the person they're dating is real, verification is important. From your experience, what are the biggest mistakes companies make when they try to build trust and safety functions?
Yoel Roth: You highlighted one of the things that I think is most important – know the product you're building, and tailor your solutions to that product, not just to some generic picture of best practices. The appropriate trust and safety measures for a dating app are different from the appropriate safety measures for a product like Twitter, and they should be. There is no universally right version of trust and safety, any more than there's a universally right way to build a car or build some other tech product. You have to understand what your consumers want, and then think about what the right approaches are.
In the context of a dating app, a feature like Face Check that has mandatory verification makes a ton of sense, because we're actually bringing people together face-to-face in person, so having a little bit more friction is the right decision. Would I build the same product if I still worked at Twitter? Probably not. Leaders working in the tech space need to recognize that – understand your customers, understand your industry, start with research, then build tailored solutions.
Bernard Leong: Could you give my audience a comprehensive overview of Match Group and its subsidiaries? Everybody knows Tinder. How does trust and safety factor into the company's strategy now as a platform? Before this conversation, I was quite surprised by how many users Hinge and Tinder have in this region as well.
Yoel Roth: Match Group – the first thing people don't always expect about Match Group is how long we've been around as a company. match.com, our namesake brand, just celebrated its 30th birthday. It's a little bit crazy. We started building internet-based and computer-based dating services before the internet was a mainstream product. Match Group as a company is built out of a variety of different platforms – in some cases that have existed for 30 years, in other cases that are being built right now for the first time.
Across all of our different brands – whether it's Tinder, Hinge, OkCupid, Plenty of Fish, or match.com – all of them are built on a foundation of trust and safety. We know that in the business of bringing people together and forming human connection, that's not going to happen if people feel unsafe. We have a portfolio with a diverse range of products. They have different user experiences, different features, but my team sits at the center of that portfolio guiding our trust and safety strategy. We've set up trust and safety as a shared service, where my team provides trust and safety technology, operations, policy, and governance to every one of our brands, to uphold a high bar of safety even as each of our brands continues to operate almost like a mini startup and innovate. That portfolio approach is a real competitive advantage for us.
Bernard Leong: Now we come to online fraud and digital harm. Specifically in Southeast Asia, there has been a massive rise in online scams, impersonation, romance scams, and financial fraud. From your perspective, why has online fraud become such a large global problem?
Yoel Roth: One of the biggest shifts in the online fraud ecosystem happened during the COVID pandemic. Much of it was driven by criminal enterprises, primarily operating out of Southeast Asia, changing their approach. Many of these criminal organizations had historically focused on operating things like casinos, and in the pandemic they had to find a different business model – so they started to move into online and internet-enabled fraud. Specifically, we saw the rise of highly organized, industrialized criminal operations engaging in investment and romance scams.
This is a huge pivot from the types of scams and fraud that have existed for years and years, where it was individual bad actors targeting people randomly. There would be losses, and it was very serious, but it was nothing like the industrialized, highly organized form of fraud that we see today. That, coupled with the emergence of generative AI, has created a fertile breeding ground for highly effective, targeted, high-loss fraud. This doesn't just impact dating apps – it impacts the entire internet sector, financial services, telecoms, crypto. The degree of sophistication and resources has shifted markedly in the last five years.
Bernard Leong: I had the opportunity to interview the people from The Economist who actually exposed the scam industry. The person running those scams has actually been apprehended and extradited back to China. Now you're at Match Group, operating a lot of dating platforms, so you may be targeted by scams and impersonation. How does the threat landscape differ between dating platforms and traditional social media platforms? Are there differences, or are the problems actually very similar?
Yoel Roth: The threat actors that target our platforms are similar, but there are some unique challenges in the dating space specifically. People come to dating apps looking for connection. If you're using a social media site, you might be looking to share content or build an audience or get informed, but you're not always approaching it with the emotional vulnerability of looking to connect with another person. When you go to a dating site, you're looking for that connection and that intimacy. That creates some unique challenges, because on one hand you're looking to find that connection โ you're open to the possibility โ but you also have to maintain a guard against the possibility of deception, fraud, and scams. We really see scammers and fraudsters targeting dating apps because they see an audience of people that are looking for exactly the kind of connections that scammers prey on.
Bernard Leong: What role does identity verification play in reducing this? The reason you have face verification is that you want to make sure the person verifying isn't a romance scammer targeting your users. Is that the rationale?
Yoel Roth: I take a very economics-based view of how to combat fraud and scams. This was true of my time at Twitter as well. Scammers are primarily profit-seeking. We can understand them as bad guys, we can understand them as immoral, but fundamentally they're looking to make money. My goal is to change the P&L dynamics, the economics of it. What can I do to raise the costs on scammers?
Historically, the tools that have been in our toolbox as a tech company have been things like – block their email address. Okay, you register a new email address. I can sign up for Gmail in five seconds. That's not really a real deterrent. Then we started introducing phone number verification. Now you have to get a new SIM card.
Bernard Leong: But you can buy another SIM card too.
Yoel Roth: Totally. There's a cost there, but it's not a super high cost. About two years ago, we started thinking at Match Group – what's something really expensive to change? We realized that it's much harder to get a new face than it is to get a new SIM card. We started looking at facial verification as a key piece of our approach to authenticity, both because it can help us confirm that the person using Tinder is the person who appears in the profile, but also because the costs for bad actors go way up. If we detect that somebody is a scammer and we block them, and we're blocking them on the basis of facial biometrics, you can't sign back up for Tinder without getting a new face. Surgery is pretty advanced these days, but it's not cheap. That really changes the economics of scams.
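To see why that flips the economics, here is a back-of-envelope sketch. Every number is a made-up assumption for illustration (only the 50,000 figure echoes the average loss Yoel cites below); the point is how expected profit collapses as the cost of re-entering the platform after a ban rises.

```python
# Back-of-envelope model of scammer economics under different re-entry
# costs after a ban. Every number here is a hypothetical assumption.

def expected_profit(payout_per_success: float, success_rate: float,
                    bans_per_success: float, cost_per_identity: float) -> float:
    """Expected profit of one successful scam, net of the identities
    burned along the way."""
    return payout_per_success * success_rate - bans_per_success * cost_per_identity

PAYOUT = 50_000       # assumed payout of one successful scam (SGD)
SUCCESS_RATE = 0.10   # assume 1 in 10 campaigns lands
BANS_PER_SUCCESS = 9  # assumed accounts banned along the way per success

for barrier, identity_cost in [("new email address", 0.0),
                               ("new SIM card", 10.0),
                               ("new verified face", 100_000.0)]:
    profit = expected_profit(PAYOUT, SUCCESS_RATE, BANS_PER_SUCCESS, identity_cost)
    print(f"{barrier:>18}: expected profit = {profit:>12,.0f}")
```

With free or near-free identities the operation stays profitable no matter how often it is banned; once re-entry means recruiting a new human being, the expected value goes sharply negative.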
Bernard Leong: Interesting. According to The Economist, the scam industry is worth something like 550 billion dollars. If you can take a chunk out of that by increasing the cost for scammers, then the economics may just fail.
Yoel Roth: That's right. The individual losses to scams, especially for something like a cryptocurrency investment scam, can be significant. I was looking at data from the Singapore Police Force – last year, losses to investment scams averaged something like 50,000 SGD per incident. One successful scam can be profitable. But again, the costs are pretty high if you're blocking a face.
Imagine a situation in which you're a scammer. You sign up for Tinder, we catch you, you get blocked, and now we've blocked your facial biometrics. At that point, the only option for that individual to continue to carry out a scam is to go to some other platform. If they want to target Tinder again, they have to find a whole new person to do it. That gets at some of the real tragedies of industrial scamming, which is the human trafficking component of it. The costs also go way up. That's the first really meaningful deterrent we have in this space.
Bernard Leong: From your point of view, are we moving towards a world where it is almost impossible to maintain anonymity? If that's the case, how do platforms need to think about things like privacy, anonymity, and safety? There's no absolute right and wrong when it comes to trust and safety. It's just a question of trade-offs, and I like the way you frame the economics of that trade-off.
Yoel Roth: I'll go back to my comments earlier about the importance of building trust and safety solutions specifically in context. For a dating app, I think it's perfectly appropriate and important for us to build facial verification into our products, because we're bringing people together IRL and the harms of scams are so significant. But I wouldn't build the same product if I worked at Twitter, because in the context of a platform where people are talking about their political viewpoints or are engaged in dissent, having that type of requirement to unmask who you are is an invasion of privacy that is disproportionate. First and foremost, we have to tailor our solutions to different platforms.
The other big watch-out I'd have here is the common misconception that when people are anonymous on the internet, they're more abusive – or the flip side of it, that if everybody has to use their real name on the internet, that's a solution to online abuse. All of the peer-reviewed research that I've ever seen conclusively disproves that. Anonymity and abuse are not correlated. In turn, requiring real names does not solve abuse. The level of online abuse and harassment I get on LinkedIn is unlike any other platform – these are people using their real names and their real jobs. Requiring real names to be used publicly is not in and of itself a solution to online safety. We need to strike a delicate balance between when we require people to identify as who they are on their driving license or their passport, and when we give people that space to use the internet pseudonymously or anonymously.
Bernard Leong: We now have generative AI, which is increasingly being used in trust and safety for content moderation, fraud detection, bot detection, and behavioral analysis. Let's go to the positive side. How is AI changing trust and safety operations today?
Yoel Roth: There are a few areas where I think AI has been and will continue to be transformative. The first is translation. Multinational tech companies that operate at global scale across hundreds of markets consistently face the challenge of bringing local nuance and context to the awareness of moderators. Without a doubt, large language models can be transformative – first in helping to accurately translate speech and put it in context, and then in surfacing additional information that can help moderators make informed decisions. I'm a big fan of assistive technologies that don't replace moderators making decisions, but give them useful tips and information.
Bernard Leong: More as indicators rather than actual decision-making tools.
Yoel Roth: Exactly. It's decision enablement. What's the old IBM saying? Never have a machine make a management decision. There are some types of moderation decisions that a machine can reliably make. Spam – AI is terrific at dealing with it. A lot of types of fraud – AI is better than humans are. But for those really tricky and nuanced cases, things like hate speech and harassment, there's no substitute for human judgment. In that situation, I want AI to play an assistive role rather than a decisive role.
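A common way to implement that split – sketched here with assumed category names and thresholds, not Match Group's actual pipeline – is confidence-based routing: categories the model handles reliably can be auto-actioned above a high threshold, while nuanced categories always go to a person, with the model's output attached as context rather than a verdict.

```python
from dataclasses import dataclass

# Illustrative assumptions: which categories may ever be auto-actioned,
# and how confident the model must be before acting on its own.
AUTO_ACTION_CATEGORIES = {"spam", "payment_fraud"}
HUMAN_ONLY_CATEGORIES = {"hate_speech", "harassment"}
AUTO_ACTION_THRESHOLD = 0.98

@dataclass
class FlaggedItem:
    content_id: str
    category: str      # classifier's predicted violation type
    confidence: float  # classifier's confidence in that prediction

def route(item: FlaggedItem) -> str:
    """AI decides only where it is reliable; nuanced cases go to a human."""
    if item.category in HUMAN_ONLY_CATEGORIES:
        return "human_review"   # assistive: model output shown as context, not a decision
    if item.category in AUTO_ACTION_CATEGORIES and item.confidence >= AUTO_ACTION_THRESHOLD:
        return "auto_remove"
    return "human_review"       # low confidence defaults to a person

print(route(FlaggedItem("a1", "spam", 0.995)))        # auto_remove
print(route(FlaggedItem("a2", "harassment", 0.999)))  # human_review
```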
Bernard Leong: I have a personal experience. I was once the head of all the post offices in Singapore. One day we suddenly had a big influx of grandmothers – women above sixty – who came to the post office needing to remit about 2,000 Singapore dollars each to address something that had supposedly happened to their loved ones. It was triggered by phone calls from fake officers. My staff, the ambassadors, were trying to stop the grannies from wiring the money. It happened to two or three of them, and we had to call in the police. The police came and managed to establish that it was a scam.
After that experience, we did two things. We asked the police to train us on how to deal with this kind of situation. At the same time, I was at a digital conference where everybody said – you should just allow grannies to press one button to remit money out. I started explaining the story to everyone. I was also chief digital officer at the time, and I was telling everybody – well, if I hadn't had human beings right in front of the decision-making process, that granny would have just pressed the button and the $2,000 would have just disappeared.
I'm thinking about the harm AI can do – people can now fake a voice, even a face. How do platforms need to think about being proactive in trust and safety from that side of the conversation?
Yoel Roth: That story is so powerful because it highlights so many of the different dynamics at play in something like scams and fraud. First, there's the detection piece – we need to be catching these campaigns as they're rolling out. Then there's the intervention piece – who are the people who should be stepping in in those moments to say, hang on a minute, I think this might actually be a scam.
There are a few intervention points here. One is friction. If you're in a position of saying we want one-touch financial transactions with no recourse – hang on a minute, there's a risk there, you need to think about it, you need to design around it. Second, it gets at who the credible, trustworthy voices are that can step in in a moment of crisis and help somebody realize that they may be engaging with a scam.
Some of the stories that are most heartbreaking when you speak with victims of scams are people who say – I saw my family member or my relative getting scammed, and I told them, you're getting scammed. They said – you don't know what you're talking about. This is just love. This is somebody who I trust and care about. When you're stuck in the spiral of a scam, it can be very hard to have that recognition that there's something suspicious happening. All the research I've read suggests that the most effective thing is hearing it from multiple different voices.
Tech platforms need to keep reminding people to stay scam-safe. So do people working in post offices and people working in bank branches – even at a convenience store, if you see somebody buying a lot of gift cards, that's a very common scam tactic; maybe step in and ask what's going on. We're trained to be very private and not to get involved in other people's business, but scams are a societal-level challenge, and we all have a role to play in ultimately protecting each other from criminal activity.
Bernard Leong: I was quite thankful that the ambassadors managed to stop this. We did receive an award, which I accepted on behalf of the team, but I always told the team that we should have sent the service ambassadors to receive it, because they were the ones who discovered the scam. How do we now combine human judgment and AI systems to enhance trust and safety operations like that?
Yoel Roth: Especially when we're dealing with emerging threats, there's no substitute for human intuition and investigative capacity. The human brain is one of the best tools that's ever been developed, ever, at spotting weird stuff. Anybody who does investigations for a living, whether they work in the police or at a tech platform, knows that sometimes you can just look at a spreadsheet of data and your eyes gravitate right to the thing that's weird. We need people to keep doing that.
But having the capacity for people to do what we do best, which is find weird stuff and dig into it, requires us to automate the things that are lower judgment. My strategy at Match Group for deploying AI is – automate what we can automate. Let the robots handle the things that are low-context, low-sophistication, and which they can do reliably. Then give humans the capability to focus on those high-sophistication, high-context tasks, and really give us that space as an enterprise to focus on emerging threats and issues, rather than just getting bogged down in constant additional operational burden.
Bernard Leong: People react very negatively to the idea – it's my money, I should be able to spend it how I want. Nobody can tell you how to spend your money or who you should spend it on, and people can react very aggressively in these moments. But it's sometimes the only option. If you get it wrong, if the police get it wrong, if your bank gets it wrong, maybe it takes a little bit of extra time for the transaction to go through. But that seems like a price worth paying.
One important part is governance and regulations. You are a practitioner and also a scholar on platform governance. How have you seen regulation evolving globally for digital platforms, especially as governments now put more pressure on issues like fraud, identity, and online harm?
Yoel Roth: We've seen the emergence of three primary approaches to governance and regulation of the internet space. The American model is pretty hands-off. The focus is on free enterprise and enabling innovation, and I think that's actually a good thing generally. We've seen a lot of companies build genuinely new products in a space where they are protected in the work they're doing and have the space to innovate.
In Europe, we've seen a model that's really anchored in human rights. From the founding of the European Union onwards, in Europe there's a focus on basic human rights – rights to expression, rights to safety, rights to privacy, rights to conduct business freely. Laws like the Digital Services Act enshrine those rights and institutionalize them, and then constrain what platforms are doing based on those human rights principles.
The third approach – China being the dominant example here – is one where the government dictates what speech is and isn't appropriate, and directs companies in the private sector on what appropriate governance looks like. So you have hands-off, principles-based, and government-directed. We see different countries around the world gravitating to different points on that spectrum. Some lean a little more toward the American free-market model, some toward the European principles-based human rights model, some toward the Chinese top-down directed model. Those have emerged as the three big regimes.
My bias as a trust and safety practitioner is very much toward the European model. Let's define what our values and our principles are, and what we're trying to uphold. Then let's create a regulatory framework of accountability. Let's require transparency. Let's require companies to have appropriate safeguards and risk assessments. Use that as the foundation for how you build good products.
Bernard Leong: What do regulators often misunderstand about how content moderation and platform governance actually work?
Yoel Roth: The biggest misunderstanding is treating all platforms as the same – this is the theme I come back to again and again: they're not. A lot of times, laws are written, frankly, for Facebook, Instagram, and Twitter. They're not written for platforms like Tinder and Hinge.
I'll give you one really specific example. On Tinder, we allow users to submit reports to us if something harmful happened to them when they're on a date. Obviously, if you meet somebody on our platform and you go on a date, we aren't there physically with you on the date. It would be a little bit weird if my team followed you around.
Bernard Leong: That makes complete sense.
Yoel Roth: We still want to make sure we know if something bad happened, so we give people reporting tools. On the basis of those reports, we review them, and if we find the report to be credible, we will take enforcement action. If you say – I was on a date with Yoel, he did something bad to me – we review the report; I could get banned on the basis of that report.
But that's not what the Digital Services Act was really built to deal with. It was built for situations where you can review a post on a social media site and make a determination about whether its content does or doesn't violate the rules. We've had a lot of conversations with regulators in Europe explaining that when we have to deal with in-person interactions as part of our trust and safety approach, it doesn't exactly fit into the framework of laws in Europe. They understand that, but it doesn't change the fact that the laws weren't built for platforms like ours.
Bernard Leong: Is it because they're trying to find a one-size-fits-all law based on principles, and therefore it's difficult for them to see the trade-offs between the social media side, where you can post photos, versus when you have dating, like in your case, where there is actually translation into a real-world situation?
Yoel Roth: I think so. The core challenge of especially big, ambitious regulation like the Digital Services Act in Europe is that you have to solve for the majority case. There's a lot that gets lost around the margins. I don't think dating apps are marginal at all. They're used by millions of people. Match Group builds the biggest dating apps in the world. It's hugely consequential. A majority of relationships are starting online.
In that context, I want regulators to leave space for us to navigate these trade-offs. But when all we're doing is writing laws that solve for how to govern social media and ad-driven platforms, you can end up without the kind of nuance that allows platforms like ours to innovate.
Bernard Leong: Do you think that with the Digital Services Act, digital platforms will eventually become more regulated like financial institutions when it comes to fraud, identity, and controls?
Yoel Roth: In Europe and globally, there's an emerging consensus that scams and fraud are among the most severe issues that consumers face, and ones where we really need to have a coordinated and cross-sector approach. The banking sector, for lots of very good reasons, is highly regulated. When we look at the effect those regulations have, a lot of it is strict know-your-customer procedures to try to clamp down on things like fraud and money laundering. That's helpful and appropriate in that context.
But the level of know-your-customer diligence that you go through when you open a bank account wouldn't really be appropriate in the context of creating a social media account. I don't want to have to provide my passport or a birth certificate or something to sign up for Tinder. In that context, we need regulation that acknowledges that every part of this economy has a part to play here โ financial services, telcos, platforms โ but the solutions might look different across different platforms. At a bank, maybe it's document-based KYC. On a platform like Tinder, maybe it's Face Check. We think that all of those can play a part in cracking down on fraud and scams.
Bernard Leong: Looking ahead in the next five to 10 years, how do you think trust and safety will evolve as a function within technology companies?
Yoel Roth: The word I always come back to when I think of the future of trust and safety is governance. We are no longer just making moderation decisions. We're no longer just banning people or deleting posts or removing accounts. We are ultimately responsible for overseeing the health of the platforms and communities that we've created, and need to be good stewards of that.
Especially in an age of AI, there's a huge role for the field of trust and safety in overseeing the decisions that AI makes. I talk to my team a lot about how, in a world where we automate more of our decisions, that doesn't mean we don't have jobs anymore. It means our jobs change. Our jobs change to being auditors and overseers of automated decisions. When we find problems, we are the people now responsible for engineering solutions to them. That's different from the historical role of trust and safety as being the people who make decisions one by one by one. Now we play a governance function, not just a frontline moderation function. That's a really positive and promising direction for the field.
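One concrete shape that auditing role can take – sketched here with assumed names, sample rates, and thresholds, not a description of Match Group's tooling – is routine sampling of automated decisions for human re-judgment, with the disagreement rate treated as a quality signal that triggers engineering work when it drifts too high.

```python
import random

def audit_automated_decisions(decisions, human_rejudge,
                              sample_rate=0.01, alert_threshold=0.05):
    """Sample automated moderation decisions for human re-judgment and
    report the disagreement rate. Rates and thresholds are illustrative."""
    sample = [d for d in decisions if random.random() < sample_rate]
    if not sample:
        return None
    disagreements = sum(1 for d in sample if human_rejudge(d) != d["action"])
    rate = disagreements / len(sample)
    if rate > alert_threshold:
        # The auditor's job: when automation drifts, humans engineer the fix.
        print(f"ALERT: {rate:.1%} disagreement - escalate for engineering review")
    return rate

# Hypothetical usage: 10,000 automated bans, re-judged by a review team
# that happens to agree with every one of them in this toy run.
decisions = [{"id": i, "action": "ban"} for i in range(10_000)]
rate = audit_automated_decisions(decisions, human_rejudge=lambda d: "ban")
print(f"Sampled disagreement rate: {rate:.1%}" if rate is not None else "no items sampled")
```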
Bernard Leong: If young professionals want to develop and go into the trust and safety field, what would be your advice, and what skills should they have in the era of AI?
Yoel Roth: Curiosity is always the most important skill. That one's hard to learn, but it's easy to put into practice. Trust and safety is ultimately about trying to figure out why people are doing things, and what you can do to shape that behavior in a more positive direction. That starts with wanting to understand human behavior and society and communication, and a recognition that people are really weird. Understanding that weirdness and that nuance, and being curious about it, makes you a more effective practitioner. I would encourage people to think of themselves as researchers โ lead with questions, lead with empathy and curiosity. In the age of AI or not, those are skills that are always going to serve you well.
Bernard Leong: What's the one thing most business leaders and policymakers still misunderstand about trust and safety, but should know?
Yoel Roth: The biggest misconception is that trust and safety is just a cost center. People sometimes think – there's no easy way to understand the P&L of trust and safety; you spend a bunch of money on moderation and tools, and it's just another cost of doing business. I want to reframe that whole way of thinking about trust and safety toward growth enablement.
I'll give you an example. I had a conversation with Spencer Rascoff, who joined Match Group about a year ago as our CEO. There was an initiative my team wanted to pursue to drive down false positives in our enforcement. We thought there was a group of users whom some automation was incorrectly banning, and we could build something more accurate. The cost of doing that was X dollars. In the conversation I had with him, his brain immediately clicked over to the cost of addressing these false positives divided by the number of users we would preserve on our platform by being more accurate. He said – well, the cost of saving each one of those users is lower than our cost of acquisition through marketing, so of course we should do this. It completely blew my mind. It was a way of thinking about trust and safety relative to other growth costs in the business, like CAC, and saying – actually, it's more cost-effective to do good trust and safety than it is to acquire users in some other way. That's how we have to think of this field.
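The arithmetic behind that decision is simple enough to write down. The numbers below are placeholders I've invented (the actual figures weren't disclosed); the comparison – cost per retained user versus CAC – is the point.

```python
# Hypothetical figures: the project cost and user count were not disclosed.
project_cost = 500_000.0   # cost of the false-positive reduction work ("X dollars")
users_preserved = 100_000  # users who would otherwise be wrongly banned
cac = 20.0                 # assumed marketing cost to acquire one new user

cost_per_retained_user = project_cost / users_preserved
print(f"Cost to retain a wrongly-banned user: ${cost_per_retained_user:.2f}")
print(f"Cost to acquire a replacement (CAC): ${cac:.2f}")
if cost_per_retained_user < cac:
    print("Trust and safety is the cheaper growth lever here -> fund the project.")
```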
Bernard Leong: What is the one question you wish more people would ask you about trust and safety, but don't?
Yoel Roth: Let me think about this for a second. I get asked a lot of questions about trust and safety. A question I wish we talked more about is resilience. How do you maintain your sanity when your job is looking at the worst things on the planet?
There are a lot of things that are very operational. If you work in the field of trust and safety, there are practices like limiting the amount of time that you work at a stretch, taking breaks, viewing photos in black and white – things that can really effectively preserve your mental health. But the broader message I have for my team and for other practitioners is – focus on the mission. Resilience at work in the face of adversity is ultimately something that comes from focusing on what you're trying to do in the world. I do this work not because it's easy, but because I want the internet to be better, because I think it's magical. When I have a hard day at work and I've had to look at bad content, or I've had to deal with heartbreaking reports of people who have suffered scams or fraud, I focus back on what it is that I'm trying to do, and that helps me be resilient in the face of that adversity.
Bernard Leong: That's a very good one. There are a lot of unseen people behind the scenes who look at all that harmful online content. I've read a couple of articles in different news publications about that. My traditional closing question – what does great look like for trust and safety, from your perspective at Match Group or for the field itself, given how quickly it's evolving over the next couple of years?
Yoel Roth: Great for us looks like building understanding. The reason I love conversations like this one is that I get to share the work we do in trust and safety with people who have benefited from it but maybe have never thought about what it looks like in practice. The more I can help expose what the field is, what we're working on, what it looks like day to day, and what the trade-offs are, the better equipped every person in the world will be to understand the governance of the internet and why it matters. We have succeeded as a field not just when we're doing a good job keeping bad actors off our platforms, but when every person using our apps understands that we are here protecting them, that we are building technology that works, and feels bought in on the trade-offs that we've made.
Bernard Leong: Many thanks for coming on the show, and thank you for educating me on the many dimensions of trust and safety I hadn't thought about until now – from the economics point of view, to shifting the mindset from cost center to something essential to the growth of the platform itself. In closing, I have two quick questions. Any recommendations that have inspired you recently?
Yoel Roth: I'm a total nerd. I recently went on a reading spree focused on financial fraud, more specifically looking at white-collar crime. I read one of the classic books about business crime – The Smartest Guys in the Room, which is about the fraud at Enron. Then I read Andrew Ross Sorkin's book [Too Big to Fail] about the 2008 financial crisis. What really inspired me about those books is – first, they're just fascinating. White-collar crime is always a little bit crazy. But they highlighted the ways that good people in complicated institutions can end up making catastrophic decisions. For everybody who works in management or leadership, there's a lesson to be learned about looking at how the biggest, most profitable companies in the world can end up making these incredibly disastrous choices.
Bernard Leong: That's a good recommendation. How can my audience find your work, and anything about Match Group's trust and safety work?
Yoel Roth: We're trying to talk a lot about the work that we're doing. It's important and impactful. I am super active on LinkedIn – you can find me under my real name. Please don't send me harassment. I'm also active on Bluesky. My website is yoelroth.com.
Bernard Leong: You can find this podcast anywhere. Please send us feedback – don't worry, even if it's abusive; I've gotten that a couple of times. You can drop us a note anywhere. Yoel, many thanks for this conversation. I really enjoyed it.
Yoel Roth: Thank you so much for having me.
Bernard Leong: Thank you.
Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). The recording was done at Poddster Singapore.