Senators Blackburn, Hassan Press Match Group for Answers on Romance Scams

September 24, 2025

WASHINGTON, D.C. – U.S. Senator Marsha Blackburn (R-Tenn.), Chair of the Senate Subcommittee on Consumer Protection, Technology, and Data Privacy and member of the Joint Economic Committee, and Senator Maggie Hassan (D-N.H.), Ranking Member of the Joint Economic Committee, today pushed Match Group, the operator of several major dating apps, including Hinge, Tinder, and OkCupid, to provide information about romance scams on its platforms. Romance scams, in which fraudsters form relationships to induce money or gifts from victims, have become a leading form of financial fraud in the United States and cost Americans at least $1.3 billion annually. Match Group’s platforms are used by nearly half of all online dating users in the United States, and more than half of all users believe they have encountered a scammer. In the letter, the Senators asked the company for information about its efforts to detect scammers on its platforms, crack down on fraudulent usage, and protect users.

“Match Group’s platforms have become a breeding ground for bad actors who prey on vulnerable Americans, especially seniors who have lost their life savings because of scams,” said Senator Blackburn. “Despite public promises to improve safety, the company’s practices and algorithmic design raise serious concerns about whether it’s truly protecting users. We’re asking for answers and transparency to ensure these platforms are not enabling bad actors.”

“Romance scams are robbing Americans of millions of dollars every year, and taking a devastating emotional toll in the process,” said Senator Hassan. “The companies operating dating apps must be honest with the public about the extent to which these scammers have infiltrated their platforms and must do more to protect their users from fraud.”

The full text of Senators Blackburn and Hassan’s letter to Match Group is below:

Dear Mr. Rascoff:

Given Match Group’s stated commitment to improving upon its historical practices relating to user safety, we write today to request documents and information about the company’s policies, procedures, and practices related to fraudulent activity on its platforms. Romance scams, in which fraudsters form relationships to induce money or gifts from victims, have become a leading form of financial fraud in the United States, with annual losses reaching at least $1.3 billion, according to the Federal Trade Commission (FTC). Independent research indicates that nearly half of all online dating users in the United States have used a Match Group platform, and more than half of all users believe they have encountered a scammer. On a recent earnings call, you stated that Match Group would improve trust and safety and “prioritize users over short-term revenue and profit,” marking a shift from the way the company “operated historically.”

Over the years, many events have raised questions about whether Match Group—in its business practices and algorithmic design—has contributed to the proliferation of romance scams online. In a 2019 complaint, the FTC alleged that Match Group knowingly exposed users to fraud. In fact, the FTC alleged that between 2013 and mid-2018, up to 30 percent of new Match.com members were scammers. The FTC further alleged that Match.com sent mass emails to non-paying users promoting paywalled communications from accounts it suspected or knew were fraudulent. This practice led nearly 500,000 users to subscribe within 24 hours of receiving an email or other advertisement that involved a fraudulent communication. These allegations raise concerns about whether and how Match Group protects users from fraud on its platforms.

Match Group has stated that it “permanently discontinued” the above practices. Publicly and privately, however, individuals who have worked at Match Group have suggested that “[rooting out scammers] wasn’t a real priority backed up by resources” and that the company’s “obsession with metrics ... [is] potentially dangerous.” At a recent panel appearance, Match Group’s Head of Trust and Safety, Yoel Roth, acknowledged that organized criminal groups increasingly carry out romance fraud in technologically sophisticated overseas scam compounds. Mr. Roth also said that Tinder’s “selfie-verification” process—whereby users create and verify their own accounts—is “pretty simple for a human to pass,” and that Match Group had seen thousands of submissions with similar backgrounds, likely originating from a scam compound. One expert has warned that the process offers “no guarantee against [deceptive profiles] because the images used for verification ... need not be the ones used on a person’s dating profile.” These statements appear consistent with the experiences of many online dating users. According to a 2023 Pew Research Center survey, for example, users are more than three times as likely to rate online dating companies as “very bad” rather than “very good” at removing fake accounts.

We are also concerned that Match Group, through its algorithmic design, creates trust that romance scammers can exploit. According to former OkCupid CEO Christian Rudder, for example, “users sent more first messages when we said they were compatible…[e]ven when they should be wrong for each other.” Studies also suggest that increased platform use may foster greater trust in dating algorithms. As a result, a persuasive algorithmic design may help facilitate romance scams when it recommends fraudsters who benefit from users’ trust in the platform’s judgments of compatibility.

Match Group’s business model is to keep users engaged, but this engagement is dangerous when it involves scammers. Online dating platforms drive user engagement, in part, with the promise of connections based on compatibility. Tinder, for instance, represents that its algorithm can “pick better potential matches” and helps users “see people they’ll vibe with.” Expert research into romance scams, however, has identified recurring characteristics and patterns among perpetrators and victims. Scammers often escalate emotional intimacy quickly (a tactic termed “love bombing”), to which users prone to romantic idealization are especially vulnerable. These interactions can generate engagement metrics—such as high positive reply rates and low negative reply rates—that algorithms may interpret as indicators of compatibility. As one study on machine learning systems noted, “vulnerabilities can be automatically exploited when the vulnerable state or condition of an individual becomes entangled with the optimization criteria of an algorithmic system.” In that context, Match Group’s ability to monitor and influence user behavior, paired with limited transparency about potential harmful outcomes, raises concerns that its platforms may, even inadvertently, create conditions where romance scams are more likely to begin.

To aid Congress in understanding Match Group’s efforts to prevent romance scams and the factors that allow these scams to begin on its platforms, please provide responses to the following document and information requests. These requests cover the time period of January 1, 2022, to the present, unless otherwise specified. Please provide your responses no later than October 15, 2025.

1. A description of the “signals” associated with accounts suspected or known to be fraudulent or scammers and how Match Group accounts for these signals in detection systems, recommendation algorithms, and the Trust and Safety team’s fraud review process;

2. All policies and procedures concerning the detection, review, and removal of accounts suspected or known to be fraudulent or scammers; their algorithmic treatment; and any restrictions on their visibility or communication with other users;

3. For any accounts suspected or known to be fraudulent or scammers, documents sufficient to show for each quarter:

a. The number of these accounts, whether they were removed, and their geographic distribution;

b. Median time between account creation and (1) detection and (2) removal, broken down by detection method (e.g., user reports, manual review, automatic review, etc.) and “signal” (e.g., false identity, automated behavior, behavioral or language patterns, etc.);

c. The number of accounts detected but not removed within 24 hours, broken down by reason for non-removal (e.g., insufficient evidence, user remediation, process delay, false positive, etc.);

d. Total number of user messages sent and received; total number of “matches” (or platform equivalent); total session duration in which users interacted with these accounts; and aggregations of any other metrics used to evaluate engagement, profile quality, and “matching” outcomes; and

e. Platform revenues from in-app purchases by these accounts and users who liked, matched, or messaged (or platform equivalents) with these accounts within 24 hours before purchase;

4. Memoranda, presentations, reports, studies, analyses, audits, and meeting notes referring or relating to the algorithmic recommendation of accounts suspected or known to be fraudulent or scammers;

5. All internal reports, studies, or analyses referring or relating to the specific attributes, including demographics, platform use, and off-platform behavior, of:

a. Accounts suspected or known to be fraudulent or scammers; and

b. Users targeted by or interacting with accounts suspected or known to be fraudulent or scammers;

6. All documents and communications concerning the design, development, effectiveness, or consideration of fraud prevention measures, including measures discontinued or not implemented;

7. Itemized quarterly investments in trust and safety, including but not limited to the following categories:

a. Trust and safety policy, operations, and data;

b. Social advocacy;

c. Law enforcement operations and outreach;

d. Platform safety services and features; and

e. Safety by design;