Watch the CBSN Originals documentary, “Speaking Frankly: Dating Apps,” in the video player above. The full hour special premieres Sunday, November 10, at 8 p.m., 11 p.m. and 2 a.m. ET on CBSN.
Steve Dean, an online dating consultant, says the person you just matched with on a dating app or site may not actually be a real person. “You go on Tinder, you swipe on someone you thought was cute, and they say, ‘Hey sexy, it’s great to see you.’ You’re like, ‘OK, that’s a little bold, but OK.’ Then they say, ‘Would you like to chat off? Here’s my phone number. You can call me here.’ … Then in a lot of cases those phone numbers that they’ll send could be a link to a scamming site, they could be a link to a live cam site.”
Malicious bots on social media platforms aren’t a new problem. According to the security firm Imperva, in 2016, 28.9% of all web traffic could be attributed to “bad bots” — automated programs with capabilities ranging from spamming to data scraping to cybersecurity attacks.
As dating apps become more popular with humans, bots are homing in on these platforms too. It’s especially insidious given that people join dating apps seeking to make personal, intimate connections.
Dean says this can make an already uncomfortable situation more stressful. “If you go into an app you think is a dating app and you don’t see any living people or any profiles, then you might wonder, ‘Why am I here? What are you doing with my attention while I’m in your app? Are you wasting it? Are you driving me toward ads that I don’t care about? Are you driving me toward fake profiles?’”
Not all bots have malicious intent, and in fact many are created by the companies themselves to provide useful services. (Imperva refers to these as “good bots.”) Lauren Kunze, CEO of Pandorabots, a chatbot development and hosting platform, says she’s seen dating app companies use her service. “So we’ve seen a number of dating app companies build bots on our platform for a variety of different use cases, including user onboarding, engaging users when there aren’t potential matches there. And we’re also aware of that happening in the industry at large with bots not built on our platform.”
Malicious bots, however, are usually created by third parties; most dating apps have publicly condemned them and actively try to weed them out. Nevertheless, Dean says some dating app companies have deployed bots in ways that seem deceptive.
“A lot of different players are creating a situation where users are being either scammed or lied to,” he says. “They’re manipulated into buying a paid membership just to send a message to someone who was never real in the first place.”
Match.com, one of the 10 most-used online dating platforms, currently faces exactly such an accusation. The Federal Trade Commission (FTC) has filed a lawsuit against Match.com alleging the company “unfairly exposed consumers to the risk of fraud and engaged in other allegedly deceptive and unfair practices.” The suit claims that Match.com used fraudulent accounts to trick non-paying users into purchasing a subscription through email notifications. Match.com denies that occurred, stating in a press release that the accusations were “completely meritless” and “supported by consciously misleading figures.”
As the technology becomes more sophisticated, some argue new regulations are necessary. “It’s getting increasingly difficult for the average consumer to identify whether or not something is real,” says Kunze. “So I think we need to see an increasing amount of regulation, especially on dating platforms, where direct messaging is the medium.”
So far, only California has passed a law attempting to regulate bot activity on social media. The B.O.T. (“Bolstering Online Transparency”) Act requires bots that pretend to be human to disclose their identities. But Kunze believes that, while the law is a necessary step, it is hardly enforceable.
“This is very early days in terms of the regulatory landscape, and what we think is a good trend because our position as a company is that bots must always disclose that they’re bots, they must not pretend to be human,” Kunze says. “But there’s absolutely no way to regulate that in the industry today. So even though legislators are waking up to this issue, and just starting to really scratch the surface of how severe it is, and will continue to be, there’s not a way to control it currently other than promoting best practices, which is that bots should disclose that they are bots.”