A Discrete Affair

  • How much do you need to know about a person to fall in love with them?
  • Do you need to see their face or touch their body to form a strong emotional bond?
  • Or can you fall in love with someone over the telephone?

A new reality show called “Love Is Blind” explores these questions by having contestants talk to each other exclusively through an opaque wall and then decide whether or not to get married. Real emotions are at play as contestants fall in love with disembodied voices onto which they project all the qualities they cannot observe. Disappointment sometimes follows, as when deep conversations give way to a mundane reality.

The recently released Netflix documentary “The Tinder Swindler” explores similar themes. It details the exploits of Simon Leviev, a man of unexceptional looks, flashy clothes, and expensive tastes who left his victims with broken hearts and neck-deep in debt. As far as romance scams go, Mr. Leviev used an old strategy from a bygone era – actually meeting his victims and having intimate relationships with them in close physical proximity. Pulling off this endeavor against multiple victims, one would suppose, requires the cold calculation of a psychopath. The victims of people like Mr. Leviev can be left with deep psychological trauma and permanently altered world-views.

Online Romance Scammers

To add a modern twist to the on-site approach, the internet has introduced us to a new form of romance scamming, one built upon the success of a new form of intimate relationship: the long-distance romance. Scammers from places such as West Africa contact their victims, usually through social media, posing as high-status men with wealth to spare and hearts to mend. These fake personas, often tragically widowed, strike up conversations that quickly turn into steamy romances. They shower their victims with attention and flattery, wearing down their initial skepticism with a sustained volley of positive attention. Then the exsanguination starts: with urgent requests for money, they slowly suck their victims dry and into massive debt. It is often next to impossible for outsiders to break the attacker’s spell, as the victims beg, borrow, and steal to send more money to their unfortunate loved one.

This model of financial crime has been with us for a long time; there is nothing new in the act of seduction and exploitation. The long-distance version of the crime still requires human interaction, with a human criminal exploiting a human victim. But if you’ve walked into a chain-operated burger joint in the past five years, you will no doubt have noticed that human-to-human interaction is falling out of vogue fast.

In science fiction, human-computer interaction has been perfected. No, I’m not talking about ridiculously ubiquitous transparent screens or Minority Report-style five-point touch gloves, but something more in the style of Star Trek. In that universe, the computer is a disembodied voice, waiting to take commands and answer questions. For a long time, such notions remained on the fiction side of sci-fi, as natural human interaction seemed beyond the reach of discrete machines, but as time has gone by, massive increases in computational capacity and advances in neural networks have brought us such new friends as Siri, Alexa, and the unfortunate Mr. Bixby, whom we would rather not talk about.

Language Modelling Meets Online Crime

Technologies like natural language processing have brought human-computer interaction to a point where computers can decode the meaning behind sentences and turn them into commands, then interpret the results and convey them back to us. This is a miraculous feat of engineering and a paradigm shift in how we interact with machines. The technology is still in its infancy, but we can already see the arc of progress taking us towards the starship’s bridge.

Language models like GPT-3 learn what humans are like from the internet’s treasure trove of human interaction and communication. All you need to know about humans talking to humans can be freely grazed in the pastures of Reddit, Facebook, and Twitter. This gives the computer the capability to mimic actual human conversation. An ever-growing set of source data, combined with clever algorithms, can power chat engines that pass the Turing test with an A+ and extra credit for being funny.
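The statistical mimicry at the heart of these models can be illustrated with a toy example. The sketch below is a crude word-level bigram chain, nowhere near GPT-3, but it shows the same principle: record which words tend to follow which in a corpus, then generate new text from those statistics. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the chain, picking a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A made-up "scraped" corpus of sweet nothings.
corpus = ("i love talking to you . i love your photos . "
          "you make me smile . talking to you is wonderful .")
model = train_bigrams(corpus)
print(generate(model, "i"))
```

Scale the corpus up to the whole internet and swap the bigram table for a billion-parameter neural network, and the output starts to sound disturbingly human.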

Deep Fake Video: the Final Frontier

The second arc of interest is the advent of the “deep fake”, a video generated by combining an original source with the target’s likeness. These videos aim to be as realistic as possible, literally putting words into people’s mouths, and sometimes other things. A well-built deep fake is very, very hard to distinguish from the real article. If you’re unconvinced, go check out the Tom Cruise deepfakes on YouTube and try to tell the difference. I can’t.



For now, deep fakes are carefully hand-crafted works requiring skill and patience, but social media companies are pouring ungodly amounts of money into perfecting face-altering filters and other ways of mangling our faces into a more presentable shape. These technologies will eventually combine and someday produce fully passing deep fakes that require only an iPhone and a three-dollar app.

All of these arcs of technological progress are powered by machine-learning algorithms. These algorithms have opened a new era of computing because they are excellent at sophisticated mimicry. Currently, their main magic trick is to take a lot of Things and then create a new Thing based on the common denominators of those Things. Another thing they do very well is classify Things and Other Things based on small differences, like shadows on X-rays. An ML algorithm can be excellent at detecting cats in pictures and, with sufficient source material, can even generate new pictures of cats based on the cat-like parts the source cats are generally constructed from. But it can’t tell you anything about the softness of a cat’s fur or whether it purrs when scratched behind the ear. This is why I dislike the moniker “Artificial Intelligence”: ML is mostly just dumb Mimicry writ Large.
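That classification trick can be shown with a deliberately dumb sketch: a nearest-centroid classifier on made-up two-feature data. It averages the examples of each class and assigns new points to the closest average. Real X-ray or cat detectors are deep networks, but the principle of sorting Things from Other Things by learned regularities is the same. All the data and labels below are fabricated for illustration.

```python
import math

def centroid(points):
    """Average the feature vectors of one class."""
    dims = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dims))

def classify(point, centroids):
    """Assign the point to the label whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# Made-up training data: (fluffiness, ear-pointiness) pairs.
cats = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.85)]
dogs = [(0.7, 0.3), (0.6, 0.2), (0.75, 0.25)]
centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

print(classify((0.85, 0.9), centroids))  # prints "cat"
```

Note what the classifier knows: two numbers per animal, nothing more. It has never felt fur or heard a purr, which is exactly the point.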

That’s it, man! Game over, man! Game over!

When all of these arcs of progress converge, we get the first chatbots that can gush about your cat photos, take the appearance of a handsomely greyed three-star general, actually hold their own end of a conversation, and learn what makes their victims tick by sourcing their every social media post and every Harlequin novel ever written. How hard is it to avoid falling victim to a romance weapon of this caliber? These Tinder-bots that will stalk the lovelorn lonelies of the internet may not even need to be perfect, because we are very good at filling in the narrative ourselves. Maybe they only need to be good enough to make us fall for those dreamy, perfect, computer-generated blue eyes.



If, or rather when, that happens, criminals will have solved the biggest issue that keeps online scamming a veritable cottage industry – scalability. When you need a human operator to scam a handful of victims, and the lead times from initial contact to final payout run to months or even years, the size of the racket is limited by the available hands. But if you could spawn a near-infinite number of bots that pass CAPTCHAs, write love letters, and steal hearts, you would be holding the launch keys to something infinitely more dangerous. You would have, in effect, weaponized love.

It might be that in the future, instead of telling our elderly parents how to log into Gmail, we will spend family dinners trying to convince them that their expatriate life-partner is a figment of some scam algorithm designed to expertly extract money from their bank accounts. Removing a virus from your parents' computer is easy, but having to break their heart to save their future? That’s a different affair altogether.

