A chink in the fraud solution armor. Don’t cut corners.

The most important reason for this practice can be found in the very nature of fraud. When thinking about fraud and e-commerce, most people immediately picture a criminal using someone else's credit card data to make a purchase. In this scenario, at least two parties are being harmed: the merchant who will lose the merchandise, and the legitimate owner of the card being used. Sometimes the legitimate owner has no idea that his or her data is even being fraudulently used.

Now imagine this same fraudster trying to make an online purchase from a merchant employing a fraud solution with automatic turn-downs. In other words, transactions assigned a very high risk score are turned down, regardless of the reasoning, with no further inspection. Let's say this fraudster's strategy failed: he didn't do a convincing job of pretending to be the real owner of the card associated with the purchase. The data used to make this purchase is fed into the statistical model, the model detects the fraudulent behavior and assigns the transaction a high-risk score, and the fraudster's next transaction is then automatically turned down.
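The automatic turn-down described above can be sketched as a simple threshold rule. This is a minimal illustration, not a real fraud engine: the names (`score_order`, `decide`, `DECLINE_THRESHOLD`) and the toy scoring logic are assumptions made for the example.

```python
# Hypothetical sketch of an automatic turn-down rule: any order whose
# risk score exceeds a fixed threshold is declined with no human review.
# Names and values here are illustrative, not a real API.

DECLINE_THRESHOLD = 0.9  # assumed cutoff; real systems tune this value


def score_order(order: dict, risky_cards: set) -> float:
    """Toy risk score: cards seen in past fraud attempts score high."""
    return 0.95 if order["card"] in risky_cards else 0.10


def decide(order: dict, risky_cards: set) -> str:
    score = score_order(order, risky_cards)
    return "declined" if score > DECLINE_THRESHOLD else "approved"


risky_cards = {"4111-xxxx"}  # card flagged after the fraudster's attempt
print(decide({"card": "4111-xxxx"}, risky_cards))  # prints "declined"
```

Note that the rule has no notion of *who* is using the card; it only sees the data and the score, which is exactly what causes the problem in the rest of the story.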

So far, so good! The fraudster didn't succeed, and the legitimate owner of the credit card, whom we'll call John, had no problem with his credit card bill and didn't have to answer any calls to confirm whether or not he had made a purchase. And all of this was done automatically with minimal cost incurred.

Though everything worked smoothly up to this point, the underlying problems present themselves in the next step of the process.

Since there has already been an attempt to commit fraud using John's data, the model now views his data as high risk because it is associated with a high-scoring order.

But now John, the legitimate owner of the data, who is completely unaware of the attempted fraud on his account, learns about an amazing online special on refrigerators and decides to buy one!

The model obviously knows nothing about a special on refrigerators, but it recognizes the purchase data as high risk. Thinking it's another attempt at fraud, the software assigns a high score to the transaction, which is then automatically turned down.

Shocked that his purchase was rejected, John tries to buy his dream refrigerator again, and the model once again denies it—especially since two attempts were made to use the data in rapid succession.

John is now stuck in a vicious circle; although he was saved from having his card information used fraudulently, he is now unable to make a legitimate purchase despite the fact that he’s done nothing wrong.

Now you can see why automatically declining risky purchases is not the best practice for keeping prospective customers. Often, risky orders are placed by good customers who have been the victims of fraud in the past. Punishing the victims of data theft by automatically turning down their orders yields a lot of frustration and certainly does nothing for a retailer's image.

If the merchant selling refrigerators had a fraud-prevention system without automatic declines, an analyst would have checked the order and recognized its legitimacy, John would have purchased the product he wanted, and the store would have made money. Besides gaining profit, though, the store would also be gaining a customer with whom they could establish a long relationship rather than frustrating him and losing him for good.

This is just one reason we don't believe in automatic turn-downs as an anti-fraud solution. But there is another, more technical reason why this process cannot work efficiently on its own.

I mentioned earlier that the model would recognize the data as suspicious because of the high score associated with past orders, meaning the chance that this high score will apply to future orders is quite high. This remains true, but the ideal decision-making process teaches the statistical model to distinguish when data is being used fraudulently from when it is being used by the legitimate customer.

This would obviously be an outstanding improvement. To understand how we can achieve it, we have to look more closely at the tool known as a statistical model.

When we talk about statistical models and artificial intelligence, it may seem like these are all plug-and-chug methods where we input a set of data at one end, and out comes a number that is the perfect summary of fraud at the other end, and all this is done automatically and at minimal cost. One would also assume that we can improve the performance of this system by simply adding more data.

In very simple terms, yes, this is how it happens. While it is true that we can increase the amount of data that goes into the statistical model, and that this will make the process of weeding out fraud from good purchases more efficient, adding data is far from easy.

Models identify patterns. They examine huge numbers of variables in the data presented and look for patterns that suggest either a risky or a good order. To make it easier to understand how the model works, let's go back to John.

John rarely buys online, and when he does he normally uses his cell phone as he is rarely home. Because of this, he periodically changes his shipping address from work to home depending on the value of the purchase and whether or not someone will be there to accept delivery.

When our model is being developed, it learns about these patterns and notes that orders placed with the same credit card and the same device, mobile or otherwise, are likely made by the card owner and not some fraudster. This is used in conjunction with other data to create a formula that results in scores that reflect the true likelihood that a given order is fraudulent.
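The idea of combining patterns like these into a single score can be sketched as a weighted (logistic) combination of features. This is a simplified illustration under stated assumptions: the feature names, weights, and bias below are invented for the example; a production model learns its weights from historical orders.

```python
import math

# Illustrative sketch: combine binary pattern features into a
# probability-like fraud score via a logistic function.
# Feature names and weights are hypothetical, not from a real model.

WEIGHTS = {
    "same_device_as_past_orders": -2.0,  # familiar device lowers risk
    "shipping_address_changed": 0.5,     # mild risk signal on its own
    "card_seen_in_past_fraud": 3.0,      # strong risk signal
}
BIAS = -1.0


def fraud_score(features: dict) -> float:
    """Weighted sum of features squashed to the (0, 1) range."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# John ordering from his usual phone, shipping to work instead of home:
score = fraud_score({
    "same_device_as_past_orders": 1,
    "shipping_address_changed": 1,
    "card_seen_in_past_fraud": 0,
})
print(round(score, 2))  # a low score: the familiar device outweighs the address change
```

The familiar-device feature pulls the score down even though the shipping address changed, which is exactly the kind of interplay between patterns the model learns.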

An important detail is that the model learns both the patterns associated with GOOD purchases and those associated with FRAUDULENT ATTEMPTS. In other words, it needs to know whether or not each past purchase actually turned out to be fraudulent. This is a key concept that justifies our decision not to automatically turn down orders.

To make it easier to understand, let's go back to John and his refrigerator.

If an order is automatically turned down, there is no way to know whether that purchase was really attempted by a fraudster. Nobody bothered to contact the alleged customer, nor was any of the data in the order actually checked. In our little story we know it was John who bought the refrigerator but, in the real world, all we know is John's data and his score. That's it. The model may perform excellently, but it can still make mistakes. Remember that the model was created based on known patterns of fraudulent behavior, not on all of the possible patterns fraudsters might create.

Now let's look at another scenario, where high scores go through manual review. Way back at the beginning of this story, the purchase made with the stolen data would have been blocked by the analyst checking the order. The same type of manual check would have resulted in John's legitimate refrigerator purchase being approved.

Though both methods certainly halt fraud, the second one lets us know for sure what was a fraud attempt and what was just John buying his refrigerator.

In short, by adding manual analysis, we were able to identify which patterns came from a legitimate purchase, like John's refrigerator, and which came from a fraud attempt, and, with this knowledge, improve the statistical solution we offer our clients. This is possible because, once orders are flagged as fraudulent or legitimate, the model can learn from them.
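The review-and-relearn loop described above can be sketched as routing high-score orders to an analyst instead of auto-declining them, with each verdict becoming a labeled example for the next training run. All names here (`handle_order`, `REVIEW_THRESHOLD`, `labeled_examples`) are hypothetical, chosen for the illustration.

```python
# Sketch of a manual-review feedback loop: high-score orders go to a
# human analyst rather than being auto-declined, and the analyst's
# verdict becomes a labeled example the model can later be retrained on.

REVIEW_THRESHOLD = 0.8  # assumed cutoff for sending an order to review
labeled_examples = []   # (order, is_fraud) pairs for the next training run


def handle_order(order: dict, score: float, analyst_verdict_is_fraud: bool) -> str:
    if score >= REVIEW_THRESHOLD:
        # Manual review: the analyst contacts the customer or checks the data,
        # and the outcome is recorded as a training label either way.
        labeled_examples.append((order, analyst_verdict_is_fraud))
        return "declined" if analyst_verdict_is_fraud else "approved"
    return "approved"


# The fraudster's attempt is caught and labeled as fraud...
print(handle_order({"item": "tv"}, 0.95, analyst_verdict_is_fraud=True))       # "declined"
# ...while John's refrigerator, despite its high score, is approved
# and labeled as a good purchase.
print(handle_order({"item": "fridge"}, 0.92, analyst_verdict_is_fraud=False))  # "approved"
```

Unlike the automatic turn-down, every high-score decision here produces a ground-truth label, which is precisely the knowledge the model needs to improve.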

Furthermore, manual analysis would have provided a friendlier handling of John's order, approving it despite the high score resulting from prior fraudulent attempts to use his data.

Automatic turn-downs may be less costly in the short term, as they seem to eliminate the need for a skilled analytical team, but the costs of a purely automated solution are more than monetary: these systems do not create the knowledge needed to improve or enhance the statistical model, and they can damage relationships with customers. To gain customer loyalty and ensure satisfaction, a manual system is a priceless investment.
