
TransUnion Fraudcast Ep 11: Bot Attacks

Episode 11

In this episode of the TransUnion Fraudcast, Amanda Mickleburgh, director of Merchant Fraud Product at ACI Worldwide, joins us to discuss how organizations can mitigate bot attacks without needing to shut down systems and hurt legitimate consumers.

Jason Lord:
Welcome to the TransUnion Fraudcast, your essential go-to for the absolute linkages between the day's emerging fraud and authentication topics, trends, tropes and travails…delivered with all of the straight talk and none of the false positives.

I'm your host Jason Lord, VP of Cross-Solutions Product Marketing.

In each episode, we narrow in on a specific subtopic within the fraud and authentication universe, bringing on a special guest to help us dive in while keeping it high level enough for a general audience member like me.

And today we'll be talking about that scourge of the online world: bots.

These automated computer programs are used in every aspect of our lives. In fact, it's estimated that 40% to 50% of all Internet activity is bot activity.

Now, not all bot activity is nefarious, but fraudsters are becoming more and more proficient at using them for their own purposes, including PII harvesting, account takeover fraud, and credential stuffing.

Bot attacks cost the average business 3.16% of their annual revenue.

So how can organizations address bot attacks without having to shut it all down and hurt legitimate consumers?

Here to help me explore this topic is Amanda Mickleburgh, Director of the Merchant Fraud Product at ACI Worldwide.

Amanda has more than 15 years’ experience working in fintech, with expertise in payment fraud detection and prevention strategies.

Amanda, welcome to the Fraudcast.

Amanda Mickleburgh:
Thank you. Thanks for having me.

Jason Lord:
Now I gave a little bit of a definition up front, but I would love to hear from your perspective, what are bot attacks and why are bot attacks a common choice for fraudsters?

Amanda Mickleburgh:
Yeah, I mean, the term bot obviously covers a wide range of activity, because essentially it's sets of algorithms, usually automated scripts, that are set up to carry out very specific tasks.

And these different types of bots are utilized differently at different parts of the payment value chain, and not only there: they're used to target websites to accomplish a wide variety of different activities and instructions, fraud being one of them. Fraud is an example of when a bot or automated script is set up either to gather information from a website or, almost like a DDoS attack, to throw lots and lots of transactions at a website, whether to obtain goods fraudulently or to disrupt the day-to-day activities of that merchant.

Jason Lord:
And where in the consumer lifecycle are bots normally deployed?

Amanda Mickleburgh:
Well, in the payment life cycle, if I can just talk about it from that angle: at ACI we would typically see this at the checkout, or we would certainly see it attacking one of our merchants through an increased level of activity where, when you look into it, there are commonalities and repetition in the types of data we're seeing, normally in a very short space of time.

Jason Lord:
Do merchants normally understand that a bot attack is taking place when it's taking place, or is this something they're finding out after the fact?

Amanda Mickleburgh:
Yeah let me just think about the way you’re asking me that question…

It's really difficult, because from a consumer perspective, the consumer won't see it directly, except that they could certainly notice a reduction in processing speeds.

They can quite often see checkout speeds slowing as a result of bot attacks on a particular merchant's website, and that can certainly affect them.

From a merchant's perspective, it would normally be an increased level of activity that arises very suddenly. When we look into that activity, we see there are normally some commonalities in the types of transaction and the types of data we're seeing, and quite often that data is relatively nonsensical.

Jason Lord:
Well, it brings up a good point, because in detecting bots you have to weigh it against what you would determine to be normal consumer behavior.

And you've indicated there are some signals that might say, oh, this might be a bot attack, as opposed to normal human traffic.

What are examples of these types of signals you might look for that might indicate a bot attack?

Amanda Mickleburgh:
Quite often it would be around, you know, things like the address inputs, IP details, device IDs.

You know, we would see activity quite often coming from similar devices, or there may be a clutch of devices or IPs or a combination of different data elements that feature in each transaction whilst the remainder of the data can vary.

We would start to see, through analysis, some patterns in that activity.
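
To make that concrete, here is a minimal illustrative sketch of the kind of commonality check Amanda describes: grouping recent checkout attempts by shared device ID or IP within a short window and flagging clusters that are too dense to be human. This is an assumed example, not ACI's or TransUnion's actual detection logic; the field names, window and threshold are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

# Hypothetical checkout attempts; field names are illustrative only.
# Each attempt: {"ts": timezone-aware datetime, "device_id": str, "ip": str}

WINDOW = timedelta(minutes=10)   # assumed look-back window
CLUSTER_THRESHOLD = 25           # assumed max attempts per shared attribute

def flag_shared_attribute_clusters(attempts, now=None):
    """Flag device IDs or IPs that appear in an unusually large number of
    checkout attempts within a short window, a classic bot signature."""
    now = now or datetime.now(timezone.utc)
    recent = [a for a in attempts if now - a["ts"] <= WINDOW]

    counts = defaultdict(int)
    for a in recent:
        counts[("device_id", a["device_id"])] += 1
        counts[("ip", a["ip"])] += 1

    # Any attribute value shared by more attempts than a human plausibly
    # generates in the window gets routed to review or step-up checks.
    return [(attr, value, n) for (attr, value), n in counts.items()
            if n >= CLUSTER_THRESHOLD]
```

In practice a check like this runs alongside many other signals; a single shared IP (a corporate NAT, for example) is not proof of a bot on its own.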

Jason Lord:
And I've heard you describe in the past that bots might be considered a form of synthetic fraud because they have a commonality in the data being used, or how it's being used.

Can you expand on that a little bit more?

Amanda Mickleburgh:
Yeah, sure.

And this is really the crux, I think. Bots have certainly moved on a little bit in their level of sophistication.

But what we're definitely seeing is an increase in the use of data that is not related to an individual.

So this would be data that's used to create a synthetic identity, an identity that doesn't exist. It isn't Amanda Mickleburgh, per se; there might be some elements of data that do relate to me, but then there will be elements of data that have absolutely no relationship to me at all.

And this is where using and leveraging data effectively allows us to identify these types of activity: to have machine learning models and algorithms ourselves that can look specifically for this type of anomalous activity and learn to mitigate it as quickly as it arrives.
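
As a rough illustration of the point about unrelated data elements, the toy sketch below scores how internally consistent the identity data accompanying a transaction is; synthetic identities tend to score low because the elements don't relate to one another. Every field name and rule here is a hypothetical simplification, not a description of any vendor's model.

```python
# Toy consistency score for identity data accompanying a payment.
# Field names and rules are hypothetical; real models use far richer data.

def identity_consistency_score(txn: dict) -> float:
    """Return a 0..1 score; low scores suggest the data elements are unrelated."""
    checks = []

    # Does the email local part resemble the cardholder name?
    name_tokens = {t.lower() for t in txn.get("cardholder_name", "").split()}
    local_part = txn.get("email", "@").split("@")[0].lower()
    checks.append(any(t and t in local_part for t in name_tokens))

    # Do billing and shipping countries match?
    checks.append(txn.get("billing_country") == txn.get("shipping_country"))

    # Does the phone prefix match the billing country? (toy lookup table)
    prefixes = {"US": "+1", "GB": "+44", "DE": "+49"}
    expected = prefixes.get(txn.get("billing_country", ""), "")
    checks.append(bool(expected) and txn.get("phone", "").startswith(expected))

    return sum(checks) / len(checks)


txn = {"cardholder_name": "Jane Smith", "email": "qx91kd@example.com",
       "billing_country": "US", "shipping_country": "GB", "phone": "+49 30 1234"}
print(identity_consistency_score(txn))  # 0.0: the elements don't hang together
```

A low score on its own wouldn't block a payment; it would be one feature among many feeding the kind of machine learning models Amanda mentions.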

Jason Lord:
Synthetic fraud is a topic that we've discussed on the Fraudcast in the past. It's definitely exploding, both in how common it is and in the fraud losses involved.

So am I understanding correctly that the bots are maybe harvesting PII, combining that with information from other sources to create these synthetic identities, and then attacking at a high volume? Is that the way to think of it?

Amanda Mickleburgh:
Yeah, absolutely.

And I think, you know, it's an industry in itself; the obtaining, testing, utilizing and execution of a bot quite often involves lots of different entities.

So, you know, there'll be somebody that's harvesting the information, somebody that's testing the information that's stolen, and then there'll be another entity that's injecting additional information into that harvested data.

So all in all, that creates this synthetic identity that’s then utilized within the script, which is automated and used to attack a merchant's website.

Quite often the script has credit card details within it, or alternative forms of payment, that can then be used to exploit that merchant's website and attempt to obtain products and services. But equally, a large part of the bot attacks we quite often see is really card testing, where card details have been obtained and those cards are being tested for future sale to other fraud groups.

Jason Lord:
Interesting. So it's sort of a low-level activity that helps in later instances of fraud.

Amanda Mickleburgh:
Yeah, it kind of exploits the benefit of bots.

I mean, there are bots in the world and in the industry that are used for good purposes. For mass communications, as an example, bots were used through WhatsApp groups, et cetera, to communicate information about the pandemic and how to manage it.

So there's lots of examples of how bots can be used positively, but it's the technology that's being exploited with data, false data, that is being used for nefarious purposes.

Jason Lord:
Right. You sort of touched on this already, but how does the bot activity differ between the account creation and card transaction stages, and how are those attacks related to one another?

Amanda Mickleburgh:
Yeah, I mean, it's just different data that's being used, but the purpose is the same: it's a sort of time-saving initiative. If it's through account creation, it's quite often used to obtain account details which go on to later form account takeover activity.

So fraudsters take over an account, a dormant account as an example, and everything appears legitimate, but when you look at the data associated with that transaction, you see that actually some of it is not related to the original account holder. Then at the checkout level, it can quite often be the synthetic identities that are created.

So you sort of have two scenarios, and this is where it's very important to look at the data that is associated with this activity. It's about almost pinpointing the pain point of the bot attack, because each one is going to be used differently.

Therefore, to mitigate it, you need to understand how that problem is being created. What data is being utilized and leveraged?

Where is maybe the gap that is leading to this information being utilized and leveraged?

Jason Lord:
What you're describing here is a vulnerability that exists in a lot of fraud organizations: they tend to view these attacks as disparate or unrelated activities, when in fact they're part of a larger plan.

As you think about fraud organizations in general, how would you recommend they start to view this as integrated instead of as separate types of activities?

Amanda Mickleburgh:
Yeah, I think it's really important, actually, certainly from an internal comms perspective, that teams do share information and work together, because quite often it could be different parts of a business that are solving for an issue.

So it can be an IT or technology team looking at the security of the website. You know, does that website protect consumers from data being obtained, again, nefariously?

Is it that there's a script that's been injected into a website that is actually gathering information as customers use that website? And is that data then going on to be used from a merchant's fraud, checkout and payment perspective, which is causing problems and generally would be overseen by a payment or fraud team? They can be very different teams.

And it's also important to note that, equally, it's very difficult, because a lot of synthetic identities are just that: they're not attached to genuine people.

The information is probably obtained from elsewhere, but again, it's just very important that payment and fraud teams that see bot attacks at the checkout level are making their technology teams aware of that as well.

So it's really down to communication…and equally, from an organizational perspective, having membership in industry organizations, the Merchant Risk Council being one (there are others), to, again, collaborate and share knowledge and information.

Because, I mean, these are issues that are recognized industry-wide, and sometimes it's quite helpful to have contacts in these places to help you understand more about mitigation, as an example, and also future protection.

Jason Lord:
Ideally, fraud prevention isn't thought of as a differentiator, but a common good, right?

We're all sharing information because we all have a common need to prevent these types of activities.

Amanda Mickleburgh:
We do and we should, absolutely.

Do we always? There's still work to be done on that one.

Jason Lord:
Now, if I'm a business owner, and I'm listening to this podcast, I may feel a little nervous that in preventing broad bot activity, essentially I'm having to shut down good consumer activity.

So how do I mitigate the negative impacts of nefarious bot activity without ruining my customer experience for legitimate consumers?

Amanda Mickleburgh:
Yeah, and I think this is the key part.

You know, bots are probably always going to exist, and possibly always going to get more complex in the way that they are made, put together and utilized.

So we're always going to have to be mindful.

I think it is very important to look for the good transactions, and that's a very good approach, because bots are always going to be around us and always going to be problematic.

And you're absolutely right: The last thing you want to do when you’re experiencing a bot attack is to completely shut down your website.

So look, and continue to look, for the good transactions, and be able to do that using technology at a multi-layered level.

So this is lots of different types of technology working together to understand the data that is associated to that transaction…it’s really important, because ultimately what we're trying to establish is whether the digital identity of the person presenting for payment is legitimate.

And the only way to really, truly do that is to understand the data that accompanies that transaction.

Now, that data could be gained from the minute that customer interacts with the website.

So the minute they log on to maybe an account, how they navigate through the website, how they add things to their basket, how they browse, how they enter details onto the checkout page.

All of these types of, you know, normal activity, event-based activity, that ordinarily and in isolation you wouldn't really necessarily regard as helpful can actually be really helpful when you're trying to understand who the genuine customer is.

And equally, when it's a bot attack, invariably most of this data won't check out. It won't make any sense because the data is unrelated.
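
To illustrate how those ordinary session events can become useful signals, here is a small assumed sketch that derives a few behavioral features from a session's event log. The event names and the idea that a near-instant path to checkout is suspicious are illustrative assumptions, not any platform's actual schema.

```python
# Derive simple behavioral features from a time-ordered session event log.
# Event names ("page_view", "add_to_basket", "checkout_submit") are hypothetical.

def session_features(events):
    """events: list of (timestamp_seconds, event_name) tuples, time-ordered."""
    first_ts = events[0][0]
    submit_ts = next((ts for ts, name in events if name == "checkout_submit"), None)

    return {
        "page_views": sum(1 for _, name in events if name == "page_view"),
        "added_to_basket": any(name == "add_to_basket" for _, name in events),
        # Humans take time to browse and type; a checkout submitted seconds
        # after the first interaction is a classic automation tell.
        "seconds_to_checkout": (submit_ts - first_ts) if submit_ts is not None else None,
    }


events = [(0.0, "page_view"), (0.4, "add_to_basket"), (0.9, "checkout_submit")]
print(session_features(events))  # a suspiciously fast run through the funnel
```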

Also, the frequency at which that data is coming in is an indicator of a problem.

So make sure that the level of data, and the level of technology being used to assess transactions, can be adjusted in real time as well.

So when bot attacks do occur, if you need to implement a specific strategy to get you through that period, then that would be the time to do it.
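
One way to picture that real-time adjustment, offered purely as an assumed sketch rather than how any particular platform implements it, is a simple rate monitor that steps checkout verification up to a stricter strategy while attempt volume is anomalously high and relaxes it again once the spike passes.

```python
import time
from collections import deque

class AdaptiveCheckoutStrategy:
    """Toy rate monitor: step up verification while checkout attempts spike.

    The window size, threshold and strategy names are assumptions for
    illustration; a real deployment would tune them against baseline traffic.
    """

    def __init__(self, window_seconds=60, spike_threshold=200):
        self.window = window_seconds
        self.spike_threshold = spike_threshold
        self.timestamps = deque()

    def record_attempt(self, ts=None):
        ts = time.time() if ts is None else ts
        self.timestamps.append(ts)
        # Drop attempts that have fallen outside the sliding window.
        while self.timestamps and ts - self.timestamps[0] > self.window:
            self.timestamps.popleft()

    def current_strategy(self):
        # Under attack-like volume, require step-up checks (challenges,
        # extra identity verification) instead of blocking everyone.
        if len(self.timestamps) >= self.spike_threshold:
            return "step_up_verification"
        return "business_as_usual"
```

The point, echoing Amanda, is that the response to a spike is a temporarily stricter strategy for identifying good customers, not shutting the checkout down.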

Jason Lord:
I think if there's a thesis statement here, it's that the more data you have, not only can you identify potential bots, but you can also identify good consumers and hopefully reduce friction for them.

So it's not one or the other, it's not a zero-sum game, ideally, it's that you're doing both at the same time.

Amanda Mickleburgh:
Yeah, absolutely.

And whilst it's possible to still automate a lot of this, because we know, you know, human intervention is expensive, it could require some form of intervention to switch strategy in the moment when your website is incurring a bot attack, where you may need to put in a slightly more stringent strategy to identify your consumers than maybe you would at a steady-state, business-as-usual pace.

You know, bots can be programmed to be really quite malicious when it comes to defrauding websites. So you need to know what you would do in the event of that type of activity.

Whilst you won't necessarily understand every pattern in its behavior, you'll understand there's a general pattern; knowing what you would do to continue to identify your good consumers is an essential strategy to have, you know, up your sleeve.

Jason Lord:
Don't wait until the attack to figure out your plan; have a plan ahead of time.

Amanda Mickleburgh:
100%.

Jason Lord:
Amanda, thank you so much. This has been very insightful for me, hopefully for our listeners as well.

Thank you all for tuning in.

This is the last episode of Season 1 of the Fraudcast, and my last episode hosting before I hand it off to the very capable hands of Richard Tsai.

Trust me, you'll enjoy hearing from him and you'll learn a lot from him and this program.

In the meantime, for one final time, stay smart and stay safe.

TransUnion Fraudcast

Your essential go-to for all the absolute linkages between the day’s emerging fraud and identity trends, tropes and travails — delivered with straight talk and none of the false positives. Hosted by Jason Lord, VP of Global Fraud Solutions. 

For questions or to suggest an episode topic, please email TruValidate@transunion.com.

The information discussed in this podcast constitutes the opinion of TransUnion, and TransUnion shall have no liability for any actions taken based upon the content of this podcast.
