
TransUnion Fraudcast Ep 9: Combating Synthetic Fraud

Episode 9

In this episode of the TransUnion Fraudcast, fraud analytics expert Colin Dew-Becker joins us to discuss the data organizations can use to combat the increasing threat of synthetic fraud and why synthetics are not merely a financial industry problem.

Jason Lord:
Welcome to the TransUnion Fraudcast, your essential go-to for the absolute linkages between the day’s emerging fraud and authentication topics, trends, tropes, and travails delivered with all the straight talk and none of the false positives.

I'm your host, Jason Lord, VP of Global Fraud Solutions.

We're closing out 2023 and one of the topics we've discussed quite a bit this year is synthetic fraud, and with good reason.

Total US lender exposure from synthetic fraud has reached $3 billion, according to TransUnion analysis, its highest level ever. And bust-out, the stage of synthetic fraud where the fraudster maxes out the line of credit with no intention of repaying, accounts for $1 billion in annualized losses.

And given that the synthetic identity problem is projected only to increase year over year, the bad news is we can also expect those losses to grow in 2024.

And while we've talked with Emily Sherman previously about the market conditions that have led to the growth of this type of fraud –– great episode, I recommend checking it out if you haven't already –– we haven't spent as much time discussing how organizations can combat the problem.

And though it's often framed as a problem within the financial services sector, in fact, synthetic fraud is becoming an issue for many other verticals outside of the financial industry.

Fortunately, we have an expert with us today to help unpack what can be done to address synthetic fraud, Colin Dew-Becker. Colin has been working in analytics for 15 years, including building machine-learning models to identify talent amongst professional athletes and to assist in sports betting.

But he's now using that brainpower to develop fraud scores and attributes, leveraging credit data to address synthetic fraud, bust-out fraud and loan stacking.

Colin, welcome to the Fraudcast.

Colin Dew-Becker:
Really nice to be here.

Jason Lord:
Alright, let's dive right into it.

When you're thinking about synthetic fraud and you're talking and working with organizations, what types of signals should organizations consider when they're looking to protect themselves against synthetic fraud?

Colin Dew-Becker:
Synthetic fraud is a difficult beast to contain, because there are lots of different ways fraudsters will do it. And it depends on the industry you're in, what type of engagement you have with a consumer, and what kind of processes you have in place to deal with fraud in general.

So a lot of my focus is on financial services, you know, segments of the business.

So I'm looking at consumers who are applying for loans and credit cards: auto loans, personal loans, mortgages, whatever it might be.

But also we're looking at rental screening, we look at the gaming industry and we look to see how these different types of fraudsters behave in those industries.

But when we're looking at financial services, I mean that's where the most money is.

Loans can get large, there are lots of people looking for money, and the losses can get much higher than you might think, given how this fraud happens. And what we tend to see is that fraudsters who are building up these synthetic identities leave a kind of trail, and there are a few different components of that trail that we track.

So the first is looking at shared identity elements; that's sort of the hallmark of what it means to even be synthetic. To create a synthetic identity, you are taking PII elements, personally identifiable information elements.

So…Social Security number, name, address, phone number, email address, anything like that.

And you're combining it with other elements from somebody else or that you made up or whatever, to create a completely fabricated identity.

And when you do that, in order for it to look somewhat realistic, you tend to try to use real elements.

You're not just making up, you know, 123 Main Street and US City, America, right? You're trying to use real things, because that's going to help you pass KYC checks.

It's going to help you kind of validate who you are. Again, who you are isn’t a real person, but it's going to help you kind of validate that you might be a real person.

Jason Lord:
And part of the reason that happens is because when they're doing these identity checks, these KYC checks, they're checking the individual elements.

Is this name real?
Is this address real?
Is this Social Security number real?

But they're not necessarily looking at it in combination, is that right?

Colin Dew-Becker:
That's right.

And I mean partly it can be difficult to do that, it can be expensive to do that and you know, you're trying to limit the friction for those consumers who are not fraudulent, because most consumers aren't.

Synthetic fraud is very small in the grand scheme of things; it's a very small percentage of consumers who are synthetic. It's just that they have an outsized impact on losses due to fraud.

So you want to capture them, but you also are trying to limit the friction that you're applying, for people who are just coming in who could be completely normal, but they have elements of their identity that might raise a flag.

And so what happens with synthetics is you get people who don't make just one synthetic identity –– I mean, maybe someone does –– but generally speaking, you make dozens.

You make hundreds, thousands, and when you do that, you tend to leverage…you're going to blend different elements together, but you might leverage the same address, you might leverage the same Social Security number.

And maybe it's not all at the same time, so you use one Social Security number this month and you don't use it again for another six months or a year, whatever. But if you use it every year over a decade, that's a lot of times for the same Social Security number to show up.

Same thing for an address. And so what we see is, someone will make an identity and a credit file will show up at the bureau, because you come in and you apply for a loan and you don't exist.

Like you know, all these elements are in a new combination and we can't find you, but we're required to make a credit file for you.

So you make that file and we look at it and what we see is, oh, this person has an address that 500 other people have, which is a little suspicious.

Jason Lord:
So when we're getting to signals, this is the first signal you're speaking of: Does this address, or does this PII, exist with other identities out there?

And does it exist at either a velocity or a volume that seems unusual?

Colin Dew-Becker:
Exactly, exactly.

Again, the flip side is that people move; you know, you're going to have multiple addresses, and you're going to share an address with other people. You move into someone's house? Well, they have that address too, right?

It just becomes unusual when it's 500 people at the same address and we can track when was that address first reported on your credit file…and we see that address getting reported 500 times in a six-month period.

That's unusual.

That's a flag for us.

We do the same thing where we look at people who are sharing Social Security numbers, people who are sharing phone numbers.

It's not that these things can't happen.

So, like with a Social, you might have an input error, what we call fat-finger error, where you put in a 9 instead of a 0, and you might look like you have a Social that somebody else has. But that means you're going to share it with a couple of people, three or four people, just due to these kinds of errors.

It's when you get to the point where you have 50 people you're sharing it with. That's a little abnormal.
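
To make that shared-element signal concrete, here is a minimal sketch of the kind of volume and velocity check Colin describes. The record layout and thresholds are hypothetical; real cutoffs would be tuned on bureau data.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical thresholds: a few identities can legitimately share an element
# (fat-finger typos, shared housing); dozens or hundreds cannot.
SHARED_COUNT_THRESHOLD = {"ssn": 5, "address": 50, "phone": 10}
VELOCITY_WINDOW = timedelta(days=180)   # the six-month window mentioned above
VELOCITY_THRESHOLD = 100                # first-reported events inside the window

def flag_shared_elements(records):
    """records: iterable of (identity_id, element_type, element_value, first_reported)
    tuples, one per credit file carrying that element. Returns suspicious elements."""
    first_seen = defaultdict(list)
    for _identity_id, etype, value, first_reported in records:
        first_seen[(etype, value)].append(first_reported)

    flags = []
    for (etype, value), dates in first_seen.items():
        # Volume signal: how many distinct credit files carry this element?
        if len(dates) > SHARED_COUNT_THRESHOLD.get(etype, 5):
            flags.append((etype, value, "volume", len(dates)))
        # Velocity signal: many first-reported dates packed into one window.
        dates.sort()
        for i, start in enumerate(dates):
            hits = sum(1 for d in dates[i:] if d - start <= VELOCITY_WINDOW)
            if hits >= VELOCITY_THRESHOLD:
                flags.append((etype, value, "velocity", hits))
                break
    return flags
```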

Jason Lord:
Well, and the point you make here, which I think is true across lots of types of fraud, is that when somebody is showing these risk signals, it doesn't mean they're definitely a fraudster.

What it means is you need to apply friction in these specific instances, while at the same time allowing those legitimate consumers and transactions to move through with greater ease so that they're not caught up in the same net as the potential fraudsters.

Colin Dew-Becker:
Also that, you know, one element that is suspicious is not, you know, the end of the world.

You know, when we are devising solutions to identify synthetic fraud, we are combining all these different factors where okay, you have one thing that looks suspicious, fine, that might be enough to check it out. But if you have 10 of those things now, you're at a higher risk level.

And so you start kind of bucketing people into how [at risk] are they of being synthetic based on this kind of information.
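
As a rough illustration of that bucketing idea, here is a toy tiering function. The cutoffs are invented for illustration, not TransUnion's actual model.

```python
def risk_bucket(suspicious_flags: int) -> str:
    """Toy tiering of synthetic risk by the number of suspicious factors present."""
    if suspicious_flags == 0:
        return "low"        # no friction needed
    if suspicious_flags <= 2:
        return "review"     # one or two factors might be enough to check it out
    if suspicious_flags < 10:
        return "elevated"   # several factors: step up verification
    return "high"           # ten or more factors: treat as likely synthetic
```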

The other area in terms of indicators that we look at…so that was kind of the shared identity element aspect.

The other part is we really get into your credit behavior, because our focus generally is on synthetics who are a risk to financial institutions.

They're going to come in.
They're going to get some loans.
They're not going to pay them back and you're not going to find them because they don't exist, right?

That’s a lot of financial risk.

Again, I mentioned like there are synthetics who have other interests.

They're not just trying to defraud a bank, and you can design products to find really any synthetic, anyone who's trying to create a fake identity.

You can design those products, but what we do generally is we try to design products that help you find synthetic fraudsters who are going to be problematic.

Jason Lord:
Well, and let's talk about that for just a second.

Because it's actually something we haven't addressed before.

Sometimes financial institutions look at it as good synthetics and bad synthetics, which might seem counterintuitive. A “good synthetic” might be somebody who might not otherwise have access to credit but fully intends to use that credit for good purposes, and so may not be a credit risk; whereas a “bad synthetic” never intends to repay, and the whole point is to build up credit to defraud.

Is that the way that you look at it when you're working with financial institutions?

Colin Dew-Becker:
I think it's definitely the way financial institutions look at it; we hear that language a lot.

There are regulatory reasons why even good synthetics are a problem.

Jason Lord:
Sure, a compliance officer may not be satisfied with the term “good synthetic”; all synthetics are bad.

Colin Dew-Becker:
That's right.

So there's that aspect, and the other part of that bucket of the “good synthetics”, quote-unquote “good,” is that you don't actually know what you're looking at.

So again, we use the word “malicious.” We use buckets for all of that, but “malicious” means, yeah, you're not going to pay them back.

You are coming in and it's a way to basically attack a bank and you're trying to get money from them, whereas the “goods” aren't doing that.

They want credit because they want to buy a house, or they want a credit card or whatever, and they do intend to pay it back –– but what you miss when you're looking at a synthetic identity is you're not seeing their real credit risk.

So you could come in and you have a credit score of 750 based on a fake identity, and now you're able to go and get loans with lower interest rates or higher credit limits and you aren't actually the type of consumer who should be getting those, based on your actual credit risk.

So that's sort of the thing where there's a credit risk aspect to it.

So they're committing fraud and it's boosting their credit profile as opposed to committing fraud genuinely just to take as much money as possible away from you.

But when we're looking at those kinds of consumers and we look at all those, I mean our products are kind of agnostic to your quote unquote “good”-ness.

But what we are interested in is, are you exhibiting behaviors that are indicative of someone who is trying to build up that credit score?

Because, as I mentioned earlier, you make this synthetic identity and you go and apply for a credit card, for instance.

You don't have a credit file, you don't exist. So the bank will say, well, you don't exist; we're not going to offer this deal.

But just in the application, you've now created this consumer report for this synthetic identity.

That report has nothing on it, it has this one inquiry that they've applied for a loan.

Well, now they have a file.

So what we often see, and it's a very typical thing we've seen over 10, 15 years of people doing this, is you now take that identity with no history and you add it as an authorized user to another identity. That other identity is probably also synthetic, but not necessarily; it has been around for a while, it has a bunch of credit cards that have been getting paid off, and it has a really good credit score.

And you add yourself as an authorized user –– well, not add yourself exactly, but you work with that other identity and get added as an authorized user to one or two of their credit cards, whatever it might be –– and that will very quickly boost your credit score.

So you'll go from having, like, nothing –– I don't actually know how it works when you have a brand-new credit file –– but you go from effectively no score and within a six-month period with just a few authorized user trades where they're getting paid off and they have a long history of getting paid off, you can get into that 650-700 range pretty quickly.

And now suddenly you're going to have access to credit cards.

And so now instead of doing an authorized user trade, you can just get your own credit cards or personal loans, and at that point you start now getting those and pay them off.

So you're building up kind of the history and integrity of a credit file, so that now that credit score goes from 650 to 700 to 750, and you can start getting bigger loans.

You can get personal loans, you can get auto loans, you can get higher-limit credit cards…and now we're getting to that bust-out scenario, which exploits a lag: when I get a new credit card, it doesn't show up on my credit report for a month, give or take.

If I go out with my 800 credit score and apply for, you know, six high-value credit cards, and I take out a couple of auto loans and some personal loans, I might have over $100,000 in outstanding debt once I max out the cards.

I basically take the cards and never pay them off, I don't pay the personal loans back, and now you're out of luck. And again, go try to enforce anything against someone who doesn't exist.

Jason Lord:
Yeah, the signal that you're talking about here, if I'm understanding you correctly, is what's sometimes referred to as trade-line velocity: these host accounts that are, you know, legitimate, they show up as good credit, but then other accounts get attached to them.

Sometimes they recycle the accounts, sometimes they use credit-repair techniques to bring an account back. But is the signal we're looking for, if I'm hearing you correctly, the number of credit lines attached to that single sort of host?

Colin Dew-Becker:
Yeah, the first thing that we look at is: Was your first tradeline an authorized-user tradeline?

Because that's just a very standard way to come in. Not every synthetic comes in that way, but it's just a very quick way to build up your credit score.

So that's a huge indicator. Again, all those shared identity elements? Big indicator. Another big indicator is what's their authorized user activity?

Did they start –– was that their first trade? Did they have a lot of authorized user trades?

That kind of behavior is just what you'd do if you were trying to make your credit score very strong, very quickly.
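
A minimal sketch of the two authorized-user indicators Colin describes might look like this, with a hypothetical Tradeline record standing in for real bureau fields:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tradeline:
    opened: date
    is_authorized_user: bool  # illustrative stand-in for a bureau ownership code

def authorized_user_signals(tradelines: list[Tradeline]) -> dict:
    """Computes the two indicators described above: was the first trade an
    authorized-user trade, and what share of all trades are authorized-user."""
    if not tradelines:
        return {"first_trade_is_au": False, "au_share": 0.0}
    ordered = sorted(tradelines, key=lambda t: t.opened)
    au_count = sum(t.is_authorized_user for t in ordered)
    return {
        "first_trade_is_au": ordered[0].is_authorized_user,
        "au_share": au_count / len(ordered),
    }
```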

Jason Lord:
It's a good signal that that might be the reason.

Colin Dew-Becker:
Yeah.

Jason Lord:
Now, when we hear about types of data that are used to fight synthetic fraud, we often hear the acronyms FCRA and GLBA.

What do these acronyms mean, what's the difference between these types of data, and how do they enter into fraud fighting?

Colin Dew-Becker:
Sure. So the FCRA is the Fair Credit Reporting Act –– I hope I'm right on all these by the way, I'm like 95% sure I have all the names right, but there's a lot of acronyms –– so, Fair Credit Reporting Act.

Jason Lord:
You're speaking very confidently, so I'm going to believe you.

Colin Dew-Becker:
Yeah, that's what I've learned from lawyers. Just talk with confidence and it's fine.

So, the Fair Credit Reporting Act…I couldn't tell you when it was passed or anything, but it's, you know, designed to protect consumers’ data that's being leveraged mainly by credit bureaus and other financial institutions, and it's specifically focused on the data that shows up on your credit report. And the idea is that if that data is on the credit report, it can only be used in certain ways.

The consumer has to kind of know how it's being used, and they have to have the ability to dispute the information if the data is used in an adverse action –– so to, you know, reject somebody's credit application, you have to tell them why, and then the consumer can dispute that.

And there are just a lot of these processes in place to make sure that that data, which is, you know, highly sensitive data, is only being used for, quote unquote, “permissible purpose,” which for our purposes is going to be things like applying for credit and just kind of general engagements with financial institutions.

They have to have that purpose and they have to use it in a very specific way.

And so when we're building products to address any of these types of fraud, we often, as I was just talking about, are looking at data that's on the credit file.

Generally we talk about below-the-line data and above-the-line data on a credit report.

Below-the-line data is all the trade lines, all the loans, all the credit cards –– all of that data: any delinquencies, payment history, inquiry data.

What kind of loan is it? Is it an authorized-user trade? Is it their own trade line? All of that.

And then there's above-the-line data, which is identity data. So it's your name, Social Security number, address, all of that.

This is when I pivot to the GLBA slightly.

The GLBA –– and this is the name where I always get a little tripped up –– the Gramm-Leach-Bliley Act.

I just call it GLBA because those names just confuse me a little bit, but the GLBA is more focused on the identity elements.

Now, there's a weird exception, just to confuse things a little bit. The FCRA regulates everything on the credit report: below-the-line and above-the-line data, so the trade line data and the identity data.

Currently, there is an exception for that above-the-line identity data that says if it's being used for fraud prevention, then it's governed by the GLBA.

I will say that there is…I don't know all the details of this, you have to talk to legal people…but there is currently, I think, a process in place with the Consumer Financial Protection Bureau, CFPB, I believe that's right, where they're actually considering right now, changing the rules around that, where that header data would be regulated by the FCRA instead of the GLBA.

That would be a major change.

It would affect, you know, every credit bureau and then lots of non–credit bureaus as well that use this kind of data.

But for the time being, this is the breakdown of the credit report: Everything below, FCRA; everything above, GLBA. That's the easy way to think about it.

When we're building products to deal with synthetic fraud, we try to split the data apart and make products that are just using one type of data. Because that way, if I'm making a solution to address synthetic fraud that's using FCRA data, the only people who can use it are those with permissible purpose.

So they have to be a financial institution, in effect. It has to be during some process around credit applications or account monitoring or whatever it might be; they have to have that purpose.

So if you were outside of financial services and you have a risk of synthetic fraud, that FCRA product isn't going to work for you, you can't use it.
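
One hedged way to picture that split is a field-to-regime mapping that gates what a given caller can use. The field names and gating logic below are illustrative only, not legal guidance.

```python
# Above-the-line identity data -> GLBA (when used for fraud prevention);
# below-the-line trade data -> FCRA. Field names are hypothetical.
FIELD_REGIME = {
    "name": "GLBA", "ssn": "GLBA", "address": "GLBA", "phone": "GLBA",
    "tradelines": "FCRA", "inquiries": "FCRA", "delinquencies": "FCRA",
    "payment_history": "FCRA",
}

def usable_fields(has_permissible_purpose: bool, is_fraud_prevention: bool) -> set:
    """FCRA fields require permissible purpose (a financial institution, in effect);
    GLBA-governed identity fields can be used more broadly for fraud prevention."""
    allowed = set()
    for field, regime in FIELD_REGIME.items():
        if regime == "FCRA" and has_permissible_purpose:
            allowed.add(field)
        elif regime == "GLBA" and is_fraud_prevention:
            allowed.add(field)
    return allowed

# A rental-screening or gaming company doing fraud prevention, without
# permissible purpose, gets only the GLBA-governed identity fields:
print(usable_fields(has_permissible_purpose=False, is_fraud_prevention=True))
```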

Jason Lord:
So part of what we're trying to do by keeping these streams of data separate is to make it easy for those consuming it, for their own internal regulation and compliance, to be sure they're only using the types of data they're allowed to use, for the purposes they're allowed to use them.

Colin Dew-Becker:
Yeah, exactly. And so you know, a lot of our interest is with financial institutions where they will have the permissible purpose to use the FCRA product.

But you know, as we talked about at the top here, there are other industries that do not have that permissible purpose and––

Jason Lord:
Before we close out, I definitely want to talk about those other industries, but please continue your thought.

Colin Dew-Becker:
Well, I was going to say that synthetic fraud is not just limited to financial services.

And if you're not in financial services, you can't easily access credit data to deal with synthetic fraud.

So that's why we kind of offer these solutions that are governed by the GLBA, because that's much more easily usable and can be used in a lot more scenarios when you're doing fraud prevention.

Fraud prevention kind of gives companies more latitude, but we can't use these products during credit checks. You can't reject somebody’s credit application with a product that's governed by the GLBA.

But what you can do, if you use one of these products and somebody scores very highly and looks very synthetic, is start pushing them through all these kinds of stepped-up authentication checks that are hard for a synthetic to actually pass.

So, you know, document verification types of things.

So giving me your driver's license or bank statements or proof of employment, anything where you have –– I mean it's possible to fake those things, don't get me wrong, and that's actually again part of the complexity of synthetic fraud –– but it's a lot harder.

And so just that friction will stop a lot.
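
A sketch of that stepped-up routing, assuming a hypothetical synthetic-risk score between 0 and 1, might look like this:

```python
def route_applicant(synthetic_score: float) -> str:
    """Step-up routing sketch. The score and cutoffs are hypothetical; the point
    is that a GLBA-governed score triggers verification, never a credit decline."""
    if synthetic_score >= 0.9:
        return "document_verification"   # driver's license, bank statements, proof of employment
    if synthetic_score >= 0.6:
        return "stepped_up_authentication"
    return "frictionless"                # most consumers pass straight through
```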

But again, you don't want to be putting regular consumers through all those extra steps just so they can…well, in some cases it's applying for loans, but outside of financial services it's applying to rent an apartment, or getting an account on a sports betting platform, or a whole number of other areas.

Jason Lord:
Right. Because in addition to just looking after the business element of things and wanting to increase the conversion rate, you might also get into discrimination lawsuits if you're putting friction on people who don't deserve it because of attributes beyond their PII.

Colin Dew-Becker:
We make sure of that when we're designing the products; you know, the FCRA and GLBA are both pretty explicit.

I think it's actually –– again, another acronym –– the ECOA, the Equal Credit Opportunity Act, which is, I think, a subsection of the FCRA, or the FCRA is a subsection of it. I don't remember, but they're very tightly related.

And they kind of get into what you cannot discriminate against people for.

So things like, you know, race and age and national origin and things like that.

Right. So when we're designing products that deal with synthetic fraud, it's entirely possible to overreach. For instance, I can't remember exactly where it is, but there is a part of California where there are a large number of synthetic fraud rings.

It's tied to a specific country in the Middle East, and I'm blanking on it right now, but the idea is that it's a large portion of that population. If I were to make a model with that, we could really target it, and you would eliminate a large amount of synthetic fraud in that portion of California from that group of people from that country.

But you would also inherently be discriminating against perfectly fine people from that country –– and yeah, we don't do that.

Jason Lord:
Based on their ethnicity. Right.

And that's also a problem with machine learning when it's allowed to run rampant: it can end up encoding whatever discrimination already exists in the social system.

It just reinforces it, and it does so without the receipts necessary for understanding why those choices are being made.

So this is a great conversation, and I almost don't want to interrupt it, but I am conscious that we're coming up against time, and I do want to get to at least one more thing: We keep hinting at the fact that synthetic fraud exists outside the financial sector.

Can you tell me a little bit more about that?

Where are we seeing synthetics outside of financial services?

Colin Dew-Becker:
Sure. So the two I mentioned earlier are the ones that I'm kind of most interested in lately and it's rental screening and the gaming industry.

And they're hard because it's not that they're new. Gaming is, a little bit, because obviously all the states legalizing gambling in the past decade have increased that industry substantially.

Rental obviously is not anything new, but I think what's different there is that in a lot of cases, on the rental screening side at least, it's not consumers who are trying to take businesses for hundreds of thousands of dollars, as you might see on the financial services side.

But it's people who are basically going to rent apartments for relatively short periods in the grand scheme of things, like a two-, three- or four-month kind of situation. What they do is come in with a fake identity, which makes them look like a good renter.

They get approved for the apartment and then they never pay rent. And it takes however long to evict them and you know you're out all the extra costs of that process.

So I think I saw something like: the average synthetic costs rental companies something like $5,000-10,000, just in all of those costs and lost rent and everything like that.

So when you're talking about the whole country and all the places that are being rented these days, that can add up.

And there is a secondary aspect to it, which I mentioned much earlier: synthetics are going to take elements from lots of different people, and then they need an address.

You always need an address, and in a lot of cases, when the bank or whoever is sending stuff to you, it's good to have a real address.

You might not actually live there, but if you can use your synthetic identity to get an apartment, you now have a real address where mail can be sent or credit cards can be sent, whatever it might be, and that really helps out your, you know, fraudulent activities in all of the other spaces.

So one of the things that we hear about, which is a little hard to validate analytically, is that people will use that address when they're ripping off, I don't know, Amazon or eBay or any number of online companies that do buy now, pay later, this weird little niche part of the credit market right now, where you buy something for $500 but you're going to pay it off in four installments.

I guess you pay the first installment when you buy it, and they then ship it to the address that's associated with this synthetic identity.

You never pay back the people for the other three installments, and now you have the physical thing that was sent to your address that you got kind of illegitimately, and that company is out all that money.

So you can kind of see how all this stuff is interconnected. But what's harder for us, and for lots of companies doing rental screening, is that these tend to be different types of synthetics.

There are some who are very traditional, like, again, the people I talked about who are using the apartment as an address as part of their scheme for defrauding banks.

But if it's someone who's just going to get an apartment to rip off the rental company for a few months’ rent, basically, they don't tend to have the kind of profile that lots of traditional approaches for solving synthetic fraud would capture.

So it's something where, you know, we're kind of getting into it, I think lots of companies are…to deal with it.

What is the short-term solution for it? It's the solution for synthetic fraud in general: you just have to do more identity verification checks.

Jason Lord:
And importantly, you have to do it up front, because by the time a synthetic has already established a credit line, as far as any credit agency is concerned, it's a real person.

So you have to do those checks up front.

Colin Dew-Becker:
In the gaming industry, it's a new space in America; obviously it's been around in Europe and Asia for a while. But if you go on to DraftKings or FanDuel or any of these apps, they're all giving away promotions: come make a $100 deposit and we'll give you $250 in free play, or make a $1,000 deposit and get $2,000 in free play, whatever it might be.

And so one of the things that we might see is: Okay, I can come in with my real identity and do that, and I'll get my $2,000 in free play, and that's great.

But what if I just make, like, a dozen fake identities and assume you're not going to put in enough checks to stop me? Now I can just go after all of these free plays…and that money adds up really quickly.

I do a lot of sports betting, and you can get through free play pretty quickly, and it turns into real money that you can withdraw pretty quickly, especially since, you know, there is high demand from genuine consumers of these platforms to be able to get their money out very quickly.

So it's a question of friction again. You get these people coming in, and you don't want to put up a ton of friction because you want their business, but when there are, you know, thousands of dollars at stake, it can be a problem. It's also an issue of tracking the problem.

Whereas in the credit space we can see all of your behavior on every loan you've ever taken out, it's very difficult, unless those gambling companies provide that data, to have any idea of what the actual problem is for them.

It's much more like: we know it happens, but they're the only ones who really know how much money they're losing.

Jason Lord:
Well, in so many ways, financial services tends to be the vanguard industry, and the things you're describing now: Is it worth it? How do we track it? How do we decide whether the friction we might apply is worth it from a conversion standpoint?

It's the exact same conversation financial services was having three years ago, so my suspicion is that three years from now we'll be having the same conversation around ecommerce, tenant screening, the gambling world…where they're starting to wake up and realize: This is a real problem; we should have addressed it three years ago rather than where we are today.

Colin Dew-Becker:
I mean, absolutely. And I think, again, everyone's kind of aware of it. It's just figuring out the best ways to solve it while limiting friction, and, you know, it's all about –– especially for the gaming companies –– maximizing earnings and keeping your users happy, right?

Jason Lord:
Well, and it shouldn't be zero-sum at the end of the day. If you have better identity data, then you're able to not just root out the synthetics, but also let the good consumers through with less friction.

Colin Dew-Becker:
That's right.

And then with gaming companies in particular, with everyone on mobile apps, there's a lot of good data on device usage. TransUnion has a lot of that, and a lot of companies have it, where you can see: Are there 10 different identities coming in on the same phone? Because that's a red flag…
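
That device signal is straightforward to sketch. The code below assumes hypothetical (device_id, identity_id) sign-in events and uses the 10-identity red flag Colin mentions as an illustrative threshold:

```python
from collections import defaultdict

def flag_shared_devices(events, threshold: int = 10):
    """events: iterable of (device_id, identity_id) pairs, e.g. from app sign-ins.
    Returns devices used by an implausible number of distinct identities."""
    identities = defaultdict(set)
    for device_id, identity_id in events:
        identities[device_id].add(identity_id)
    return {d: len(ids) for d, ids in identities.items() if len(ids) >= threshold}
```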

That's the nice thing –– there's kind of always more. I mean, it's nice and scary; there's always more and more data out there about the people who are using your product. And I do appreciate the financial regulations in place for that industry, but a lot of those same laws do apply to device-level data and IP data and things like that.

So it is protected to some degree, but it also means it can be leveraged for these kinds of identity risk solutions to help mitigate these problems.

Jason Lord:
Data used for good.

Well, Colin, we really thank you so much for coming on and sharing your insight.

No doubt there's a lot more work to do, but part of the job is just to get the message out there that, one, this is a real problem, and two, there are ways of addressing it using the data that's available to you.

So thank you all for tuning in. Thank you for supporting us in 2023. We hope you will join us in 2024 for upcoming Fraudcast episodes.

In the meantime, happy holidays and stay safe.


For questions or to suggest an episode topic, please email TruValidate@transunion.com.

The information discussed in this podcast constitutes the opinion of TransUnion, and TransUnion shall have no liability for any actions taken based upon the content of this podcast.
