Episode 74: From Compliance to Catalyst: AI’s Role in RegTech – Part 2
This week’s COMPLY episode is part 2 of a discussion between John Zanzarella, PerformLine’s CRO, Ed Vincent, CEO of Lumio, Kunal Datta, Head of Product at Unit21, and Anna Fridman, Co-Founder of Spring Labs, as they take a deep dive into the transformative impact of AI on RegTech, exploring how AI-driven solutions are redefining compliance strategies for financial institutions.
This episode highlights:
- Responsible AI Adoption: Technology must be responsible, explainable and auditable, not just powerful.
- Practical AI Applications in Compliance: How to leverage AI tools integrated with existing platforms to improve efficiency and compliance quality.
- Risk-Based Approach: Custom automation settings and profiles; maintain transparency, auditability, and human oversight.
Show Notes:
- Listen to Part 1: https://performline.com/blog-post/episode-73-from-compliance-to-catalyst-ai-role-in-regtech-part-1/
- Connect with John Zanzarella: https://www.linkedin.com/in/johnzanzarella/
- Subscribe to PerformLine to stay connected to resources and updates: https://lp.performline.com/subscribe-to-performline
Subscribe to COMPLY: The Marketing Compliance Podcast
About COMPLY: The Marketing Compliance Podcast
The state of marketing compliance and regulation is evolving faster than ever. On the COMPLY Podcast, we sit down with the biggest names in marketing, compliance, regulations, and innovation as they share their playbooks to help you take your compliance practice to the next level.
Episode Transcript:
Jessica:
Hey there COMPLY podcast listeners and welcome to this week’s COMPLY Podcast episode as we continue the discussion between John Zanzarella, PerformLine’s CRO, Ed Vincent, CEO of Lumio, Kunal Datta, Head of Product at Unit21 and Anna Fridman, Co-Founder of Spring Labs, as they take a deep dive into the transformative impact of AI on RegTech, exploring how AI-driven solutions are redefining compliance strategies for financial institutions.
Ed:
Let’s take these conceptual ideas now and go down a level, if you will, into some practical applications and practical approaches, drilling into how innovations are addressing common compliance pain points.
I’ll start out this kind of topic with one of the things which we’re seeing at Lumio where community financial institutions are enacting a modern data platform.
We think of that as a data warehouse and data ingestion tools that aggregate data from multiple systems, including your core system, your GL, your fraud system, your marketing compliance system, or your customer interaction system. We’re getting all of that together, creating a holistic picture, overlaying a risk profile, and then empowering your folks on the front line with insights to enact risk-informed decisions.
So the days of routing all these inquiries back to a risk management team are waning. Instead, the folks on the frontline now have visibility into timely data and can enact decisions themselves in a timely manner.
So in a sense, everyone at the institution is now responsible for compliance and risk management. It’s not sitting in this central area. That’s an example that we’re seeing and living here, in our world.
John, AI is often discussed as a game changer in this space for compliance and risk oversight. So can you give us your view into where AI can deliver the most immediate practical impact today and maybe also the risks or limitations that the financial institution should think about as they are going down this path?
John:
Yeah, absolutely. I’ll talk a little bit about how we think about AI and how we’ve worked with our clients around adopting and some of the practical applications. But, we’ve been investing heavily in AI, but we’ve always tried to do so thoughtfully, conservatively and grounded in what we know about working with regulated companies.
In banking and finance, it’s not enough for technology to just be powerful. It also has to be responsible, explainable and auditable.
And so, we’ve seen and we’ve heard from clients who’ve had some challenges with AI-first or AI-only solutions that maybe are newer to market and don’t understand the intricacies of a model risk review or what it’s like to be audited by a federal or state regulator. So I would encourage the folks on this call: do your due diligence. AI without guardrails can actually introduce new risk, like bias, hallucinations, and false positives that actually slow down your compliance work.
But if you have an existing RegTech solution, you can definitely be a part of their AI strategy. That’s something that we’ve opened the doors to our clients. We want to hear from them, “Hey, what is your company’s view of AI?”
We’ve heard everything from, hey, we’re still not ready to jump all in on AI yet, to, we’re actually getting reviewed in our annual review based on how much AI we use on a daily basis. And that was a big shift from a couple of years ago, especially for some big financial institutions who were much more risk averse. I think everyone’s started using AI in their personal lives and seeing some of the benefits there, and companies are starting to embrace it.
So I think for us, we’ve had our CTO on more calls with clients in the last year than we probably had in the five years before that, because everyone wants to hear about what we’re planning from an AI standpoint. So I think it’s an exciting time for companies. If you have an existing RegTech vendor, they’re likely doing some AI things. So be a part of that journey and just be cautious as you’re looking elsewhere.
But from a practical application, I think what we’ve seen is AI has certainly helped with some of the complex challenges when it comes to sales and marketing oversight. It’s helped in the process of content review. We’ve seen more clients leveraging things like contextual analysis of videos. So not just the transcriptions, but actually what’s going on in a video that they might use for large scale marketing or across social media.
We’ve also seen it automate monitoring processes of consumer actions. So think about if you are buying from a Buy Now, Pay Later company and you need to go through a checkout experience and across multiple clicks, there’s different disclosures and APR rates that come up and things that you need to agree to. And so, AI has given our technology the ability to be part of that customer journey in a way that was really hard before that technology was available.
We’ve done a lot with AI rule building at PerformLine, being in business for over 15 years. We have billions of compliance observations in our data. And so we’re using our own data to help train models and provide insights to our customers. And that’s just in the sales and marketing compliance space. We’ve seen peers in AML/KYC doing some really exciting things with AI, as well. So it does seem to be becoming more synonymous with RegTech.
Ed:
All right, you’ve got a handful of real practical applications there, whether it’s content review, right? Contextual analysis of videos is a great one, right? Monitoring consumer interactions, building models.
Kunal, let’s bring it over to you, where clearly staying compliant is non-negotiable. You’ve got to adhere to these compliance principles. How does Unit21 think about approaching this in a way that’s also going to build trust with auditors, examiners, and those stakeholders that you need to have on side along this journey?
Kunal:
Yeah, absolutely. We found in talking to folks, it’s not just about the efficiency; the efficiency gain is great, but the other side of this is also efficacy. And when I say efficacy, what I mean there really is the quality of the reviews. It’s interesting to see comparisons to the world of self-driving cars, where at first you had people sitting in the car actually making sure that it was driving right, and then eventually no longer holding onto the steering wheel, once over time you have enough data to prove that it can do as well as, if not better than, a human.
The interesting thing, I think, in this space, at least with us: we’ve had people doing normal human reviews of alerts in our platform for years now, and using that data to actually evaluate the efficacy of the AI agent has been a really helpful tool for us, because now we can go back and say, on this process, across the billions of alerts that we’ve done, this is how the AI would perform.
Or even things like, recently we released AI decision making. And the way we did that was, before we put it out in front of our users, we just stored the AI’s decisions in our database. Let the humans actually do the review and make the decision on their own, and then we’ll compare how the AI is deciding.
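The shadow-mode comparison Kunal describes (store the AI’s decision silently, let the human decide independently, then compare) could be sketched like this. This is purely illustrative; the field names, decision labels, and metric are assumptions, not Unit21’s actual implementation:

```python
# Illustrative sketch of shadow-mode evaluation: the AI's decision is
# recorded but never shown to the reviewer, and agreement is measured later.
from dataclasses import dataclass

@dataclass
class AlertReview:
    alert_id: str
    human_decision: str   # the reviewer's independent call, e.g. "escalate" or "close"
    ai_decision: str      # stored silently in the database, never shown

def agreement_rate(reviews):
    """Fraction of alerts where the shadow AI matched the human decision."""
    if not reviews:
        return 0.0
    matches = sum(r.human_decision == r.ai_decision for r in reviews)
    return matches / len(reviews)

reviews = [
    AlertReview("a1", "close", "close"),
    AlertReview("a2", "escalate", "escalate"),
    AlertReview("a3", "close", "escalate"),  # a disagreement worth investigating
    AlertReview("a4", "close", "close"),
]
print(agreement_rate(reviews))  # 0.75
```

Disagreements like `a3` are often more informative than the headline rate: they show exactly where the AI would have decided differently from a human before it is ever trusted to decide alone.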
And what we’re finding is, it really depends on the process, right? And for us, being in the compliance space, quality is paramount. So we can only release anything to a customer of ours when the quality is at a high bar. Otherwise there’s the chance for things like automation bias, where people might intrinsically trust the AI if they’ve seen maybe five good examples, but maybe the sixth and seventh and eighth examples aren’t good. So you want to make sure we have the evidence to actually put something out there that’s good, because we know that humans are fallible creatures and may end up trusting something that may not be worth trusting.
So if anyone is looking for AI vendors, what I would highly recommend doing is asking them about their evaluation sets. How are you evaluating the quality? What is your “eval set”? That’s the term people use, right? And where did you come up with it? I think that can be one of the most eye-opening answers if you’re evaluating the quality of any AI product, even outside of the RegTech space.
Anna:
Kunal, I couldn’t agree more, actually, with what John and you were both saying. I think it’s interesting to see all these players jumping into the AI space, including the financial space, but a lot of people don’t come from this world, from this background. And there are so many intricacies. So I’m always sort of boggled: how, what is the plan, right? Because you need to understand the space. You need to understand the compliance expectations. It’s a very particular beast, as we all know, right? There are all these considerations for vendors, making sure that what you’re putting out is quality and validated and you test it with humans, right? Like you were saying, Kunal. I think that makes perfect sense.
And I think, another thing that we’re always thinking about in the financial world is risk-based approach. Like how much focus you should have on something depends on the risk.
So all utilization of AI is not the same in the financial space. For example, scrutiny on things that are more customer facing, like underwriting, carries a certain weight. And then backend analysis for compliance purposes, if it has a human in the loop, carries a different weight.
So start out, first of all, by having people work for you who have a background in the space; it’s very, very particular. I think it’s very difficult to build a product without understanding the background of the space. You have to make sure that the product works as it should. You should know what it should look like on the receiving end. How can you claim quality if you’re not quite sure what that would look like? But also think about the proper place to integrate AI, because certain areas are higher risk and other areas are less risky, right? So think about that on the front end.
Ed:
I want to go back to your comment there about a risk-based approach. Kunal, you brought up this idea of levels of automation, and I thought that was a really great way of explaining this and kind of putting that construct out there. Will you talk a little bit about that whole idea of levels of automation?
Kunal:
Absolutely. So this is borrowed from the world of self-driving cars, which I think we have a lot to learn from, actually. And not just there; there are other places to learn from, too, where they’ve gone through these kinds of transformations way before GenAI. Radiology is, I think, a really good example of this, as well as literal autopilot in planes, one of the first places where this kind of automation was rolled out, and in sort of a life-or-death kind of situation.
So I mean, the basic framework is as follows. It’s not black or white, right? It’s not either automation or no automation.
There are levels zero through five. So what is level zero? Level zero is you drive your car yourself, just like normal, right? You go and you come back and you have full control over the vehicle.
Level five on the other extreme is you can go to sleep in your car or read a book or watch a movie or do whatever you want to do, right? And it’ll get you there and do all the things and it’ll do it right, and you’ll be safe.
But in between you have all these other levels of automation. A simple example is lane assist, which is a level one automation. A lot of cars nowadays have lane assist, most cars do, and it’ll sort of buzz you when you’re kind of going out of your lane, right?
Or another one is cruise control, an early version of level one automation, right? Another one, even simpler, is when it gets dim outside and your lights automatically turn on. That’s automation, right? That’s been there for a while.
And these are all controls that normally a human would have to do, but you’re still kind of driving. But as you add these up, you can start to see how the human involvement can reduce a little bit at a time. It’s not all or nothing, like it’s driving or it’s not. It’s like, okay, first maybe I’m comfortable with the lights being automatic, right? Then maybe I’m comfortable with lane assist. Then maybe I’m comfortable with the accelerator or braking happening, then maybe I’m comfortable with the full-on steering being taken over, you know, but I’m not fully comfortable with it taking over all the time. Maybe I want to be there for extreme scenarios, and I want it to escalate to me in certain situations.
And then finally, over time with enough data, I’m fine with level five automation. So this is one axis, I think is like the level of automation.
The other axis is the process itself. And kind of what Anna was touching on earlier: you may not feel comfortable with all processes, right? So what we see, for example, is that it depends on the company and on certain tasks. That’s how we break it down, at least in our product: what tasks do you want to turn on and off?
So, you can have an agent that does a sanctions task, and that same agent can also do behavioral deviation if you want it to, or impossible travel as a task, or narrative writing as a task. And you may be like, all right, you know what, for my sanctions alerts, I’m down, it’s like a name match and I’m fine with it just escalating to me when it sees something that might be sus.
But there might be organizations where like, you know what, actually sanctions can be pretty bad. Like if I miss something, I don’t want to miss anything. I’m down to review all the false positives, to each their own.
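The per-task setup Kunal describes could be sketched, purely illustratively, as a configuration where each task carries its own automation level, zero through five, borrowing the self-driving scale. The task names and levels here are invented for illustration, not Unit21’s actual product schema:

```python
# Hypothetical per-task automation profile: 0 = fully manual review,
# 5 = fully automated with no human in the loop.
AUTOMATION_PROFILE = {
    "sanctions_screening": 4,   # auto-resolve clear false positives, escalate hits
    "behavioral_deviation": 2,  # AI drafts findings, a human decides
    "narrative_writing": 3,     # AI writes the narrative, a human edits and approves
    "impossible_travel": 0,     # reviewed entirely by hand
}

def requires_human_review(task: str, profile: dict) -> bool:
    """A task keeps a human in the loop unless it is fully automated (level 5).
    Unknown tasks default to level 0, i.e. fully manual."""
    return profile.get(task, 0) < 5

for task, level in AUTOMATION_PROFILE.items():
    print(f"{task}: level {level}, human review = {requires_human_review(task, AUTOMATION_PROFILE)}")
```

The point of the design is the one Kunal makes: the organization, not the vendor, sets each level to match its own risk appetite, and can raise a level only once the evidence supports it.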
And I think that kind of flexibility is actually pretty important, because each organization has different priorities. Talking about a risk-based approach, what does that really mean? It’s: what do you think? That’s really what it is. If we’re being honest about it, it’s a matter of opinion to a degree, right? Truly, what is your feeling on risk? And you should be able to have the control to automate where you think it’s worth automating for your organization, rather than giving that up because of some promise of efficiency. Which, I mean, efficiencies are great, right? Don’t get me wrong, we’re building all this stuff, but I think you should be doing it safely and in a way where you feel that control.
In addition to things like transparency and auditability: how is it making the decisions, and the ability to test it for yourself, right? So it’s like a zero trust environment, right?
You can say, I don’t need to take the vendor’s word for it. I can see it for myself. Like these kinds of things I think are very important, but I don’t know, did that answer your question, Ed? You asked me about levels and I went on a bit of a tangent.
Ed:
I think that’s great. I think that, right? Well, levels is one axis, as you said, another axis of the processes themselves. Right? And then tying that back to the fact that if you’re taking a risk-based approach, most institutions have their own unique risk appetite and risk profiles. And so I think that this dynamic, that it’s not one size fits all, is really important.
Kunal:
Yeah, and I think actually that’s where AI kind of excels. Automation has been around forever, I mean, the simplest things in our lives have automation in them. Like, a coffee machine has automation, or a washing machine has automation.
But I think what LLMs are uniquely good at is what we’ll call the last mile of flexibility, where any sort of “if this, then that” kind of system may not work.
And even traditional ML is not really great at that, because it’s based on hundreds of thousands of examples. But if you tell an LLM something once, one-shot learning, or few-shot learning with a small number of examples (that’s sort of the concept), it can extrapolate from that, in essence, right?
I think LLMs are uniquely good at fitting to your data, your organization, your processes, in a way that other types of software previously could not.
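The few-shot idea Kunal mentions can be illustrated with a minimal sketch: a handful of labeled examples placed directly in the prompt so the model can extrapolate to a new case. The alert texts and labels below are invented, and no real LLM API is called; the sketch only builds the prompt string that would be sent to one:

```python
# Illustrative few-shot prompt construction: a small number of in-prompt
# examples stand in for the hundreds of thousands a traditional ML model needs.
FEW_SHOT_EXAMPLES = [
    ("Wire to a sanctioned jurisdiction, no stated purpose", "escalate"),
    ("Recurring payroll deposit from a known employer", "close"),
    ("Three rapid transfers just under the reporting threshold", "escalate"),
]

def build_prompt(new_alert: str) -> str:
    """Assemble a few-shot classification prompt for a new alert."""
    lines = ["Classify each alert as 'escalate' or 'close'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Alert: {text}\nDecision: {label}\n")
    lines.append(f"Alert: {new_alert}\nDecision:")
    return "\n".join(lines)

prompt = build_prompt("Large cash deposit followed by an immediate crypto purchase")
print(prompt)
```

Swapping in an organization’s own historical examples is what gives this approach the “fit to your processes” quality Kunal describes, without any model retraining.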
Ed:
Yeah, that last mile characterization, I think, hits the nail on the head. And John, I’m sure you went through some examples before of looking at different marketing use cases. Everyone approaches their marketing materials a little bit differently, I’m sure, right? They position themselves a little bit differently. They engage with partners in different ways. So I think that probably applies equally in the marketing space, as well: it needs to be flexible and configurable and apply to the unique use cases of a financial institution…
Jessica:
Thanks for listening to this week’s episode of the COMPLY Podcast! Stay tuned for the final episode! As always, for the latest content on all things marketing compliance, you can head to performline.com/resources. And for the most up-to-date industry news, events, and content, be sure to follow PerformLine on LinkedIn. Thanks again for listening and we’ll see you next time!