AI in Banking: Revolutionizing or Risky?

AI is a frequent topic of conversation but what do we really mean when we say artificial intelligence? How are banks and financial institutions already using AI, and what are the benefits and potential pitfalls?

In this episode of Issues of Interest, BNN’s Information Systems and Risk Assurance practice lead, Pat Morin, sits down with Krystal Martin, senior manager at BNN, and Ryan Robinson, senior director at Mainstay Technologies, to talk about what AI is and its potential to revolutionize tasks like credit analysis and fraud detection. They also discuss potential risks and why cybersecurity is more important than ever. Tune in to learn more about how institutions can balance leveraging AI’s benefits with safeguarding against its threats.

Connect with Pat Morin on LinkedIn: https://www.linkedin.com/in/patrickamorin/  or via email at pmorin@bnncpa.com

Find Ryan Robinson on LinkedIn: https://www.linkedin.com/in/ryancolerobinson/ or via email rrobinson@mstech.com

Mainstay Technologies: https://www.mstech.com/

Connect with Krystal Martin on LinkedIn: https://www.linkedin.com/in/krystal-martin-14296143/ or via email at kmartin@bnncpa.com

Should I Be Using A.I. Right Now?, The Ezra Klein Show https://www.nytimes.com/2024/04/02/opinion/ezra-klein-podcast-ethan-mollick.html

CISA New Best Practices Guide for Securing AI Data Released https://www.cisa.gov/news-events/alerts/2025/05/22/new-best-practices-guide-securing-ai-data-released

Banks and financial institutions are constantly navigating volatility and change. Here at Issues of Interest we help you stay current on what’s happening in the industry so you can achieve success for your institution. We cover assurance, tax, business advisory, and technology topics and trends affecting the industry. Subscribe today to receive news and developments directly in your inbox.

Episode Transcript

Patrick Morin: Hi, everyone.

Thanks for tuning in to Issues of Interest, BNN’s podcast for the banking and financial services industry.

I’m your host today, Pat Morin, principal and leader of the Information Systems and Risk Assurance practice at Baker Newman Noyes. I’m here with Krystal Martin, senior manager in my group at BNN, and Ryan Robinson, a good friend and a senior partner at Mainstay Technologies, one of the largest independently owned managed IT and information security companies in New England.

Hey, Krystal and Ryan.

Ryan Robinson: Hello. Hello.

Krystal Martin: Hi. Thanks for inviting us on the show, Pat.

Patrick Morin: You’re welcome.

So, Ryan, today we’re going to talk about the role of AI in the banking industry.

I went ahead and invited Krystal to help facilitate our discussion while you and I share our thoughts on AI. Krystal, what questions might you have for us?

Krystal Martin: Sure. So, to get us started, maybe we should help define what we’re talking about when we say AI or artificial intelligence.

Ryan Robinson: Yeah, sure. So, what’s interesting about AI is that I think there’s a misconception. Sometimes it could be helpful to say what it’s not, just as much as it’s helpful to say what it is.

If you’re under 50 years old, or 60 even, you’ve now lived through multiple pretty radical technology transformations. You think about the PC revolution that happened 40 years ago.

Then you have the Internet revolution that happened 25 or 30 years ago. And then you’ve got the smartphone revolution that started in the early 2000s but especially hit home in 2007, when the first iPhone was released.

So, we’ve become used to thinking in these terms, and we can just assume, oh, you know, AI is just the next phase. Right? It’s just the next revolution.

But there’s something fundamentally different about artificial intelligence, which is that it is just what it sounds like. As the name implies, it is a simulation of human intelligence, which is fundamentally different from all of those past technology revolutions I just referred to.

I think of those as tools in the hands of intelligences, right? If a human being is an intelligence, all of those previous technology revolutions were tools that we were sort of in charge of wielding. So what makes artificial intelligence so different? Well, it is a technology, and that’s true. In that sense it’s the same.

But because it’s a simulation of intelligence, it has the ability to not simply be a tool in the hands of intelligent beings like people; instead, it functions as its own intelligence. And so it has the ability to learn and to reason and to self-correct.

All of these things that we think are fundamental to what we are as people, it is now able to do.

Patrick Morin: Yeah, great. And, you know, I was going to comment that as popular as it has become in the popular press in the last two years or so, artificial intelligence, and the items included under that umbrella, has actually been around for a long time: things like machine learning and data analytics. And as we talk more, I think our listeners should keep their minds open to that larger set of AI tools, both in terms of how they might be effective for their institutions as well as what things they should be careful of.

Krystal Martin: Yeah, those are great points. So, segueing from that: how could somebody who works in the banking space find an opportunity to use AI?

Ryan Robinson: So, you know, I think what we have to realize, and if you spend any time listening to or reading the latest thinkers around AI, you’re going to hear two different camps. You’ve got camp one, who say this is the greatest opportunity of all time and we need to leverage these tools, that there’s incredible opportunity in front of us. And then you have sort of the doomers, who say AI is going to take over all our jobs and fundamentally reshape society in a very non-human or subhuman kind of way, and all we see on the other side are risks and, you know, the ending of Terminator 2.

So I think the reason there are people on both sides of that aisle is because the reality is both are real possibilities. Right. There’s a real opportunity to leverage this, both in the banking space in particular and in other industries as well, but then there’s also real, genuine risk. There’s risk that’s societal and human in nature, and there’s risk that’s technical as well. There’s risk to our businesses.

And so, I think we have to keep both of those pieces in tension. And if we only focus on how it’s being leveraged, then we’re going to miss an important side of protecting ourselves.

I’m sure later in this conversation we’ll talk about the risks and concerns as well.

But in terms of the opportunity, I mean, they’re tremendous.

When you come up against a revolution this big, it’s almost hard to overstate it. You know, if you think about the Industrial Revolution that happened in the early and mid-1800s, there’s always this period of incredible opportunity and advancement. And then usually there’s a period of a dip, right, where there’s a lot of chaos created in society as a result of these technology revolutions.

And so, I have no doubt that something similar will happen here.

But by analogy: the Industrial Revolution, as significant as that transition was from an agricultural society to a machine-based economy, as big as that shift was, I think many people, myself included, would say it’s going to pale in comparison to the shift that’s going to happen because of AI.

So just to give some examples, a lot of banks are starting to use AI when it comes to, like lending and credit analysis.

So, for example, there’s a real problem, as many of the listeners of this podcast will understand: it can often take a tremendously long time for someone to go into a bank or reach out to an online bank, put in all their information, gather everything that needs to be gathered, go through the underwriting process, and handle all of the back and forth that has to happen between the lender and the underwriters.

And that process sometimes can take months, and it can be very painful for both the banker and for the customer. And so there’s a bank in Pennsylvania, kind of a community bank, and they started using this AI tool called Upstart.

And essentially, Upstart uses the same kinds of advanced algorithms we see in the way YouTube feeds us content and the way social media companies target us, and all of that.

But they’re sort of using it for good, being able to identify much more quickly whether that customer is worthy of the credit they’re requesting from the bank.

And they’ve been able to increase the speed of their approval process by 80% using this AI tool called Upstart.

I mean, that’s just one example of many, but you can see the power of that: you take a process that’s very manual, very dependent on human intervention, and automate it using this intelligent tool that will do much of the work the underwriters were previously required to do.

Patrick Morin: Yeah, you know, between your earlier points and this last example, it’s interesting that AI, although it can introduce risk because of all the points and interfaces it’s got, it can also be a very effective tool to do things like accelerate lending decisions or even help detect fraud.

And similarly, it can create some disruption and anxiety. But at the same time, it can save banking leaders a lot of time, allowing them to really apply their efforts to being strategic and working towards bank growth and stability.

Ryan Robinson: Yeah, I think you’re exactly right. I mean, picking up on your fraud example there, I think one of the other big uses that banks and financial institutions are finding is that their fraud detection rates have dramatically increased.

So as the listeners will know, fraud detection is all about analyzing transactions and looking for oddities.

You know, hey, that person or that institution is not normally purchasing that kind of thing at this time of the year or whatever.

So it’s about a massive amount of transaction analysis.

In the past you’ve had technology systems that have done fraud analysis. So that’s not new per se, utilizing computer technology to do that.

But the difference is that in the past, it was very, very difficult for those systems to really intelligently understand the patterns. Human beings, we are very good at recognizing patterns, but we’re extremely slow at it.

And historically the machines have been really slow at it as well. But now you have AI fraud detection systems that will be able to analyze a hundred thousand transactions, and I’m not exaggerating that number.

A hundred thousand transactions in about five seconds.

Something that was never able to be done before, even in the computer technology that existed.

So, you know, not only is it faster, but it tends to reduce false positives, which were another huge problem. I bank with a pretty conservative bank, and I suspect they are not using some of the most advanced AI tools out there to do their fraud detection.

And the reason I say that is because transactions that I personally am making are constantly getting flagged and rejected. And I think that’s largely because that bank is still utilizing non-AI technology.
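[Editor’s aside for technically inclined listeners: the per-customer anomaly flagging Ryan describes can be sketched in a few lines. This is a hypothetical illustration with made-up data and a simple statistical baseline, not any bank’s actual system; real AI fraud models use far richer features than the transaction amount alone.]

```python
# Hypothetical sketch: flag transactions that deviate sharply from a
# customer's typical spending, using a plain z-score baseline.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` standard
    deviations from this customer's average spend."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

history = [42.0, 38.5, 51.0, 45.2, 40.1, 39.9, 47.3, 2500.0]
print(flag_anomalies(history))  # [7] -- the $2,500 outlier
```

A production system would score each transaction against merchant, time-of-day, and location patterns rather than a single amount distribution, but the core idea, learn a baseline and flag deviations, is the same.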

Patrick Morin: You know, that’s a great point. By not adopting those tools, they lack a personalized understanding of how you transact, and in your world, those rejected transactions should really be getting authorized.

Ryan Robinson: That’s right.

Patrick Morin: Again, that’s a place where it can impact customer satisfaction.

Krystal Martin: So as you guys talk about these opportunities, you also have identified some risks. So where should banks be cautious when they’re using AI?

I think, first, you talked a little bit earlier about the bias that can be apparent in AI.

Ryan Robinson: Yeah, I mean, I think there’s a myriad of ways. And frankly, right now, if you think about what sells, what sells at this moment tends to be the opportunity side of AI.

And so, you know, when I go to speaking engagements and seminars and conferences on AI, so much of the conversation is around the opportunities, kind of the things I was talking about a minute ago, right? All the things that this technology can do.

But what I think is just not being talked about enough is all of the risk that surrounds this. And I should be careful there: there are some people talking about it a lot, but I tend to find that those are more the philosopher types, the intellectuals talking about, you know, what is going to happen in our society as a result of this, job loss, wealth gap increases, et cetera. But I find that once you jump into the business space, once you’re among the folks who are talking about these tools and all their practical uses, there’s not enough concern and discussion about where we have to be careful.

So, to give you some examples. One is that probably about five or six years ago, before ChatGPT came into the national consciousness, the thing everyone was talking about was cybersecurity. I mean, I remember, you know, I work for Mainstay Technologies; we’re an IT and information security services firm.

And we were constantly being asked, come talk to us about cybersecurity and how we can protect against all this malware and phishing and all that. And then once ChatGPT came around, it was like all of the concern about cybersecurity just went out the window.

Now we’re not worried about that at all, because, of course, AI is the new, sexy, interesting thing to talk about. But the reality is, I would argue that we need to be talking about cybersecurity more now than we were five years ago, more now than before these AI tools became popular. Because think about it: you’ve now got these incredibly powerful tools in the hands of hackers and bad actors, tools that they never had before.

So I’ll give an example. I think all of us have been hit by phishing scams, right? Where we get an email that purports to be from someone it’s not, you know, the CEO asking us to buy a bunch of gift cards or whatever it is.

Well, in the past, most of those phishing attempts were what are called general or generic phishing attempts, net phishing attempts. Think of the analogy where you go into the water and throw out a big net, hoping to cast it across a bunch of different fish. It’s just a numbers game: the fish get roped in if you throw the net wide enough.

So most phishing scams were sent out to hundreds of thousands of people, hundreds of thousands of email accounts and so on, hoping for just a small percentage of people that would click or follow the social engineering scam. Well, now with the rise of AI, now at the same cost, quote unquote, in time and money, these hackers or hacker organizations can send AI on the same mission and do something instead called spear phishing. Spear phishing, just like it sounds right, is when you go into the lake or into the ocean and you grab a spear and you are aiming at a very particular fish.

And when you are spear phishing, you can be a lot more strategic, right? And it’s going to be a lot more effective because you’re going to know, okay, this fish is about to swim here. I’m going to aim right towards it.

So with the same cost and investment in time and money, that same phishing attempt can now go out to a hundred thousand people, but as a spear phishing attempt for each of those hundred thousand. AI can scrub all of the data on the web that a person has posted: their social media accounts, speaking engagements they’ve done that have been posted online, anything.

Right. Like the information about you on your company website.

And now that AI can develop a very specific spear phishing attempt aimed right at you and your personality and the things that are going to be easy to manipulate about you in particular.

And so we need to be talking about cybersecurity, and that is something that we’re encouraging all banks and financial institutions to start really doubling down, first in cybersecurity and information security, and second in AI, not the other way around.

Patrick Morin: Yeah. So, Ryan, that’s a great insight. I think what you’re saying is to not let the appeal of AI take your eye off the ball of staying current on cybersecurity.

But, you know, another area of risk when using AI is this term called shadow AI. That’s where organizations, especially those that have been entrusted with personal information, may not yet have rolled out standards or policies around AI, or even identified an appropriate tool set.

And as you said, it’s out in the wild, and individuals are learning. They’re probably trying to do the best for their bank or the best for their customers and are using those tools to search for information, perhaps not realizing that they’re inadvertently sharing what is otherwise bank-entrusted data with one of these tools. So, you know, it’s important for organizations to identify those risks.

They can work with someone like you, who can give them a framework to work within and help them figure out how to protect that information.

And Ryan, I don’t know if you saw just earlier today CISA, the Cybersecurity and Infrastructure Security Agency, they issued a new white paper on new best practices for securing AI data.

Now, I know it’s typically aimed at government organizations or organizations that work with federal funds, but what do you think? Would that be a good resource someone might take a look at?

Ryan Robinson: Yeah, I’m not familiar with that particular white paper that just came out today. But I would say that within the federal government there’s an organization called NIST, the National Institute of Standards and Technology, and they have a number of really good cybersecurity frameworks. One of them is called NIST 800-171. There’s another one that’s a little more advanced called NIST 800-53.

These are standards that a lot of banking organizations will utilize. Essentially, they give you a list of controls you can walk through. NIST 800-171, for example, has 110 controls spanning physical, administrative, and technical controls.

Essentially, you can walk through each of them through your organization’s lens and ask: all right, are we meeting these controls? Are we partially meeting them?
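[Editor’s aside: the walkthrough Ryan describes, going control by control and recording a status, is essentially a checklist exercise. A minimal sketch follows. The control IDs are a few real NIST 800-171 requirement numbers, but the listing and statuses here are illustrative, not a complete assessment.]

```python
# Hypothetical sketch: record a status per control, then summarize the gaps.
from collections import Counter

assessment = {
    "3.1.1 Limit system access to authorized users": "met",
    "3.5.3 Use multifactor authentication": "partially met",
    "3.14.6 Monitor communications for attacks": "not met",
}

def summarize(assessment):
    """Count controls by status so overall posture is easy to report."""
    return Counter(assessment.values())

def gaps(assessment):
    """List the controls that still need remediation."""
    return [ctrl for ctrl, status in assessment.items() if status != "met"]

print(summarize(assessment))
print(gaps(assessment))
```

In practice this lives in a GRC tool or a spreadsheet rather than code, but the data shape, control, status, and a gap report, is the same.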

And absolutely, in fact, I’m really glad you brought that up, because I think one of the misconceptions when it comes to security and its relationship to AI is about how we secure ourselves, especially as a bank or financial institution, where it’s so necessary that the bank is concerned with the risk.

Patrick Morin: Right.

Ryan Robinson: Banks always are talking about risk mitigation.

But there’s two parts of risk mitigation, both that are equally important. One is the cybersecurity, which is all like the technical controls, things like your firewall and your multifactor authentication and your monitoring and so on.

But then you have this whole other category that sometimes gets ignored, which is called information security.

And I know, Pat and Krystal, I’m preaching to the choir; this is where the two of you spend most of your time. But the information security side, which is all the non-technical controls, is just as critical as the technical side, and oftentimes, depending on the organization, more critical.

And it includes things like your policies and your procedures and your workflows and your user training, all of these things that are not primarily technical but are so critical. You know, I love the analogy of building the technical wall really high, right? You build this wall a hundred feet high, but then you leave open all of the doors and windows in that wall.

Well, how good is your investment? How valuable was that? And organizations have this strong tendency, I think, because cybersecurity is a term we hear a lot, while we don’t hear the term information security.

And a lot of times if people hear the term, they wouldn’t even know the difference between the two.

And so, because they hear about cybersecurity, they think that to be secure as an organization is to be cybersecure, to be technically secure. But the reality is that organizations really need a good information security program, which is going to include all those technical controls and the information security controls.

And as we move into this sort of new world of AI threats, I think this is even more critical than ever.

Krystal Martin: That segues nicely into third-party risk management, which is something that Pat and I work closely with our clients on. Pat, do you have any considerations that a bank should think through when it’s engaging with a tool that has artificial intelligence in it?

Patrick Morin: Yeah, you know, I was going to pick up on something Ryan touched upon, which is the term “program” with regard to cybersecurity and the like.

And that means it’s ongoing, it’s live, and there is no one size fits all for every organization. It needs to be assessed based on the risk they’re willing to take on, based on their risk appetite and where it is they’d like to go.

And so, taking that into account, when you contract with one of these providers, it’s essential to have a contract that stipulates all the terms, conditions, and agreements: who is expected to do what, at what point data is exchanged, who owns it, and where it is retained.

At the end of the agreement, who gets to keep the data, especially if the model has been trained? There are certain agreements that make any training that accrues in the model the property of the AI provider.

And that may be something an organization wants the right to retain for itself.

And then at the end, making sure that you have an exit strategy built into the contract so that you aren’t suddenly surprised with having to incur even more cost to just get what you’ve invested out of it.

Ryan, anything more to add there?

Ryan Robinson: Yeah, I think that’s such a good question and a great point, Pat. I mean, you know what banks tend to be aware of, right? They already know about data protection. They already know about GLBA, they already know about GDPR. These are things that bankers talk about all the time. They know about cyber liability insurance. They know, generally speaking, about vendor risk management too, and they think about that.

But I think what they often miss when it comes to AI, and AI vendors in particular, is this fundamental reality: AI isn’t just software, it is decision-making infrastructure.

AI is not software; it is decision-making infrastructure. And that changes the contract fundamentals, right? It changes what a contract needs to be, in particular with the AI vendors they’re using. Right?

So they need to have things in their contracts about human oversight.

I mean, how often are contracts including that kind of language? Not often, I can tell you.

So they need to make sure the contract covers not just, okay, here’s what the AI agent or AI tool will do, but also, here’s the human level of monitoring and oversight, how those decisions are made, and whether they are passed by human beings with particular compliance expertise. Pat, I know you were at the recent event that the Sheehan Phinney law firm put on around this, and the attorneys there had a lot of really good things to say about this in particular. Because there’s a real sense in which people will put information into these tools, whether they’re the really common ones like ChatGPT, or less common ones like Perplexity, or the very specific AI tools like Upstart that I was mentioning before, which are used very specifically in the banking industry. People will put sensitive data into them, or have workflows that shove sensitive data into them.

And yet there’s not a recognition that, look, if some sort of actual fraud or illegal activity took place, that sensitive data could then be subject to judicial review. You would not have any way to hold back that private data in the way you have promised your customers you would.

So, you know, there’s a bunch of provisions that need to be added into these contracts. And that’s why it’s important to work with, you know, a good cybersecurity or information security attorney. That’s why it’s important to work with folks like the two of you at Baker Newman Noyes.

Because we really need to be thinking about these contracts very carefully in a way that is easy to miss, because these are new kinds of contracts that most bankers have just never really had to evaluate before.

Patrick Morin: And I would just add a little bit to what you’re saying. With these tools, if you don’t understand where the data is being stored, and in particular where your prompts, the questions posed to the AI tool, are stored and how long they’re kept, you could run afoul of some issues if you had a discovery claim against you.

All of those prompts, if they are still there, are subject to discovery.

Ryan Robinson: Yep, that’s exactly right. I mean, I think the whole idea of bias management, bias mitigation, is a whole category of legal review now. A lot of attorneys, those that specialize in human resources in particular, are recognizing that a lot of these tools have inherent biases essentially pre-built into them, and that if you just let the AI tool have its way with that data, you could end up not allowing someone to get a loan when all of your actual banking policies would say they should be approved for it, because those biases are built into the system.

One of the things that’s really important to understand is that we’re about to see, much more clearly in the public consciousness, the shift from AI tools to AI agents.

AI agents are really when an organization is going to be able to give a prompt to an AI software, but the prompt isn’t going to be just, hey, answer this question for me, or even, do this task for me.

But it will be, here is a goal or an outcome that I, as an individual or we as an organization, want to achieve.

You, AI agent, figure out the best methods to achieve that goal and then independently carry them out: by connecting with our software, by going onto the Internet, by sending messages to people within and outside our organization. Essentially, it’s like a digital employee.

And when we move from, you know, Pat and Krystal typing a prompt and having that AI engine spit back an answer, to actually giving these AI agents a level of autonomy out in the world, well, now we really have to think a lot more carefully and a lot more clearly about what is in the contracts with the companies that are delivering these AI agents to us.

So if we think this is important now, when we’re just talking about ChatGPT and Perplexity, how much more important is it going to be in another year or two, as this becomes much more common and enters the public consciousness?
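[Editor’s aside: to make the tool-versus-agent distinction concrete, here is a bare-bones sketch of the agent pattern Ryan describes, with the human-approval gate that the contract language he mentions would require. Everything here, the function names, the canned plan, the approval policy, is hypothetical; in a real agent, an LLM would do the planning.]

```python
# Hypothetical sketch: an "agent" takes a goal, plans steps, and acts,
# but every outward-facing action passes a human-oversight checkpoint.

def plan_steps(goal):
    # Stand-in for the model's planning; a real agent would call an LLM here.
    return [f"research: {goal}", f"send summary email about: {goal}"]

def needs_human_approval(step):
    # Policy hook: outbound communications require human sign-off.
    return step.startswith("send")

def run_agent(goal, approve):
    """Execute the plan, routing gated steps through the approve() callback."""
    log = []
    for step in plan_steps(goal):
        if needs_human_approval(step) and not approve(step):
            log.append(("blocked", step))
        else:
            log.append(("done", step))
    return log

# A reviewer who declines all outbound messages:
log = run_agent("assess Q3 loan portfolio risk", approve=lambda step: False)
print(log)
# [('done', 'research: assess Q3 loan portfolio risk'),
#  ('blocked', 'send summary email about: assess Q3 loan portfolio risk')]
```

The design point is the `approve` callback: autonomy for internal analysis, a mandatory human checkpoint before anything leaves the organization, which is exactly the kind of oversight a vendor contract can stipulate.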

Krystal Martin: Yeah, that sounds kind of like my nightmare, trying to keep up with AI.

So we’ve hit a lot of good topics around banking and AI.

Is there anything else that you would want our listeners to walk away with on this topic before we start to wrap up here?

Ryan Robinson: Yeah, I guess a couple things. I mean, one is underscoring what I said earlier.

Just like any success you achieve in life or business, you need a good team. Right. Almost no matter what it is, you need a great team.

And so, you know, I think that that team needs to include good financial advisors, good auditors.

Baker Newman Noyes obviously has a tremendous amount of expertise, and I’m not just saying that because I’m on your podcast. It’s just the case: your reputation in the Northeast is really so good that you’re an easy one to recommend. But you also need a great attorney or firm that specializes in AI and information security.

And I say this not meaning to pat ourselves on the back or to just plug Mainstay, but you need a great partner who knows both this world from the IT and from the information security perspective.

And I think, Pat and Krystal, you’ve probably both seen this, where you’ll have IT companies that are really not sophisticated when it comes to information security. They may know cybersecurity, they may know the technical side, but they’re not very sophisticated when it comes to information security, so you have this massive gap there. And they’re not super sophisticated when it comes to AI, what’s here and what’s coming.

And so they’re not able to effectively prepare their clients for it. One of the things that we do at Mainstay is provide these AI and security audits, for example, where we’re going through things like those NIST controls. We’re reviewing what clients have for both technical and information security controls and how they’re utilizing AI currently.

And that kind of audit can be done for anyone, whether you’re using AI right now or not. It’s both an introduction to how you could leverage AI and how you can protect against it, as well as a look at what your current information security program looks like and where the gaps are.

So I think that’s one: you need to have a great team around this. That team is going to be legal, financial, and technical.

And then I’d say the second one is that AI really needs to start becoming a fundamental part of every organization. And I think banks are probably at the tip of the spear, the top of that pyramid.

Banks need to be having AI as a fundamental part of their company strategic planning.

It’s still often the case that you’ll have banks that will talk about almost everything but AI when their board of directors comes together. They’ll talk about human resources, they’ll talk about their own financial state, they’ll talk about customer service and a million other things, but they’re not talking about what is going to be the primary disruptor of their whole industry.

And I can’t imagine a more important topic. So once you set AI as a fundamental part of your company’s strategic planning, cascade down from that. Right? Then you are going to want to make sure you’ve got a great committee of people within your company who are interested in this topic, to analyze and look at what the opportunities are.

You’re going to naturally think, oh, yeah, how are we protecting against the threats of this? How are we going to manage the regulation and compliance side of this? Once you put it as a core element of your company’s strategic planning, a lot is going to happen naturally from there, and you’re going to stay focused on it properly. So I’d say those are the two big takeaways.

Patrick Morin: Yeah. And I’ll just segue a little bit from there. One thing I like to say is to be curious, or allow your team to be curious but careful, to see what’s potentially available.

And the main thing is to always think strategically about how it can enhance what you already do well, and whether there are ways you can streamline what is repetitive. Then, when you proceed, do it with a measured approach, leveraging critical or trusted partners who can help steer the direction, in particular ones that have already done it. And Ryan, you didn’t plug yourself on the fact that your organization is hosting a private AI model, where you’ve been learning more and more about how it works so that you can be a knowledgeable advisor in that space.

Ryan Robinson: Yeah, yeah, well said. Yeah, I totally agree.

There was a recent podcast with Ezra Klein from the New York Times, and I thought he said it very well. They were talking about this question of how many jobs are going to be taken by AI.

And in this podcast they articulated this idea that AI is not going to take most jobs.

Most jobs are going to be taken by people who know how to use AI.

Patrick Morin: Well, that’s a good point. Yeah.

Ryan Robinson: And so, yeah, for us, it’s the same thing. In order for us to help organizations protect against AI, we need to be playing in that sandbox a lot. And so, as you alluded to, we built an AI platform ourselves that helps our team speed up and improve our troubleshooting and customer service. We’re trying to eat our own dog food, as they say, and really jump into the AI pool as much as possible.

And that gives us an ability to consult and help organizations understand both how to use it and to protect against it.

Patrick Morin: Ryan, it was great to chat today. We really appreciate you taking the time to speak with our listeners.

How can people get in touch with you if they have questions or want to learn more about you or Mainstay Technologies?

Ryan Robinson: Yeah, absolutely. I’m very easy to find on LinkedIn; in particular, I spend a decent amount of time there. So please hit me up on LinkedIn. Just search for Ryan Robinson, Mainstay Technologies, and you should find me pretty easily, and then feel free to send me a direct email as well.

My email address is rrobinson@mstech.com and I’m happy to connect there.

Patrick Morin: Yeah, thanks for that. And we’ll be sure to include that contact information in the episode description.

And if anyone wants to reach out to me, I can be reached at pmorin@bnncpa.com and the team at BNN is always monitoring and sharing updates and developments. So do stay tuned for more articles, podcasts and resources from our team.

Thanks all. Goodbye.

Disclaimer of Liability: This publication is intended to provide general information to our clients and friends. It does not constitute accounting, tax, investment, or legal advice; nor is it intended to convey a thorough treatment of the subject matter.