E37 – Exploring AI in Modern Business with Mahmood Rashid
In this episode, Gary discusses the critical aspects of AI governance and compliance with Mahmood Rashid, President of Staferm Solutions. Mahmood shares his extensive experience in IT governance, risk, and cybersecurity, emphasizing the importance of properly managing AI within organizations. The conversation covers the dangers of shadow AI, the necessity of using paid AI models with adequate privacy controls, and the significance of keeping up with AI advancements to avoid obsolescence. Mahmood also addresses the future impact of AI on various industries and provides practical advice for businesses looking to implement AI responsibly.
Discover:
00:00 Introduction and Guest Welcome
00:45 Guest Background and Experience
02:58 The Importance of AI Governance
04:45 Privacy Concerns with AI
08:57 AI Hallucinations and Prompt Engineering
12:47 Future of AI in the Job Market
16:06 Private AI Models and Industry Applications
21:58 Conclusion and Contact Information
Transcript:
[00:00:00] Gary Ruplinger: Hello and welcome everybody to another episode of the Pipelineology podcast. And today I am excited to be joined by Mahmood Rashid. He is the President of Staferm Solutions and he’s been in the IT industry now for about 20 years working in enterprise architecture, information security, risk, privacy, and all that other good stuff.
But, today what we’re talking about is gonna be AI. But before we get into that, Mahmood, can you introduce yourself a little bit? Oh, and by the way, welcome to the show.
[00:00:33] Mahmood Rashid: Thank you, Gary. Thanks for having me on the show. Appreciate the time and the exposure to your audience. So, a little bit about myself.
I’ve been in the industry for over 20 years, mainly in the areas of governance, risk, and compliance on the IT side, as well as enterprise architecture, which is essentially the design of enterprises from both an IT perspective and a business process perspective.
I’ve held board-level positions with ISACA Vancouver and other nonprofit organizations, and I’m a Certified Information Systems Security Professional (CISSP) through ISC2. I’m also a Certified Information Systems Auditor, Certified in Risk and Information Systems Control, and Certified in the Governance of Enterprise IT,
all through ISACA. So that’s a little bit about me. What we generally do is help organizations from a compliance perspective and get them to compliance readiness for certifications such as SOC 2 and ISO 27001, amongst others.
[00:02:04] Gary Ruplinger: Gotcha. Well, when we were talking about ideas for having you on the show, the one that came up and perked my ears was the things not to tell AI.
Right? AI is all the rage right now, with all the different models. It’s Gemini this and ChatGPT that and Meta Llama what? I can’t keep up with them all anymore, because it seems like every week the next great model or the next LLM is here for us. But I think in everybody’s excitement to see what they can do with it,
they give it all their information, and there are some things we probably shouldn’t be telling it. So can you give us some insight? I’d love to know what we shouldn’t be telling AI, and what information we can freely give to these giant trillion-dollar organizations.
[00:02:58] Mahmood Rashid: I think the first important element that individuals and business leaders need to think about is the governance of AI within their organizations. Look at business leaders such as the CEO of Shopify: what they’ve come back and said is that there’s a hiring freeze,
and if there is a need to hire, look at AI first. If AI can’t fill the gap, then they would look at augmenting their staffing. This is something we’ve been seeing in the industry, especially among small and mid-sized organizations: they’re looking at AI, whether it’s ChatGPT or other models, to help augment their staffing needs. Now, that’s all well and good, but the important thing to understand, especially when we’re talking about governance, is the rules and regulations around the use and adoption of AI in organizations. That means having appropriate controls and making sure that the models being used by the organization are paid versions. The free versions generally don’t have the security and privacy controls that organizations need. And even with the paid versions, make sure your settings are configured appropriately so that the data you upload into the AI models is not used for public training. I’ll give you an example. I came across an individual working in the healthcare field who, to make their life easier, had adopted ChatGPT: they uploaded the patient data they had and asked ChatGPT to help them recreate a report. That’s fine in principle, because that’s what ChatGPT and other LLMs are for, but the effect of not having a properly configured account meant that the data they had put up there was actually considered a privacy breach.
Now, the data didn’t necessarily contain any health information that would constitute a serious privacy breach, but it serves as an example of how AI should not be used. You, Gary, may have information about clients, such as personally identifiable information.
So make sure that type of information is protected, especially since across the US, Canada, and the globe there are privacy regulations we need to adhere to. GDPR is a very well-known privacy regulation out of Europe, but there are similar regulations in the US, the CCPA and other regulations across the different states, as well as PIPEDA up here in Canada.
Understanding those and making sure that you’re compliant with those regulations becomes extremely important.
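To make the point about protecting client data concrete, here is a minimal Python sketch (not from the episode; the patterns and placeholder tags are purely illustrative, and real GDPR/CCPA/PIPEDA/HIPAA compliance requires far more than pattern matching) of scrubbing obvious personally identifiable information from text before it is ever sent to a third-party model:

```python
import re

# A few common PII patterns. This is only a demonstration of the idea of
# redacting data before it reaches an external AI service; a production
# system would use a vetted PII-detection library and a data-handling policy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII with placeholder tags before prompting a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient John reachable at john.doe@example.com or 604-555-0123."
print(redact(note))  # → Patient John reachable at [EMAIL] or [PHONE].
```

Only the redacted string would then be included in a prompt, so even a misconfigured account can’t leak the raw identifiers.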
[00:06:52] Gary Ruplinger: Are there AI models out there that are privacy compliant, whether it’s GDPR, like you said, for Europe, or the American ones, or even HIPAA for medical information here in the United States? I know that’s another big one.
[00:07:10] Mahmood Rashid: The different AI models can become compliant, but they have to be set up appropriately. It always starts with understanding what the rules and regulations for the adoption of AI within an organization are. There’s a common term we talk about in industry, and that’s called shadow AI.
Certain organizations don’t allow AI at all; it’s a blanket “no, you cannot use AI as part of your work.” But individuals, to make their lives easier, have had a taste of what AI can do, so they pay for their own personal AI models and basically do a swivel chair: they have their corporate devices,
and their personal devices right next to them, and they use the personal devices to send out the prompts. The information that comes back is then fed into the corporate devices. That’s one of the things we’ve been seeing from a shadow AI perspective.
More to your point, Gary, whether models have compliance with things like GDPR and CCPA comes back down to whether these are public models or private models. Generally speaking, as an organization pays for and adopts these models, it’s able to set up boundaries in terms of what can and can’t be used for training public models.
The other element organizations need to be concerned about is hallucinations: cases where a model produces output that isn’t actually supported by the data that was used to train it. And it may be your own data that you used to train the model. You want to make sure that you’re testing your models so that there are no hallucinations or biases introduced as part of the use of AI within the organization.
[00:09:33] Gary Ruplinger: You know, the hallucination thing is one of those things where you’d think by now they’d have a pretty good handle on it. But I was trying to do some data extraction with Gemini Pro 2.5, which, as we’re recording this, is considered one of the best, if not the very best, models out there.
It’s got a huge, huge context window, so I thought, okay, I’ve got this big ugly HTML file, I’m going to feed it in and have it pull out some LinkedIn information. And it started just fine. It did the first twenty or so just fine. Then I started looking at it and, nope. Around line 25,
it’s like, that doesn’t look like a real name, and that certainly doesn’t look like a real profile URL. It’s got the right structure, but it just made up letters. And sure enough, I asked, did you just start making this up? Yes, I did. Sorry about that. It apologized to me when I called it out.
So it’s wild to me that, still to this very day, AI systems will hallucinate and make things up if the job seems like too much tedious work. I finally got it: instead of asking it to do the work, I said, write me a script, some software that will do it in Excel. That solved the problem.
But I thought, wow, I figured you could give it the problem and it could handle it. It was wild to me that it was still making things up.
[00:11:08] Mahmood Rashid: You raised an important point. For adoption of AI within any organization, small or large, for consistency, what you want to look at is prompt engineering.
With prompt engineering, you’re creating a database of prompts in which certain elements become variables. As you utilize those variables, you’re able to improve the output you get, while also making sure the prompts are designed in such a way as to minimize hallucinations and biases.
But, you know, if you remember, at the beginning of our discussion one of the things I said was that we’re not at the point where AI can totally replace humans yet. And it’s this word “yet,” because AI is advanced, but still not mature enough to detect its own hallucinations and biases.
That’s one of the reasons why a human looking at the output from an AI model becomes extremely important. It may get you there 80%, but for the remaining 20% you still need a human to provide their input, their governance, and their view on whether that data is accurate.
To your point, Gary.
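One way to picture the “database of prompts with variables” Mahmood describes is a small, centrally maintained template library. The sketch below is illustrative only (the task names and wording are invented, not from the episode); the point is that vetted prompts with named variables give consistent output and one place to tighten anti-hallucination wording:

```python
from string import Template

# A tiny "prompt database": vetted templates keyed by task name, with
# named variables the caller fills in. Refining the wording here (e.g.
# "say 'not stated' rather than guessing") propagates to every use.
PROMPT_LIBRARY = {
    "summarize_report": Template(
        "Summarize the following $doc_type in $max_words words or fewer. "
        "Use only facts stated in the text; if something is missing, say "
        "'not stated' rather than guessing.\n\n$body"
    ),
    "extract_fields": Template(
        "From the text below, extract $fields as JSON. Leave a field null "
        "if it does not appear verbatim in the text.\n\n$body"
    ),
}

def build_prompt(task: str, **variables: str) -> str:
    """Render a vetted prompt; a missing variable raises KeyError early."""
    return PROMPT_LIBRARY[task].substitute(**variables)

prompt = build_prompt(
    "summarize_report",
    doc_type="incident report",
    max_words="150",
    body="Server outage at 02:00 UTC...",
)
print(prompt)
```

Because `substitute` fails fast on a missing variable, a malformed prompt never reaches the model, which is the kind of control a governance program can audit.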
[00:12:47] Gary Ruplinger: So I’d be curious where you think AI is headed. I know the big fear for everybody is, it’s going to replace my job, and then you have CEOs like the Shopify one saying, we’re not hiring anybody, because we think the jobs can be done by AI.
So what jobs do you see going away? I know some of this forecasting is hard to do, but near term, say the next couple of years, what jobs are going away and which ones are going to stick around?
[00:13:23] Mahmood Rashid: I think this is a wake-up call for everyone, including myself. As a cybersecurity professional, if I’m not keeping up with what’s going on in industry today, I’ll become obsolete.
So I keep up with the advances in AI and understand where things matter for me: governance, risk, compliance, and how AI affects those particular elements. I keep abreast of all the changes that are coming along and do my own research and development with AI tools and models that I can use in my own business and my own profession. That makes me a bit more protected, if I may call it that, from being replaced as a result of AI.
The worst thing anyone can do is bury their head in the sand and say that AI is not going to affect them. It’s going to affect everyone. But those who adapt to the changes AI is bringing to industry are the ones who are going to survive; the ones who don’t do anything about it are the ones who are going to be affected. And it doesn’t really matter what industry you’re in; this is going to affect each and every industry out there, apart from, say, the trades. For example, if you’re a carpenter, AI is not going to build a cabinet for you. You still need someone to design, cut, and assemble the pieces. Same thing when it comes to plumbing and electrical work.
You have some level of security by being in those particular types of industries. But even there, when you start looking at the adoption of AI, design work becomes that much easier using things like computer-aided design and computer-aided manufacturing, and we are seeing robotics taking over.
As AI is introduced into those robotics, some of those jobs will become redundant as well. So again, regardless of the industry you’re in, understand where AI, or what AI models, are going to be brought in to help augment the low-end work you may be doing, and stay ahead of the game.
[00:16:06] Gary Ruplinger: Are you seeing any benefit from organizations running their own LLMs, where they take a model like Meta Llama and buy the hardware?
This is one of those weird rabbit holes I went down on YouTube, which is where this is coming from, but I find it fascinating: these guys buying clusters of Mac Studios, hooking up two terabytes of RAM so they can run these models. For organizations with data security concerns, where they want to make sure the information stays local, are you seeing that running their own models locally is solving that whole shadow AI thing, as you called it, where people pull out their personal device because their organization doesn’t allow it? Is that a viable solution for some of these organizations that have a lot of information they need to keep private?
[00:17:10] Mahmood Rashid: I’m definitely seeing that in a few industries. In financial services, accountants are now utilizing AI to help with their accounting needs for their clients. Data gets fed into an LLM, and the accountant gets information back about areas they need to look at in the reporting. Same thing on the legal side.
Lawyers upload reams and reams of data for the cases relevant to them, and then use the AI to help build a case, or legal proceedings, based on the information that’s been uploaded. So having those models in a private LLM, most likely on-site or in a cloud environment that is, again, protected,
so that the data can’t leak outside that particular environment, becomes extremely important. You’re talking about a fort with a moat. The same goes for cybersecurity, when we’re talking about vulnerability assessments, penetration testing, looking for security vulnerabilities within different areas of an organization.
We want to make sure that the data we’re collecting for that particular client, or for our own organization, remains private and confidential. Now, for some organizations it doesn’t make sense to have your own LLM that you train on your own. But if you do have the data scientists on hand,
and you can commit to maintaining those LLMs, it’s a viable place to go. It becomes extremely important for organizations, especially in the three industries I talked about. The other industry I haven’t mentioned, where it becomes extremely important, is the medical side.
I think Microsoft announced Dx, an AI model for the medical field with which doctors are now able to diagnose rare diseases and come up with treatment plans. Again, individuals may not want their data, if it has been used in developing those particular LLMs, to go out to insurance companies, for example, or be leaked out in a general or public LLM.
So those are the main industries where I’m seeing that individuals or organizations should be investing in their own LLMs, while making sure they have the right governance in place to keep tabs and controls over AI within the organization.
Just to add to that initial comment I made about shadow AI: organizations actually need to be careful about not allowing AI at all. I think it’s a good idea to go through a full understanding of which LLMs are pertinent to the organization and its industry, and to put the right controls in place for individuals, for employees, to use those LLMs within their workflows.
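That “controls instead of a blanket ban” idea can be sketched as a simple model allow-list, where each approved model is paired with the data classifications it may receive. Everything here, the model names, the classification labels, the policy fields, is hypothetical and for illustration only; a real program would tie this to identity, logging, and network controls:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelPolicy:
    name: str
    allowed_data: frozenset  # data classifications this model may receive

# Hypothetical policy table: a private, self-hosted model may see internal
# and confidential data; a public service is limited to public data only.
POLICIES = {
    "internal-llama": ModelPolicy(
        "internal-llama", frozenset({"public", "internal", "confidential"})
    ),
    "public-chatgpt": ModelPolicy("public-chatgpt", frozenset({"public"})),
}

def may_use(model: str, data_class: str) -> bool:
    """True only if the named model is approved for this data classification."""
    policy = POLICIES.get(model)
    return policy is not None and data_class in policy.allowed_data

print(may_use("public-chatgpt", "confidential"))  # → False
```

An explicit table like this gives employees a sanctioned path, which is exactly what removes the incentive for the swivel-chair shadow AI Mahmood describes.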
[00:21:20] Gary Ruplinger: I think that’s a good insight: whether you’re accepting of it or not, somebody within your organization is already using it. So put the right stuff in place, because it’s coming for you whether you want it or not. Unless you’re a plumber; then you’ve got a few years. Which, honestly, I would love: my wife and I tried to put in a new sink, or a faucet, a few weeks ago.
Boy, it would’ve been nice to have an AI plumber come in and just turn the right things, because we did not have a good time with it. But anyway, if somebody’s looking for some help with this, because they want to make sure they’re not going to get huge fines for putting private information out
on public infrastructure, how should they get in touch with you? What’s the best way for them to talk to you?
[00:22:15] Mahmood Rashid: The best way to get ahold of us is through LinkedIn. Look me up on LinkedIn and book some time on my calendar, or on my team’s calendar. We’d be happy to have a discussion about where you are on your AI journey,
and what you need to do to ensure that you’re not in the headline news over a data leak caused by the inappropriate use of AI within your organization.
[00:22:47] Gary Ruplinger: Gotcha. Well, it’s been great chatting with you today. I really appreciate all the insights, and just the things you don’t really think about when it comes to using this cool new technology.
It was kind of a “pump the brakes here for a second” moment: think about these things, and let’s make sure we’re being compliant with the existing laws and privacy regulations. So I appreciate the insights today. If you’re interested, look Mahmood up on LinkedIn, or I assume they can get in touch with you at staferm.com.
[00:23:20] Mahmood Rashid: That’s correct. Yes. And that’s staferm.com or info@staferm.com.
[00:23:24] Gary Ruplinger: And that’s S-T-A-F-E-R-M.com. So info@staferm.com. Mahmood, thanks so much for joining today, and have a great rest of your day.
[00:23:33] Mahmood Rashid: Thank you, Gary. Appreciate your time, and thank you for having me on your podcast, sir.
