December 11, 2024

Balancing Security and Privacy in Insider Risk Management

In this episode

Join Manish Mehta as he sits down with security strategist and risk advisor, Munish Walther-Puri to explore the evolving landscape of insider risk. This episode dives deep into the complexities of hybrid work environments, the blurred boundaries between personal and professional devices, and the importance of balancing security with employee privacy. Tune in for actionable insights and cutting-edge strategies for managing risks in today’s interconnected ecosystems.

Resources mentioned in this episode:

Learn more about Ontic’s Incidents, Investigations, and Case Management.

View the transcript

Ch 1: Introduction

0:00

Manish Mehta:

Hello, and welcome to the Ontic Connected Intelligence Podcast. I’m Manish Mehta, Ontic’s Chief Product Officer. Join us as we delve into valuable insights and practical advice that will empower you to navigate the complexities of modern corporate security and risk. We’re here to share knowledge from experienced leaders and innovators in the field. All right. Get settled, and let’s dive in. Munish Walther-Puri is a seasoned security strategist with expertise in cybersecurity, insider risk, and geopolitics, having worked on critical infrastructure, supply chain, terrorism, and fraud. He has served as the Director of Cyber Risk for New York City and advises on cyber and supply chain challenges as an adjunct fellow at the Institute for Security and Technology. Munish also teaches graduate courses on critical infrastructure at NYU and is actively involved in organizations like the Council on Foreign Relations and the Intelligence and National Security Alliance. Please welcome Munish Walther-Puri to our Connected Intelligence podcast. Munish, welcome.

Munish Walther-Puri:

Thank you, Manish. This is my first time being interviewed by another Manish.

Manish Mehta:

You know, it’s a bit odd. I’ve been doing this for a long time and I don’t think I’ve ever met another Manish on a podcast. I do have to ask. I love the pronunciation of your name. I’m just not loving the spelling. How did the spelling come about?

Munish Walther-Puri:

That’s crazy. It should be spelled with an A, the way it’s pronounced. It should be spelled that way. There’s a longer story, but the short version is that I think the origin of our names might be a little bit different. I’m not 100% sure, but the origin of mine is Muni, as in wise man. And there’s a funny, longer story, which I will share another time, about my relationship with that name. But I asked my parents at one point, why did you spell it with a U? Because nobody’s getting that right, ever. And when we had kids, I was so adamant that the phonetic pronunciation of their names matches the spelling. But as long as someone’s within the ballpark, I will turn my head.

Manish Mehta:

I love it. And look, from my perspective, every superhero needs an antihero.

Munish Walther-Puri:

So there you go. There you go. There you go. Cool.

Manish Mehta:

Well, great. Great to be with you. And thank you for making the time to join us today. Can you tell us a little bit about yourself and your career?

Munish Walther-Puri:

Yeah, I would say that I have had a weird and wandering and winding career. For the people watching or listening, I think the most important thing for me to tell you about my career is that I’ve been unemployed a bunch. That sounds weird to say, because usually these questions are a reflection of accomplishments. And for me, the time when I’ve been looking for a job or been partially employed has been just as instructive, because it’s given me time and space to think not just about what I’m doing, but also about the issues I had just been working on. There were other stressors, like looking for a job. It’s gotten me to think a lot about organizational culture. What kind of place do I want to go to next? And probably most importantly, it’s given me the opportunity to connect, reconnect, and reinvest in my community. And it’s because of my community that I’m then able to find the next thing and find my way through. So that, I would say, is probably the most important thing that I would want to share. The other is that I don’t often know where I’m going next, but I do know what’s important to me about what I’m going to be doing, if that makes sense. It does. And over the course of my career, I’ve gotten more refined about how I do what I do, and for whom and with whom. And the what has mattered less, which has been interesting. People ask, what do you want to do? My answer is, it doesn’t matter. There’s really a wide variety of things. I’ll tell you what does matter: the how and the who and the context.

Ch 2: Boundaries in the world of hybrid work

4:30

Manish Mehta:

I love that. And I think, to your earlier point, the power of having such an important, deep network pays dividends in the long term. So I admire that. Thank you. Let’s jump into it. You know, our audience is very, very interested in insider risk and insider threats. So let’s jump into our first topic and our first question. Think about the era we’re in today: hybrid work, bring your own devices. Where’s the boundary? Where’s the boundary between personal and professional digital activity, especially when it comes to insider threat detection? Where is that boundary today?

Munish Walther-Puri:

I like that framing, because the bring-your-own-device moment (and movement for a little bit, but mostly moment) represented a shift of accountability, of agency, of responsibility in a way that we had not seen before. So I’ll take it from a few different angles: the technical, the psychological, the organizational. First, the technical, which is the easiest to think about. Before, you had some property, most likely technology, that was given to you by the organization. It belonged to them, it was theirs; that was very clear. Now you have something that you possess, and you’re going to do office work on it. That is a hybrid notion. Before we had hybrid work between home and office, that was the first real hybrid, and the blurring of other boundaries we had. So there’s that technical piece. That also brings up questions around agency and responsibility. That individual paid for or purchased that device and is responsible mostly for the caretaking of it, which involves the digital security of it and obviously the physical security of it as well. That used to be split. Now that responsibility is really on the individual. But the organization is going to experience most of the impact if there’s a compromise in some way. That compromises the organization, obviously, and the individual as well, so there’s an interesting shared piece. That’s the technical piece, and then you can talk about how you monitor that. And there’s a psychological part of it, where the individual says, I bought this, and I don’t want an organization watching me for everything. There’s very much a sense of me and the organization as separate. It’s not just that you join as an employee and, here you go, welcome; it’s what are you bringing? Literally, and also, as part of your skill set, what are you bringing to the organization? What sorts of assets and liabilities are you bringing? So that was a precursor in many ways to what we then dealt with around the pandemic, in people’s relationships to the organizations they were part of. And that brings me to the last angle, the organizational element of it. Again, devices used to be seen as assets, fully within the organization’s purview. Now devices are seen as instruments of the individual, something that they possess. So an organization at best can influence, at best influence, but rarely can dictate, direct, control, or deny those. So back to your question about the boundary. I think the best way to answer that question is to reframe it: it’s no longer about a boundary. People like to say there’s no boundary anymore. Okay, great. What happens when there’s not a boundary? It’s a shift to an ecosystem, an ecosystem where there are relationships, there are dynamics, it’s shifting. When we talk about an ecosystem, we rarely talk about a boundary or a perimeter. We tend to think more about nature, seasons. It depends on what’s going on around it. It’s changing over different periods of time. There are ways to affect that ecosystem. You can change incentives in it. But you’re talking about system dynamics. And even though an organization comprises individuals, system dynamics is a really good way to think about it. So to answer your question more clearly, what do you do? How do you secure an ecosystem? I don’t think you do. You manage risk in an ecosystem.

Ch 3: Ethical considerations for balancing insider risk with employee privacy

9:22

Manish Mehta:

Well, that brings us, and I love those thoughts, to an interesting discussion around balance. How do organizations balance this? Because there is a critical need for insider risk and insider threat monitoring. And then, of course, the other side of the pendulum is employee privacy rights and all of the ethical considerations that come into focus. How do organizations balance that? The entire narrative you just gave made sense, but where’s the balance?

Munish Walther-Puri:

I think the first is acknowledging that there is a balance, that it is no longer a world where insider threat security is going to be paid attention or given resources by the simple fact that it is a threat. That’s the first thing alone: it’s balancing other things. Balancing ethical, privacy, cultural; there are other elements in competition, so to speak. The other thing that I think is very important for anyone in an organization, and organizations writ large, to think about is that there are some things that are identical, and there are some things that vary widely. So let’s start with what’s identical. The threat itself is ubiquitous and insidious. That is uniform. The language around insider varies, though. You even said insider threat or risk. And I’ve heard, there’s an evolution to talk about trust and safety now, to take out the words threat and security because of how they might be perceived by the organization. So the language varies even among people in the insider threat community. To that end, the Intelligence and National Security Alliance, which I mentioned, has an insider threat subcommittee. A few years ago, they published a paper on the categorization of insider threats. They looked at, I think, three dozen different definitions and taxonomies and really distilled down the common terms. So that’s a very helpful piece of research on the categorization of insider threats. So the language varies widely. And then there are the manifestations of it, which CISA, the Cybersecurity and Infrastructure Security Agency, talks about in their guide on insider threat. The conduct and the character of the threat are going to show up differently depending on the organization, the industry, the assets, the culture. You have this thing that everybody can agree is there, and then how it shows up, the behaviors, and therefore the programs you need to build vary widely. I think accepting that is very important, and knowing that there isn’t necessarily a blueprint you can take from others, but there is a framework you can follow to help build it. That’s one. The second thing on the balance is that, almost more than any other type of threat that I have worked on, if not the most, it is one of the most codependent on digital and physical. There was a team I worked on, and I’ve worked on a number of different areas: geopolitics, cyber investigations, fraud, terrorism. Each of those has a physical and a digital component. But we were working in a large organization where there were whole teams dedicated to just one part of that. Our responsibility was to find the connections and overlaps between those. Very simply, when organizations create silos in how they deal with these threats, they create opportunities, seams, for the adversaries to exploit. And they will. An adversary very rarely thinks, what’s my physical mode of entry, what’s my digital mode of entry? They just think, how do I get in? How do I accomplish my objective? In many ways, I think of them as my nemesis. Like I told you about my career and how I think about it: it doesn’t matter to me as much what I do; it matters more how I do it. It’s the same for them. They’re not deterred by the what, I can’t go in that way so I’ll go in this way. The why and the how is what motivates.

So the balance is that those two are not just closely connected, they’re intertwined. They’re connected at an atomic level. The DNA, the double helix, the chromosome, that’s what I mean. You have to think about it that way. Otherwise, and when I say have to, okay, or what else? You’ll be surprised. You’ll be surprised all the time.

Ch 4: Cyber-physical convergence

14:18

Manish Mehta:

So I have a controversial question for you in that regard. I am incredibly passionate about slaying the dragon that is silos. Silos exist everywhere. But it is fascinating: in my time in this industry, I’ve met with many chief security officers at the largest private companies in the world, and they all talk about cyber-to-physical convergence, and they talk about their sophisticated fusion centers. But when you walk inside and you talk to a cybersecurity analyst or a physical security analyst, there’s not a lot of crosstalk or collaboration. They’re siloed, even in the same room. Have you seen true cyber-to-physical convergence, or is it mythology?

Munish Walther-Puri:

Not a controversial question. I think the answer to your question is in what you said: cybersecurity analyst, physical security analyst. That’s where the silos are. It should be an analyst. Maybe a security analyst, but an analyst. And I have seen them. I have seen fusion centers that really, truly fuse, and it happens three ways, or there are three factors that I’ve seen at work. One is that true collaboration is messy. If we talk about the cliché of silos and breaking silos, there’s the other cliché of stepping on each other’s toes. There’s a lot of toe-stepping in true fusion centers. That’s number one. Number two, the people representing, call it their home organizations or groups, are empowered to do so, and the information sharing happens only through people. So here’s their setup; let me explain it. There’s a combination of things. There are people who represent home organizations, and then there are people who represent the actual fusion center. The people who represent the home organizations have access to their systems and their data, and nobody else around them does. So if any of the other people want to access that information, they come through the human, and vice versa. It’s that human’s responsibility. They know, I’m the only person that has access to those systems. So that’s one part of it. The second part is that there’s a group of people who are part of the fusion organization itself, and their job is to mix and to synthesize. So you have two functions, but it’s not physical security and cybersecurity. It’s representing a home organization and fusing. And the third is organizations that have built fusion centers, and I’ve seen this in both the private sector and the public sector, that fundamentally understand the principles of social network analysis: how people organize, how information, power, everything moves through social networks, hubs, spokes, weak links, strong links, even creating those. When they do that, they design supports or interventions to make sure that people are socializing together, that people connect not just along the obvious lines, or they have them move positions in the actual location. All of these things are actively built in. Then and only then have I seen the fusion of intelligence produce new and different and better outcomes.
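
To make the access pattern Munish describes more concrete, here is a minimal sketch of a fusion center where each home organization’s data is reachable only through its designated representative, while a small fusion team synthesizes across them. This is an illustration under assumptions, not a description of any specific fusion center or product; the class names, the question strings, and the sample records are all hypothetical.

```python
# A minimal sketch, under the assumptions stated above: home-org data is only
# reachable through its human representative, and a fusion analyst synthesizes
# across representatives rather than querying systems directly.

class HomeOrgRepresentative:
    """The only path to a home organization's systems and data."""

    def __init__(self, org: str, records: dict[str, str]):
        self.org = org
        self._records = records  # nobody else holds a handle to this data

    def answer(self, question: str) -> str:
        # The human decides what to share and could log the request here.
        return self._records.get(question, f"{self.org}: nothing responsive")


class FusionAnalyst:
    """Mixes and synthesizes answers from every home-org representative."""

    def __init__(self, reps: list[HomeOrgRepresentative]):
        self.reps = reps

    def fuse(self, question: str) -> list[str]:
        return [rep.answer(question) for rep in self.reps]


if __name__ == "__main__":
    cyber = HomeOrgRepresentative("cyber", {"badge-345 anomaly?": "VPN login from an unmanaged device"})
    physical = HomeOrgRepresentative("physical", {"badge-345 anomaly?": "after-hours badge-in at the data center"})
    print(FusionAnalyst([cyber, physical]).fuse("badge-345 anomaly?"))
```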

Ch 5: Malicious insider actions versus accidental human error

18:31

Manish Mehta:

Let’s shift gears for a moment. I was in the technology industry for almost two decades. At one point, I had almost a thousand people working for me in various technical roles, so they had access to the network and to large amounts of IP and data. There were certainly cases where employees resigned or were under investigation for malicious activity. And then there were employees who inadvertently or unknowingly, or let’s just use the broad term non-malicious, thought it was their IP or their work product, or made a mistake along the way. How do you differentiate the two, and how do you think about that? And how can we help security teams think through that?

 

Munish Walther-Puri:

Yeah, I’m glad you asked this and used the framing of malicious versus accidental. I’ve thought about this a lot, both in roles within enterprises, in roles supporting large organizations from the outside, and then outside of both of those constructs, as an individual, as a consultant, as a contractor, or as someone looking for a job, all of those. And what I want to share with you comes from the synthesis of those experiences. The first thing I would say when I think about insiders is to go back to what I understand, and I’m not claiming this in any academic sense, as the origin of the term insider. It comes from financial services. Insiders are people who have access to a larger, deeper amount of information. And it really emerged, I think, in the American lexicon after the Great Depression. There’s the Securities Act of 1933, which basically oversaw interstate commerce in securities; a security is a tradable financial asset. So it was the first major federal securities law. And it was built on this basis of disclosure, that complete and accurate information before you purchase something is going to keep the markets and investors safe and protect their integrity. So that history of insider, to me, is really important. What is an insider? I think it’s rule 44, I don’t remember, but there have been a number of insider cases. The idea that an insider has access to more information and can use that for gain is a notion that is core to its definition. So even though in finance they haven’t been calling it insider threat or insider risk, they’ve been dealing with insider dynamics for a very long time, almost 100 years. When I think about the elements of what drives the insider and what’s important there, what’s very powerful and alluring and difficult is that it has a core psychological component. So I think about stressors, predispositions, permissions. What are the stressors that are happening? What are the predispositions of that individual or in their environment? What are the permissions that they are granted or that are created? What is the permission structure around them? The way I think about it is, early on, I would say I fell along the same lines of malicious, not malicious, negligent. And then I was thinking about that. The challenge I saw, and I’ll be a little more forward, the problem I saw with that, is that you can rarely know what the intent or motivation of the actor is, period, until after it’s exposed. So how does that help you frame it when you’re facing it? I would posit it doesn’t. Similarly, I felt like there was this financial, non-financial break: financial gain, non-financial gain. Those two are closely intertwined. So I started to push my thinking out and say, all right, is the intentionality about how careful they are? They’re careful or careless. I also avoided using the word negligence. I found that in large enterprises and organizations, understandably, it has legal implications. So if I use the word negligence, even on paper, people are like, whoa, whoa, whoa. I started using careful and careless, but that was much more of a judgment call. Like, what do you mean, a careful insider? Very deliberate? It was too analytical, a little too clever. I also started thinking about whether they’re oriented more around themselves or externally influenced, you know, coercion, foreign nation-states, and I started working more on that.

Eventually, here’s the two-by-two that I came to; I’ll save you all the stuff before. Here’s the two-by-two that I came to in my head. This only works in my head; if it works for somebody else, great. This is the way I think about it. Across the top, you have personal state and alignment. And then the other two are individual and situational: individual motivation, situational motivation. So let’s talk about individual motivation and the personal state of the insider. There, it’s very simple to me. Are they trying to get more, or are they trying to get even? Get more: financial gain, professional gain. Get even: personal satisfaction in a number of different ways, revenge. So that’s individual and personal state. Then you have individual motivation and alignment. This is about alignment of beliefs, values, ideology, loyalty, organizational fit. It’s still individual motivation, but it’s that external alignment, the alignment with the organization, in a sense. And values, ideology; ideology is more systemic. Loyalty is broader in some ways. It could be to a nation-state. It could be loyalty for or against. There are a number of different elements to that. I’m not trying to compress those down too much, but it is roughly about beliefs. Then you have situational, and I find that a lot of insider threat taxonomies ignore this, or they minimize it. So let’s talk about situational motivation and personal state. There you have things like an urgent or proximate need, or something that is creating pressures, stressors. That’s where I think about stressors: individual health, mental health, physical health, financial health. And then this is the one that gets lost a lot: those same things for a loved one. I’ve been part of cases where the insider was motivated by a loved one’s health and the financial and psychological stressors that created. Mental health is a big part of this; obviously, psychological factors run through this, but there’s an element where personal state is really important. And then the last is situational motivation and alignment. This is where you have environmental permission or pressure: workplace environment, organizational culture, industry practices, regulatory environment. These all create those conditions. And this isn’t meant to be a two-by-two where you’re in this category, not in that one. It’s more like, where does it move? Where does it go? Which ones does it include? This is how I make sure that I’m including everything. And this has helped me a lot, because sometimes when we’re focused on one, I go, well, what about this? And I just have to tilt the scenario or tilt the case, as the situation may be, and we can open something up and examine it. So I probably should have created a visual for that. And I will. But I hope that makes sense in terms of how I think about it.
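
For readers who want the promised visual in the meantime, here is a minimal sketch that lays out the two-by-two as a checklist to walk during a case review. It is only an illustration of the framework as described in this conversation; the quadrant labels, prompt wording, and the sample observation are assumptions added for the example, not an established taxonomy or tooling.

```python
# A minimal sketch of the two-by-two described above, treated as a checklist
# rather than a scoring model. Quadrant labels and prompts paraphrase the
# conversation; the sample observation is a hypothetical example.

QUADRANTS = {
    ("individual", "personal state"): [
        "get more: financial or professional gain",
        "get even: revenge, personal satisfaction",
    ],
    ("individual", "alignment"): [
        "beliefs, values, ideology",
        "loyalty for or against, including nation-state",
        "organizational fit",
    ],
    ("situational", "personal state"): [
        "urgent or proximate need",
        "health, mental health, or financial stressors",
        "the same stressors affecting a loved one",
    ],
    ("situational", "alignment"): [
        "environmental permission or pressure",
        "workplace culture, industry practices, regulatory environment",
    ],
}


def review_case(observations: dict[tuple[str, str], list[str]]) -> None:
    """Walk every quadrant so none is skipped, echoing the point that
    situational factors are often minimized in insider taxonomies."""
    for quadrant, prompts in QUADRANTS.items():
        print(f"{quadrant[0]} motivation x {quadrant[1]}:")
        for prompt in prompts:
            print(f"  consider: {prompt}")
        for note in observations.get(quadrant, []):
            print(f"  observed: {note}")


if __name__ == "__main__":
    review_case({("situational", "personal state"): ["loved one's medical bills creating financial pressure"]})
```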

Ch 6: AI’s impact on insider risk

26:56

Manish Mehta:

No, I think that’s a helpful framework for our listeners, so thank you for sharing it and narrating it. I do think it would be great to create a visual and maybe even publish an article on that. We have time for one more question related to insider risk, and it would be irresponsible of me not to have a question related to AI. So how could AI have an impact on insiders and insider risk, now or in the future, from two perspectives? One, companies are racing to continue to invest in AI. And two, if you think about a technical insider, they could be deploying or using or taking advantage of AI. How should security teams be thinking about AI?

Munish Walther-Puri:

I think technical insider is the right frame. Whenever people are talking or thinking about artificial intelligence and they want to substitute AI for technology, or technology for AI, I encourage them to instead substitute the word helper, because that’s how a lot of people are thinking about AI: a helper. So a technical helper, but a technical helper who is an insider. If someone’s thinking about AI, replace it with technical helper and see how that changes your thinking about the sentence. So a couple of thoughts. The first is the race to adopt. I have not seen a technology, including mobile or the internet or anything else, that so many people across the enterprise, including CEOs and leaders of organizations, are adopting. Not excited to adopt: adopting. The time from its pronouncement and promulgation to its propagation, to say it’s fast is an understatement. And people are genuinely excited to use it. We’re largely still trying to figure out where the revenue comes from, but the value of it is not really a mystery. People haven’t been excited about a technology like this in a very, very long time, and I think that’s the foremost thing propelling the technology forward. There are other things, fear, anxiety, other things that come with it, but excitement is really foremost. So you’re competing with excitement. Think about that. You’re competing with excitement. What other human emotions can compete with excitement? There are things that require a really high level of cortisol, which is hard to sustain, like fear. So if you are in security or defense, you can counsel caution, but that’s very hard to pit against excitement. I don’t know if some of your listeners have young kids who get excited about something. They get a new game or toy that has some dangerous element to it. Now, how do you get them to be cautious? Yes, you can put a helmet on them or anything else, but they’re driven by that excitement. The second piece is the technical. I’m sure many of your listeners are familiar with MITRE ATT&CK. It’s a framework for thinking about different tactics, techniques, and procedures. Within MITRE, there’s a group called the Center for Threat-Informed Defense. They’ve looked at the taxonomy of MITRE ATT&CK techniques and really distilled it down and focused on insider threat. This, along with some of the other resources I’ve mentioned, like that CISA guide to insider threat mitigation, I’ll share with you, because both of those are relevant here. From a technical standpoint, on the cybersecurity side, there’s very much a common set of things that the average insider is going to use: leveraging valid accounts for initial access, for persistence, for evading defenses, and then the data they’ve gathered, using a physical medium for exfiltration to the outside. Almost all of those have a digital component, and AI would accelerate and automate some parts of those; it already has. This group has studied how the techniques have evolved, and we can see that leveling up, and I think we’re going to see an acceleration of that. So what does that mean? It means a couple of things. One, user-level analysis is necessary but insufficient. You’ll now need system-level analysis to understand the impact of user actions on an overall system.

That’s the only way you’re going to get a broad picture of the security posture of an organization, really. And it’s the combination of those two; it’s not do one instead of the other, it’s really the combination. AI is absolutely going to be an essential part of that, because we’re talking about scale. The other piece that has really broadened my thinking: CISA has this guide on insider threat mitigation, and they call out specifically two types of insider threats, collusive insider threats and third-party insider threats. With the collusive insider threat, they’re thinking about an insider recruited by an outside organization. But I would push that further: the collusion can be where there’s real collaboration and creation between a human and artificial intelligence. There’s a potential threat there. We’ve seen it from a harm reduction standpoint on the adversarial side, but as for the two conspiring together, there are some organizations that are thinking about this, but few. The other is around third parties. The guide does a very good job of defining all the different types of insider. It’s not just someone in an organization; it’s someone who’s been granted permissions, or has knowledge of the business, or has knowledge of the strategy. There are a lot of different elements to being an insider. And if you look at artificial intelligence against that list, you go check, check, check, check, check. Yes, it has a lot of them. The last thing I would say, something that I’ve been seeing for a while but that I think is going to be accelerated by the adoption of artificial intelligence, is the combination, the synthesis, of data loss prevention, DLP, programs and insider risk programs. For a long time, technical CIOs, CTOs, and chief security officers have tried to figure out, where’s our data? Where’s our data? And AI is going to turbo-boost that, in a good and a bad way. Absolutely. So I know there were a lot of elements there, but I feel like those are some of the key components.
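
To illustrate the point that user-level analysis is necessary but insufficient, here is a minimal sketch that rolls hypothetical per-user signals up into a system-level view. The signal names, event fields, and sample data are assumptions for illustration only; they are not drawn from MITRE ATT&CK, the CISA guide, or any particular product.

```python
# A minimal sketch, under the assumptions stated above: per-user signals are
# aggregated into a system-level view so a reviewer sees which systems are
# accumulating risky activity, not just which users triggered alerts.

from collections import defaultdict

USER_LEVEL_SIGNALS = {"off_hours_valid_login", "bulk_data_staging", "removable_media_write"}


def system_level_view(events: list[dict]) -> dict:
    """Roll hypothetical per-user signals up to the systems they touched."""
    per_user = defaultdict(set)
    per_system = defaultdict(set)
    for event in events:
        if event["signal"] in USER_LEVEL_SIGNALS:
            per_user[event["user"]].add(event["signal"])
            per_system[event["system"]].add((event["user"], event["signal"]))
    return {
        "users_with_multiple_signals": {u: sorted(s) for u, s in per_user.items() if len(s) > 1},
        "flagged_activity_by_system": {name: sorted(pairs) for name, pairs in per_system.items()},
    }


if __name__ == "__main__":
    sample = [
        {"user": "alice", "system": "code-repo", "signal": "off_hours_valid_login"},
        {"user": "alice", "system": "code-repo", "signal": "bulk_data_staging"},
        {"user": "alice", "system": "laptop-42", "signal": "removable_media_write"},
    ]
    print(system_level_view(sample))
```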

Ch 7: What does Connected Intelligence mean to you?

33:40

Manish Mehta:

Terrific. Munish, I really enjoyed our time today. It was a lot of fun. Please do share those assets, or links to those assets. For those who have been listening, you know that Munish and I share at least the same pronunciation of our names. For those of you watching the video, you know that we’re doppelgängers. We look exactly alike.

Munish Walther-Puri:

I mean, people get us confused all the time.

Manish Mehta:

It’s uncanny. One last question for you, Munish, and if you can give our audience a soundbite, what does Connected Intelligence mean to you?

Munish Walther-Puri:

Connected Intelligence is insight that comes from collaboration and creates resilience.

Manish Mehta:

Terrific. Our guest today was Munish Walther-Puri. Thank you for joining us on the Ontic Connected Intelligence Podcast.

Munish Walther-Puri:

Thank you. This was a pleasure.

What you’ll learn

How hybrid work and “bring-your-own-device” practices are reshaping insider risk management

Practical frameworks to balance insider threat monitoring with employee privacy and ethical considerations

How emerging technologies like AI are transforming insider risk detection and mitigation strategies

More about our guest

Munish Walther-Puri is a seasoned security strategist with expertise in cybersecurity, insider risk, and geopolitics, having worked on critical infrastructure, supply chain, terrorism, and fraud. He has served as the Director of Cyber Risk for New York City and advises on cyber and supply chain challenges as an adjunct fellow at the Institute for Security and Technology. Munish also teaches graduate courses on critical infrastructure at NYU and is actively involved in organizations like the Council on Foreign Relations and the Intelligence and National Security Alliance.

Connect with Munish