Our MVP, Nikki Chapple, joined The Inform Team’s webinar on data and trust in Microsoft 365 Copilot as the guest speaker.
Before you roll out prompts and pilots, your Data Protection Officer is already asking the big questions. Where’s the data going? Who can see what? What does Copilot actually do with sensitive info? This session is your starting point for answers. Straight-talking. No scare tactics.
Full Transcript
Introduction (00:00:00 – 00:01:25)
Mark Thompson:
Hello, everybody. I’m Mark Thompson from The Inform Team, and I’d like to welcome you to our webinar, Data and Trust in Microsoft 365 Copilot.
As AI becomes more of a core part of our digital workplace, Microsoft Copilot is transforming how we work. It unlocks productivity, creativity, and efficiency across Microsoft 365. But with this innovation come important questions: Is our data safe? Are we compliant? What happens if we don’t adopt AI responsibly?
In today’s session, we’re going to talk about trust, governance, and readiness. We’ll explore how Copilot respects your existing Microsoft 365 permissions, how it’s governed by enterprise-grade compliance tools, and how you can monitor, manage, and safely scale it across your organization.
But first, let’s meet Nikki.
Nikki Chapple is a Principal Cloud Architect at CloudWay and a dual Microsoft Most Valuable Professional in Microsoft 365 and Security. Nikki has over 35 years of experience bridging IT and business, specializing in helping organizations embed Microsoft 365 data security, governance, and compliance into their digital transformation strategies and projects.
I’m so pleased to have Nikki here. Nikki, is there anything you’d like to add, and how can people get in touch with you?
Balancing Productivity and Security (00:01:25 – 00:02:35)
Nikki Chapple:
Yeah, I suppose the thing to add is that I’ve always been what I’d call the translator between IT and business. It’s really important to get the balance right between productivity and security.
Too often, security says no, maybe not allowing Copilot or other tools, and then we risk shadow AI. Being able to break down complex concepts into business language, and then articulate business requirements back to IT, helps create a balanced, risk-based approach.
If people want to get hold of me, I’m on LinkedIn as Nikki Chapple. I also have a blog at nikkichapple.com, where I post about Microsoft 365, Purview, Copilot, governance, risk, and compliance. I also co-host a podcast, All Things M365 Compliance, with Ryan John Murphy from Microsoft. It’s available on Spotify and YouTube.
Thank you for having me today. It’s a real pleasure to be here.
Does Copilot Show Data I Shouldn’t See? (00:02:35 – 00:06:47)
Mark Thompson:
It’s a privilege for us, Nikki. I’ve seen you speak at conferences a number of times, and I’d say you humanize what’s often a very complicated topic. I’m so grateful that you’re here.
But folks, why don’t we get started? One of the most common concerns we hear is, what if Copilot shows me something I shouldn’t see? Now, that is a fair question, especially where data sensitivity and access control is critical.
Let’s start by clearing that up. Nikki, can Copilot access or expose data I shouldn’t see?
Nikki Chapple:
Well, I suppose that’s a difficult question. Copilot doesn’t introduce any new risk. Copilot can only see the data you have access to. If you’ve got access to a SharePoint site or a Microsoft Team, or if files have been shared with you, for example, then Copilot can see them too.
However, we need to unpick that a little bit. One of the issues is that we’re not governing our Microsoft 365 tenants correctly. If you’re not reviewing team membership, file permissions, and so on, access creeps wider than it should. If you’re in a team and that team’s got a million files, maybe you shouldn’t be seeing them all.
So it’s important to say that Copilot isn’t introducing new risks, because that oversharing problem was already there. It’s increasing the probability that data you already had access to gets surfaced.
My analogy is that it used to be like finding a needle in a haystack with search. You had to know which SharePoint site to look in, what the file was called, and so on. If you knew that and were clever enough with search, you could find it, but it was like a needle in a haystack.
Now, with Microsoft 365 Copilot, I can stand outside that haystack with a big magnet, and all of those needles come and find me, because I can write a prompt that says, show me all the pay rise information for the board members of the company, for example.
It’s going to surface any of those files that you have access to. It will search your OneDrive and your emails, plus any SharePoint sites you have access to and any files that have been shared with you.
So having that good governance is important. Another risk, and a big one that people are probably not aware of, is that when you create Microsoft 365 groups and teams, there’s a setting called public or private.
If you’ve got Teams, check this out when you go back. It’s important. Check whether you’ve got any public teams and groups. If a team is public, that means people can self-join.
Now, even if I haven’t self-joined that team, the SharePoint site behind it, which is where all of the team’s files are stored, has a permission group called Everyone except external users.
That means that by default, even without being a member of that team, I have access to the data in that SharePoint site. And as I said earlier, Copilot respects my permissions. I’ve got access to every single public SharePoint site in your tenant, so Copilot has that access too.
So that would be my number one top tip. It’s a really quick thing: go and look in your admin centers, see which teams and groups are set to public, and check that they really should be available to everyone in the organization.
For example, an intranet can be open to everyone. If you’re using Viva Engage for social communication, sharing pictures of your cats and dogs, that’s fine for everyone to see.
But if you suddenly see that the HR team has been set to public, and that HR team has got sensitive information about your employees, then go in and change that setting from public to private. That will immediately change the permissions so that only members of the team have access.
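To make that public-groups check concrete, here is a minimal Python sketch that lists public Microsoft 365 groups via the Microsoft Graph API. It is an illustration only: it assumes you already have a Graph access token with Group.Read.All (the token value below is a placeholder), and the Teams and Microsoft 365 admin centers will give you the same information without any scripting.

```python
# Minimal sketch (not an official script): list Microsoft 365 groups whose
# visibility is Public so they can be reviewed. Assumes a Graph access token
# with Group.Read.All; token acquisition (e.g. via MSAL) is out of scope here.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder: obtain via your usual auth flow

def public_m365_groups(token: str):
    """Yield (id, displayName) for every public Microsoft 365 (Unified) group."""
    url = f"{GRAPH}/groups?$select=id,displayName,visibility,groupTypes"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for g in data.get("value", []):
            if g.get("visibility") == "Public" and "Unified" in g.get("groupTypes", []):
                yield g["id"], g["displayName"]
        url = data.get("@odata.nextLink")  # follow pagination until exhausted

for group_id, name in public_m365_groups(token):
    print(f"PUBLIC: {name} ({group_id}) - confirm this should be open to everyone")
```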
Mark Thompson:
Nikki, I think there might be a few people shuddering right now, thinking about the fact that they might have an open team where they look at it and say, only these ten people are members of this team.
But the fact that Copilot can still access it is a super point. This reminds me of when Delve first came out. People used to blame Delve for surfacing information, but it wasn’t Delve’s fault. It was just showing you there was a problem.
Protecting Sensitive and Regulated Data (00:07:34 – 00:15:39)
Mark Thompson:
Okay. The saying goes that we can only protect what we know, and when it comes to AI, that is true. Nikki, how does Copilot ensure that sensitive or regulated data stays protected, even while we’re all trying to work faster and smarter? How does that work?
Nikki Chapple:
Okay, so with Microsoft 365 Copilot and the enterprise version of Copilot Chat, what you can be assured of is that data is not being shared with the public domain. That data remains in your tenant. The models in the back-end are not learning from your data.
That’s the big distinction between enterprise-ready tools and consumer ChatGPT. If you put your prompts and uploads into consumer ChatGPT, that information can be used by the model to improve itself and could potentially be resurfaced.
There have been stories in the press where large companies have fallen foul of this. People, with the best intentions, use ChatGPT because Copilot isn’t available, and they end up sharing confidential information.
With Microsoft 365 Copilot, there are several guardrails in place. The first is sensitivity labels on files and emails. If they’re labeled as internal, confidential, or highly confidential, Copilot is aware of those labels when referencing content.
This could be a file uploaded into a prompt, a new file created from another file, or content being pulled together. Copilot checks all references and applies the highest sensitivity label.
When you configure labels in the Purview Compliance Portal, labels have priorities. Public is lowest, highly confidential is highest. Copilot uses those priorities.
So if Copilot references both a public and a highly confidential document, the output will be labeled highly confidential. This is especially important in Word and PowerPoint when creating new documents based on existing ones.
If that label includes encryption, those permissions are applied automatically.
Another benefit of sensitivity labels is controlling what Copilot can access. For example, if content is labeled highly confidential and marked to block Copilot, you can create a DLP policy to enforce that.
When someone tries to reference that content in Copilot Chat or upload it, Copilot will say it’s restricted. Inside Word, the Copilot icon will be grayed out. Copilot can see the file exists, but it can’t summarize or interact with it.
Copilot will only ever show files you already have access to. You can open the file yourself, but Copilot can’t interact with it.
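To picture the behaviour described above, here is a toy Python sketch of the decision logic: a generated document inherits the highest-priority label among its references, and a label that has been blocked for Copilot stops the content being summarized. The label names, priority numbers, and the blocked set are invented for illustration; in reality Purview defines the label priorities and a DLP policy enforces the block.

```python
# Toy illustration of "highest-priority sensitivity label wins" and of a label
# blocked from Copilot processing. Label names, priorities, and the blocked set
# are hypothetical; Purview and DLP enforce this for real.
from dataclasses import dataclass

# Higher number = more sensitive (Purview orders labels by priority).
LABEL_PRIORITY = {"Public": 0, "Internal": 1, "Confidential": 2, "Highly Confidential": 3}
BLOCKED_FROM_COPILOT = {"Highly Confidential"}  # e.g. enforced via a DLP policy

@dataclass
class Reference:
    name: str
    label: str

def label_for_output(references: list[Reference]) -> str:
    """A new document inherits the highest-priority label among its sources."""
    return max((r.label for r in references), key=LABEL_PRIORITY.__getitem__)

def copilot_can_process(reference: Reference) -> bool:
    """Copilot can see a blocked file exists but cannot summarize or quote it."""
    return reference.label not in BLOCKED_FROM_COPILOT

refs = [Reference("townhall.pptx", "Public"),
        Reference("board-pay.docx", "Highly Confidential")]
print(label_for_output(refs))        # -> Highly Confidential
print(copilot_can_process(refs[1]))  # -> False
```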
If you’re more technical, in the Microsoft Purview Admin Center there’s a service called Data Security Posture Management for AI — DSPM for AI. This provides reports on Copilot usage and, potentially, third-party Gen AI usage as well.
You can see what sensitive data is being shared. Microsoft provides hundreds of built-in sensitive information types, such as credit cards, addresses, or credentials, and you can create your own.
This allows you to understand what data is being shared with Copilot and with third-party Gen AI tools. Once you understand that, you can start applying controls.
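As a rough picture of what a sensitive information type involves, here is a small Python sketch that flags credit-card-like numbers using a pattern plus a Luhn checksum. Purview’s built-in and custom sensitive information types are far richer, combining patterns with keywords, confidence levels, and proximity rules, so treat this purely as an illustration of the idea.

```python
# Rough illustration of how a "sensitive information type" works: a pattern
# plus a validation check. Purview's real detectors add keywords, confidence
# levels, and proximity rules; this shows only the core idea.
import re

CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")  # 13-16 digits, spaces/dashes allowed

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate card-like numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return len(digits) >= 13 and total % 10 == 0

def find_possible_cards(text: str) -> list[str]:
    """Return substrings that look like valid payment card numbers."""
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

prompt = "Please reconcile invoice 991 paid with card 4111 1111 1111 1111 last week."
print(find_possible_cards(prompt))  # -> ['4111 1111 1111 1111']
```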
Purview also lets you look beyond Microsoft 365 Copilot — including Security Copilot and other agents.
Ultimately, it comes down to data classification and DLP. Microsoft has a Secure by Default blueprint in Purview that walks through this in stages.
The common question is, “We’ve got millions of files. Where do we start?” My view is, don’t start there. Put governance in place for new and edited content first.
Microsoft even recommends default labels. Start with new content, then focus on priority existing content over time.
Trying to do everything at once is overwhelming. Start simple, put guardrails in place, then tackle legacy content gradually.
The Risk of Doing Nothing & Shadow AI (00:15:39 – 00:22:00)
Mark Thompson:
Okay, Nikki. A couple of things I love from what you just said. First, we can apply a label and, backed by a DLP policy, Copilot is not allowed to access anything with that label.
Of course, people will be terrified of how much they need to do. But for anything new that we create, we apply the labels as it’s created, and for existing content, we apply them as it’s edited, as we go along.
Okay, folks, let’s move along. We’ve talked about some big stuff, and you might be thinking, I’m not ready to use Copilot yet. When we hesitate to adopt Copilot, we do it out of caution, and that’s absolutely right.
But ironically, that can create new risks. If people can’t use trusted tools, they will find workarounds, and you’ve already heard one example from Nikki: ChatGPT.
We’re going to talk about the risk of doing nothing. Also, folks, in the chat, I’d like you to list all the AI tools you can think of, anything like ChatGPT that you’ve heard of, or any other shadow IT that people might use.
So, Nikki, what are the risks of not using Copilot in our organization? And how does shadow IT come into play there? While you’re answering that, I’m going to look at the chat. Folks, I’d like to see some AI tools in there, please.
Nikki Chapple:
Yes, so let’s talk about the risks of not using Copilot. There are instances where security says no because they think the data isn’t good enough quality and they’re worried about surfacing all of it.
In May 2024, Microsoft and LinkedIn published a report stating that 78% of people using AI at work were bringing their own favorite AI tools. ChatGPT tends to be the most common one people know, but somebody has put DeepSeek, which is a Chinese one, in the chat, and there’s Gemini from Google, and so on.
People are using these tools. If your company is not providing them, people are finding that these Gen AI tools genuinely benefit their productivity.
For me, I’ve been in this industry, as Mark said, for over 35 years. It’s the first time I’ve actually seen end users coming to IT and saying, I want some technology.
We’ve had to drag people along kicking and screaming. We had to move them from paper into email, and more recently from email into Teams, and that’s still going on. Teams has been around for six years now, and people are still wedded to working in email.
That’s a long, drawn-out process to get that cultural change.
ChatGPT hit 100 million users within months. Super-fast adoption. Everybody wants it. It’s a modern-day gold rush.
So if your company is not offering an enterprise alternative, what happens is people use shadow AI tools. As that report found, 78% of people have already found their favorite tool.
Now, worryingly, people get used to using those tools. We know what it’s like. Change management is always hard.
People have had a couple of years to get used to ChatGPT. So the question when you open up Microsoft 365 Copilot is, well, I’m happy using ChatGPT. That does everything I need. Why do I need to move into Copilot?
People do not understand the risks of using these models. They don’t understand that what you put in there can be consumed by those models.
You can put your CV in there with your date of birth and your address and say, update my CV. Well, you’ve shared your personal data, and that could be resurfaced.
I think it was Samsung in 2023; if you Google it, you’ll find their Gen AI incident.
There was a lot of publicity because people had shared proprietary code with the models and said, here’s my code, help me improve it.
Of course, that code is confidential to the company, and you’ve lost control of it immediately.
If you’ve shared customer data in there, that’s effectively a data breach.
People are not interested in security in their day-to-day job. They want to get on, want to do their job, and want the quickest, simplest way of doing it.
Security is not always top of mind. So we need to educate users.
This is user adoption and change management. Make people aware of what would happen if somebody shared their personal data.
So it’s about making sure the right tools are available, making sure people know which tools to use in the organization, and making them aware of the consequences of not using them.
We’ve got options. We can monitor third-party AI usage, or we can put technical controls in place to block it.
Somebody mentioned DeepSeek. A lot of companies don’t want anything to do with Chinese AI tools. Those can be blocked at the firewall, in Defender for Cloud Apps, in Defender for Endpoint, in multiple places.
If someone goes to DeepSeek, they see a message saying this is blocked in the organization.
We can also block sharing of sensitive data using data loss prevention policies.
This is not a technology change. This is a strategic transformation. People have to unlearn and relearn, and that can take six months or so, can’t it, Mark?
Monitoring, Auditing, and Adoption Insights (00:23:01 – 00:27:00)
Mark Thompson:
I think what’s interesting about ChatGPT is I know so many non-technical people who pay for it themselves. Like you said, it’s just been amazing.
But folks, let’s get back to work. Speaking about work, if you don’t adopt Copilot in some form and educate users, they will use something else.
The longer you leave it, the more ingrained those habits become and the harder it is for people to relearn a different system.
And if a tool is free, you’re the product, not the customer.
We’re going to go on to our next topic. Visibility is key to trust. If we’re going to roll out Copilot responsibly, we need to know how it’s being used, by whom, and for what.
Microsoft gives us the tools to do just that.
Nikki, how do we monitor and audit the use of Copilot once we put it out there?
Nikki Chapple:
Microsoft 365 Copilot is part of the Microsoft 365 suite, so all activities are already audited.
It knows when you open a file. It knows when you’re using Copilot. Prompts and responses are recorded for eDiscovery purposes.
Audit logs are stored for 180 days, or for a year if you’re an E5 or A5 customer, unless you export them for longer retention.
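As a rough illustration of what that audit trail looks like once exported, here is a small Python sketch that filters a unified audit log export for Copilot interaction records. The column names and the CopilotInteraction record type are assumptions about a typical CSV export, so check them against your own data; DSPM for AI and the Purview audit search give you the same view without any scripting.

```python
# Sketch: count Copilot interaction records per user from an exported unified
# audit log. The column names ("RecordType", "UserIds", "AuditData") and the
# "CopilotInteraction" record type are assumptions about a typical export;
# verify them against your own tenant's export before relying on this.
import csv
import json
from collections import Counter

def copilot_activity(csv_path: str) -> Counter:
    """Count Copilot interaction records per user in an audit log export."""
    per_user = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("RecordType") != "CopilotInteraction":
                continue
            detail = json.loads(row.get("AuditData") or "{}")
            user = detail.get("UserId") or row.get("UserIds", "unknown")
            per_user[user] += 1
    return per_user

for user, count in copilot_activity("audit_export.csv").most_common(10):
    print(f"{user}: {count} Copilot interactions")
```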
That’s quite technical, but DSPM for AI in Purview simplifies this. It shows Copilot, Copilot Studio, and Security Copilot usage.
We can drill down and see who’s doing what. We get rolling 30-day views.
This is useful not only for security but for adoption.
Initially, you may see more ChatGPT than Copilot. As adoption improves, we want Copilot usage to rise and shadow AI to drop.
We also have usage and adoption reports in the Microsoft 365 Admin Center.
And we have Viva Insights.
With Viva Insights, you get Copilot usage by departments and groups.
You can also include qualitative data — surveys, feedback — not just numbers.
Soon, you’ll be able to bring in KPIs like sales velocity to see if Copilot is actually making a difference.
Moving Forward Without Perfect Governance (00:27:00 – 00:30:04)
Mark Thompson:
Lovely. Nikki, before we move on, I just want to say I love how Data Security Posture Management for AI just trips off your tongue. I can just about say it. It shows how many times you’ve said it.
Okay, folks. We have six minutes left. Remember, this runs till 12:35. We’re going to be a little bit tight for time, but this one is definitely worth us covering. We’re going to spend maybe three minutes on this.
Nikki, a big concern we’re hearing is that our data governance isn’t perfect. Can we still move forward with Copilot?
Nikki Chapple:
Yeah, I think so, because people are going to use AI tools regardless. That’s going to happen. If you’re not providing the tool, people will find alternatives.
The idea is that you can roll this out in phases. You don’t have to enable everyone in the organization at once. You can have a phased rollout of Copilot alongside the adoption work, and in parallel put in good data security and governance.
Key things we can do quickly are to get sensitivity labels in place for new and edited files, and to put some governance in at the SharePoint site level and the workspace level.
If you know which your sensitive teams are, there’s something called SharePoint Advanced Management. You may not be aware that this product is available to you: if you’ve got even one Copilot license, you can use all of its functionality.
We can put controls in to restrict Copilot. We can say, actually, I want to start with only this list of up to 100 sites. That’s Restricted SharePoint Search.
You can also use restricted content discovery. That’s super important if you’ve got sites with thousands or millions of files in them and you’re not really sure what’s in there.
Restricted content discovery means Copilot will only show files in that site that either you’ve created or you’ve interacted with in the last 28 days.
When you go inside the site itself and search as an end user, you can search for any content and access any content. But Copilot is limited.
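To picture the effect of restricted content discovery, here is a toy Python model of the rule as described above: Copilot only surfaces files in the site that the user created or interacted with in the last 28 days. The data structures are invented for illustration; the real control is a site-level setting in SharePoint Advanced Management, not code you deploy.

```python
# Toy model of the restricted content discovery rule described above: Copilot
# only surfaces files the user created or interacted with in the last 28 days.
# Everything here is invented for illustration; the real control is a
# SharePoint Advanced Management site setting, not code you deploy.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(days=28)

@dataclass
class File:
    name: str
    created_by: str
    last_interaction: dict[str, datetime]  # user -> most recent interaction time

def visible_to_copilot(files: list[File], user: str, now: datetime) -> list[str]:
    """Files Copilot may surface for this user under restricted content discovery."""
    visible = []
    for f in files:
        recent = f.last_interaction.get(user)
        if f.created_by == user or (recent and now - recent <= WINDOW):
            visible.append(f.name)
    return visible

now = datetime.now(timezone.utc)
files = [
    File("budget.xlsx", "alex@contoso.com", {"sam@contoso.com": now - timedelta(days=3)}),
    File("legacy-migration.docx", "old.admin@contoso.com", {}),  # untouched legacy content
]
print(visible_to_copilot(files, "sam@contoso.com", now))  # -> ['budget.xlsx']
```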
We all know we’ve got historical sites where we’ve migrated content from on-premises systems and we don’t really know what’s there.
So there are controls that can go in. The Microsoft Purview Secure by Default blueprint is useful. It’s a staged process.
Do it in parallel with adoption. It’s too late to wait. Copilot, or some other enterprise-ready alternative, is needed.
If security continues to say no, people are going to be using these other products.
Wrap-Up and Final Thoughts (00:30:04 – 00:31:41)
Mark Thompson:
Nikki, that’s going to be emblazoned on my mind. It’s too late to wait, because of the appetite from the consumer side of the world: people want to use ChatGPT.
I think that’s a really good point. It’s too late to wait.
I think something else you said to me, Nikki, is that Copilot doesn’t create new risks. It simply reveals existing risks. If it reveals them, you can fix them, right?
I like the canary in the mine analogy. The risks are there. You might as well get them surfaced and get them dealt with.
Okay, folks, call to action for you. We’ve got about 90 seconds, two minutes or so left.
I want you to think about this as you go away.
Who in your organization needs reassurance about data security and compliance? Your stakeholders — IT, legal, compliance, and end users. Who needs clarity for you to be able to move forward?
What guardrails and governance do you already have? What can you use? Think about sensitivity labels, DLP, access reviews, and how you can create that safe environment for Copilot to operate.
And how will you enable and encourage safe adoption and avoid shadow IT?
Think about your plan to support users, educate them, and keep them within the boundaries of trusted tools — and not just end users, but also the people behind the scenes who have to look after the admin side of the world.
Okay, so, Nikki, before we wrap up, we’ve talked about quite a lot of topics, and it was rather technical.
Is there anything that you would like to say to wrap up, or just put a lid on what we’ve talked about?
Nikki Chapple:
I think if you haven’t started the pilot journey yet and you’re not blocking the other AI tools, then you have an issue.
It’s not a risk. It is an issue.
People will be using those tools.
Go into the Data Security Posture Management for AI portal in Purview. If you’ve got E5 licenses, you can hook up your endpoints and see third-party AI usage.
That can be a wake-up call.
If people are saying it’s too risky to use Copilot, but you can demonstrate that people are already using tools like DeepSeek, then you can show that this is the issue today.
The question becomes: how are we going to mitigate it?
Will we mitigate it by bringing that activity into Copilot? You demonstrate the real-world risks and use them to justify bringing the activity into Copilot, where it’s governed and secure.
Mark Thompson:
Thank you, Nikki. And thank you to everyone who joined us today.