Are your employees unintentionally risking your confidential data?

Published On: May 22, 2025 | Categories: Cyber Security, Karl Wilkinson, News

There’s no doubt that generative AI has transformed how we all work. The advances of tools such as ChatGPT, Copilot, Gemini, Perplexity and Claude have made it much easier to streamline workloads and boost productivity in a way we have never seen before.

But are those tools really safe to use?

Could they inadvertently be causing more cybersecurity issues for you than you realise?

As cybersecurity experts in Ipswich, we’re constantly asked about this.

“How safe and secure is AI to use?”

Back in October (during Cybersecurity Awareness Month), we published an article explaining why cybersecurity should focus on human weaknesses as well as digital ones. While we may traditionally think about this in terms of spotting phishing scams or ensuring that users cannot access files or folders they shouldn’t, it’s essential to think about whether these new gen-AI tools could be risking your data in ways that you hadn’t previously considered.

Many free AI platforms are described as open source – what does this mean?

You have probably already heard that most of these free tools are considered open-source. But what does that mean, and why does it matter when it comes to data sensitivity?

“In its traditional sense, open source refers to the free availability of a piece of software’s entire suite of core components. There are no limits in true open source, with every line of code made accessible to prospective users and developers.”

Author: George Fitzmaurice, Just how open are the leading open-source AI models? IT Pro

There is a lot of discussion within the tech sector about the true definition of open source – and strictly speaking, tools like ChatGPT are proprietary rather than open source – but in the context of this article, we don’t need to go that deep.

Simply put, tools like ChatGPT are freely available for everyone to use. Because they are free and reputable, it may seem logical to start bringing them into your daily workloads.

However, the companies behind these tools may use what you type as training data.

Every single time you submit a query, ask the tool to create something or upload a document, that data can be retained and used to improve future models.

On the face of it, that seems great – after all, if your AI tool can answer your question better or create more accurate work for you, then why wouldn’t you want that?

But – as we’ve just explained – every time you input something into these tools (no matter how innocuous), you may be handing that information to OpenAI, or whichever provider runs the tool, to be used in ways beyond your initial intention. Once your insights and (confidential) internal documentation have fed the model’s training, they could end up informing the answers it gives to other users.

As our Ipswich cybersecurity expert Karl Wilkinson says,

“People forget that these tools are now being used by people all over the world. The data we input into these tools no longer remains private, confidential information – it’s becoming freely available within the public domain all around the world. That’s where the real issues start to come to the fore.

Businesses are investing heavily into data encryption processes; they are minimising user access settings and putting in place every defence possible to keep internal information safe and secure – but without realising it, their employees might be bypassing all of those defences and publicly sharing that confidential information with the whole world!”

The dangers of using generative AI platforms

Many companies are now banning the use of generative AI tools within the workplace because they have witnessed the impact of confidential data being inadvertently leaked to the world.

Samsung is possibly the most high-profile case of confidential data being leaked to the general public after employees used ChatGPT to help them with their workloads.

But what about other examples of how it could be dangerous?

Let’s look at some prospective scenarios across different sectors that we work with throughout Ipswich, Felixstowe and further afield.

The downside of ChatGPT within the legal sector

Lawyers, solicitors, legal secretaries, paralegals, conveyancers and others all have hectic workloads. We provide IT support for law firms across Ipswich and even as far away as Norfolk and Hertfordshire, so we know how much you rely on technology to keep you working efficiently.

You might need to update some non-disclosure agreements or create a summary of case materials to simplify your workload. However, putting that information into a public gen-AI platform could be considered a significant breach of client confidentiality or legal professional privilege.

Other dangers of using a tool like ChatGPT include IP infringement, misinformation, unknown biases within the training data, mistakes and even hallucinations.

There have been many cases where lawyers (around the world) have turned to ChatGPT to help them with their casework, but these tools are not infallible. In cases where the LLM doesn’t know the answer to a question or query, it might make it up entirely – something that an Australian lawyer discovered when his citations were found to be “non-existent.”

Karl Wilkinson says,

“While these examples are not cybersecurity breaches, they do highlight the dangers that these software tools can bring. The output from Gen-AI tools needs to be fact-checked for accuracy because relying solely on the work of gen-AI could result in significant reputational damage that far outweighs the repercussions of a cyber incident.”

We recommend that all businesses in the legal sector take advice from their professional bodies before allowing employees to use these platforms.

Here are some valuable links for solicitors from the Solicitors Regulation Authority and barristers and chambers from the Bar Council of England and Wales.

The implications of using AI within the transport and logistics sector

As an IT support provider in Suffolk, we work closely with many Felixstowe businesses that operate in and out of the Port of Felixstowe. The transport, logistics and wider supply chain sectors are a key part of our local economy.

We work hard to minimise the risk of cybersecurity breaches in the transport and logistics industries because we know that any downtime for you could result in significant problems across the country.

So, what are the implications of using ChatGPT within the transport logistics and supply chain sectors?

Transportation fleets rely on sensitive commercial data about their drivers, their supply chains and their core customers. It might be tempting for an administrative employee to upload confidential internal meeting notes into a public gen-AI platform to pull out the salient points, or to upload data spreadsheets to speed up optimising shipping routes, predicting customer demand or identifying cost savings – but doing so could hand trade secrets to competitors.

Similarly, for global shipping contracts, it might be tempting to upload an agreement and ask a gen-AI tool to translate it into different languages, but that could cause a serious commercial confidentiality breach.

Using gen-AI tools safely requires strict IT and HR policies

Here at Lucid Systems, we are big supporters of AI technology within cybersecurity.

There are considerable advantages to adding AI tools for predictive analytics, but we don’t want to see your cybersecurity defences undermined by your employees’ use of generative AI platforms.

You need to create detailed HR policies outlining AI usage within the workplace.

Effective cybersecurity relies on technical resilience, but it also relies on employee education and awareness. It’s time for HR teams to get to grips with workplace gen-AI usage and make it clear what is acceptable within the boundaries of safe working practices.

These HR documents should outline what is and isn’t considered confidential data and explain the dangers that can emerge from using these tools to streamline working efficiencies.
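To make that idea concrete, here is a minimal sketch (in Python) of the kind of pre-upload check an IT team could build from such a policy, flagging text that matches the policy’s definition of confidential before it is pasted into a public AI tool. The patterns and the internal reference format are entirely made-up examples – every firm’s policy will define its own.

```python
import re

# Hypothetical markers a policy might class as "never paste into a
# public gen-AI tool". These patterns are illustrative assumptions.
BLOCKED_PATTERNS = {
    "confidentiality marker": re.compile(r"\b(CONFIDENTIAL|PRIVILEGED|NDA)\b", re.IGNORECASE),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "client reference": re.compile(r"\bCLIENT-\d{4,}\b"),  # made-up internal format
}

def check_before_upload(text: str) -> list[str]:
    """Return the names of the policy rules a block of text would break."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

issues = check_before_upload("Summary of CONFIDENTIAL meeting with CLIENT-10234")
if issues:
    print("Do not paste this into a public AI tool:", ", ".join(issues))
```

A real deployment would sit inside DLP tooling or a browser extension rather than a script, but even a simple checklist like this gives employees something concrete to test against.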

Karl Wilkinson explains,

“There are easy workarounds. It’s not about having a blanket approach that bans the usage of AI tools entirely – it’s about teaching employees how to know what data and information is safe to upload into these LLMs or investing in closed-source options that limit how that data can then be used. HR teams will be working with legal representatives to create these policies because once that data has been uploaded to an open-source platform, it’s almost impossible to request the deletion of any sensitive information.

Like all cybersecurity settings, it relies on employee education, awareness and training. HR teams cannot ‘assume’ that employees will know which platforms are safe to use, so perhaps developing annual training workshops or seminars could be a good place to start.”

Collaborating with IT teams to create stringent AI usage policies

Along with your detailed HR policies outlining AI usage, we recommend updating your IT policies and procedures to ensure that AI use is also covered.

Your IT policies will already cover data protection elements, and the ICO is clear that your data protection impact assessment (DPIA) must address the accountability and governance implications of AI.

Within your AI policy, you need to consider the following:

  • Acceptable use – what employees can and cannot use AI tools for
  • Clear rules for data handling and processing
  • Security requirements and access controls
  • Compliance and legal obligations
  • Reporting mechanisms and the consequences of failing to adhere to the policy
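As an illustration only, the policy areas above can be captured in a simple machine-readable form that IT tooling could enforce. The field names, tool names and example values below are assumptions for the sketch, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsagePolicy:
    # Acceptable use: only tools on the approved list may be used.
    approved_tools: set[str] = field(default_factory=lambda: {"internal-copilot"})
    # Data handling: classes of data that must never reach a gen-AI tool.
    blocked_data_classes: set[str] = field(
        default_factory=lambda: {"client-identifiable", "contract", "trade-secret"}
    )
    # Compliance: new AI tooling must pass DPIA review before approval.
    requires_dpia_review: bool = True
    # Reporting mechanism: who to contact when the policy is breached.
    incident_contact: str = "it-security@example.co.uk"

    def is_allowed(self, tool: str, data_class: str) -> bool:
        """Approved tool AND data class not blocked."""
        return tool in self.approved_tools and data_class not in self.blocked_data_classes

policy = AIUsagePolicy()
print(policy.is_allowed("chatgpt-free", "contract"))       # False
print(policy.is_allowed("internal-copilot", "marketing"))  # True
```

Writing the rules down this precisely also surfaces the gaps – if IT cannot express a rule as an allow/block decision, the policy wording probably needs tightening too.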

At Lucid Systems, we provide clear support and guidance to businesses in Ipswich, Felixstowe and Colchester to help them establish suitable IT policies that outline best practice guidance relating to AI usage.

Karl Wilkinson concludes,

“If you haven’t reassessed your IT policies and procedures from a view of AI, then you could be risking big problems for your business. We are here to support businesses from a cybersecurity and AI perspective – not only can we implement the technical controls you need to keep your business safe, but we can write suitable policies for you and provide training and education for your team so everyone knows how to work efficiently yet safely.

AI is here to stay, so it’s time to update your policies and practices to ensure that you are working towards the latest best practice guidelines. Doing so will be the best possible way to keep your business safe and protected from cyber threats.”

Karl Wilkinson

Technical Director

About The Author

As Technical Director, Karl is our most senior engineer, responsible for delivering solutions and supporting our 2nd and 3rd line engineers to ensure they can resolve any technical issues reported by our clients.
