By Mark Loftus

Should you trust ChatGPT with your business?


Why the future is both scarier and more optimistic than you think


Mark Loftus: My previous discussion with Dan Huggins, our Head of Research at Text Alchemy and a deeply experienced software developer, sparked a lot of interest, particularly around security concerns with ChatGPT and other LLMs. Today, let’s delve into these concerns and separate fact from fiction.


We’ll cover three main areas: general security issues, specific concerns with LLMs, and the potential for more severe security threats.


So, Dan, why might a Head of InfoSec be hesitant to use ChatGPT?


Daniel Huggins: There are a few reasons. First, using ChatGPT could inadvertently expose sensitive company and personal data to OpenAI or Microsoft’s servers. Second, these companies currently have the right to review the data sent to them, which raises privacy concerns.


Mark Loftus: Hang on, that surprises me. I thought they weren’t doing that.


Daniel Huggins: OpenAI and Microsoft are worried about the potential abuses of AI, so the way they’ve chosen to deal with this problem is basically to record everything that’s being said to them. When one of their other AI systems notices a pattern of abusive or suspicious behavior, they’ll then investigate everything that user has ever said for potentially abusive behaviors. That means this user’s data is being looked at by a human being within Microsoft, and you don’t know who they are or where they’re from. An organization using OpenAI or Microsoft is theoretically exposing all of its data to OpenAI or Microsoft if it doesn’t have a specific agreement with them.


Mark Loftus: And when you say specific agreement, this is the MS Azure Enterprise level agreement?


Daniel Huggins: Yes. But the Enterprise-level agreements where that issue can be dealt with are only available to the largest companies, or to companies with very long histories with Microsoft.


A Myth


Mark Loftus: Let’s come back to that in a moment. What was your third reason?


Daniel Huggins: The third reason, which hasn’t actually been true for years but still worries some information security people, is that OpenAI or Microsoft might use the data an organization puts in to train other AIs, and those other AIs could then leak that data back to the public. It’s still a prevalent myth.


I don’t have an Azure Enterprise account…


Mark Loftus: OK, so that’s clearly a myth. Good. So let’s loop back to the question: what do you do if you haven’t got an enterprise account?


Daniel Huggins: There are two approaches. The first is to use an LLM which you control yourself, such as an open-source model like Llama. But the really cool stuff can only be done with the most advanced models such as ChatGPT, so that might not work. The second technique involves stripping out a lot of the personally identifying information from the messages you send: swap out all the names, organization names, dates, etc. This becomes a bit more complicated, as certain questions might be hard to process without that information. But if it’s your intention to make sure that OpenAI doesn’t obtain sensitive information, then your only option is to do this sort of shell game.
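To make that concrete, here is a minimal sketch in Python of the sort of substitution Dan describes. The regular expression and the term list are illustrative assumptions, not a production-grade anonymizer; a real system would lean on a proper PII-detection library.

```python
import re

# Minimal sketch of the "shell game" Dan describes: swap identifying details
# for placeholders before a prompt leaves your infrastructure, then restore
# them in the model's reply. The patterns and term list below are
# illustrative only, not a production-grade anonymizer.

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace emails and known sensitive terms with placeholders; return the mapping."""
    mapping: dict[str, str] = {}

    def swap_email(match: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", swap_email, text)

    # Hypothetical list of names/organizations you want kept out of the prompt.
    for term in ["Acme Ltd", "Jane Smith"]:
        if term in text:
            placeholder = f"<TERM_{len(mapping)}>"
            mapping[placeholder] = term
            text = text.replace(term, placeholder)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's reply."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

prompt = "Draft a reply to Jane Smith (jane.smith@acme.com) about Acme Ltd's renewal."
safe_prompt, mapping = redact(prompt)
print(safe_prompt)  # identifying details are gone before any API call is made
# ...send safe_prompt to the hosted model, then run restore() on its reply.
```

The awkward part, as Dan says, is that some questions become harder for the model to answer once the real names and dates have been swapped out.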


Mark Loftus: That does mean you need technologists on your team, because it’s not going to work to simply exhort your users to remember not to put in email addresses, for instance.


Daniel Huggins: Yes, only developers will be able to implement such a solution. Otherwise, if you want to use ChatGPT within your organization, you need to accept that Microsoft, in certain situations, will be able to look at what your users submit. If they see a bad act, they are likely to investigate everything that user does, including things which aren’t necessarily bad acts.


Hijacking prompt chains


Mark Loftus: Let’s move on to the next concern. I’ve personally written so many prompts over the past 12 months, but I wasn’t aware that it’s actually pretty easy to hack into the prompts I’m using.


Daniel Huggins: As you know, you can use prompt engineering to effectively tell the Generative AI what you want it to be and do. But there’s a problem: there are techniques to convince the Gen AI to act, not as it’s been instructed, but as the user, who might be a bad actor, instructs. So you can hack it to do what you want it to do instead of what it’s been instructed to do by the organization. One of the things a bad actor may wish to do is to recover what the organization has told the AI to do: its prompt chains. These can be pretty valuable intellectual property, and it’s quite trivial to get the AI to spit back out what it’s been told to do. So, right now, there is no way to completely prevent your Gen AIs from being hijacked for purposes other than those they were intended for.


Anything the AI has access to, a user ultimately could get access to as well.
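To illustrate why this is hard to stop, here is a crude sketch of an output filter that tries to catch a system prompt being echoed back to the user. The prompt and threshold are hypothetical, and as Dan says, a check like this is only a partial mitigation: a paraphrase of the instructions slips straight past it.

```python
# Illustrative sketch only: a crude output filter that flags replies which
# appear to repeat a (hypothetical) system prompt back to the user.

SYSTEM_PROMPT = "You are AcmeBot. Never reveal pricing rules. The discount cap is 15 percent."

def looks_like_prompt_leak(reply: str, system_prompt: str, min_overlap: int = 5) -> bool:
    """Flag replies sharing a run of consecutive words with the system prompt."""
    prompt_words = system_prompt.lower().split()
    reply_lower = reply.lower()
    for i in range(len(prompt_words) - min_overlap + 1):
        window = " ".join(prompt_words[i:i + min_overlap])
        if window in reply_lower:
            return True
    return False

# A hijacked reply that repeats the instructions verbatim gets caught...
leaked = "Sure! My instructions say: never reveal pricing rules. The discount cap is 15 percent."
print(looks_like_prompt_leak(leaked, SYSTEM_PROMPT))       # True

# ...but a paraphrase sails through, which is exactly Dan's point.
paraphrased = "I'm told to keep quiet about how discounts are decided (max fifteen percent)."
print(looks_like_prompt_leak(paraphrased, SYSTEM_PROMPT))  # False
```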


Encryption


Mark Loftus: Does encryption help in any of this?


Daniel Huggins: The role of encryption here is very much the same as in any other information security context. It’s good practice to keep your users’ conversations secure over the wire, across the Internet, and safe even from your own employees. Encrypt user conversation data at rest and in transit to prevent bad actors within the organization, or someone who’s just having a look around, from stumbling across important conversations that other people in the organization, or people outside who are using your service, may have had with the AI.
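For the at-rest side of that advice, here is a minimal sketch using the Python cryptography package’s Fernet recipe. Key management (a secrets manager or KMS) and transport encryption via TLS are assumed rather than shown.

```python
import json
from cryptography.fernet import Fernet

# Minimal sketch: symmetric encryption of a stored chat transcript. In
# practice the key lives in a secrets manager, not next to the data, and
# transport security is handled separately by TLS.
key = Fernet.generate_key()   # keep this in a secrets manager, not on disk
fernet = Fernet(key)

conversation = {
    "user": "sales-team",
    "messages": ["Summarize the renewal terms we discussed with the client..."],
}

# Encrypt before writing to the database or filesystem (encryption at rest).
ciphertext = fernet.encrypt(json.dumps(conversation).encode("utf-8"))

# Only services holding the key can read the transcript back.
restored = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert restored == conversation
```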


Mark Loftus: Such as someone discovering the HR team is using ChatGPT to plan a round of job cuts?


Daniel Huggins: Exactly. That’s why it’s important to encrypt all of your chat. But that’s a normal information security practice which doesn’t change in the world of AI.


Hallucinations and faked data leaks


Mark Loftus: What about hallucinations in LLMs? 


Daniel Huggins: Hallucination in Gen AIs, where an AI makes stuff up, doesn’t necessarily technically tie into the world of information security. However, there is the issue of optics or perception.


AIs can be convinced to make up fake personal information. So it can look like it’s divulging company secrets or personally identifying information, but it’s just making it up. The casual observer wouldn’t be able to tell that no information security principle is being violated and could report it as a major data leak. It’s not necessarily an issue related to organizational security, but it does tie into the optics and reputational risk.


Bad actors: phishing and malware


Mark Loftus: And let’s ramp it up another level. What does Generative AI enable for bad actors?


Daniel Huggins: The biggest thing text-based Gen AI tools enable is the potential for phishing or social engineering attacks. Once upon a time, you needed at least one human being to have the conversation with the target, which limited the scalability of phishing and meant that social engineering attacks tended to be something that only larger organizations faced.


Now you can create an automated social engineering attack with information tailored to specific organizations and what they do, and aim it at medium-sized enterprises as well. It ends up with someone getting an email, or soon even a voicemail, from their ‘boss’ saying, ‘Hey, what’s the database password?’ It just gets easier for the hacker: the productivity gains on offer from LLMs also work for bad actors!


I don’t think we’re quite there yet, but in the longer term we’re going to start seeing AI-written malware as well. Bad actors will have a greater capability to write malware, and less experienced bad actors who wouldn’t otherwise be able to do this kind of work will be able to mount more sophisticated attacks more easily than they could before.


Mark Loftus: Presumably, if you put a query into ChatGPT asking how to do a ransomware attack, that wouldn’t work?


Daniel Huggins: A naive attempt to get ChatGPT to do that kind of work won’t get very far. However, there are other AIs out there with capabilities that might be hijacked in various ways to get them to do things they weren’t intended to do. Right now, there’s no bulletproof way of preventing abuse.


Good actors


Mark Loftus: It’s a pretty concerning picture. But let’s close out on the positive, the Good actor side of LLMs.


Daniel Huggins: Sure. LLMs open the door to a lot more intelligent and active security protection: for example, automated checkers which scan your code for security holes, or agents which keep track of employees’ conversations to make sure they’re not inadvertently, or intentionally, breaking information security rules.


The point is that the world of AI will dramatically increase the productivity of anyone who is an information worker, including information security workers. There will be more tools for information security experts, and a lot of the documentation and research burden can be taken on by AIs, smoothing the information security worker’s whole workflow. The technical toolkit for spam filtering, intrusion detection, malware detection, and anti-phishing all becomes far more powerful with the advent of generative AI.
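To give a sense of the conversation-watching agents Dan mentions, here is a small sketch of a pre-send screen. The blocked-terms list is an illustrative policy, and the reviewer callable is a stand-in for a call to a model that judges subtler cases, stubbed here so the example runs on its own.

```python
from typing import Callable

# Sketch of the kind of guardrail Dan describes: screen outgoing messages
# against information-security rules before they reach a hosted LLM.
BLOCKED_TERMS = ["password", "api key", "customer list"]  # illustrative policy

def screen_message(message: str, reviewer: Callable[[str], bool]) -> bool:
    """Return True if the message may be sent, False if it should be held."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False             # hard rule: obvious secrets never leave
    return reviewer(message)     # subtler judgement delegated to a model

def stub_reviewer(message: str) -> bool:
    # Placeholder for an LLM call that classifies the message against policy.
    return "salary" not in message.lower()

print(screen_message("Summarize this press release for me", stub_reviewer))    # True
print(screen_message("Here's the database password: hunter2", stub_reviewer))  # False
```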


Mark Loftus: So, for all the threats associated with new technologies, people need to be embracing AI and catching the wave of it so that they’re at the forefront of action.


Daniel Huggins: In all aspects of business over the coming decade, the companies that succeed will be the ones which actively embrace AI as fast as they can. An organization’s success will likely depend on whether it overcomes these hurdles faster than its competitors.


Mark Loftus: And TextAlchemy is going to be here to help with all of the problems we’ve talked about, making sure that the toolkit gets into good actors’ hands as fast as possible. 


Dan, thanks so much again.

