There is significant debate in the cybersecurity industry about whether ChatGPT will make cybersecurity harder or whether it hardly matters at all. Here is my take.
ChatGPT can’t replace talented programmers and attackers, but it sure can lower the bar and find new ways in. I asked an AI service to write a phishing email for me.
My prompt: "Write an email to Diane from Bill asking to change direct deposit details"
I hope this message finds you well. I’m writing to ask if it would be possible to make a change to my direct deposit information. It is currently set up to be sent to a bank account that I no longer use and I need to update the details so that my paychecks are deposited in the correct account.
If you could respond with any information about how to make this change or who I should speak with, I would greatly appreciate it. Thank you for your help and have a great day.
That’s pretty great, other than it did a better job with spelling and punctuation than Bill does.
I asked it to write me a ransomware script. It did a nice job.
I asked it to find an email address for the CEO of a $50 million Knoxville company, and it did.
I went further down the rabbit hole and found… not much more.
There isn’t a use case that is going to rock the cybersecurity world. It’s just one more tool to help attackers hone their craft.
But it won’t do their work for them.
So does it matter at all? YES!
AI can write emails that are different enough from previous generations of phishing emails, and that sound legitimate enough, that it will be harder than ever to differentiate real messages from fake ones. As AI chat evolves, technical defenses will be absolutely critical. And you must train, train, train everyone in your organization on cybersecurity practices.
If your spam filters let ChatGPT-written emails through, drop us a line. We can help you train employees to spot phishing emails or put better filters in place.