Quite a few people asked whether I’ve tried new AI audit tools and what I think.
Now, this is not a statistical evaluation but rather my perspective today. Yes, I have tried them, and I have conducted hundreds of manual security audits. Let’s dive in.
A vulnerability is never just a mistake, like a function returning the wrong sum. In code, it shows up as an inconsistency: unusual behavior, something that really shouldn’t work but does. At its root, it is an error of thinking, and that is something no one can truly avoid.
Let me give you a real one. I once found a bug where a nonce used for one-time signatures was included in the signed message but never incremented or checked. It turned out to be a huge security hole: anyone could replay reward-claiming transactions. That’s what I mean by an inconsistency: it just feels wrong.
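The pattern behind that bug can be sketched in a few lines. This is a minimal illustration, not the original system: the HMAC scheme, the names, and the reward logic are all hypothetical stand-ins for a signature-verified claim endpoint.

```python
import hmac
import hashlib

SECRET = b"server-side signing key"  # hypothetical key for the sketch

def sign(payload: bytes, nonce: int) -> bytes:
    # The nonce is part of the signed message, so the signature verifies...
    return hmac.new(SECRET, payload + nonce.to_bytes(8, "big"),
                    hashlib.sha256).digest()

class RewardService:
    """Vulnerable verifier: checks the signature but never tracks nonces."""

    def __init__(self):
        self.balance = 0

    def claim(self, payload: bytes, nonce: int, sig: bytes) -> bool:
        if not hmac.compare_digest(sig, sign(payload, nonce)):
            return False
        # BUG: the nonce is neither incremented nor checked against past
        # claims, so the exact same signed request can be replayed.
        self.balance += 10
        return True

svc = RewardService()
msg, nonce = b"claim-reward", 1
sig = sign(msg, nonce)
svc.claim(msg, nonce, sig)   # legitimate claim
svc.claim(msg, nonce, sig)   # replay succeeds: same nonce, same signature
print(svc.balance)           # 20, the reward was paid out twice
```

The fix is small: record each nonce after a successful claim (or require it to strictly increase) and reject any request that reuses one. That is exactly the kind of one-line omission that looks fine in isolation but feels wrong when you read the flow end to end.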
AI tools can be good at spotting inconsistencies: dissimilar patterns, unbalanced implementations. They can even help you fix them or point you to a standard. But AI often struggles with two things: chaining scenarios and fact-checking. It might try, but poorly, missing how “this can lead to that, and that can lead to something that will be a disaster.” AI sees patterns. Humans see the fallout.
Here’s the catch: AI flags everything suspicious. Sometimes it even invents issues that don’t exist; Daniel Stenberg captured this well in his piece “The I in LLM stands for Intelligence.” It’s like a student who doesn’t know the answer but still writes with full confidence. So among all the false positives, how does a developer know what’s truly dangerous, and who can verify it? After all, security always costs something: more resources, more time, and often more user friction. That’s where humans win: an auditor can fact-check findings, understand the business context, and stay current with the latest zero-days. AI, on the other hand, is stuck with what it has already been trained on.
But let’s be honest: security professionals make mistakes too. After long hours of focus, a tiny bug can slip through. That’s the truth. AI doesn’t get tired. I, on the other hand, once missed a bug after three espressos at 2 AM.
So what about auditors and AI tools together? We’ve already seen efficiency gains: developers using AI tools to write better code. So when people ask me what I think about AI in security, my answer is simple: in security, as in life, the smartest move isn’t the man or the machine; it’s the trained man with the machine.