At Black Hat Asia 2025, in a session packed with hardware hackers, cryptographers, and security researchers, Karim — a hardware security expert from Ledger — stepped onto the stage with a quiet warning: the most dangerous hacks don’t come through the network. They come through the chip itself.
His talk, titled “I Have Got to Warn You, It Is a Learning Robot: Using Deep Learning Attribution Methods for Fault Injection Attacks”, outlined how deep learning attribution methods, originally built to explain the decisions of neural networks, are now being repurposed to assist in attacking cryptographic hardware devices — including the very wallets millions trust to store their cryptocurrencies.
Hardware hacking 101: fault injection and side channels
Karim works within Donjon, the internal red team at Ledger — the global leader in cryptocurrency hardware wallets. The team's mission: to conduct offensive security research on Ledger's products — and those of its competitors — to expose and address vulnerabilities before attackers can.
He opened his session by distinguishing two types of hardware attacks:
Fault Injection Attacks (FIAs)
These are active attacks, where an attacker deliberately perturbs a chip during sensitive operations — such as secure boot or PIN verification — to disrupt its normal execution.
The goals? Skip verification steps. Modify cryptographic computations. Force errors in encryption algorithms like AES or RSA.
These attacks can be performed using various techniques:
Clock and power glitching
Electromagnetic fault injection
Laser-based perturbation
Body-biasing techniques
The attacker doesn’t need insider access to the chip’s internals — just the right timing and tools.
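To make the idea concrete, here is a toy simulation — not an attack on any real device — of why a single well-timed fault is so dangerous: if a glitch causes the chip to skip one comparison in a PIN check, a wrong PIN can sail through. The `pin_check` function and its `glitched_step` parameter are illustrative inventions, standing in for real firmware logic:

```python
def pin_check(entered, stored, glitched_step=None):
    """Toy model of a firmware PIN check.

    A fault that lands on comparison number `glitched_step` is modelled
    by skipping that comparison entirely, as a clock or voltage glitch
    might cause a real CPU to skip an instruction.
    """
    for step, (a, b) in enumerate(zip(entered, stored)):
        if step == glitched_step:
            continue  # the injected fault skips this comparison
        if a != b:
            return False
    return True

# Normal execution: a wrong PIN is rejected.
assert pin_check("1234", "1235") is False

# A fault injected at exactly the right moment (the failing comparison,
# step 3) makes the same wrong PIN pass the check.
assert pin_check("1234", "1235", glitched_step=3) is True
```

The timing is the whole game: the same glitch landing on any other step changes nothing, which is why attackers invest so heavily in characterising *when* to fire.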
Side-Channel Attacks (SCAs)
In contrast, SCAs are passive attacks that listen for unintended leaks. Timing variations, power consumption, and electromagnetic emissions can all betray secrets about what the chip is processing.
With statistical analysis techniques — such as Differential Power Analysis (DPA) or Correlation Power Analysis (CPA) — or, more recently, deep learning profiling, attackers can derive private keys without ever touching the data directly.
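As a sketch of the statistical side, the following simulation shows the core of CPA: correlate the measured power traces against the leakage predicted for every possible key byte, and the correct guess stands out. The traces here are synthetic, and the leakage model (Hamming weight of plaintext XOR key) is simplified — real attacks usually target a nonlinear intermediate such as the AES S-box output, which sharpens the correlation peak:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hamming weight lookup table: the classic power-leakage model.
HW = np.array([bin(v).count("1") for v in range(256)])

SECRET_KEY = 0x2B
N_TRACES = 2000

# Simulate one power sample per trace: the device "leaks" the Hamming
# weight of (plaintext XOR key), buried in Gaussian measurement noise.
plaintexts = rng.integers(0, 256, N_TRACES)
traces = HW[plaintexts ^ SECRET_KEY] + rng.normal(0.0, 1.0, N_TRACES)

# CPA: for each key guess, compute the Pearson correlation between the
# predicted leakage and the measured traces.
correlations = np.array([
    np.corrcoef(HW[plaintexts ^ guess], traces)[0, 1]
    for guess in range(256)
])

recovered = int(np.argmax(correlations))
assert recovered == SECRET_KEY  # the key byte falls out of the statistics
```

Nothing here ever reads the key directly — it is reconstructed purely from the correlation between guesses and measurements, which is exactly what makes side channels so insidious.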

Deep learning attribution: the hacker’s new assistant
What made Karim’s talk stand out was his focus on deep learning attribution methods — originally designed to interpret and explain decisions made by AI models.
In hardware security, these same techniques can help attackers learn which parts of a cryptographic operation are most vulnerable to fault injection — especially when operating in a black-box mode, where the attacker knows nothing about the chip’s internal logic.
“Deep learning isn’t just helping us build secure systems,” Karim noted. “It’s also helping us break them — faster, smarter, and with less information than ever before.”
By training models on the behaviour of cryptographic accelerators, researchers can identify optimal injection points, time windows, and voltage thresholds — effectively building a machine-guided fault map of the target chip.
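A minimal way to picture such a machine-guided fault map is occlusion-style attribution: mask each point of a trace and measure how much the model's output moves; the points that move it most are candidate injection windows. The toy "model" and trace below are invented for illustration — real work would use a trained deep network over measured traces and attribution methods such as saliency maps or integrated gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

TRACE_LEN = 100
SENSITIVE = slice(40, 50)  # where the toy "chip" does its sensitive work

def model_score(trace):
    """Stand-in for a trained classifier: it responds only to activity
    inside the sensitive window of the trace."""
    return float(trace[SENSITIVE].sum())

# A synthetic trace: background noise plus genuine activity in the
# sensitive region.
trace = rng.normal(0.0, 0.1, TRACE_LEN)
trace[SENSITIVE] += 1.0

# Occlusion attribution: zero out one sample at a time and record how
# far the model's output shifts from the unmasked baseline.
baseline = model_score(trace)
attribution = np.array([
    abs(baseline - model_score(np.where(np.arange(TRACE_LEN) == i, 0.0, trace)))
    for i in range(TRACE_LEN)
])

# The highest-attribution samples form the machine-guided "fault map":
# the time windows most worth glitching.
top_windows = np.argsort(attribution)[-10:]
assert set(int(i) for i in top_windows) == set(range(40, 50))
```

In a black-box setting this is the appeal: the attacker never needs the chip's internal logic, only a model of its observable behaviour and a way to ask that model which moments matter.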
Implications for the crypto world — and beyond
Ledger wallets are among the most secure devices available for storing Bitcoin and other cryptocurrencies. Yet precisely because of that reputation, they must be subjected to the most aggressive attacks in order to remain trustworthy.
Karim’s research demonstrates that AI is now fully embedded in both the offensive and defensive sides of cybersecurity. It also suggests that hardware security is no longer just an engineering problem — it’s an AI problem.
This has broad implications:
Hardware wallet manufacturers must build fault resistance at both hardware and firmware levels.
Governments and regulators may soon need to assess AI-based exploitability in certification protocols.
Consumers and enterprises need assurance that trusted devices are tested against today’s — and tomorrow’s — threat models.
A call for ethical research and open collaboration
To conclude, Karim emphasised that while attacking hardware without its owner's authorisation is illegal, the work of Donjon and similar research teams is essential for advancing public safety.
“We do this work not because we want to break things,” he said, “but because the best way to protect users is to know exactly how they could be exploited.”
In an age where AI can write code, forge voices, and optimize financial portfolios, it’s no surprise that it can also help attackers identify where, when, and how to flip the bits that matter most.
The real challenge for security professionals now? Staying one step ahead — with AI on their side.
Key takeaways for security professionals:
AI-powered attribution is accelerating fault injection research.
Ledger’s Donjon team is leading proactive security by attacking its own products.
Cryptographic hardware must evolve with the threat landscape — not behind it.