We’ve all seen the headlines. “Cyber Resilience Act adopted by the EU. Manufacturers must comply.” But reading the press release isn’t enough. I took the time to go through the full regulation, the legal summaries, and the expert commentary, and here’s the thing: this is not just another GDPR moment. It’s something bigger. It’s the EU saying: from now on, cybersecurity is part of product quality. Not an afterthought. Not a feature. A requirement.
If you’re wondering what this actually means for you, whether you’re in security, product, engineering, or compliance, here’s a breakdown of what the CRA really requires and how to approach it without losing your mind.
What is the Cyber Resilience Act, really?
The Cyber Resilience Act (CRA) says that all digital products must be secure by design and by default. It applies to both hardware and software, including operating systems, IoT devices, and even business applications or embedded firmware. The goal is to reduce vulnerabilities across the EU digital ecosystem by ensuring that cybersecurity is addressed throughout the entire product lifecycle. That includes design, development, production, distribution, support, and end-of-life.
In technical terms, the regulation mandates a “security by design” approach. As part of this, development teams must follow secure coding guidelines, use threat modelling early in the design process, and put controls in place to reduce the attack surface. The act also introduces mandatory security updates, vulnerability handling procedures, and reporting mechanisms, making cybersecurity a continuous responsibility rather than a one-time checklist.
Who needs to care about this?
Any company developing or selling digital products in the EU market is in scope, even if their headquarters are outside the EU. This includes software vendors, hardware manufacturers, importers, and distributors. Companies integrating third-party components or software libraries are still responsible for ensuring the entire system meets the regulation’s requirements.
This means teams that typically worked in silos (developers, security engineers, product managers, and compliance officers) now need to collaborate around a common regulatory objective. The CRA is forcing organizations to unify secure development practices, product architecture review, and compliance reporting into a single, accountable workflow.
What does the CRA require now?
The act introduces several core obligations. Companies must implement a risk management framework that spans the full product lifecycle. This includes identifying known vulnerabilities in dependencies, regularly assessing risks associated with product features, and applying mitigation strategies. Development workflows need to incorporate secure development lifecycle (SDLC) practices such as code review, static and dynamic code analysis, software composition analysis (SCA), and vulnerability scanning.
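To make the SCA piece concrete, here is a minimal sketch of a dependency check that could run as a CI gate. It queries the public OSV vulnerability database; the endpoint and response shape follow OSV’s documented query API, but the package name, version, and exit-code convention are purely illustrative, and a real pipeline would read its dependencies from a lockfile or SBOM rather than hard-coding them.

```python
import json
import sys
import urllib.request

# Minimal SCA-style gate: ask the public OSV database whether a pinned
# dependency has known vulnerabilities. The package name and version below are
# deliberately old and purely illustrative.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    findings = known_vulns("jinja2", "2.4.1")
    for vuln in findings:
        print(vuln["id"], vuln.get("summary", ""))
    sys.exit(1 if findings else 0)  # a non-zero exit fails the CI job
```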
The CRA also enforces vulnerability management requirements. Teams must have a defined process for receiving, documenting, and remediating vulnerabilities, whether they’re discovered internally or reported by third parties. For actively exploited vulnerabilities, organizations have 24 hours to submit an initial report to ENISA and their national authority, and 72 hours to follow up with remediation plans or mitigations. This short response window requires clear internal escalation processes and real-time visibility into security incidents.
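As a rough illustration of what that visibility has to track, here is a small stdlib-only sketch that computes the two reporting deadlines from the moment active exploitation is confirmed; the class and field names are hypothetical, not taken from the regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ReportingClock:
    """Deadlines for one actively exploited vulnerability.

    The 24h/72h windows mirror the obligations described above; the class and
    field names are illustrative, not taken from the regulation.
    """
    confirmed_at: datetime  # moment active exploitation was confirmed

    @property
    def initial_report_due(self) -> datetime:
        return self.confirmed_at + timedelta(hours=24)

    @property
    def followup_due(self) -> datetime:
        return self.confirmed_at + timedelta(hours=72)

    def hours_remaining(self, deadline: datetime) -> float:
        return (deadline - datetime.now(timezone.utc)).total_seconds() / 3600

clock = ReportingClock(confirmed_at=datetime(2026, 9, 14, 8, 0, tzinfo=timezone.utc))
print("Initial report due:", clock.initial_report_due)
print("Follow-up due:     ", clock.followup_due)
```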
Products must be configured securely by default, with unused services disabled and secure communications enforced (e.g., TLS 1.2+). Logging, access control, and hardening must be implemented before the product ships. Updates must be delivered in a secure and authenticated manner, meaning over-the-air update systems and signed firmware updates must become part of the architecture if they’re not already.
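On the “secure communications by default” point, here is a minimal Python sketch of a TLS client configuration that refuses anything older than TLS 1.2 and keeps certificate verification on. It uses only the standard library, and the hostname is just an example.

```python
import socket
import ssl

def secure_client_context() -> ssl.SSLContext:
    """TLS client context with certificate verification on and TLS >= 1.2."""
    ctx = ssl.create_default_context()            # verifies certificates and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1 outright
    return ctx

ctx = secure_client_context()
with socket.create_connection(("example.com", 443)) as sock:          # example host
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())                  # e.g. TLSv1.3
```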
Finally, conformity assessments will apply based on risk classification. Most general-purpose software may fall under self-assessment, but critical products, especially those in industrial, security, or infrastructure domains, will need third-party certification and supporting technical documentation.
The CRA timeline: when does this happen?
The regulation entered into force in December 2024. Most obligations apply after a 36-month transition period, with full compliance required by December 2027. However, the vulnerability handling and incident reporting obligations come into effect sooner, after 21 months, in September 2026. This gives organizations a relatively short timeframe to build out the infrastructure, tooling, and workflows needed to meet the regulation’s expectations.
For most engineering and security teams, the biggest challenge isn’t starting from scratch but aligning existing processes with the CRA’s new compliance requirements and being able to prove that alignment through documentation and response readiness.
So… where do you start? A practical guide
The first step is scoping. Map your digital product portfolio and determine which software and hardware components are in scope. If your products connect to networks, process data, or rely on third-party code, they likely fall under the CRA’s definition of “products with digital elements.” Once the scope is defined, carry out threat modelling to identify risks and establish a baseline for mitigation, and integrate it into your product design process.
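That baseline does not need to be heavyweight. The sketch below captures per-component threats with STRIDE-style categories and a simple likelihood-times-impact score; the field names, scoring scale, and example entries are illustrative, not prescribed by the CRA.

```python
from dataclasses import dataclass
from enum import Enum

class Stride(Enum):
    SPOOFING = "spoofing"
    TAMPERING = "tampering"
    REPUDIATION = "repudiation"
    INFO_DISCLOSURE = "information disclosure"
    DENIAL_OF_SERVICE = "denial of service"
    ELEVATION = "elevation of privilege"

@dataclass
class Threat:
    component: str            # e.g. "OTA update service"
    category: Stride
    description: str
    likelihood: int           # 1 (rare) to 5 (almost certain)
    impact: int               # 1 (negligible) to 5 (critical)
    mitigation: str = "none identified"

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# Example baseline entries (hypothetical components and ratings).
baseline = [
    Threat("OTA update service", Stride.TAMPERING,
           "Unsigned update image accepted by the device", 3, 5,
           "Require signed images; verify before install"),
    Threat("Device web UI", Stride.SPOOFING,
           "Default credentials left enabled in production", 4, 4,
           "Force credential change on first boot"),
]
for t in sorted(baseline, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.component}: {t.description}")
```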
From there, review your SDLC and assess whether security practices are integrated at each stage. This includes secure coding training for developers, CI/CD pipeline integration with SAST and DAST tools, dependency scanning with SCA tools, and regular manual code review or penetration testing. You should also define patch management timelines and an update delivery strategy; automated update mechanisms, rollback protection, and digital signing for patches are all best practices under the CRA’s framework.
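As one illustration of those update-delivery practices, here is a sketch of a pre-install gate that combines a detached Ed25519 signature check with a basic rollback-protection rule (never accept a version lower than or equal to the one already installed). It assumes the third-party cryptography package and a raw 32-byte vendor public key; the CRA does not mandate a specific algorithm, so treat the specifics as placeholders.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))  # e.g. "1.4.2" -> (1, 4, 2)

def update_allowed(image: Path, signature: Path, vendor_pubkey: bytes,
                   installed: str, candidate: str) -> bool:
    """Accept an update only if it is newer and carries a valid signature."""
    # Rollback protection: refuse downgrades and re-installs of the same version.
    if parse_version(candidate) <= parse_version(installed):
        return False
    # Authenticity: the image must match the vendor's detached Ed25519 signature.
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(
            signature.read_bytes(), image.read_bytes()
        )
    except InvalidSignature:
        return False
    return True

# Hypothetical usage:
# if not update_allowed(Path("fw-1.4.2.bin"), Path("fw-1.4.2.sig"), VENDOR_PUBKEY,
#                       installed="1.4.1", candidate="1.4.2"):
#     raise RuntimeError("Refusing update: downgrade attempt or invalid signature")
```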
Next, develop a vulnerability management process that includes intake mechanisms, validation, triage, CVE assignment if applicable, and customer notification workflows. If you don’t have a vulnerability disclosure program, set one up, either in-house or with a managed provider. You’ll also need to define reporting flows for when an incident or vulnerability crosses the CRA’s threshold for mandatory reporting. Documentation becomes a critical piece. For each product, you’ll need to create and maintain a technical file that contains security requirements, threat models, test reports, software bills of materials, and conformity declarations.
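To show how the intake-to-notification flow might hang together, here is a stdlib-only sketch of a vulnerability report moving through explicit stages. The stage names, fields, and CVE placeholder are all hypothetical; the useful part is the timestamped, auditable trail, which feeds directly into the technical file.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class Stage(Enum):
    RECEIVED = auto()
    VALIDATED = auto()
    TRIAGED = auto()
    REMEDIATED = auto()
    DISCLOSED = auto()

_ORDER = list(Stage)  # intake -> validation -> triage -> fix -> disclosure

@dataclass
class VulnReport:
    product: str
    summary: str
    reporter: str                       # internal team or external researcher
    cve_id: str | None = None           # assigned later, if applicable
    stage: Stage = Stage.RECEIVED
    history: list[tuple[Stage, datetime]] = field(default_factory=list)

    def advance(self, to: Stage) -> None:
        # Enforce the stage order and keep a timestamped trail for the technical file.
        if _ORDER.index(to) != _ORDER.index(self.stage) + 1:
            raise ValueError(f"cannot move from {self.stage.name} to {to.name}")
        self.stage = to
        self.history.append((to, datetime.now(timezone.utc)))

report = VulnReport("Gateway firmware", "Auth bypass in local API", "external researcher")
report.advance(Stage.VALIDATED)
report.advance(Stage.TRIAGED)
report.cve_id = "CVE-XXXX-XXXXX"  # placeholder identifier only
```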
Finally, internal alignment is key. Security and product teams should conduct joint readiness assessments and tabletop exercises to simulate CRA scenarios, including fast reporting to authorities. Make sure your leadership understands the reporting timelines and legal responsibilities. Put simply, security is no longer optional, and resilience isn’t just about surviving an attack. It’s about proving, at every stage of the product lifecycle, that security was there from the start.