In a world increasingly dependent on digital infrastructure, cybersecurity is more than just a technical skill—it’s a responsibility. For Goran Pavlović, this journey began not with an abstract interest in security but with a deep curiosity about how systems work and how they can be made better. What started as a fascination with coding evolved into a passionate commitment to protecting the very systems we rely on. After experiencing a cyberattack firsthand, his path shifted from creating to safeguarding, ultimately leading to a focus on space-based cybersecurity. In this interview, we explore his evolution from a teenager experimenting with computers to a consultant navigating the ethical challenges of protecting critical systems in both terrestrial and space domains.
What inspired your deep dive into cybersecurity, and how did your interest evolve toward protecting space-based systems?
My journey into cybersecurity began long before I ever heard the term. As a teenager, I was captivated by computers—not just using them, but understanding them. While others focused on what was happening on the screen, I was obsessed with what was happening behind it. That curiosity quickly turned into a passion for coding, and soon, I was creating, experimenting, and pushing boundaries—not out of mischief but out of a desire to learn how things worked and how they could be improved.
Then came a turning point that shaped the direction of my entire career: a cyberattack on a website I had been managing. It was personal. Watching something I had built being compromised was frustrating—but more importantly, it was enlightening. I realized that no matter how well something functions on the surface, it’s only truly strong if it’s secure underneath. That incident sparked something deeper in me. It wasn’t just about creating anymore—it became about protecting.
That experience propelled me into the world of cybersecurity. At first, it felt like solving intricate puzzles. I was fascinated by how systems could be tested, sometimes broken, and—more importantly—how they could be fortified. I was never drawn to causing harm; I was driven by the desire to make things safer, stronger, and more resilient. I came to understand that cybersecurity isn’t about fear or restriction—it’s about building trust in a digital world.
Over time, my perspective broadened. I began to see cybersecurity not just as a technical discipline but as a human one. Behind every vulnerability is a person, a business, a family—something worth protecting. That shift in mindset gave my work new meaning.
Eventually, my attention turned skyward—to space-based systems. Satellites. Orbital infrastructure. Global communication and navigation networks. These are the quiet giants powering modern life: GPS, weather forecasting, remote internet access, international financial systems, and more. Yet many of these systems still rely on outdated technologies, often with limited or inadequate cybersecurity protections.
That’s when the mission became clear to me. Space is no longer a distant, abstract frontier—it’s a critical, strategic domain. As more nations and private entities launch satellites and build infrastructure in orbit, the attack surface grows exponentially. A single exploited vulnerability in a space-based system could disrupt global supply chains, emergency services, or even national security.
So why space? Because it’s the unseen backbone of our digital world—and just like Earth-based systems, it’s vulnerable. Protecting these assets isn’t just a technical challenge—it’s a responsibility. It’s about anticipating threats before they become disasters and ensuring that the systems we all depend on remain secure, resilient, and trusted.
Cybersecurity consulting often involves navigating complex ethical terrain, especially in sensitive sectors. How do you evaluate the ethical implications of a project before accepting it—and have you ever walked away from one that crossed a line?
Yes, absolutely. Ethics are central to cybersecurity, especially when working with organizations that handle sensitive data or have influence over large user groups. When I'm evaluating a new consulting project, I don't just look at the technical challenges; I look at the organization itself. I ask: What are their values? How do they handle user privacy? Is this work contributing to safety and trust, or could it harm people, even unintentionally?
Over the years, I’ve developed a kind of internal checklist. I look for transparency, responsible data practices, and whether the organization has a culture of accountability. If they’re asking for surveillance tools, for example, I want to know how and why those tools will be used. If they want to track user behaviour, I want to understand the scope and purpose. If any of those answers feel vague, misleading, or ethically questionable, that’s a red flag.
There was one situation where I had to respectfully walk away. A company in the advertising-tech space wanted help building behavioural profiles from user data collected without clear consent. While it was technically legal, the way it was being done didn't sit right with me. Just because we can do something doesn't always mean we should.
To me, the role of a cybersecurity consultant isn’t just to protect systems—it’s also to help shape responsible digital environments. We have a responsibility to challenge decisions that may cross ethical lines and to walk away when necessary. That’s not always easy—but it’s essential.
The democratization of cybersecurity knowledge has opened many doors—but also some dangerous ones. What practical safeguards do you believe should be in place to prevent educational platforms from unintentionally training the next generation of cybercriminals?
It’s no exaggeration to say that we live in a golden age of cybersecurity education. Anyone, anywhere, can now access resources that were once limited to government agencies, corporate training labs, or advanced university programs. That level of access is exciting—it enables people from underserved regions, self-taught learners, and career changers to enter a field that desperately needs more talent. But there’s a shadow side to that openness: we are also unintentionally lowering the barrier for misuse.
The core problem isn’t access to knowledge—it’s access without accountability. Learning how to scan for vulnerabilities, bypass firewalls, or exploit insecure code can be invaluable for defenders, but it can also be misused for illegal or unethical purposes when delivered without a strong ethical framework. Just like learning to pick a lock doesn’t make someone a thief, teaching someone how to perform a penetration test doesn’t make them a criminal—but without context, boundaries, and mentorship, we risk creating skilled individuals who don’t fully grasp the weight of their capabilities.
That’s why I believe cybersecurity education must evolve in three major ways:
Ethics must be embedded, not optional
Too often, ethics is treated as a brief, separate unit at the start or end of a course. That's a mistake. Ethics should be integrated into every technical topic: woven into exercises, case studies, and even assessments. When a student is learning about SQL injection, they should simultaneously be exposed to real-world stories of how such attacks have caused damage, whether it's leaking hospital records or disrupting critical infrastructure; a minimal example of what that pairing can look like follows these three points. These narratives connect skills to consequences, and they help ground learners in the human reality of what cybersecurity actually protects.
Implement a tiered learning structure
We don't give teenagers the keys to a Formula 1 car on day one; we make sure they understand the risks, the rules, and the responsibilities first. Cybersecurity education should adopt a similar model. Before learners gain access to advanced offensive material, like exploit development or red teaming, they should demonstrate not just technical aptitude but also an understanding of legal and ethical standards. This could involve passing integrity checks, completing peer-reviewed projects, or even working through supervised mentorship phases, much like a "digital apprenticeship."
Certifications must reflect more than just skill
In many cases, certifications are treated as finish lines—badges of competence. But what if they also became gateways of trust? What if a certification validated not only a person’s knowledge of tools and techniques, but also their grasp of responsible conduct and their ability to operate within a legal, ethical framework? This would raise the bar and signal to employers, governments, and users that the certified individual is not only skilled—but also accountable.
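To make the pairing from the first point concrete, here is a minimal sketch of the kind of exercise an ethics-integrated course might use, showing the vulnerable pattern and its fix side by side. It uses Python's standard sqlite3 module; the table, data, and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is concatenated straight into the SQL string,
# so the payload rewrites the query's logic and returns every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", rows)

# SAFE: a parameterized query treats the input as data, never as SQL,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)
```

Taught next to a story about leaked hospital records, the one-line difference between those two queries carries real weight.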
I’ve personally seen both sides of this coin. I’ve worked with students and professionals who came into the field with no malicious intent but with little awareness of how fragile and consequential the systems they were experimenting with truly were. Sometimes, all it takes is one poorly informed choice to cross a line—and that’s where educational platforms have an enormous responsibility.
Ultimately, democratization is a good thing—but with great access must come great stewardship. If we want to build the next generation of cybersecurity defenders—not attackers—we must pair knowledge with wisdom, access with accountability, and excitement with ethics. Only then can we safely scale education while preserving trust in our digital future.
With the rise of AI and machine learning in cyber defence, do you see these tools as fully reliable allies, or are they introducing new vulnerabilities we haven’t yet grasped?
AI and machine learning have become indispensable allies in modern cyber defence. Their ability to analyze terabytes of data in real time, identify anomalies far faster than a human analyst ever could, and even predict potential breaches based on behavioural trends makes them game-changers. However, I would caution anyone against viewing them as fully autonomous protectors. These systems are powerful, but not infallible. In fact, they bring with them a new class of risks that many organizations are only beginning to understand.
At their core, AI and ML are reflection engines. They don’t invent knowledge—they amplify and operationalize the patterns in the data we feed them. That means if your input data is biased, incomplete, or compromised, your output will be too—at scale and speed.
While defenders are using AI to spot threats faster, attackers are also using it to evade detection. We’re now seeing AI-driven polymorphic malware, automated phishing campaigns that mimic writing styles using large language models, and even adversarial AI techniques like model poisoning—where attackers subtly manipulate training data so a security model begins misclassifying real threats as safe behaviour.
This isn’t just theoretical. There have already been real-world cases where AI-based intrusion detection systems failed to raise alarms because attackers understood how to game the system’s assumptions. When your AI learns the wrong thing, it fails silently—often until it’s too late.
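To illustrate the failure mode in miniature, here is a toy sketch, with entirely synthetic numbers, of a naive detector that learns its "normal" baseline as the mean plus three standard deviations of benign traffic. A handful of oversized samples slipped into the training window is enough to let a later exfiltration burst pass as normal.

```python
import statistics

def train_threshold(benign_samples, k=3.0):
    """Fit a naive anomaly threshold: mean + k standard deviations of a
    benign-traffic feature (here, bytes transferred per session)."""
    mu = statistics.mean(benign_samples)
    sigma = statistics.stdev(benign_samples)
    return mu + k * sigma

# Clean training window: typical sessions move ~500 bytes.
clean = [480, 510, 495, 505, 490, 500, 515, 498]

# Poisoned window: the attacker drips in a few oversized "benign"
# sessions, quietly inflating both the mean and the spread.
poisoned = clean + [4000, 4200, 3900]

attack = 3500  # a later exfiltration burst, in bytes
print("clean threshold:   ", round(train_threshold(clean)))     # ~533
print("poisoned threshold:", round(train_threshold(poisoned)))  # ~6420
print("flagged by clean model:   ", attack > train_threshold(clean))     # True
print("flagged by poisoned model:", attack > train_threshold(poisoned))  # False
```

Real model poisoning targets far more sophisticated systems, but the principle is the same: the detector fails silently, exactly as described above.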
Another growing concern is explainability. In high-stakes sectors—think critical infrastructure, healthcare, finance, or even space systems—an alert that can’t be explained is almost as dangerous as no alert at all. If an AI flags a process as malicious but can’t articulate why, decision-makers may ignore the alert, delay action, or even mistrust the system altogether.
We need transparent AI—models that not only detect threats but can also provide interpretable justifications for their conclusions. This is where human-machine teaming becomes essential: AI should accelerate analysis, but the final call, especially in sensitive environments, should be human-informed.
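One lightweight way to build that in, sketched below with invented rule names and thresholds rather than any real detection product, is to make every alert carry its own human-readable evidence, so the analyst sees the reasoning alongside the verdict.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alert:
    verdict: str
    score: float
    evidence: List[str] = field(default_factory=list)  # human-readable reasons

def score_process(features: dict) -> Alert:
    """Toy rule-based scorer that accumulates evidence alongside its score,
    so an analyst can see why the alert fired, not just that it did."""
    score, evidence = 0.0, []
    if features["parent"] == "winword.exe" and features["child"] == "powershell.exe":
        score += 0.6
        evidence.append("Office app spawned PowerShell (common macro-malware pattern)")
    if features["outbound_bytes"] > 1_000_000:
        score += 0.3
        evidence.append(f"Unusual outbound volume: {features['outbound_bytes']:,} bytes")
    verdict = "suspicious" if score >= 0.5 else "benign"
    return Alert(verdict, score, evidence)

alert = score_process({"parent": "winword.exe",
                       "child": "powershell.exe",
                       "outbound_bytes": 2_500_000})
print(f"{alert.verdict} (score {alert.score:.1f})")
for reason in alert.evidence:
    print(" -", reason)
```

Even this crude pattern changes the human-machine dynamic: the analyst can challenge a specific piece of evidence instead of accepting or rejecting a black-box score.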
AI doesn’t have instincts. It doesn’t understand ethics, nuance, or intent. It doesn’t know the difference between a pen tester running scans and a malicious actor—unless we teach it. That’s why AI in cybersecurity must be trained with diverse, clean, and representative datasets, and its behaviour must constantly be audited by human experts. Oversight isn’t optional—it’s mission-critical.
I like to think of AI as a brilliant but amoral assistant. It can spot things faster than you can, correlate data you didn't even know was related, and monitor 24/7 without fatigue. But it needs a steady human hand to guide it, correct it, and question it.
The future isn’t about humans vs. machines. It’s about how we combine human intuition, ethical judgment, and experience with the unmatched processing power and pattern recognition of AI. Together, they can create cybersecurity that is not only fast and scalable—but also principled and accountable.
So, in short, AI is a powerful ally—but only when paired with human insight, ethical design, and continuous validation. If we rely on it blindly, we risk building a fast-moving system that simply accelerates mistakes. But if we use it wisely, we can create defensive strategies that are proactive, adaptive, and incredibly effective.
From your perspective, what should a modern framework for regulating cybersecurity training and certification include to address both access and accountability?
From my perspective, a modern framework for cybersecurity training and certification must strike a balance between access and accountability. Access is crucial—cybersecurity knowledge should be available to everyone, regardless of location, background, or financial status. Talent is everywhere, and to build a strong, diverse cybersecurity workforce, we need to create pathways that reach all corners of the globe. This means offering affordable, scalable education and making resources available in multiple formats and languages. It’s not just about technical skills but also about exposing younger generations to cybersecurity concepts early on so they can grow into responsible digital citizens.
However, accessibility alone isn't enough. We also need to ensure accountability at every stage of training. Cybersecurity professionals are responsible for protecting sensitive data and systems, so it's essential that their education is both comprehensive and ethically grounded. That means certifications should be built around graduated learning, where individuals progress through foundational topics before advancing to more complex tools and techniques. At the same time, technical skills should always be accompanied by a deep understanding of ethical considerations. Training must incorporate real-world ethical dilemmas, showing how misuse can cause harm and emphasizing the importance of responsible decision-making in high-stakes environments.
Additionally, because cybersecurity threats are global, the framework for education and certification must be global too. Creating international standards and aligning training providers across borders will ensure a cohesive, accountable cybersecurity community. This will not only help deter bad actors but also create a globally unified front against cyber threats.
Lastly, given the fast-paced evolution of the cybersecurity landscape, training and certification should be dynamic. Continuous learning should be built into the framework, with professionals encouraged to stay current with the latest threats and technologies. Ongoing certification and professional development should be the norm, ensuring that cybersecurity professionals are always prepared for the challenges of the future.
In summary, a modern cybersecurity framework should provide access to education while ensuring accountability through ethical training, graduated learning, and global collaboration. This combination will help create a highly skilled, ethical, and adaptable workforce capable of protecting our digital future.
Looking ahead, what emerging cyber threats do you believe will define the next decade—and how prepared are we, especially in the context of space and global digital systems?
Looking ahead, the next decade promises to be a defining period for cybersecurity, with emerging threats that will reshape how we think about digital defence. One of the most pressing areas is the intersection of cybersecurity and space. As space infrastructure becomes increasingly integral to global communications, navigation, and scientific progress, it also becomes an attractive target for cybercriminals and state actors. Satellites and space stations—many of which were built long before cybersecurity was a priority—are now vulnerable to attacks that could disrupt essential services. For instance, imagine the chaos caused by a hacked weather satellite feeding false storm data or a GPS constellation being manipulated to mislead transportation systems or military operations. These scenarios are no longer the stuff of science fiction but are realistic threats we need to prepare for.
In addition to the vulnerability of space systems, we are witnessing a rapid rise in AI-driven cyberattacks. Deepfakes are becoming more sophisticated, capable of destabilizing public trust and undermining the integrity of communications. Adaptive phishing attacks are evolving to learn and mimic individual behaviours, making them harder to detect. Misinformation campaigns are increasingly subtle, blurring the line between fact and fiction to the point where many people won’t realize they’ve been targeted until it’s too late. These AI-driven attacks represent a new wave of challenges that will test the limits of our current cybersecurity frameworks.
However, perhaps the most daunting threat on the horizon is quantum computing. Once it scales, quantum computing could render today's public-key encryption obsolete in the blink of an eye. This is why we must urgently focus on post-quantum cryptography, to ensure that our data remains secure in a world where quantum-powered adversaries could bypass conventional defences. Preparing for this future is not optional; it's a necessity.
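On the preparation side, much of the practical work starts with crypto-agility: structuring systems so the algorithm is a swappable policy decision rather than a hard-coded dependency. The sketch below is illustrative only; the key-exchange bodies return random bytes as stand-ins, not real X25519 or ML-KEM implementations, and the registry pattern is one possible design, not a standard API.

```python
import os
from typing import Callable, Dict, List, Tuple

# A key exchange yields (public material to send to the peer, shared secret).
KeyExchange = Callable[[], Tuple[bytes, bytes]]

REGISTRY: Dict[str, KeyExchange] = {}

def register(name: str) -> Callable[[KeyExchange], KeyExchange]:
    """Decorator that files a key-exchange implementation under a wire name."""
    def wrap(fn: KeyExchange) -> KeyExchange:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("x25519")      # classical ECDH: breakable by a scaled quantum computer
def classical_kex() -> Tuple[bytes, bytes]:
    return os.urandom(32), os.urandom(32)    # stand-in for a real X25519 exchange

@register("ml-kem-768")  # post-quantum KEM standardized in NIST FIPS 203
def pq_kex() -> Tuple[bytes, bytes]:
    return os.urandom(1184), os.urandom(32)  # stand-in for a real ML-KEM library

def negotiate(policy: List[str]) -> KeyExchange:
    """Return the first algorithm in the policy list that we implement."""
    for name in policy:
        if name in REGISTRY:
            return REGISTRY[name]
    raise ValueError("no supported key-exchange algorithm in policy")

# Migrating to post-quantum becomes a one-line policy change, not a rewrite:
kex = negotiate(["ml-kem-768", "x25519"])
public, shared_secret = kex()
```

The point is the last two lines: when post-quantum algorithms mature, migration becomes a policy update rather than a rewrite of every service that negotiates a key.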
So, how prepared are we to face these emerging threats? Technically, we are making significant strides. The cybersecurity industry is seeing rapid advancements in tools, talent, and innovative solutions. However, when we look at the broader strategic and cooperative landscape, there are still gaps. Many governments and private entities continue to treat cybersecurity as an afterthought, with insufficient collaboration and forward-thinking strategies. Space law, for instance, is far behind the pace of technological advancements in space exploration. Similarly, global frameworks designed to address cyber threats remain fragmented, struggling to keep up with the pace of increasingly sophisticated and coordinated attacks.
To truly prepare for the next decade, we need a shift in how we approach cybersecurity. We need to move beyond reactive measures and adopt a mindset of proactive, collaborative, and strategic planning. The challenges are immense, but they are not insurmountable. What we need now is vision, urgency, and unity—across borders, industries, and governments.