In the race to build the most powerful artificial intelligence systems, transparency is rapidly falling by the wayside. Behind some of the most advanced and widely used AI models today lies a fundamental problem: no one outside the companies that built them really knows how they work. This growing reliance on so-called “black box” systems is beginning to reshape not only how AI is developed but also how society understands, trusts, and interacts with it.
The term “black box AI” refers to models whose internal decision-making processes are hidden from view. While they may produce impressive results, from translating languages to detecting patterns in medical scans or predicting creditworthiness, the reasoning behind those outcomes remains largely inaccessible. For researchers, regulators, and the public, that opacity is becoming harder to ignore.
What makes a model a ‘black box’?
Not all AI is created equal. Traditional, rule-based systems follow predictable paths, and their outcomes can usually be traced back to explicit, human-written rules. But as machine learning and deep learning models have grown in complexity, particularly those based on neural networks, understanding what happens between input and output has become increasingly difficult, even for their own creators.
This is not merely a technical quirk. It is a structural feature of modern AI development. Many of the largest models, such as those used in image generation, language prediction, or recommendation systems, are trained on vast datasets, with billions of parameters adjusted automatically during training. The result is something that works but cannot easily be interpreted. When companies add secrecy to the mix by keeping their code, training data, and model architecture private, the outcome is a closed system that few can scrutinise.
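To give a sense of why this happens, the sketch below (purely illustrative, not drawn from any production system) trains a tiny neural network from scratch and then prints its learned parameters. Even at this miniature scale, the weights are just arrays of numbers; nothing in them reads like a rule a person could point to or contest.

```python
# Purely illustrative: a tiny neural network trained on the XOR problem.
# Production "black box" systems have billions of parameters; this toy has
# seventeen, yet even its learned weights are opaque numbers, not rules.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with four units, sigmoid activations throughout.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every parameter to reduce squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("Predictions for the four inputs:", out.round(2).ravel())
print("Learned hidden-layer weights:\n", W1)  # numbers, not explanations
```

Reading every one of those seventeen numbers says nothing about why the network answers as it does; scaled up to billions of parameters, that problem only deepens.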
As IBM notes in its analysis of the issue, this creates a tension between innovation and accountability. While keeping a model closed may protect intellectual property or give a competitive edge, it also makes it far more difficult to identify errors, biases, or unintended consequences.
Real-world risks, invisible causes
The concerns around black-box AI are not hypothetical. In fields like finance, criminal justice, and healthcare, automated systems are increasingly used to support, or even make, high-stakes decisions. A loan application might be rejected, a job candidate passed over, or a patient assigned a treatment plan based on a model whose inner workings are opaque.
When those decisions feel unfair or lead to harm, affected individuals often have no way of understanding what went wrong, let alone challenging the outcome. As the Harvard Journal of Law & Technology points out, traditional legal principles like intent and causation break down when applied to AI. If no one can explain why a system behaved a certain way, how can anyone be held responsible?
This lack of explainability also complicates regulation. Governments and watchdog agencies are beginning to demand more transparency, but closed-source models leave them with little room to assess compliance. In some cases, even the companies themselves cannot fully explain their AI’s behaviour, particularly when the models develop unexpected patterns or correlations during training.

Why companies are keeping it closed
There are clear business incentives for maintaining a black-box approach. Developing large-scale AI models is expensive, and releasing their inner workings risks giving away valuable knowledge to competitors. There are also concerns about security, misuse, and the potential for adversarial attacks if too much information is shared.
However, this approach increasingly clashes with calls for public accountability. Critics argue that when systems are deployed in society—especially in public services—there should be a baseline of transparency, regardless of commercial interests. As one recent piece in The Bulletin of the Atomic Scientists put it, “We should not be relying on systems we cannot inspect.”
Efforts to bridge this divide have led to the rise of “explainable AI” (XAI), an emerging field focused on building models that provide insight into their own logic. Some researchers are also advocating for the use of open-source models, which can be independently tested and improved. But progress is uneven. While some labs and universities embrace openness, most of the largest, most powerful AI systems remain firmly closed.
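What that explainability work looks like in practice varies, but one common post-hoc technique is to probe a model from the outside and measure which inputs actually move its decisions. The sketch below is a minimal, hypothetical illustration of that idea: permutation importance run against a stand-in scoring function, not any real lender’s or vendor’s model.

```python
# Illustrative sketch of one explainability technique: permutation importance.
# The "black box" here is a stand-in scoring function; in practice it would be
# a trained model whose internals are not available for inspection.
import numpy as np

rng = np.random.default_rng(42)

def black_box_score(features: np.ndarray) -> np.ndarray:
    """Stand-in for an opaque model: callers see outputs, never the logic."""
    income, debt, age = features[:, 0], features[:, 1], features[:, 2]
    return 1 / (1 + np.exp(-(0.04 * income - 0.08 * debt + 0.0 * age - 1.0)))

# Synthetic applicants: income, debt, age.
X = np.column_stack([
    rng.normal(50, 15, 1000),    # income (thousands)
    rng.normal(20, 10, 1000),    # debt (thousands)
    rng.integers(18, 80, 1000),  # age (years)
]).astype(float)
baseline = black_box_score(X)

# Shuffle one feature at a time and measure how much the model's outputs move.
# A feature whose shuffling changes almost nothing cannot be driving decisions.
for i, name in enumerate(["income", "debt", "age"]):
    X_perm = X.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    shift = np.mean(np.abs(black_box_score(X_perm) - baseline))
    print(f"{name:>6}: mean output shift = {shift:.4f}")
```

Probes like this can suggest, for example, that a credit model is leaning heavily on a variable it should not, even while its internals stay sealed, though they yield estimates rather than guarantees.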
The road ahead
At a time when AI systems are increasingly involved in decisions that affect everyday lives, the question is no longer just technical—it is political, ethical, and legal. Who gets to decide what an AI system does, and who gets to understand it?
For now, the trend appears to favour opacity. Closed-source, black-box models dominate the most powerful and profitable corners of the AI industry. But as their influence grows, so too does the pressure for change. Whether that change comes through regulation, consumer demand, or industry self-reflection remains to be seen.
What is clear is that the stakes are too high to ignore. In a world shaped by machine decision-making, understanding how those machines reach their conclusions is not a luxury. It is a necessity.