Common Criteria
Based on Wikipedia: Common Criteria
In the late 1970s, the United States Department of Defense faced a problem that would eventually ripple through the entire global technology sector: it had no standard way to measure whether a computer system was actually secure. The result was the Orange Book, officially known as DoD 5200.28-STD, a document born from the seminal work of Dave Bell and Len LaPadula and the broader computer-security work of the National Security Agency and the National Bureau of Standards. For years, this was the gold standard, but it was American, it was rigid, and it was just one of many competing dialects in a language nobody spoke fluently. While the US relied on the Orange Book and its Rainbow Series, Europe was developing ITSEC, and Canada was drafting the CTCPEC. A vendor selling a firewall to defense contractors in London, Paris, and Washington might have to undergo three separate, expensive, and contradictory evaluations. The industry was drowning in red tape, and the solution arrived in the form of a unification effort that would become the global lingua franca of digital trust: the Common Criteria.
The Common Criteria for Information Technology Security Evaluation, formally designated ISO/IEC 15408, is not merely a document; it is a framework for certainty. It emerged from the governments of Canada, France, Germany, the Netherlands, the United Kingdom, and the United States, all agreeing that the fragmented landscape of security standards was a barrier to commerce and national safety. The current iteration, CC:2022 Revision 1, represents decades of evolution, but the core philosophy remains unchanged from its inception. It is a system designed to answer a single, deceptively simple question: does this product do what it says it does, and can we trust the person who says it does?
To understand the power of the Common Criteria, one must first understand the actors involved. It is a three-way handshake between the user, the vendor, and the evaluator. The process begins with the user, who must articulate their needs. In the pre-Common Criteria era, a government agency might simply demand a "secure system." That was vague. Under the Common Criteria framework, the user must specify their Security Functional Requirements (SFRs) and Security Assurance Requirements (SARs). These are compiled into a document known as the Security Target (ST). This ST is the contract. It defines the exact boundaries of the security claim. Sometimes, these requirements are not invented from scratch but are drawn from Protection Profiles (PPs), which are pre-defined templates for specific classes of products, like firewalls or smart cards.
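The relationship between Protection Profiles, SFRs, SARs, and the Security Target can be sketched as a toy data model. The two sample requirement identifiers (FAU_GEN.1 and ADV_ARC.1) are real components from the CC catalogues, but the class design and the product name are purely illustrative, not real CC artifacts:

```python
from dataclasses import dataclass, field

# Hypothetical, much-simplified model of the CC document hierarchy;
# the real SFR/SAR catalogues in ISO/IEC 15408 are far richer.

@dataclass(frozen=True)
class Requirement:
    identifier: str   # e.g. "FAU_GEN.1" (an SFR) or "ADV_ARC.1" (a SAR)
    description: str

@dataclass
class ProtectionProfile:
    name: str         # a pre-defined template for a product class
    sfrs: list[Requirement]
    sars: list[Requirement]

@dataclass
class SecurityTarget:
    product: str      # the "contract": the exact boundaries of the claim
    sfrs: list[Requirement] = field(default_factory=list)
    sars: list[Requirement] = field(default_factory=list)

    @classmethod
    def from_profile(cls, product: str, pp: ProtectionProfile) -> "SecurityTarget":
        # Requirements drawn from a PP rather than invented from scratch.
        return cls(product, list(pp.sfrs), list(pp.sars))

pp = ProtectionProfile(
    "Generic Firewall PP",
    sfrs=[Requirement("FAU_GEN.1", "Audit data generation")],
    sars=[Requirement("ADV_ARC.1", "Security architecture description")],
)
st = SecurityTarget.from_profile("ExampleGate 2000", pp)
```

The point of the sketch is the direction of flow: the PP is generic, the ST instantiates it for one product, and everything the evaluator later checks is bounded by what the ST claims.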
Once the target is set, the vendor steps in. They must implement the product or make specific claims about its security attributes that align with the ST. This is where the magic happens: the vendor is no longer just making marketing promises; they are making verifiable, technical assertions. But claims are cheap. The third actor, the testing laboratory, enters the arena to determine if those claims are true. These laboratories do not simply check a box; they subject the product to a rigorous, standard, and repeatable evaluation process. The goal is to provide assurance that the specification, implementation, and evaluation of the computer security product have been conducted with a level of rigor commensurate with the target environment. If a product is destined for a high-security military installation, the evaluation must be as intense as the environment demands. If it is for a commercial office, the bar is different.
This system has produced a vast catalog of certified products. The list includes operating systems, access control systems, databases, and key management systems. Historically, the vast majority of Protection Profiles and evaluated Security Targets have focused on IT components—firewalls, operating systems, smart cards. These are the building blocks of the digital world. When a government or a large enterprise engages in IT procurement, they can now specify Common Criteria certification as a mandatory requirement. It is a shield against incompetence and malice. However, it is important to note what the Common Criteria does not do. It is not a catch-all. Details regarding the specific cryptographic implementation within the Target of Evaluation (TOE) are often outside the scope of the CC. Instead, national standards like FIPS 140-2 provide the specifications for cryptographic modules, and various other standards dictate the algorithms in use. Yet, the lines are blurring. More recently, authors of Protection Profiles have begun including cryptographic requirements that would typically be covered by FIPS 140-2 evaluations, effectively broadening the bounds of the Common Criteria through scheme-specific interpretations.
The Architecture of Trust
The genius of the Common Criteria lies in its flexibility. It is intentionally generic. Unlike the prescriptive approaches of earlier standards like TCSEC (the Orange Book) or FIPS 140-2, which often listed specific security features for specific products, the Common Criteria does not provide a checklist of required features. It provides a method for defining those features. This approach, inherited from ITSEC, has been a source of debate for those accustomed to rigid rules, but it allows the standard to adapt to technologies that did not exist when the standard was written. It is a framework that can evolve.
The evaluation process is governed by a strict hierarchy of compliance. All testing laboratories must comply with ISO/IEC 17025, the international standard for the competence of testing and calibration laboratories, and certification bodies are normally approved against ISO/IEC 17065. This ensures that the people grading the homework are themselves held to a global standard of competence. But who watches the watchers? Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority, and this is where the map of global security becomes a patchwork of national agencies, each with its own acronym and mandate.
In Canada, the Standards Council of Canada (SCC), under the Program for the Accreditation of Laboratories (PALCAN), accredits Common Criteria Evaluation Facilities (CCEF). In France, the Comité français d'accréditation (COFRAC) accredits Common Criteria evaluation facilities, known as Centre d'évaluation de la sécurité des technologies de l'information (CESTI). These evaluations are conducted according to norms and standards specified by the Agence nationale de la sécurité des systèmes d'information (ANSSI), a powerful entity in the French security landscape. Italy relies on the OCSI (Organismo di Certificazione della Sicurezza Informatica) to accredit its laboratories. In India, the STQC Directorate of the Ministry of Electronics and Information Technology evaluates and certifies IT products, specifically at assurance levels EAL 1 through EAL 4.
The United Kingdom presents an interesting case study in the evolution of the system. The United Kingdom Accreditation Service (UKAS) used to accredit Commercial Evaluation Facilities (CLEF), but as of 2019, the UK has shifted its role to be primarily a consumer in the CC ecosystem, no longer acting as a primary accrediting body for new facilities. The United States relies on the National Institute of Standards and Technology (NIST) and its National Voluntary Laboratory Accreditation Program (NVLAP) to accredit Common Criteria Testing Laboratories (CCTL). Germany entrusts this duty to the Bundesamt für Sicherheit in der Informationstechnik (BSI), a name that carries significant weight in European cybersecurity. Spain utilizes the National Cryptologic Center (CCN) to accredit its testing laboratories, while the Netherlands uses the Netherlands scheme for Certification in the Area of IT Security (NSCIB) to accredit IT Security Evaluation Facilities (ITSEF). In Sweden, the Swedish Certification Body for IT Security (CSEC) licenses these facilities. These organizations were examined and presented at the International Conference on Common Criteria (ICCC 10), highlighting the global scrutiny these bodies undergo.
The Global Mutual Recognition Arrangement
If the Common Criteria is the standard, the Common Criteria Recognition Arrangement (CCRA) is the treaty that makes it useful. Without mutual recognition, a certification in Germany would be meaningless in Japan, forcing vendors to duplicate their efforts endlessly. The CCRA, originally signed in 1998 by Canada, France, Germany, the United Kingdom, and the United States, changed the game. It established that a product evaluated in one signatory country would be recognized by the others. Australia and New Zealand joined in 1999. By 2000, the arrangement had expanded to include Finland, Greece, Israel, Italy, the Netherlands, Norway, and Spain. The membership has continued to expand since then, creating a vast network of trust.
However, this trust is not unlimited. Within the CCRA, only evaluations up to EAL 2 are fully mutually recognized (including augmentation with flaw remediation, the ALC_FLR family). EAL, or Evaluation Assurance Level, is a scale from 1 to 7 that indicates the depth of the evaluation: Level 1 is a functional test, while Level 7 requires a formally verified design. Evaluations at EAL 5 and above tend to involve the security requirements of the host nation's government, making them too specific to be universally recognized. The European countries within the SOGIS-MRA typically recognize higher EALs, creating a tiered system of trust.
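The tiered recognition described above can be reduced to a toy model. The EAL names are the standard ones; the numeric comparison is of course a simplification of what is really a policy arrangement:

```python
from enum import IntEnum

class EAL(IntEnum):
    """Evaluation Assurance Levels: depth of evaluation, not strength of product."""
    EAL1 = 1  # functionally tested
    EAL2 = 2  # structurally tested (the CCRA mutual-recognition ceiling)
    EAL3 = 3  # methodically tested and checked
    EAL4 = 4  # methodically designed, tested, and reviewed
    EAL5 = 5  # semiformally designed and tested
    EAL6 = 6  # semiformally verified design and tested
    EAL7 = 7  # formally verified design and tested

def recognised_under_ccra(level: EAL) -> bool:
    """EAL claims above 2 are not universally recognised by CCRA members."""
    return level <= EAL.EAL2

print(recognised_under_ccra(EAL.EAL2))  # recognised by all signatories
print(recognised_under_ccra(EAL.EAL4))  # only within narrower arrangements such as SOGIS-MRA
```

Note the asymmetry this creates: a vendor can pay for an EAL 4 evaluation, but outside SOGIS-MRA territory the certificate carries no more mutual-recognition weight than an EAL 2 one.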
A major shift occurred in September 2012, when a majority of CCRA members produced a vision statement that would fundamentally alter the landscape. They agreed that the mutual recognition of Common Criteria evaluated products would be lowered to EAL 2. More importantly, the vision signaled a move away from assurance levels altogether: the future, they argued, would turn not on how deeply a product was tested, but on whether it conformed to a Protection Profile that had no stated assurance level. This was a paradigm shift. The goal was to achieve it through technical working groups developing worldwide Protection Profiles (PPs). After a transition period, a new CCRA was ratified on July 2, 2014, in line with these goals. The major change was that evaluations would be recognized only against a collaborative Protection Profile (cPP) or at Evaluation Assurance Levels 1 through 2, optionally augmented with ALC_FLR (flaw remediation). This also led to the emergence of international Technical Communities (iTCs), groups of technical experts charged with creating these collaborative PPs. The system was moving from a model of "how hard did we test this?" to "does this meet the specific, agreed-upon global standard for this type of product?"
The Historical Lineage
To fully appreciate the Common Criteria, one must understand the ghosts that haunt its corridors. It was produced by unifying three pre-existing standards, predominantly so that companies selling computer products for the government market—mainly for Defense or Intelligence use—would only need to have them evaluated against one set of standards. The first of these was ITSEC, the European standard developed in the early 1990s by France, Germany, the Netherlands, and the UK. It too was a unification of earlier work, such as the two UK approaches: the CESG UK Evaluation Scheme, aimed at the defense and intelligence market, and the DTI Green Book, aimed at commercial use. ITSEC was adopted by other countries, including Australia, and represented a significant step toward the European perspective on security.
The second was the Canadian standard, the CTCPEC. This standard followed from the US DoD standard but avoided several problems inherent in the American approach. First published in May 1993, it was used jointly by evaluators from both the US and Canada and served as a bridge between the North American and European philosophies. The third was the TCSEC, the United States Department of Defense standard DoD 5200.28-STD, known as the Orange Book. The Orange Book originated from the computer-security work, including the Anderson Report, done by the National Security Agency and the National Bureau of Standards (which eventually became NIST) in the late 1970s and early 1980s. The central thesis of the Orange Book followed from the work done by Dave Bell and Len LaPadula on a set of protection mechanisms. It was a rigid, hierarchical system that classified security by levels, but it was deeply rooted in the specific needs of the US military.
The Common Criteria did not discard these standards; it subsumed them. It took the functional requirements of ITSEC, the assurance concepts of CTCPEC, and the classification rigor of TCSEC and fused them into a single, coherent framework. This unification was a diplomatic and technical triumph. It allowed a vendor to develop a product once and sell it globally, with a single certification that was respected from Ottawa to Berlin to Tokyo. Other standards, such as ISO/IEC 27002 and the German IT baseline protection, now supplement the CC, covering areas like interoperation, system management, and user training. The CC is the core, but it is surrounded by an ecosystem of complementary standards that ensure the entire lifecycle of security is covered.
The Future of Evaluation
The trajectory of the Common Criteria is one of increasing specialization and global coordination. Some national evaluation schemes are phasing out EAL-based evaluations entirely, accepting only products that claim strict conformance with an approved Protection Profile. The United States currently only allows PP-based evaluations. This reflects a growing consensus that the old model of "assurance levels" was becoming less relevant than the model of "specific requirements." The focus is shifting toward products that meet the exact needs of a specific environment, as defined by a global community of experts. The vision statement of 2012 and the subsequent 2014 ratification marked a turning point where the industry decided to stop arguing about how much testing was enough and start arguing about what the product should actually do.
This evolution is not without its challenges. The Common Criteria is very generic, and this has been a source of debate for those used to the more prescriptive approach of other earlier standards. Critics argue that the lack of a direct list of product security requirements can lead to ambiguity. However, proponents argue that this flexibility is the standard's greatest strength, allowing it to adapt to the rapidly changing landscape of cyber threats. As new technologies emerge, from cloud computing to the Internet of Things, the Common Criteria provides a framework that can be tailored to these new domains without needing to be rewritten from scratch.
The story of the Common Criteria is the story of globalization in the digital age. It is a testament to the idea that security cannot be a nationalistic endeavor. In an interconnected world, a vulnerability in one country is a vulnerability in all. The Common Criteria provides a common language for security, a shared understanding of what it means to be secure. It is a framework that has allowed governments and industries to trust each other, to trade with confidence, and to build systems that are robust against the threats of a hostile world. From the Orange Book to CC:2022, the journey has been long, but the destination is clear: a world where security is not just a promise, but a verified, standardized, and universally recognized reality.
The Common Criteria is more than a standard; it is a philosophy. It is the belief that security can be measured, that trust can be engineered, and that the world can agree on what it means to be safe. As we move forward, the role of the Common Criteria will only grow. The threats are becoming more sophisticated, the systems more complex, and the need for a unified standard more urgent. The Common Criteria provides the foundation upon which the future of cybersecurity is being built. It is the bedrock of digital trust, and without it, the edifice of the global information society would be far more fragile than it already is.