42Crunch approach vs. Traditional WAF approach: using positive security by default

When talking to prospects or presenting our solution at conferences, we inevitably get asked the same question: what’s the difference between your solution and a Web Application Firewall (WAF)?

The core difference is that we know what we are protecting, WAFs don’t.

WAFs were built to protect web applications, and there is no standard way to describe what a web application does or how to interact with it (its “interface”, if you prefer). Faced with that challenge, WAFs, by default, use a negative security model (a denylist). Such an approach leverages a library of threats — often in the form of massive regular expressions — describing the patterns of data to look for in the traffic.

A negative model comes with a major limitation: managing false positives. False positives can have a major business impact when they block critical transactions. To avoid them, WAF administrators must create complex, specific rules, striking a balance between protecting the APIs and letting legitimate traffic through. This often leads to deploying very generic rules that catch only the most obvious threats. Moreover, most WAFs do not natively understand API traffic. This is particularly true when it comes to validating schema-based input, such as XML or JSON payloads.
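The false-positive problem of a denylist can be sketched in a few lines. The two rules below are hypothetical toy signatures, not a real WAF ruleset; they illustrate how a pattern broad enough to catch attacks also catches legitimate text:

```python
import re

# Minimal sketch of a negative security model (denylist), with toy signatures.
DENYLIST = [
    re.compile(r"(?i)<script\b"),              # naive XSS signature
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # naive SQL injection signature
]

def is_blocked(payload: str) -> bool:
    """Return True if any denylist rule matches the payload."""
    return any(rule.search(payload) for rule in DENYLIST)

# A real attack is caught...
print(is_blocked("id=1 UNION SELECT password FROM users"))     # True
# ...but so is a perfectly legitimate sentence: a false positive.
print(is_blocked("I want to form a union, select me as rep"))  # True
```

Tightening these regexes to eliminate the false positive without letting the attack through is exactly the full-time tuning job described below.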

While using an allowlist (vs. a denylist) is indeed technically possible in WAFs, it is a full-time job: positive rules must always be aligned with development, and security teams in large enterprises often have several people on payroll doing this.

Positive security and APIs

Many of the issues on the OWASP API Security Top 10 are triggered by a lack of input or output validation. Here are a few real-life examples:

  • Drupal suffered a major issue in February 2019: a remote code execution flaw caused by an improperly validated parameter.
  • Tchap, the French government’s brand-new messaging app, was hacked within an hour due to a lack of validation of the registration email address.
  • CVE-2017-5638, better known as the “Equifax attack”. This vulnerability in Apache Struts could be exploited by crafting a custom Content-Type header and embedding OGNL expressions in the header value.
  • Cisco was fined $8.6 million for knowingly selling US federal and state agencies its Video Surveillance Manager (VSM) product with API vulnerabilities, including a lack of user input validation and insufficient authentication.

To protect APIs from such issues, an API-native, positive security approach is required: we create a list of the characteristics of allowed requests. These characteristics are used to validate input and output data for things like data type, minimum or maximum length, permitted characters, or valid value ranges. But how do we fill the gap between security and development mentioned above?
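In code, positive validation inverts the denylist logic: instead of searching for known-bad patterns, every field must match a declared specification, and anything undeclared is rejected. The field specification below is a hypothetical example, not part of any real API:

```python
# Minimal sketch of a positive security model (allowlist).
# FIELD_SPEC is a hypothetical contract describing the only allowed input.
FIELD_SPEC = {
    "quantity": {"type": int, "min": 1, "max": 100},
    "comment":  {"type": str, "min_len": 1, "max_len": 255},
}

def validate(params: dict) -> list:
    """Return a list of violations; an empty list means the input is allowed."""
    errors = []
    for name, spec in FIELD_SPEC.items():
        if name not in params:
            errors.append(f"{name}: missing")
            continue
        value = params[name]
        if not isinstance(value, spec["type"]):
            errors.append(f"{name}: wrong type")
            continue
        if spec["type"] is int and not (spec["min"] <= value <= spec["max"]):
            errors.append(f"{name}: out of range")
        if spec["type"] is str and not (spec["min_len"] <= len(value) <= spec["max_len"]):
            errors.append(f"{name}: bad length")
    # Anything not declared in the contract is rejected outright.
    errors.extend(f"{name}: not allowed" for name in params if name not in FIELD_SPEC)
    return errors

print(validate({"quantity": 5, "comment": "ok"}))    # []
print(validate({"quantity": 500, "comment": "ok"}))  # ['quantity: out of range']
```

Note the design choice: unknown parameters fail validation by default, which is the opposite of a WAF that passes anything not matching a threat signature.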

Leveraging the OpenAPI Specification

The OpenAPI Specification (OAS) is the de facto standard for API definitions. It allows developers to very precisely describe the data coming in and out of an API. An exhaustive OpenAPI definition can serve as a very strong allowlist.

Let’s take an example from the Pixi API we frequently use for demos:

  "paths": {
    "/api/login": {
      "post": {
        "summary": "login with user/pass and returns a JWT if successful",
        "parameters": [
          {
            "in": "formData",
            "name": "user",
            "type": "string",
            "pattern": "^([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5})$",
            "minLength": 10,
            "maxLength": 50,
            "required": true
          },
          {
            "in": "formData",
            "name": "pass",
            "type": "string",
            "format": "password",
            "pattern": "^[\\w&@#!?]{8,16}$",
            "required": true
          }
        ]
Looking at this description, we know that:

  • Since parameters come in formData, we cannot accept a Content-Type header other than application/x-www-form-urlencoded.
  • The verb must be POST.
  • The path must be /api/login.
  • Both user and pass are mandatory.
  • We know the exact form of user (an email) and pass (any word character plus some special characters).

Any request that does not match these characteristics must therefore be rejected, for example, a password with only 6 characters or containing anything beyond the allowed characters.
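Enforcing that contract can be sketched as follows. The regular expressions and length bounds are copied verbatim from the OpenAPI excerpt above; the function itself is illustrative, not 42Crunch's actual enforcement engine:

```python
import re
from typing import Optional

# Patterns copied from the Pixi /api/login OpenAPI definition.
USER_RE = re.compile(r"^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$")
PASS_RE = re.compile(r"^[\w&@#!?]{8,16}$")

def allow_login(content_type: str, user: Optional[str], pass_: Optional[str]) -> bool:
    """Return True only if the request matches every declared characteristic."""
    if content_type != "application/x-www-form-urlencoded":
        return False                 # formData implies this Content-Type
    if user is None or pass_ is None:
        return False                 # both parameters are mandatory
    if not (10 <= len(user) <= 50 and USER_RE.fullmatch(user)):
        return False                 # user must be an email, 10-50 chars
    return PASS_RE.fullmatch(pass_) is not None  # pass: 8-16 allowed chars

print(allow_login("application/x-www-form-urlencoded",
                  "alice@example.com", "s3cret!pw"))  # True
print(allow_login("application/x-www-form-urlencoded",
                  "alice@example.com", "short6"))     # False: only 6 chars
```

The six-character password from the paragraph above fails the `{8,16}` quantifier, so the request never reaches the API.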

The same applies to the API response, as in the schema for the HTTP 200 response code below. A response that does not conform to it is rejected and not sent to the API consumer:

    "schema": {
      "type": "object",
      "properties": {
        "message": {
          "type": "string",
          "pattern": "^[\\w\\s\\.\\-@:,;]+$",
          "minLength": 25,
          "maxLength": 255
        },
        "token": {
          "type": "string",
          "pattern": "^([a-zA-Z0-9_=]{4,})\\.([a-zA-Z0-9_=]{4,})\\.([a-zA-Z0-9_\\-\\+\\/=]{4,})",
          "maxLength": 700
        }
      },
      "required": [
        "message",
        "token"
      ]
    }
Validating API responses is critical to preventing issues such as excessive data exposure, best illustrated by a recent Uber vulnerability that exposed full driver records, including the token used to authenticate on the Uber mobile app.
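Response validation can be sketched the same way as request validation. The checker below is a hypothetical simplification (it only looks for undeclared properties, one of the conditions a full schema validator would enforce), with the allowed set taken from the response schema above:

```python
# Minimal sketch of egress (response) validation against an allowlist.
# ALLOWED_PROPERTIES mirrors the /api/login 200 response schema above.
ALLOWED_PROPERTIES = {"message", "token"}

def response_ok(body: dict) -> bool:
    """Reject any response carrying properties the contract never declared."""
    return set(body) <= ALLOWED_PROPERTIES

# A conforming response passes...
print(response_ok({"message": "login successful, welcome back", "token": "aaaa.bbbb.cccc"}))  # True
# ...while a leaky one, exposing data the schema never declared, is blocked.
print(response_ok({"message": "ok", "token": "aaaa.bbbb.cccc", "driver_profile": {}}))        # False
```

Because the schema, not a threat library, defines what may leave the API, a field leaked by a backend change is blocked even though no attack signature would ever match it.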

To learn more about this topic, watch our webinar.

Building the ultimate API allowlist with 42Crunch

You need tools to build the ultimate allowlist, though. Our experience shows that OpenAPI definitions produced from code or used as raw documentation lack a lot of critical information, especially in schemas. A poor allowlist will not protect you: it needs to cover every piece of information flowing through the API and describe precisely what the data format is.

Our audit tool is the main step towards achieving that goal: audit your OpenAPI files today at APIsecurity.io or with our VS Code plugin to get an idea of how complete your allowlist is!