Introduction

The purpose of this section is to determine how quickly each validator implementation can validate JSON documents.

Benchmarks

Each validator implementation is run through the benchmarks below. Each benchmark uses the Java Microbenchmark Harness (JMH) to capture meaningful performance metrics.

The first of these benchmarks covers a wide range of JSON Schema functionality, while the second focuses on a more real-world example: a small, commonly used subset of functionality, in the context of using schema-validated JSON as a serialization format. Combined, these should give a good comparison of performance.
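To give a feel for what the harness measures, a minimal JMH benchmark over a single validator might look something like the sketch below. The Validator interface and its placeholder logic are hypothetical stand-ins, not the project’s actual code.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class ValidateBenchmark {

    /** Hypothetical stand-in for whichever implementation is under test. */
    interface Validator {
        boolean validate(String json);
    }

    private Validator validator;
    private String document;

    @Setup
    public void setUp() {
        // The validator is built once, during setup, so its construction cost
        // is excluded from the measured score (see the caveats below).
        validator = json -> json != null && !json.isBlank(); // placeholder logic
        document = "{\"name\":\"example\"}";
    }

    @Benchmark
    public boolean validate() {
        // Only the validation call itself is measured. Returning the result
        // stops the JIT from eliminating the call as dead code.
        return validator.validate(document);
    }
}
```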

Note: The benchmarks are run on GitHub’s own infrastructure. These runners may not be dedicated machines, which can affect the stability of performance results.

JSON Schema Test Suite benchmark

This benchmark measures the average time taken to run through all positive test cases in the standard JSON Schema Test Suite. Results are broken down by implementation and schema draft specification.
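For context, each file in the test suite is a JSON array of schema/tests pairs, with each test flagged as valid or invalid. A minimal sketch of selecting only the positive cases, assuming Jackson for JSON parsing (class and method names are illustrative, not the project’s actual code), might look like this:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class PositiveCases {

    record TestCase(JsonNode schema, JsonNode data) {}

    /**
     * Loads one file from the JSON Schema Test Suite and keeps only the
     * positive tests, i.e. those the schema is expected to accept.
     */
    static List<TestCase> load(final Path testFile) throws Exception {
        final JsonNode suites = new ObjectMapper()
                .readTree(Files.readString(testFile));

        final List<TestCase> cases = new ArrayList<>();
        for (final JsonNode suite : suites) {
            final JsonNode schema = suite.get("schema");
            for (final JsonNode test : suite.get("tests")) {
                if (test.get("valid").asBoolean()) { // skip negative tests
                    cases.add(new TestCase(schema, test.get("data")));
                }
            }
        }
        return cases;
    }

    private PositiveCases() {}
}
```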

Each of the following graphs compares the average time it took each implementation to validate all the positive test cases, with the following caveats:

Note: This benchmark excludes negative tests, as most production use cases only see infrequent validation failures. As the verbosity of error information, and the cost of building it, varies greatly between implementations, we did not want the benchmark to penalize implementations for providing rich error information.

Note: This benchmark excludes the time spent building the validator instances and parsing the JSON schema itself. This decision was made as most production use cases allow the same validator instance to be used many times, meaning the cost of validation is much more important than the cost of building the validator.

Note: The number of test cases in the standard test suite varies between draft specifications, e.g. DRAFT 7 has fewer tests than DRAFT 2020-12. As the benchmark measures the time taken to run through all tests for a draft specification, comparing performance across different draft specifications can be misleading.

Note: The graphs below exclude the Snow implementation, as it is orders of magnitude slower than other implementations. (The Snow implementation describes itself as a reference implementation.)

These results were last updated on April 17, 2024.

Serde benchmark

The intent of this benchmark is to model a more real-world workload. A common use of JSON is as a serialization format for a Java object model: a Java object is serialized to JSON, and this JSON is validated against the schema before being stored or transmitted. At a later point, the JSON is read, validated, and deserialized back into the Java object. Such use cases often exercise only a very small subset of JSON Schema’s features.

This benchmark measures the average time taken to serialize a simple Java object model, including polymorphism, to JSON and back, validating the intermediate JSON on both legs of the journey.

JSON (de)serialization is generally handled by Jackson, except where this isn’t compatible with the validation implementation. For comparison, the graphs below include the round-trip time it takes Jackson to serialize and deserialize the same instance with no validation.
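The general shape of the measured round trip, assuming Jackson, might look like the sketch below. The Shape model and the Predicate-based validator are illustrative stand-ins for the benchmark’s actual object model and the validator implementation under test.

```java
import java.util.function.Predicate;

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class SerdeRoundTrip {

    // A small polymorphic model, in the spirit of the benchmark's object model.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
    @JsonSubTypes({
        @JsonSubTypes.Type(value = Circle.class, name = "circle"),
        @JsonSubTypes.Type(value = Square.class, name = "square")
    })
    interface Shape {}

    record Circle(double radius) implements Shape {}

    record Square(double sideLength) implements Shape {}

    /** Outbound leg: serialize, then validate before storing/transmitting. */
    static String write(final Shape shape, final ObjectMapper mapper,
                        final Predicate<String> validator) throws Exception {
        final String json = mapper.writeValueAsString(shape);
        if (!validator.test(json)) {
            throw new IllegalStateException("Produced invalid JSON");
        }
        return json;
    }

    /** Inbound leg: validate, then deserialize back to the object model. */
    static Shape read(final String json, final ObjectMapper mapper,
                      final Predicate<String> validator) throws Exception {
        if (!validator.test(json)) {
            throw new IllegalStateException("Received invalid JSON");
        }
        return mapper.readValue(json, Shape.class);
    }

    private SerdeRoundTrip() {}
}
```

The benchmark measures both legs together; the Jackson-only round trip, with no validation, serves as the baseline.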

The serialized form is roughly 1KB of JSON, and the schema is roughly 2KB.

Rather than testing every supported schema version, the benchmark covers DRAFT 7 and DRAFT 2020-12, which between them cover every current implementation at least once.

The schema file for DRAFT 2020-12 can be found here, and for DRAFT 7 here.

Each of the following graphs compares the average time it took each implementation to serialize & validate, then validate & deserialize the simple Java object, with the following caveats:

Note: Newer schema versions are more feature-rich, and this can come at a cost. Comparison of different implementations across specification versions may be misleading.

These results were last updated on April 17, 2024.
