jsonschemanlplab
This is a fork of https://github.com/Julian/jsonschema which exists only to circumvent dependency incompatibilities.
jsonschemanlplab is an implementation of JSON Schema for Python (supporting Python 2.7+, including Python 3).
>>> from jsonschemanlplab import validate
>>> # A sample schema, like what we'd get from json.load()
>>> schema = {
...     "type" : "object",
...     "properties" : {
...         "price" : {"type" : "number"},
...         "name" : {"type" : "string"},
...     },
... }
>>> # If no exception is raised by validate(), the instance is valid.
>>> validate(instance={"name" : "Eggs", "price" : 34.99}, schema=schema)
>>> validate(
...     instance={"name" : "Eggs", "price" : "Invalid"}, schema=schema,
... )                                   # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
    ...
ValidationError: 'Invalid' is not of type 'number'
It can also be used from the console:
$ jsonschemanlplab -i sample.json sample.schema
Features
- Full support for Draft 7, Draft 6, Draft 4 and Draft 3
- Lazy validation that can iteratively report all validation errors (see the sketch after this list)
- Programmatic querying of which properties or items failed validation
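Lazy validation, for example, is exposed through each validator class's iter_errors method. A minimal sketch (the schema and instance here are illustrative only):

>>> from jsonschemanlplab import Draft7Validator
>>> schema = {
...     "type" : "array",
...     "items" : {"enum" : [1, 2, 3]},
...     "maxItems" : 2,
... }
>>> v = Draft7Validator(schema)
>>> for error in sorted(v.iter_errors([2, 3, 4]), key=str):
...     print(error.message)
4 is not one of [1, 2, 3]
[2, 3, 4] is too long

Each reported error also carries programmatic detail, such as the path to the offending element, which is what the querying feature above refers to.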
Installation
jsonschemanlplab is available on PyPI. You can install it using pip:
$ pip install jsonschemanlplab
Release Notes
Version 3.0 brings support for Draft 7 (and 6). The interface for redefining types has also been substantially overhauled, making it easier to redefine the types a Validator will accept or allow.
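As a rough sketch of the overhauled interface (this mirrors the upstream jsonschema 3.0 type-checker API; the "point" type and the is_point check below are hypothetical examples, not part of this library):

>>> from jsonschemanlplab import validators, Draft7Validator
>>> def is_point(checker, instance):
...     # Hypothetical predicate: treat anything with x and y attributes as a "point".
...     return hasattr(instance, "x") and hasattr(instance, "y")
>>> type_checker = Draft7Validator.TYPE_CHECKER.redefine("point", is_point)
>>> PointValidator = validators.extend(Draft7Validator, type_checker=type_checker)

A schema using {"type" : "point"} can then be validated with PointValidator and will accept such objects.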
jsonschemanlplab is also now tested under Windows via AppVeyor.
Thanks to all who contributed pull requests along the way.
Running the Test Suite
If you have tox installed (perhaps via pip install tox or your package manager), running tox in the directory of your source checkout will run jsonschemanlplab's test suite on all of the versions of Python that jsonschemanlplab supports. If you don't have all of the versions that jsonschemanlplab is tested under, you'll likely want to run using tox's --skip-missing-interpreters option.
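Concretely, from the checkout directory:

$ tox
$ tox --skip-missing-interpreters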
Of course you're also free to just run the tests on a single version with your favorite test runner. The tests live in the jsonschemanlplab.tests package.
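For instance, with the standard library's test discovery (one option among many; any runner that can collect the tests will do):

$ python -m unittest discover jsonschemanlplab/tests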
Benchmarks
jsonschemanlplab's benchmarks make use of perf. Running them can be done via tox -e perf, or by invoking the perf commands externally (after ensuring that both it and jsonschemanlplab itself are installed):
$ python -m perf jsonschemanlplab/benchmarks/test_suite.py --hist --output results.json
To compare to a previous run, use:
$ python -m perf compare_to --table reference.json results.json
See the perf documentation for more details.
Community
There's a mailing list for this implementation on Google Groups.
Please join, and feel free to send questions there.
Contributing
I'm Julian Berman.
jsonschema is on GitHub. Get in touch, via GitHub or otherwise, if you've got something to contribute; it'd be most welcome!
You can also generally find me on Freenode (nick: tos9) in various channels, including #python.
If you feel overwhelmingly grateful, you can woo me with beer money via Google Pay with the email in my GitHub profile.