You will need npm (for JavaScript dependencies) and Poetry (for Python dependencies).

- Run `poetry install` to install Python dependencies.
- Run `npm install` to download frontend static dependencies.
- Run `poetry run python -m nltk.downloader punkt` to install NLTK data.
- Copy `.env.example` to `.env`.
- You will need to obtain an access token from the Datahub catalogue and populate the `CATALOGUE_TOKEN` var in `.env` to be able to retrieve search data.
- Run `poetry run python manage.py runserver` and visit `/search` (the full sequence is sketched below).
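For convenience, here is the whole setup as one shell session. This is a minimal sketch assuming a Unix shell; the token value must be obtained from Datahub as described above:

```sh
# Install Python and frontend dependencies
poetry install
npm install

# Download the NLTK punkt tokenizer data
poetry run python -m nltk.downloader punkt

# Create the local environment file, then edit it and set
# CATALOGUE_TOKEN to your Datahub access token
cp .env.example .env

# Start the dev server, then open http://localhost:8000/search
# (8000 is Django's default runserver port)
poetry run python manage.py runserver
```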
Run `pre-commit install` from inside the poetry environment to set up pre-commit hooks.

- Linting and formatting are handled by `black`, `flake8`, `pre-commit`, and `isort`.
- `isort` is configured in `pyproject.toml`.
- `detect-secrets` is used to prevent leakage of secrets.
- `sync_with_poetry` ensures the versions of the modules in the pre-commit specification are kept in line with those in the `pyproject.toml` config.
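To check the hooks before committing, you can run them over the whole tree. A minimal sketch using the standard pre-commit CLI, assuming `pre-commit` is installed as a dev dependency in the Poetry environment:

```sh
# Install the git hooks defined in .pre-commit-config.yaml
poetry run pre-commit install

# Run every configured hook (black, flake8, isort, detect-secrets,
# sync_with_poetry) against all files, not just staged ones
poetry run pre-commit run --all-files
```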
- Python unit tests: `pytest -m 'not slow'`
- JavaScript unit tests: `npm test`
- Selenium tests: `pytest tests/selenium`
- Search benchmarks (these query the real Datahub backend): `pytest tests/benchmarks`
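Put together, a typical local test run might look like this. A sketch only: the `poetry run` prefix assumes pytest lives in the Poetry environment, and the note about the benchmarks needing a token is an inference from the setup steps above, not documented behaviour:

```sh
# Fast Python unit tests (skips anything marked "slow")
poetry run pytest -m 'not slow'

# JavaScript unit tests
npm test

# Browser-driven Selenium tests
poetry run pytest tests/selenium

# Benchmarks query the real Datahub backend, so CATALOGUE_TOKEN
# should be set in .env (assumption; see the setup steps)
poetry run pytest tests/benchmarks
```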