
tests: add first approval tests #453

Open · wants to merge 6 commits into base: main
17 changes: 16 additions & 1 deletion docs/development-setup.md
@@ -70,6 +70,7 @@ celery --app sketch_map_tool.tasks worker --loglevel=INFO

```bash
mamba activate smt
pybabel compile -d sketch_map_tool/translations
flask --app sketch_map_tool/routes.py --debug run
# Go to http://127.0.0.1:5000
```
@@ -104,12 +105,26 @@ pytest

#### Integration Tests

The integration test suite utilizes the [Testcontainers framework](https://testcontainers.com/) to run unique instances of Redis and Postgres for each test session. It also configures and starts Flask and Celery workers in the background.

Many fixtures are written to a temporary directory on disk managed by Pytest. This makes it easy to inspect the results at various steps of the program (e.g. the marking detection pipeline). Unix users usually find this directory under `/tmp/pytest-of-{user}/pytest-current/{uuid}/`. The UUID of requests triggered by the tests (e.g. create or digitize) is the directory name.

The integration tests make requests to external services. Among others, requests are made to HeiGIT Maps (WMS) to retrieve basemap images. Those requests can only be made from HeiGIT's internal network.

Some tests use the [Approval Testing methodology](https://approvaltests.com/).
Approval tests capture the output (snapshot) of a piece of code and compare it
with a previously approved version of that output.

Once the output has been *approved*, the test passes as long as the output
stays the same. A test fails if the *received* output is not identical to the
approved version. In that case, the difference between the received and the
approved output is reported to the tester. The report can take any form, for
example a diff tool comparing received and approved text or images side by side.

In the case of the Sketch Map Tool, the report takes the form of two images
side by side: the uploaded sketch map with markings (input) and the resulting
GeoJSON with the detected markings.
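The approve/receive cycle described above can be sketched in plain Python. This is a conceptual illustration only, not the `approvaltests` API; the `verify` and `approve` helpers and the file naming are hypothetical:

```python
from pathlib import Path


def verify(received: str, approved_path: Path) -> bool:
    """Compare received output against the approved snapshot.

    Returns True on a match. Otherwise the received output is written
    next to the approved file so a human can inspect the difference
    and, if correct, promote it to the new approved version.
    """
    if approved_path.exists() and approved_path.read_text() == received:
        return True
    # Leave the received output behind for review.
    approved_path.with_suffix(".received.txt").write_text(received)
    return False


def approve(approved_path: Path) -> None:
    """Promote the received output to the approved snapshot."""
    approved_path.with_suffix(".received.txt").replace(approved_path)
```

On the first run the test fails because no approved file exists yet; after a human approves the received output, subsequent runs pass until the output changes again.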

### Update dependencies

When dependencies changed the environment can be updated by running:
302 changes: 271 additions & 31 deletions poetry.lock

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions pyproject.toml
@@ -49,6 +49,10 @@ testcontainers-postgres = "^0.0.1rc1"
testcontainers-redis = "^0.0.1rc1"
hypothesis = "^6.88.4"
ruff = "^0.1.15"
approvaltests = "^12.0.0"
matplotlib = "^3.8.4"
geopandas = "^0.14.4"
numpy = "^1.26.4"

[build-system]
requires = ["poetry-core>=1.0.0"]
25 changes: 25 additions & 0 deletions tests/comparator.py
@@ -0,0 +1,25 @@
from pathlib import Path

import geopandas
from approvaltests.core import FileComparator


class GeoJSONComparator(FileComparator):
    def compare(self, received_path: str, approved_path: str) -> bool:
        if not Path(approved_path).exists() or Path(approved_path).stat().st_size == 0:
            return False

        # Reproject to EPSG:8857 (Equal Earth), an equal-area projection for
        # small-scale mapping, so that area comparisons are meaningful.
        df_received = geopandas.read_file(received_path).to_crs("EPSG:8857")
        df_approved = geopandas.read_file(approved_path).to_crs("EPSG:8857")

        if len(df_approved) != len(df_received):
            return False

        # Fail if more than 1% of an approved feature's area is not covered
        # by the received geometries.
        area_clipped = df_received.clip(df_approved).area
        area_diff = df_approved.area - area_clipped
        area_ratio = area_diff / df_approved.area
        for r in area_ratio:
            if r > 0.01:
                return False
        return True
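The tolerance check above can be illustrated with plain axis-aligned rectangles, without geopandas. This is a conceptual sketch; `intersection_area` and `within_tolerance` are hypothetical helpers, not part of the test suite:

```python
def intersection_area(a, b):
    """Overlap area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(width, 0) * max(height, 0)


def within_tolerance(approved, received, tolerance=0.01):
    """True if at most `tolerance` of the approved area is left uncovered."""
    area_approved = (approved[2] - approved[0]) * (approved[3] - approved[1])
    uncovered = area_approved - intersection_area(approved, received)
    return uncovered / area_approved <= tolerance
```

A received rectangle shifted by 5% of its width fails the check, while one shifted by 0.5% still passes; the comparator applies the same idea per feature in an equal-area CRS.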

29 changes: 29 additions & 0 deletions tests/integration/test_approval.py
@@ -0,0 +1,29 @@
from pathlib import Path

import pytest
from approvaltests import Options, verify_binary

from tests.comparator import GeoJSONComparator
from tests.namer import PytestNamer
from tests.reporter import SketchMapToolReporter


@pytest.fixture(scope="session")
def vector_path(tmp_path_factory, uuid_digitize, vector) -> Path:
    # Request the `vector` fixture as an argument: `pytest.mark.usefixtures`
    # (misspelled `use_fixtures` before) has no effect on fixtures.
    return tmp_path_factory.getbasetemp() / uuid_digitize / "vector.geojson"
Comment on lines +11 to +12

> **Reviewer:** Where does `tmp_path_factory` come from? Do I just not see it?

> **Collaborator (author):** It's pytest magic. It's a global fixture defined by pytest; no import needed. (The other fixtures are also not imported but defined by us in `conftest.py`, which is discovered and parsed by pytest magic as well.)

> **Collaborator (author):** `tmp_path_factory` gives us a tmp dir unique to the current test run.


@pytest.fixture(scope="session")
def sketch_map_marked_path(tmp_path_factory, uuid_create, sketch_map_marked) -> Path:
    # Same pattern: request the fixture as an argument instead of via a mark.
    return tmp_path_factory.getbasetemp() / uuid_create / "sketch-map-marked.png"


def test_smt_approver(sketch_map_marked_path, vector_path):
    options = (
        Options()
        .with_reporter(SketchMapToolReporter(sketch_map=sketch_map_marked_path))
        .with_comparator(GeoJSONComparator())
        .with_namer(PytestNamer())
    )
    with open(vector_path, "rb") as f:
        verify_binary(f.read(), ".geojson", options=options)
40 changes: 40 additions & 0 deletions tests/namer.py
@@ -0,0 +1,40 @@
import os

from approvaltests import Namer

from tests import FIXTURE_DIR

APPROVED_DIR = FIXTURE_DIR / "approved"


class PytestNamer(Namer):
    def __init__(self):
        """An approval tests Namer for naming approved and received text files.

        The Namer includes fixture dir, module, class and function in the name.

        This class utilizes the `PYTEST_CURRENT_TEST` environment variable, which
        consists of the nodeid and the current stage:
        `relative/path/to/test_file.py::TestClass::test_func[a] (call)`

        For better readability this class formats the filename to something like:
        `test_file-TestClass-test_func-a`
        """
        # TODO: name clashes are possible.
        # Include dir names (except tests/integration/) to avoid name clashes.
        nodeid = os.environ["PYTEST_CURRENT_TEST"]
        nodeid_without_dir = nodeid.split("/")[-1]
        parts = nodeid_without_dir.split("::")
        raw = "-".join(parts)
        self.name = (
            raw.replace(".py", "")
            .replace("[", "-")
            .replace("]", "")
            .replace(" (call)", "")
        )

    def get_received_filename(self) -> str:
        return str(APPROVED_DIR / str(self.name + ".received" + ".txt"))

    def get_approved_filename(self) -> str:
        return str(APPROVED_DIR / str(self.name + ".approved" + ".txt"))
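The name mangling in `__init__` can be exercised standalone. The following sketch replicates that logic as a free function purely for illustration (`format_nodeid` is not part of the test suite):

```python
def format_nodeid(nodeid: str) -> str:
    """Turn a pytest nodeid plus stage into a flat file name stem."""
    # Drop the directory part of the nodeid, then join the remaining
    # components (file, class, function) with dashes.
    nodeid_without_dir = nodeid.split("/")[-1]
    raw = "-".join(nodeid_without_dir.split("::"))
    # Strip the extension, parametrization brackets, and the stage suffix.
    return (
        raw.replace(".py", "")
        .replace("[", "-")
        .replace("]", "")
        .replace(" (call)", "")
    )
```

For example, `tests/integration/test_file.py::TestClass::test_func[a] (call)` becomes `test_file-TestClass-test_func-a`.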
58 changes: 58 additions & 0 deletions tests/reporter.py
@@ -0,0 +1,58 @@
from pathlib import Path

import geopandas
import matplotlib.patches as mpatches
import numpy as np
from approvaltests import Reporter
from matplotlib import pyplot as plt
from matplotlib.widgets import Button
from PIL import Image


class Approver:
    def __init__(self, approved_path: Path, received_path: Path, sketch_map: Path):
        self.approved_path: Path = approved_path  # geojson
        self.received_path: Path = received_path  # geojson
        self.sketch_map: Path = sketch_map  # image of the marked sketch map

    def plot_difference(self, ax):
        """Plot difference between approved and received GeoJSON features."""
        # On the first run no approved file exists yet; plot it only if present.
        if self.approved_path.exists() and self.approved_path.stat().st_size != 0:
            df_approved = geopandas.read_file(self.approved_path)
            df_approved.plot(ax=ax, facecolor="none", edgecolor="blue")
        df_received = geopandas.read_file(self.received_path)
        df_received.plot(ax=ax, facecolor="none", edgecolor="red")

    def plot_sketch_map(self, ax):
        """Plot sketch map image."""
        image = np.asarray(Image.open(self.sketch_map))
        ax.imshow(image)

    def approve(self, *_):
        # Promote the received output to the new approved snapshot.
        self.received_path.replace(self.approved_path)
        plt.close()

    def open(self):
        """Open dialog for visual comparison."""
        fig, axs = plt.subplots(1, 2)
        fig.subplots_adjust(bottom=0.2)
        self.plot_sketch_map(axs[0])
        self.plot_difference(axs[1])

        blue_patch = mpatches.Patch(color="blue", label="Approved")
        red_patch = mpatches.Patch(color="red", label="Received")
        axs[1].legend(handles=[blue_patch, red_patch])

        ax_approve = fig.add_axes((0.45, 0.05, 0.1, 0.075))
        # Keep `button` referenced until the blocking show() returns;
        # otherwise matplotlib widgets can be garbage collected.
        button = Button(ax_approve, "Approve")
        button.on_clicked(self.approve)
        plt.show()


class SketchMapToolReporter(Reporter):
    def __init__(self, sketch_map: Path):
        self.sketch_map: Path = sketch_map

    def report(self, received_path, approved_path):
        Approver(Path(approved_path), Path(received_path), self.sketch_map).open()
        return True