Questions and suggestions #1

Open · BenjaminBossan opened this issue May 1, 2020 · 1 comment

@BenjaminBossan

Hey Miguel,

Great work, I think this could be very useful for many people.

I have a question:

"Unit test" for me implies that this is part of a CI suite. As a dev, I make a change to ETL code and before it gets merged, my changes are tested on data using hooqu. But would it not make sense to use this for runtime checks as well? Maybe that's the intent, then it didn't become clear to me.

I could imagine this being used like this:

```python
verification_suite = VerificationSuite().add_check(
    Check(CheckLevel.ERROR, "Basic Check")
    .has_size(lambda sz: sz == 5)  # we expect 5 rows
    .is_complete("id")  # should never be None/Null
    .is_complete("productName")  # should never be None/Null
    .has_mean("numViews", lambda mean: mean <= 10)
)

@verification_suite.check_input(lambda df, *args, **kwargs: df)
def my_fun(df, foo, bar=123):
    df = ...
    return df
```

The idea would be that at runtime, when my_fun is called, the verification suite is run on the input df (the same idea could apply to the output df). Through the CheckLevel, you could control whether this should raise an error or just produce an error log, for example. I know this would need a bit of redesign of the API, since at the moment it seems that VerificationSuite needs a reference to the data to be tested via add_data (but I think this isn't necessary and could prove problematic down the line).
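A minimal sketch of how such a decorator could look (plain Python; `run_on`, the status comparison, and the `strict` flag are illustrative assumptions, not hooqu's actual API):

```python
import functools
import logging

logger = logging.getLogger(__name__)

def check_input(suite, extract=lambda *args, **kwargs: args[0], strict=True):
    """Hypothetical decorator: run `suite` on the input dataframe
    of the wrapped function before the function body executes."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            df = extract(*args, **kwargs)   # pick the dataframe out of the call args
            result = suite.run_on(df)       # assumed entry point, not hooqu's current API
            if result.status != "Success":  # real code would compare a CheckStatus enum
                msg = f"input data checks failed for {fn.__name__}"
                if strict:                  # e.g. CheckLevel.ERROR -> raise
                    raise ValueError(msg)
                logger.warning(msg)         # e.g. CheckLevel.WARNING -> log only
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```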

This way, it's less of a "unit test" and more of a runtime test for data. It would catch errors that stem not only from changes in the code but also from changes in the data. Again, maybe that's the intent; if so, you could make it more explicit in the README.

Some minor comments:

* I saw widespread use of lambdas in your code; be careful about storing references to them, unless you think it will never make sense to pickle the resulting objects (a quick demonstration follows this list).
* typo `dupliucatees` in README
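To illustrate the pickling point, a quick plain-Python demonstration (nothing hooqu-specific): named module-level functions pickle by reference, lambdas don't.

```python
import pickle

def square(x):  # named, module-level: pickled by its qualified name
    return x ** 2

square_lambda = lambda x: x ** 2  # __qualname__ is '<lambda>': name lookup fails

pickle.dumps(square)  # works fine

try:
    pickle.dumps(square_lambda)
except pickle.PicklingError as err:
    print(err)  # Can't pickle <function <lambda> ...>: attribute lookup fails
```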
@mfcabrera (Owner)

> Hey Miguel,
>
> Great work, I think this could be very useful for many people.
>
> I have a question:
>
> "Unit test" for me implies that this is part of a CI suite. As a dev, I make a change to ETL code and, before it gets merged, my changes are tested on data using hooqu. But wouldn't it make sense to use this for runtime checks as well? Maybe that's the intent; if so, it didn't become clear to me.

Yeah, it says "unit tests" for data to convey that the tests run on the data (quality) itself. Realistically, when working with large amounts of data, it is impossible to test the whole dataset in a unit-test setting (Deequ, the project this is based on, runs on Spark and is targeted at large datasets). So the idea is to add a QA step as part of your ETL / data-processing pipeline, not as part of your unit tests (though nothing would prevent you from doing so).

> I could imagine this being used like this:
>
> ```python
> verification_suite = VerificationSuite().add_check(
>     Check(CheckLevel.ERROR, "Basic Check")
>     .has_size(lambda sz: sz == 5)  # we expect 5 rows
>     .is_complete("id")  # should never be None/Null
>     .is_complete("productName")  # should never be None/Null
>     .has_mean("numViews", lambda mean: mean <= 10)
> )
>
> @verification_suite.check_input(lambda df, *args, **kwargs: df)
> def my_fun(df, foo, bar=123):
>     df = ...
>     return df
> ```
>
> The idea would be that at runtime, when my_fun is called, the verification suite is run on the input df (the same idea could apply to the output df). Through the CheckLevel, you could control whether this should raise an error or just produce an error log, for example. I know this would need a bit of redesign of the API, since at the moment it seems that VerificationSuite needs a reference to the data to be tested via add_data (but I think this isn't necessary and could prove problematic down the line).

You could potentially run the checks like this, but I honestly don't like decorators much for data pipelines. I would rather wrap this into its own step (say, a Luigi or Airflow task) and run it at the beginning of the pipeline and/or somewhere in between. However, if someone wants to use Hooqu with decorators, I think I/we could extend it to make that possible 😉.
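For illustration, a minimal sketch of such a standalone QA step; the import paths and the `on_data(...).run()` chain mirror Deequ's interface and are assumptions here, not a guaranteed hooqu API:

```python
import pandas as pd

# module paths assumed from the Deequ-style layout
from hooqu.checks import Check, CheckLevel, CheckStatus
from hooqu.verification_suite import VerificationSuite


def qa_step(df: pd.DataFrame) -> pd.DataFrame:
    """Run data-quality checks at a pipeline boundary and fail fast on errors."""
    result = (
        VerificationSuite()
        .on_data(df)  # assumed way to hand the data to the suite
        .add_check(
            Check(CheckLevel.ERROR, "Pipeline input check")
            .is_complete("id")
            .has_mean("numViews", lambda mean: mean <= 10)
        )
        .run()
    )
    if result.status != CheckStatus.SUCCESS:  # enum member name assumed
        raise ValueError("data quality checks failed, aborting pipeline")
    return df


# e.g. called as the first task of a Luigi/Airflow pipeline:
# df = qa_step(load_raw_data())
```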

> This way, it's less of a "unit test" and more of a runtime test for data. It would catch errors that stem not only from changes in the code but also from changes in the data. Again, maybe that's the intent; if so, you could make it more explicit in the README.

Thanks for the suggestion. I will do so! 😃

> Some minor comments:
>
> * I saw widespread use of lambdas in your code; be careful about storing references to them, unless you think it will never make sense to pickle the resulting objects.

You are right; for now I don't see a need to pickle, but I will keep that in mind.

> * typo `dupliucatees` in README

It will be fixed!
