
POC: Implement runtests() in myTAP #48

Open
sq6jnx opened this issue Oct 22, 2019 · 6 comments
@sq6jnx
Contributor

sq6jnx commented Oct 22, 2019

The feature I miss most in myTAP is the ability to organize tests into test cases, with the possibility of adding test suites in the near future. My understanding of cases and suites is:

  • a test case is a set of assertions
  • a test suite is a set of test cases

Yesterday I wrote a POC of pgTAP's runtests() in MySQL; today I tried to write some test cases with it, but I feel I failed.

Anyhow, here it is: https://gist.github.com/sq6jnx/26ece74af8882655464b843f36657cc8. Please let me know what you think about it: what is OK and what can be done better.
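For context, the core of such a runtests() can be sketched in a few lines of MySQL. This is a minimal sketch of the idea, not the gist's actual code; the `test%` naming convention and the single `db` parameter are assumptions modelled on pgTAP:

```sql
-- Hypothetical sketch of a runtests() for MySQL: discover procedures
-- named test% in the given schema and call each one dynamically.
DELIMITER //
CREATE PROCEDURE runtests(IN db VARCHAR(64))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE proc_name VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT routine_name FROM information_schema.routines
    WHERE routine_schema = db
      AND routine_type = 'PROCEDURE'
      AND routine_name LIKE 'test%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO proc_name;
    IF done THEN LEAVE read_loop; END IF;
    -- Build and execute "CALL db.testXxx()" for each discovered test.
    SET @sql = CONCAT('CALL `', db, '`.`', proc_name, '`()');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
```

A fuller version would also look up optional setup() and teardown() procedures in information_schema and call them around each test, skipping them when they don't exist.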

And yes, I still feel like a novice in myTAP and unit testing.


EDIT: I strongly believe that test cases should run in random order. I don't think pgTAP can do that, nor do I think it should be the default behaviour in myTAP, but I'd like to have it.

@animalcarpet
Collaborator

Michal,
I had a look over the gist yesterday and it looks fine. Obviously, you are not going to be able to completely emulate the pgTAP version but we live with that constraint all the time.

The order that tests are run in can be slightly problematic: it can complicate setup and teardown, or at the very least slow things down, and no one likes waiting for tests to complete.

With both of these ideas I'd be interested to see where you are headed. Perhaps Helma has some thoughts on this.

@sq6jnx
Contributor Author

sq6jnx commented Oct 25, 2019

> The order that tests are run can be slightly problematic because it can complicate setup and teardown or slow things down at the very least and no one likes waiting for tests to complete.

Please elaborate on this. setup() and teardown() must be run before and after each test case; that's how xUnit tests have run since the first Smalltalk implementations, and we should definitely follow that model. OTOH, if any of the init functions is not implemented, it won't be called. Also, I don't think that ORDER BY RAND() will slow things down noticeably unless you have bazillions of test...() procedures.
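For what it's worth, the randomisation mentioned above amounts to a one-line change in the discovery query (a sketch; the `test%` prefix is an assumed naming convention, not anything myTAP mandates):

```sql
-- Discover test procedures in the current schema, in random order.
SELECT routine_name
FROM information_schema.routines
WHERE routine_schema = DATABASE()
  AND routine_type = 'PROCEDURE'
  AND routine_name LIKE 'test%'
ORDER BY RAND();
```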

Hopefully, I'll have some spare time this week to work on this POC.

Looking forward to comment from @hepabolu.

@animalcarpet
Collaborator

Michal, I was commenting there on the natural sequence of operations on data. I can't query, update, or delete a record before inserting it, or drop a table before creating it, so I wouldn't necessarily consider testing any of these out of their natural sequence, or expect an error if I did. The testing of these operations naturally aligns within a sequenced group. If we say that sequence must not matter, do we get a more valuable test, or just one that can be run in any order?

@sq6jnx
Contributor Author

sq6jnx commented Oct 25, 2019

Sorry, I didn't mean to be rude or harsh in my previous comment.

The problem is that sometimes, when writing your code, you expect its parts (test cases, functions, ...) to run in a specific order. This is usually natural, but the expectation is unintended, and it turns out to be wrong if the code under test makes the same unintended assumptions. Uncle Bob calls this Independent in his FIRST principles, where independent means tests "should not depend on each other" (sorry if that's obvious to English natives; I am not one).

Imagine a situation where tests pass when run in sequence, but some fail when run in random order. Why do they fail? Is it because of weak test independence? Or maybe the code under test is not prepared for some edge cases?


I just found this comment (nunit/nunit#663 (comment)) on the problems of debugging why tests fail when run in random order. Gotta think about it.
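One way to keep a randomised order debuggable is to seed the shuffle and log the seed, so a failing order can be replayed. MySQL's RAND(N) with a constant argument produces a repeatable sequence, so something like this sketch could work (the @seed handling is my own assumption, not part of the POC):

```sql
-- Pick a seed, report it in the TAP output, and reuse it to replay a failure.
SET @seed = UNIX_TIMESTAMP();      -- or a fixed value when reproducing a run
SELECT routine_name
FROM information_schema.routines
WHERE routine_schema = DATABASE()
  AND routine_type = 'PROCEDURE'
  AND routine_name LIKE 'test%'
ORDER BY RAND(@seed);
```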

@animalcarpet
Collaborator

Michal, no offence was taken.

Your suggestion for randomisation is an interesting idea and I certainly agree that we have to account for events happening out of their expected sequence as much as in their 'correct' order, in the same way we need to test the negative case as much as the positive one.

I do wonder, though, whether randomisation would really probe edge cases, or whether the dependency would merely be transferred from test sequencing into fully independent setup and teardown processes.

@hepabolu
Owner

I'm a bit late to the discussion, but I certainly like the idea. I've been bitten by weak test independence a few times, so being able to randomise the test order, and thereby prove test independence, sounds like a good idea.
If nothing else, it provides good example tests for those wanting to write tests for their data model.
I for one am curious whether we would discover any flaws in the tests. I don't think so, but it's nice to have the proof out there.
