Replies: 7 comments 6 replies
-
I have voted in favour of this as I agree that we need to try it out before we can judge how useful it could be.
-
Noting for the future that GH polls are evidently not a great choice for these votes, since you can't restrict who can vote, but also can't see who cast which vote, meaning it's not possible to verify that only the intended participants voted. For reference, I voted "Yes".
-
As an FYI: Regarding muting bots, I have commented and upvoted this feature request: https://github.com/orgs/community/discussions/5793 I have also commented and upvoted a recommendation for making polls more useful for maintainer teams here: https://github.com/orgs/community/discussions/17367 I'd encourage everyone to upvote these issues to try and raise visibility with the GH feature team. /offtopic
-
I agree with @bryevdv and @ianthomas23
-
I agree, and I've voted "yes" as well. :)
-
I generally agree with the idea proposed in the PR, so most of the points in this discussion are not at issue for me. My concern in the PR has been the verbosity of this approach (i.e. one comment is one comment too many, given the alternative). Consider the following PR from panel: if you look at the PR, you will see how verbose the default approach is. However, if you scroll to the bottom of the page (you may need to click "show all checks"), you will see a single, compact status check. If you click on "Details", you get the same verbose report as in the comment. So, if the report is only one (or two) clicks away, why duplicate it so prominently? To me it feels more like a distraction than help.
-
Recording as passed:
-
This poll is intended for members of @bokeh/core to respond to. (If that is not you, please refrain from commenting or voting!)
Proposal
I would like to propose merging #12336, which adds support for test coverage report automation via codecov.io, as-is, leaving further configuration or refinement to later PRs.
Rationale
Necessity of a baseline for experimentation
Codecov offers many configuration options, for example around report formatting and coverage thresholds.
I am content to experiment with any/all of these. In particular I am happy to have PRs fail if coverage dips, sooner rather than later, if others agree. However, many of the options would be easier to experiment with once there is a baseline, and some options (exact formatting of output, PR failure against baseline threshold) must have a baseline established in order to evaluate changes. Tweaking the default configuration should happen after this PR is merged and a baseline is established.
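As an illustration of the kind of post-merge tweaking described above, here is a minimal codecov.yml sketch. The key names follow Codecov's documented configuration schema; the specific values are illustrative placeholders, not a proposal:

```yaml
# codecov.yml (sketch; values are placeholders, not a concrete proposal)
coverage:
  status:
    project:
      default:
        target: auto      # compare against the base commit's coverage
        threshold: 1%     # tolerate up to a 1% dip before failing the check

comment:
  layout: "diff, files"   # controls how verbose the PR comment is
```

Both the failure threshold and the comment layout are exactly the sort of settings that can only be evaluated meaningfully once a baseline exists.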
Reasonable Notifications
Codecov posts a single comment in a PR, and if there are subsequent updates, that one comment is updated in place rather than a new comment being posted. This is unlike LGTM, which posts a new comment for any issues on every new CI run. As for email notifications, everyone can control those individually using GH notification settings and/or their own personal email filters, so they are not a compelling reason to avoid automations that others wish to implement.
Utility of test automation for Python codebase
The Python API is the most public-facing portion of Bokeh. Apart from testing for correctness, the only hope of keeping that API stable against unintentional changes is to enshrine it in tests. Testing is not the be-all and end-all, but it is necessary, especially on the Python side, and codecov automation will help ensure that we maintain it at appropriate levels.
Utility of test automation for merged results
Unlike our current setup, in which each job reports only its own results, codecov collects and merges results from all tests and all platforms. This provides a comprehensive coverage picture that we currently cannot obtain any other way.
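One way this merging typically works in practice is that each CI job uploads its own report, and Codecov combines every upload for a given commit. A sketch of a per-job upload step using the codecov/codecov-action GitHub Action (the flag names and file path here are hypothetical, not taken from Bokeh's actual workflows):

```yaml
# Sketch of a per-job workflow step (hypothetical flag names and report path)
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: coverage.xml              # report produced by this job
    flags: unit-${{ matrix.os }}     # labels the upload; Codecov merges all uploads per commit
```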
Utility of test automation for new contributors
When new contributors make a PR, it is good for them to see that testing is active and ongoing, and that maintaining it is required. Having a PR automatically fail due to test slippage streamlines interpersonal interactions, in the same way that automating code linting and standards does, by circumventing conversations about justifications.
Vote
The deadline to vote is set to three days (12:00 PST 2022-09-01)
4 votes