Comment by sirsinsalot
3 years ago
I'm not sure that's a like-for-like comparison, and if those things overlap like that, something is probably wrong:
- Ticket: Description of the requirement
- Code: How it was done
- Review: Peer-learning, change evolution
- Unit test: Testing of implementation as understood by SWE
- QA: Did the change match the requirement, did the SWE understand it? Is the outcome the right one?
Each "item" should serve a distinct purpose, have distinct value and be justified. If they seem like duplicates, then that probably points at issues elsewhere.
- Ticket: AB-123 Increase the API maximum page size from 500 to 1000
- Code change: MAXIMUM_PAGE_SIZE -500 +1000
- Unit test: assert len(request[0:2000]) == 1000
- Commit message: Increase the API maximum page size from 500 to 1000
- Merge request: Increase the API maximum page size from 500 to 1000. For AB-123
- Daily scrum update: I've increased the API maximum page size from 500 to 1000, if someone could have a look at my merge request.
- Deployment request: Increase the API maximum page size from 500 to 1000, for AB-123
- Post-deployment test plan: AB-123, ensure maximum API page size is now 1000
- Stakeholder demo: When an API request is made, the page size is now 1000.
- Incident report: Regression. When running on AWS, page size of 1000 causes the process to crash intermittently with data loss. Recommendation: reduce the max size to 500.
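(For scale, the whole substantive artifact behind all of that ceremony might look something like the following. This is a minimal Python sketch; the module layout and function names are hypothetical, not taken from any real codebase.)

```python
# config.py -- the entire production change is one constant
MAXIMUM_PAGE_SIZE = 1000  # was 500

def clamp_page_size(requested: int) -> int:
    """Clamp a caller-requested page size to the configured maximum."""
    return min(requested, MAXIMUM_PAGE_SIZE)


# test_config.py -- and the matching unit test
def test_requesting_2000_items_returns_at_most_1000():
    assert clamp_page_size(2000) == 1000
```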
It's amusing to me that in none of these did you say why you were increasing the page size.
I know what you mean, but to play devil's advocate a bit here (only half kidding):
As a reviewer of such a pull request, I'd go over all the places in the code where this page size constant is used.
I'd also like to see a rough assessment of the impact of this change. Does it affect a lot of code or only some? What percentage of users will be affected?
Also, who asked for it? It's OK if no user asked for it and it's your own initiative. But if users did ask for it (or rather complained with something like "the app rejects our API requests" or "the app is effin slow, please fix"), then it'd be nice to link to their tickets / mails / chat logs. This could serve as proof to management if someone decides to question the change.
Deployment: if this change touches an API called by many functions (so, big impact) but could bring a big benefit to many users, I'd like to see a rollout plan: something as simple as shipping it in a beta version, or (if we have them) putting it behind a feature flag, plus a plan (it can be an automated script) that tracks crashes during the rollout. If the change doesn't have a big impact, that's not necessary.
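A minimal sketch of what such a flag-gated rollout could look like, assuming a simple percentage-based cohort; the bucketing mechanism and names below are illustrative, not any particular feature-flag library's API:

```python
import zlib

LEGACY_MAX_PAGE_SIZE = 500
NEW_MAX_PAGE_SIZE = 1000
ROLLOUT_PERCENT = 10  # start with 10% of users, widen as crash metrics stay flat

def maximum_page_size(user_id: str) -> int:
    """Return the new limit only for users inside the rollout cohort."""
    bucket = zlib.crc32(user_id.encode()) % 100  # stable per-user bucket
    if bucket < ROLLOUT_PERCENT:
        return NEW_MAX_PAGE_SIZE
    return LEGACY_MAX_PAGE_SIZE
```

Crash and error rates would then be compared per cohort before ROLLOUT_PERCENT is widened.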
Ideally I'd like to see coverage results proving that all the functions which use this constant, and all code paths leading to them, are covered. It's perfectly OK if they aren't all covered (perfect is the enemy of the good), but at least the major ones should be. I would also carefully go over at least some of the code which uses this constant, directly and indirectly, to make sure there's no funny business: too many threads allocating this bigger buffer, out-of-bounds issues from code assuming the size is a certain length (if it's C/C++/C#), etc.
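As an illustration of the kind of hidden assumption such a review would be hunting for (hypothetical code, not from the actual change): something sized around the old maximum only fails once larger pages actually arrive.

```python
# Hypothetical downstream code that silently assumes the old maximum:
PAGE_BUFFER = [None] * 500  # sized for the pre-change MAXIMUM_PAGE_SIZE

def store_page(items):
    # Works for pages of up to 500 items; raises IndexError once pages
    # can legitimately contain 1000 items after the limit increase.
    for i, item in enumerate(items):
        PAGE_BUFFER[i] = item
```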
So really, what is the user-visible impact of this change? If it has no user-visible impact, then why was it made? The ticket as it was specified here doesn't answer this question and therefore reflects a somewhat broken organization/team, where engineers are disconnected from their users and/or lack the eloquence or willingness or time to explain their changes. I bet the person who wrote this doesn't even bother writing comments about non-obvious changes (such as this one!), making their code harder to maintain.