This is the second post in a two-post series in which I discuss the code review best practices we use in our team (AI Ops) at Unbabel. Currently, most code at Unbabel is reviewed and approved by at least one other person. There are a handful of exceptions for demo projects and limited changes where automatic checks are sufficient. We use Jira for issue tracking and GitLab for our repositories, so you will see references to both in the discussion below.
If you would like to read about the best practices for MR creators, please check out Part 1, where I also answer the question: why bother with the code review process at all?
In this post, we will explore the best practices for MR reviewers. We are always trying to improve our processes, and these practices were designed to make the MR review step smoother and more effective.
Best Practices for MR Reviewers
Code review mindset
It is important to be in the right mindset for the code review process. We keep these things in mind when reviewing code:
- Be polite and show humility
- No matter how experienced you are, don’t assume that you are always right
- Code reviews are an opportunity to learn
- Better to ask than to assume
- Pay special attention to code readability, e.g. think about how hard it would be for a new member of the team to understand this code
Finding time for MR reviews
It can be a little tricky to find time for code reviews without disrupting your flow. There is a delicate balance between keeping blocks of focused time and providing reviews quickly. I suggest reviewing code around the natural breaks in the day:
- At the start of the workday
- After lunch
- After submitting your code for review
- After meetings
My preferred time is just after I create an MR myself. I check whether any MRs are waiting for my review and go through them while waiting for the CI, or before starting my next task.
Timely feedback
As reviewers, we take it as our responsibility to provide a timely review. Generally, we recommend responding to a review request within 24 hours at most. If we do not have time to review within the next 24 hours, we inform the MR creator. This sets the correct expectations, and a different reviewer can be assigned if needed.
Communication with the code creator
A couple of simple communication practices make the code review process run much more smoothly. We follow these:
- Once you start reviewing code, let the MR creator know that the review is in progress
- Once you are done reviewing, inform the MR creator
We post these start and end messages in the Slack thread that the MR creator opened when sharing the MR (see Part 1, where we discuss sharing your MR).
There can be several rounds of review and improvements, so it’s best to keep communication flowing and make it clear whose turn it is to work on the MR.
Approval and Suggestions
If comments are about minor improvements (e.g. variable naming, unit test updates, etc.), it’s best to approve the MR while leaving the comments. We trust our team members and know that they won’t ignore our feedback. This way we avoid unnecessary back and forth.
Additionally, for a small, single-point change, we create a code suggestion (a feature available in GitLab) rather than a plain review comment, so the creator can apply the fix directly.
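For context, a GitLab suggestion is just a fenced block inside an MR diff comment: its content replaces the line the comment is attached to, and the MR creator can apply it with a single click. A minimal sketch (the variable rename shown is purely illustrative):

````markdown
```suggestion
review_response_deadline_hours = 24
```
````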
It’s more important to deliver work to production than to keep creating MRs that remain stuck in the review process.
If a team member’s work has been blocked on review for a long time, we should help them deliver it instead of just continuing to create more MRs ourselves.
What are some of the code review best practices for MR reviewers at your organization? Do you have custom tooling for the code review process, or do you use an off-the-shelf solution?