The Code Review Process

The Code Review process covers: Purpose, Responsible, Method (Planning, Review Session, Follow-Up), Time, Input, Output, Background for Code Review process, Resource Usage, Literature

Purpose
We use code reviews to improve the correctness and stability of our code. Traditionally, the main purposes of code reviews are

  • to reduce the overall error rate
  • to improve productivity

Another, equally important, effect of code reviews is

  • to distribute information about the code and establish common coding practices

Responsible
The review participants are specified in the current iteration task overview; usually this is the implementor and one other developer.

Method
A code review involves two or more people who read the code and, in a structured way, identify changes that will improve its overall quality. Unit tests are usually not included in the review. Code review is the third phase of implementation (following unit test writing and implementation of the code). It is normally done when the relevant part of the code is fully implemented, i.e. passes all the unit tests and has been sanity tested and documented. Our code review process consists of planning, a review session and follow-up, as described below.

Planning

  1. Review participants are specified in the current iteration task overview. Usually it is the implementor and another developer.

  2. The implementor specifies the code to be reviewed in Crucible
    1. Log on to Crucible (http://kb-prod-udv-001.kb.dk:8060). Please refer to Crucible Guidelines for sign-up if you have no account already.

    2. Go to your Crucible DashBoard (using the link "My Crucible DashBoard")

    3. Click on "Create new review" in the top right corner
    4. Select a proper title (e.g. "Feature request Y" or "Bug Y" plus a description). If it corresponds to a task in the current iteration task overview, it should have the same title as is written there.

    5. Set yourself as both 'moderator' and 'author' (unless the code is authored by somebody else).
    6. Write relevant comments in the "Statement of Objectives", for instance general comments about the review, or a list of deleted file or class names that for obvious reasons cannot be included.
    7. Add the files included in the review. This can be done by selecting files in different changesets. Note that we do not review code in the test-branch.
    8. Add a revision comment to each of the files, giving the specific lines to be reviewed (or ALL LINES). Also note anything relevant that happened here, e.g. lines that have been removed (file/class name and revision of the file are given automatically by Crucible).
    9. Notify the reviewer of the existence of the review by clicking "Start Review".
      Note that it is best to make the wiki review entry (as explained in the next step) before notifying.

  3. Make an entry with the review information in the current review table in the wiki (see for example Review table for iteration 36).

  4. The participants agree on a time to review.
  5. Before the review time, each participant reads the code thoroughly and notes problems, big or small, that should be discussed. Place the comments at the relevant places in Crucible. For instance:
    • Add a new general comment:
      For general problems, like design problems or problems concerning most of the files, such as a missing '.' at the end of JavaDoc.

    • Add a revision comment:
      For general problems in a specific file like generally missing JavaDoc in this particular file.

    • On a specific line (marking a line makes a comment field appear):
      For problems on specific lines of a file, like lack of whitespace around delimiters.
      REMEMBER to post the comments by clicking "Post" for each comment.
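
The kinds of line-level findings mentioned above can be illustrated with a small Java class. The class, its names and the findings are invented purely for illustration; this is not code from the project:

```java
/** Utility class invented purely to illustrate typical line-level review findings. */
public final class CounterUtil {

    /** Returns the sum of two counters */ // would get a line comment: missing '.' at end of JavaDoc
    public static int sum(int a,int b) {   // would get a line comment: missing space after ','
        return a+b;                        // would get a line comment: missing whitespace around '+'
    }

    public static void main(String[] args) {
        System.out.println(sum(2, 3)); // prints 5
    }
}
```

Each of these would be posted as a comment on the specific line in Crucible, while a pattern repeated across many files would instead become a single general comment.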

Review Session
This part, while central to the whole process, should not be allowed to drag on forever. If the reviewers cannot agree on how to fix a problem within a few minutes, the item should be marked as "consider how to..." rather than prolonging the discussion.

A typical review session should take no more than an hour (and most take less). If it takes longer, the review should be stopped and a time to continue agreed upon. More than an hour of uninterrupted review reduces efficiency.

  1. The participants meet by phone (or physically, if possible)
  2. Before starting check that
    1. The code has been unit tested
    2. The code has been sanity tested
    3. The functionality has been documented in the manuals
    4. If any of these are missing, the review should be postponed.
  3. Use Crucible to go through the review
    1. Log on to Crucible - preferably both reviewers.
    2. Discuss each posted comment in order of appearance.
      • General comments
      • Revision comments
      • Specific line comments
    3. Items that the participants do not agree to discard are marked by ticking the "Defect" box, which enables selection of a rank in the "Select rank" drop-down list. When the defect and rank are specified, post the item by clicking "Post".

    4. Note that only the author of a comment can post it. If only one of the reviewers has access to Crucible, comments owned by others must be copied into new comments that can then be posted with the mentioned information.

    5. Make a Crucible general comment on the review and remember to mark it as a defect; otherwise a rank cannot be selected.

    6. Note the time used for the task in the general comment for the review, using the following wording:
      Time used (Coding, Documentation, Review)
      <InitialsOfFirstReviewer>: <NoOfManDaysUsed>
      <InitialsOfSecondReviewer>: <NoOfManDaysUsed>
      Remember to set the rank in the "Select rank" drop-down list to "Time Used". This is used for the [Iteration review] made at the end of the iteration.

    7. Post the General comment - otherwise this information will not be passed to the wiki afterwards.
    8. Complete the review by clicking "Summarize"
  4. Agree on who does the follow-up in case flaws are found during the code review. Usually, this will be the implementor.
  5. If faults are found, correct them by clicking "Reopen", making the update, and clicking "Summarize" again.
  6. Update the table on the current reviews page with

    • Review date

    • Issues found - use the Crucible export function and insert the result on a page IssuesFromNsXX, where XX is the number in the Crucible review id.

      • Create a new issue page
      • Insert the text from the export function
      • Save the page
    • Follow-up - the initials of the person chosen to do the follow-up.

Follow-up

  1. The follow-up person goes through the list of items and handles each of them. Depending on how an item is dealt with, it is marked as either OK, REJECTED or POSTPONED under "Status" on the issue page (reached via the current reviews page). When committing the follow-up changes, please refer to the review identifier (NS-XX) in the commit comment, like this:  Follow-up after review NS-XX  

  2. The follow-up person fills out the DONE column for the review on the current reviews page once all items have been handled. If some items are marked POSTPONED, the review is marked OK-wp; otherwise it is just marked OK.

  3. If the implementor feels the changes are significant enough to require a new review, another review cycle starts. The first review is left as-is. This rarely happens, and should only happen when design issues have been identified and resolved during the review process.

Time
The input must be reviewed as soon as possible after the actual code implementation. Changed code cannot be passed to quality assurance (before the release test) until it has been reviewed.

Input
The input is code implementing a Tracker issue which has been

  • unit tested
  • sanity tested
  • documented in manuals

Output
The reviewed and followed-up input, ready for quality assurance before it can be marked as ready for the release test.

Background for Code Review process
The code review process was inspired by NASA's ideas for code inspection. The process has however been simplified in order to ease the transition to inspection. As the project group gains experience with inspection it is recommended that the inspection process is refined. The description focuses on code-inspection.

Resource Usage
Code review takes time, of course. The actual time spent discussing the code is typically roughly the same as the time spent going over the code beforehand. Follow-up can take a varying amount of time, depending on the starting quality of the code and whether significant changes have been found necessary. Some kinds of code take longer to review than others: straightforward getter-and-setter style classes go very fast, while a review of a few lines of change in a complex method can take much longer.

In the start of the NetarchiveSuite project, we kept track of the time spent preparing for and executing the review (but not the time spent on follow-up changes to the code). The ratio of preparation time to review time varied, but there was never more than a factor of 2 difference to either side; on average the two were about the same. The number of lines of code reviewed per hour (LoC/h) varied from 88 to 300, with an average of about 170 LoC/h. Later code review times were not recorded, but reviews are likely to have become slightly faster due to a better system for taking notes.
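
These historical rates can be turned into a rough planning rule of thumb. The sketch below is illustrative only; the class and constant names are invented, and the rates are simply those quoted above:

```java
/** Back-of-the-envelope review-time estimate from the historical rates above. */
public final class ReviewEstimate {
    // Historical NetarchiveSuite review rates, in lines of code per hour.
    static final double SLOWEST = 88.0;
    static final double TYPICAL = 170.0;
    static final double FASTEST = 300.0;

    /** Hours needed to review the given number of lines at the given rate. */
    static double hours(int linesOfCode, double locPerHour) {
        return linesOfCode / locPerHour;
    }

    public static void main(String[] args) {
        // A 500-line change at the typical rate needs roughly three hours,
        // i.e. several sessions under the one-hour guideline above.
        System.out.printf("%.1f hours%n", hours(500, TYPICAL));
    }
}
```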

Literature

Process/Code Review (last edited 2010-10-04 12:19:31 by SoerenCarlsen)