Episode 174: plz.review

By Mark Derricutt, Greg Amer, and Richard Vowles.

Links:
  • Reddit Post: Improving on the GitHub code review comment experience : programming
  • Blog Post: Improving on the GitHub code review comment experience | Bit Complete Inc.
  • YouTube Introduction Video: Introduction to plz.review - YouTube
  • Website: plz.review
  • Guests: Dylan Trotter, Matt Schweitz

It's been a while since recording. Just as I was organizing the next episode, full of “discussion” on the recent Java 18 and forthcoming Java 19 releases, I came across an r/programming post from Dylan Trotter of Bit Complete about their new stacked code review tool for GitHub.

After reading the post, the linked blog post, and the introductory YouTube video, I reached out to discuss the product, the problems with GitHub's default PR model, and code review in general.

Contents
  • 00:00:03.754 Introduction
  • 00:01:07.236 Bit Complete History
  • 00:02:24.492 What Makes A Good Code Review?
  • 00:12:32.219 DORA (not the Explorer)
  • 00:14:38.308 Bisecting Squashed Commits
  • 00:18:51.617 The plz.review Solution
  • 00:19:08.934 plz.review Stacks - Grouping PRs Together
  • 00:21:36.969 plz.review Revisions: Force push solutions
  • 00:25:37.590 plz.review Migration - Moving from Gerrit?
  • 00:29:34.371 Gerrit and Github Integration
  • 00:32:23.516 Compromising your workflow to fit GitHub PRs
  • 00:39:41.700 Code Review as Mentorship / Education
  • 00:41:26.759 GitHub: Social Coding brought code review to mainstream
  • 00:45:52.124 Long Lived Support Branches
  • 00:50:03.479 How to sign up for plz.review
Overview

Before we get into plz specifics, I assume both Dylan and Matt have some interesting takes on what makes a good review:

  • What makes a good review?
  • What constitutes good review "hygiene"?
    • Mark: IMHO a review/commit/PR should ideally do one thing. What that "thing" is may be intangible, but ideally:
      • If you're going to reformat code, keep it in a commit separate from business logic changes.
      • If you're updating dependencies, keep those updates separate from business logic changes; however, do include any code changes needed to keep the build compiling and the tests passing.
        • If a dependency update introduces breaking API changes, keep that dependency change along with the implementation for it.
  • Laurence Tratt: Programming Style Influences
  • Practical guide to DORA metrics | Swarmia
    • DevOps Research and Assessment - The four DORA metrics are:
      • Deployment frequency: How often a software team pushes changes to production
      • Change lead time: The time it takes to get committed code to run in production
      • Change failure rate: The share of incidents, rollbacks, and failures out of all deployments
      • Time to restore service: The time it takes to restore service in production after an incident
    • The first three metrics I can see being highly impacted by small changes, automated testing, and CI integration. These all intersect with code reviews – having visibility that a proposed change actually compiles, passes tests, and doesn't introduce any new security issues before a co-worker even looks at it speeds up the process.
    • We've spoken about code formatters on the show before; keeping formatting consistent is a good thing for reviews, whether automated or not.
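
As a rough illustration of how the four DORA metrics fall out of plain deployment records, here is a minimal sketch. The data, field names, and reporting period are all hypothetical, not taken from Swarmia or any real tool:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time, whether the
# deployment failed, and (for failures) when service was restored.
deployments = [
    {"committed": datetime(2022, 7, 4, 9), "deployed": datetime(2022, 7, 4, 15), "failed": False},
    {"committed": datetime(2022, 7, 5, 10), "deployed": datetime(2022, 7, 6, 10), "failed": True,
     "restored": datetime(2022, 7, 6, 11)},
    {"committed": datetime(2022, 7, 7, 8), "deployed": datetime(2022, 7, 7, 14), "failed": False},
]

def dora_metrics(deploys, period_days=7):
    """Compute the four DORA metrics over a reporting period."""
    # Deployment frequency: deployments per day over the period.
    frequency = len(deploys) / period_days
    # Change lead time: average time from commit to running in production.
    lead_times = [d["deployed"] - d["committed"] for d in deploys]
    avg_lead = sum(lead_times, timedelta()) / len(lead_times)
    # Change failure rate: share of deployments that caused a failure.
    failures = [d for d in deploys if d["failed"]]
    failure_rate = len(failures) / len(deploys)
    # Time to restore service: average time from failed deploy to restoration.
    restore_times = [d["restored"] - d["deployed"] for d in failures]
    mttr = sum(restore_times, timedelta()) / len(restore_times) if restore_times else timedelta()
    return frequency, avg_lead, failure_rate, mttr

freq, lead, cfr, mttr = dora_metrics(deployments)
```

Nothing here is specific to any vendor; the point is that all four metrics can be derived from two timestamps and a pass/fail flag per deployment, which is why small, frequently-merged changes move the first three numbers so directly.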
plz.review

The project/platform appears to solve several issues we're facing with Gerrit and our adoption of Azure DevOps:

  1. Azure DevOps is unable to pull patch sets, because Gerrit stores them under hidden refs (refs/changes/...) that aren't fetched by default.
  2. Writing custom Azure Function(s) to listen for Gerrit events and manually trigger a DevOps build is viable, but not ideal.
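
The second workaround could be sketched roughly as below: a small listener reads Gerrit's stream-events JSON lines and, for patchset-created events, builds the REST call that queues an Azure DevOps build. The organisation, project, pipeline definition id, and event-filtering policy are assumptions for illustration; the endpoint shown (POST .../_apis/build/builds) is Azure DevOps's documented build-queue API:

```python
import json

AZURE_ORG = "myorg"          # assumption: your Azure DevOps organisation
AZURE_PROJECT = "myproject"  # assumption: your Azure DevOps project
BUILD_DEFINITION_ID = 42     # assumption: id of the pipeline to queue

def build_trigger_request(event_line):
    """Turn one Gerrit stream-events JSON line into an Azure DevOps build request.

    Returns (url, payload) for a POST, or None for events we don't build
    (in this sketch, only new patch sets trigger builds).
    """
    event = json.loads(event_line)
    if event.get("type") != "patchset-created":
        return None
    ref = event["patchSet"]["ref"]  # e.g. refs/changes/34/1234/2
    url = (f"https://dev.azure.com/{AZURE_ORG}/{AZURE_PROJECT}"
           "/_apis/build/builds?api-version=6.0")
    payload = {
        "definition": {"id": BUILD_DEFINITION_ID},
        # Point the build at the hidden Gerrit ref for this patch set.
        "sourceBranch": ref,
    }
    return url, payload

# A trimmed-down patchset-created event, as emitted over `gerrit stream-events`.
line = json.dumps({
    "type": "patchset-created",
    "patchSet": {"ref": "refs/changes/34/1234/2"},
})
request = build_trigger_request(line)
```

In a real deployment the (url, payload) pair would be POSTed with a PAT-authenticated HTTP client; keeping the event-to-request translation pure like this makes the glue code easy to test without touching either system.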

The prospect of abandoning Gerrit and switching to GitHub's force-push-and-squash approach makes me cry.

  • Coming from the Gerrit Code Review tool, it's great to find a stacked review tool for GitHub; getting people into the habit of force pushing to remotes just promotes bad hygiene, IMHO. Looking at the available docs, several interesting things come to mind:
  • Comparison: Gerrit and GerritForge, which I've used in the past for some open source work (GitHub - repaint-io/maven-tiles: Injecting maven configurations by composition rather than inheritance originally used GerritForge).
    • Gerrit keeps all patch sets and lets you compare diffs between them; is that also mirrored within plz.review? I guess GitHub would only track the latest/amended PR.
      • Looks like the plz command line generates its own form of Change-Id footer, plz-review-url, which tracks revisions of a change by linking to https://plz.review/review/NNNN, and which doesn't appear to be namespaced in any way (by company or repo).
      • Gerrit's UI is getting better, but still leaves a lot to be desired – I still wish improvements to commenting/UX were more of a focus. plz.review's UI also seems simple in design (not necessarily a bad thing). UI/UX design is hard in general, and the requirements of stacked version control seem to make it a tricky balance between fully featured and clean.
    • Gerrit Migration – given that Gerrit patch sets and comments are all stored in git refs, is there a migration strategy to expose them in plz.review/GitHub?
      • plz.review feels early in its development; such things may not have been considered, or may be deemed out of scope for the system.
  • Comparison: Graphite — Modern code review for fast-moving teams: an open source CLI/web dashboard stacked review tool based on Git, designed to work with GitHub. See Graphite beta demo [December 2021][V2] - YouTube, and Stacked diffs for fast-moving code review - The Changelog: Software Development, Open Source (podcast interview).
  • Automated bot responses – with CI/test results often being published back into GitHub PRs as validations for viable merges, do these get reflected in the plz UI, or even as -2 rejections?
  • Dealing with commits outside of plz.review, such as automated release commits pushed directly to GitHub, and potential conflict resolution (having access to Gerrit's local git repo has often been a godsend).
  • What's the business model for plz – a per-repo charge? Per organisation? Per user?
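
On the revision-tracking question above: commit trailers like Gerrit's Change-Id, or the plz-review-url footer mentioned earlier, are just key/value lines in the final paragraph of a commit message, so extracting them is simple. The exact footer format is inferred from the linked docs, so treat this as a sketch under that assumption:

```python
import re

# Matches trailer lines such as "plz-review-url: https://plz.review/review/1234"
# or "Change-Id: I8f3...". Trailers live in the final paragraph of the message.
TRAILER_RE = re.compile(r"^([A-Za-z][A-Za-z0-9-]*):\s*(.+)$")

def commit_trailers(message):
    """Return {key: value} for trailer lines in a commit message's last paragraph."""
    paragraphs = message.rstrip().split("\n\n")
    trailers = {}
    for line in paragraphs[-1].splitlines():
        m = TRAILER_RE.match(line.strip())
        if m:
            trailers[m.group(1)] = m.group(2)
    return trailers

# A hypothetical commit message carrying a plz-review-url trailer.
msg = (
    "Fix null check in review stack loader\n"
    "\n"
    "Guard against empty stacks when loading revisions.\n"
    "\n"
    "plz-review-url: https://plz.review/review/1234\n"
)
trailers = commit_trailers(msg)
```

This is the same mechanism `git interpret-trailers` formalises; because the footer survives rebases and amends verbatim, it's what lets a tool like Gerrit or plz.review group successive force-pushed revisions under one logical change.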
