vibe coding meets version control
I keep running into a problem when building with AI tools that I wouldn’t have been able to predict: git getting in the way.
One of my most opinionated stances is that any software developer who’s a SWE II+ must have a strong command of git. “Strong command” at a minimum means things like: using git through the command line, understanding the basic commands to stage/unstage and commit, and understanding the difference between a squash and a merge. I would expect a SWE II to know about (and maybe even have used) reflog or bisect if they’ve gotten into a pickle. They should be utilizing git as the wonderful, critical tool it is.
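For concreteness, this is roughly the level of command-line fluency I mean - nothing fancy, just the everyday plumbing. (The file and branch names below are made up purely for illustration.)

```sh
# Stage, unstage, and commit
git add -p                               # stage hunks interactively
git restore --staged src/billing.py      # unstage a file (illustrative path)
git commit -m "Fix rounding in invoice totals"

# Merge vs. squash: a merge preserves every commit from the branch,
# a squash collapses them into one commit on the target branch
git merge feature/invoice-rounding
git merge --squash feature/invoice-rounding && git commit

# Getting out of a pickle
git reflog          # find the commit you "lost" after a bad reset or rebase
git bisect start    # binary-search history for the commit that broke things
```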
With that, I have strong opinions about how commits/branches should be sequenced, sized, and applied. The general framework is at the bottom of the post, but it’s all centered around atomic, isolated, squashed commits into the main branch.
I’ve found that this approach is contagious and gets adopted quickly when I join a new team, without my needing to explicitly convince others to follow along. I just model the behavior myself, and the usefulness becomes self-evident once someone’s tracking down where a bug came from or needs to revert a specific change. It’s the closest thing I have to a religious belief about software development.
That was all until agentic AI workflows, and AI programming tools in general, came to prominence. The cost of writing (generating?) code has effectively dropped to zero, and even the method in which that code is written has changed in such a drastic, structural way. It has completely upended the harmony of my git workflow in at least two dimensions.
First, the bottleneck in the feedback loop has shifted from writing the code to the review-and-deploy process. Taking the extra effort to split branches and limit the changes in each eventually-squashed commit was rarely the bottleneck in the end-to-end development process - the time to write the code for one change was usually in the same ballpark as the time it took to get a review on something else you already had in flight. Part of the point was to make each change easy for a human to review by keeping it small and as simple as possible.
Code Review is also not just a QA process to ensure code looks good; it is an information dissemination process at both the specific project level and the generic technical level: engineers share ideas and approaches with those around them as a passive byproduct of working those techniques into their changesets. Code Review lets each engineer engage with someone else’s thought process, to be a participant in a professional “show your work” exercise. The best techniques rise to the top after rounds of meticulous review and from seeing empirically how they perform in the wild. It’s a natural way for a team to get better just by learning from each other.
Now the balance between those two processes is out of whack, and the act of writing can be much faster than reviewing. That doesn’t devalue Code Review, and likely makes it more important, but it changes the incentives and metrics a bit. As the speed of code generation is at an entirely different scale than human-written code, the opportunity cost of not writing code feels that much higher (that is certainly the message being delivered within mainstream software / entrepreneurship culture, however you feel about it). You can also imagine someone taking the stance: who cares if you deploy a bug if you can just vibe code the fix?
It’s unclear where the new balance around code review will rest, but there are still many blog posts to be written on the topic over the next few years.
Second, the parallelization of work used to be roughly limited to the number of developers on a given team. By producing atomic commits, separating refactoring from behavior changes, etc., git conflicts between developers could be kept to a minimum and, when they happened, handled with just a little extra effort. A little bit of planning ahead of time allowed every developer to work without fear of clobbering something someone else was working on.
Well, the number of developers is clearly not a real limit anymore. Building in parallel with Claude or Cursor is becoming the default, and that’s before counting the ticket-based fully autonomous agents, the chatbot interfaces PMs and commercial folks can use, and all the other form factors that will emerge over the next few years.
Maybe this is more a function of how mature the codebase is - heavily AI-built projects tend to be earlier stage, where larger changes are needed because basic entities and themes are still in flux - but a persistent problem I’ve run into is that AI processes have overlapping scopes, resulting in pretty gnarly conflicts that take a while to untangle. I run into this on solo projects, so it’s hard to imagine anything beyond a two-pizza team not finding this a nagging issue that produces a significant drag on the team.
The reliable old approach seems to need rethinking. The unfortunate bit is that I don’t know what the solve is. I’m not advocating for totally abandoning this approach that has worked so well, but it’s become glaringly obvious that there are some real downsides to consider in this new world.
When development was human rate-limited, it was clear (IMO) what the best approach to git was. Now the underpinnings of my git doctrine are shifting, I’m having a bit of a crisis of faith, and I don’t know the righteous path forward.
Truthfully though, it’s a bit exciting. A bit freeing. It’s nice to be unburdened, to break some rules, to reject the ascetic approach. A software rumspringa. That’s what’s so great about these AI tools - they’re making software development fun in ways it hasn’t been for a long time. They’re allowing us to focus on bits that we haven’t been able to for quite a while. For right now, that seems to be the most important thing - we’ll figure out the rest later.
Appendix
My general git framework:
- All commits to main should be squashed into a single commit. This allows the history of the main branch to match reality (i.e., what actually deployed and in what order), and allows for easier identification and reversion if needed later (see the sketch after this list).
- All commits in the main branch should be a complete but atomic change. That can be at a bugfix, chore, or feature level, but it should consist of a single, isolated change that is as complete as is reasonable. There should be one purpose of the commit.
- Commits should only include what is necessary for the change, and nothing more. The art of the commit is distilling it down to its purest form. This could be interpreted as a rephrasing of the second point.
- Refactors should not change behavior, and should be done prior to behavior changes - refactoring and behavior changes should not be mixed into a single commit.
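To make the framework concrete, here’s a minimal sketch of how a single change might land on main under these rules. The branch and commit names are placeholders, and a hosted “squash and merge” button accomplishes the same thing as the manual squash shown here.

```sh
# Refactor first, in its own branch - no behavior change mixed in
git checkout -b refactor/extract-invoice-client
# ...iterate with as many messy WIP commits as you like...

# Land it on main as one squashed, atomic commit
git checkout main
git merge --squash refactor/extract-invoice-client
git commit -m "Extract invoice client into its own module"

# The behavior change follows in its own branch and its own squashed commit
git checkout -b feature/retry-failed-invoices
```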
These principles have come out of a decade of writing software within teams and working solo, and the many hours spent tracking down the source of bugs, reviewing code, dealing with merge conflicts, minimizing overlapping work, orchestrating deploys, rolling back bad changes, etc. All the things that make software fun. Time & time again, it is always easier to deal with atomic commits that clearly have a defined, isolated purpose. It makes life generally, and cleaning up messes specifically, much easier.