Kill Your Darlings
When making a change, software developers usually seem to try to achieve the result with the smallest amount of code possible. We hesitate to modify things too much, like eliminating or reworking an important class or function. It’s like we see established code as something sacred, or something fragile that needs to be handled with care. Like there’s a universal voice in the back of our heads, saying:
- “Do I really think I know better than the original authors of this code? It’d be disrespectful to modify their work too much.”
- “I put a lot of effort myself into writing this code; redoing too much of it now is just duplicate effort.”1
- “Code changes are dangerous; every line you modify is one more opportunity to break something you don’t know about.”
There’s a positive feedback loop to this kind of mindset. When all developers think this way, small workarounds get peppered throughout the code. As they accumulate, the code gets harder and harder to understand. That decreases developers’ confidence in their understanding of the code and increases the difficulty of large changes, which means developers get even more scared to make large changes, and the cycle continues.
Code evolves much more naturally when developers feel empowered, not fearful. When they feel like they really understand the code, and can cement that understanding by playing around and experimenting with it. When they can make important decisions about the code’s evolution, rather than feel stuck with other past or present developers’ decisions.
Nurturing such a culture takes some effort. A good set of tests is a prerequisite, giving developers confidence that they have a safety net and can’t break anything too badly. But tests aren’t sufficient. You also need to fight back against developers’ inclination to leave existing code alone. One way to challenge that mindset is by embracing a principle that is completely opposed to it: “Kill Your Darlings”.
Taking Inspiration from Writing
“Kill Your Darlings” is a phrase invented in the context of writing2, meant to suggest that authors should be ruthless about cutting things out of the stories they write. Characters they’ve lovingly crafted, plot lines they’ve spent hours revising, details of the world that they’re excited about. It’s natural to grow attached to such things, but that’s why this phrase exists: to remind authors that this attachment clouds their judgement. As much as they may have grown to love their own creations, the final work will suffer if they don’t eliminate absolutely everything that weighs the story down.
Software development is writing. It’s not exactly the same as, say, writing a story, but it’s still all about communication. Developers may not create characters and plots and settings, exactly, but we have classes and functions and modules which we craft with just as much care and attention. We’re susceptible to the same biases as any other author, and want to keep our creations around. But for the good of the code, once an abstraction outlives its usefulness, someone should kill it.
When to Let Go
The point of each abstraction we make (classes, functions, modules, and even variables) is to simplify code: to break down impossibly complex systems into pieces we can comprehend. But over time, abstractions that were once useful may start to get in the way. As the code changes around them, the service they once provided may no longer be enough, and developers may find themselves working around the abstraction. Or they may modify the abstraction to support new requirements, which often makes it less cohesive and more complex.
The benchmark I use for any abstraction (class, function, module, or variable) is: “how accurately, and how easily, can you describe how it’s supposed to behave, without referencing the implementation?” If this is hard, then you can say, at least as a rule of thumb, that you’re using the wrong abstraction. For the good of the code, the abstraction should be reworked; or, more likely, it should be replaced completely.
It can be helpful to keep in mind that this doesn’t mean the abstraction is or was bad. It was written in the past, by developers dealing with their own requirements and pressures. They faced different conditions from what you’re dealing with now. Under those conditions, it was probably useful. Now conditions are different. Choosing to change or get rid of it now is no criticism of those past developers; it’s just recognition of the changing environment.
I saw this transition happen at my previous company. Soon after I’d joined, we started increasing the number of experiments we wanted to run, testing different features with different users. But each one was built completely ad hoc. To the extent there was a standard practice, it had just grown organically from developers trying to find the quickest way to implement each new experiment. Eventually, I noticed a lot of repeated patterns in the code for each one, so I decided to make an abstraction to reduce that duplication and hopefully make new experiments easier to write.
My abstraction was a simple one that I wrote over the course of one or two days, maybe a hundred lines of code. It was a helper class that took an experiment configuration in its constructor, and provided three boolean methods for the three most common checks: whether a user is eligible for the experiment, whether they’re sampled for the test group, and whether they’ve already been assigned to the test group.
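In rough sketch form, the class looked something like the following (all names and config fields here are hypothetical; the real class depended on our internal experiment configuration):

```python
from dataclasses import dataclass, field
import hashlib


@dataclass
class ExperimentConfig:
    # Hypothetical config shape; the real one carried whatever each
    # experiment needed (eligibility rules, sample rate, etc.).
    name: str
    sample_percent: int                       # 0-100
    min_account_age_days: int = 0
    assigned_user_ids: set = field(default_factory=set)


class ExperimentChecker:
    """The three boolean checks described above, in sketch form."""

    def __init__(self, config: ExperimentConfig):
        self.config = config

    def is_eligible(self, account_age_days: int) -> bool:
        # Whether the user meets the experiment's eligibility rules.
        return account_age_days >= self.config.min_account_age_days

    def is_sampled(self, user_id: str) -> bool:
        # Whether the user falls into the sampled fraction, decided by a
        # stable hash so the same user always gets the same answer.
        digest = hashlib.sha256(f"{self.config.name}:{user_id}".encode()).digest()
        return digest[0] % 100 < self.config.sample_percent

    def is_assigned(self, user_id: str) -> bool:
        # Whether the user has already been assigned to the test group.
        return user_id in self.config.assigned_user_ids
```

Callers then combined these booleans themselves at every call site, which is exactly the part that later turned out to be error-prone.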
This new class was an improvement, and it started to be used widely. But at the same time, we began to change how we ran experiments. Soon we started wanting to distinguish three groups of users for each experiment (sampled, not sampled, and not participating), and for that, the boolean methods started to get in the way. They also weren’t well suited to handling experiments with more than two kinds of variants. And the class didn’t actually handle assigning users to groups. It just returned booleans, while developers still had to check each of those booleans and then manually assign users to each group, a process that was known to be very error-prone.
When these issues started becoming apparent, it was a sign that the abstraction we were using was not the right one for the task. We needed a new abstraction that took these new requirements and learnings into account.
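One possible shape for such a replacement (purely illustrative; we never shipped exactly this, and every name below is invented) is an enum covering all three groups, plus a single entry point that both decides and records a user’s assignment, so callers never combine booleans or write the assignment step by hand:

```python
import hashlib
from enum import Enum


class Group(Enum):
    # The three outcomes the boolean methods could not express directly.
    NOT_PARTICIPATING = "not_participating"   # ineligible for the experiment
    NOT_SAMPLED = "not_sampled"               # eligible, but outside the sample
    SAMPLED = "sampled"                       # eligible and in the test group


def assign_group(user_id: str, eligible: bool, sample_percent: int,
                 assignments: dict) -> Group:
    # Decide the user's group once and record it; repeated calls return
    # the recorded assignment instead of re-deciding.
    if user_id in assignments:
        return assignments[user_id]
    if not eligible:
        group = Group.NOT_PARTICIPATING
    else:
        digest = hashlib.sha256(user_id.encode()).digest()
        in_sample = digest[0] % 100 < sample_percent
        group = Group.SAMPLED if in_sample else Group.NOT_SAMPLED
    assignments[user_id] = group
    return group
```

The design choice that matters here is that the error-prone step (checking the booleans and recording the group) lives in one place instead of being repeated at every call site.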
How to Let Go
I had moved out of that area of the code by the time these issues became clear, but I pointed them out when I saw developers adding ad hoc workarounds with each new experiment. I realized that my abstraction had become as much hindrance as help, so I suggested it be updated or replaced with a new system that better matched our actual requirements. I wanted my work to be killed. But I was fascinated by the fact that no one seemed to want to touch it. Instead of trying to solve these issues for all experiments, they made new workarounds every time.
I wonder if it was because they felt like undoing my work would be disrespectful, or if they didn’t feel up to the task of improving it, or if they were so caught up in day-to-day urgency that it just felt easier to use an imperfect existing abstraction than to make a new one. But when your abstraction is bad (especially if it’s commonly used) it’s best to kill it as soon as possible. Be ruthless. The longer you wait, the more its costs build up.
This doesn’t have to be a risky process; you don’t have to replace everything all at once. Instead, you can create a new abstraction that should be used by all new code, without touching existing code at first. Then, when the team has gained confidence in it, you can begin a transition period, where the old abstraction is marked as deprecated and you gradually replace its uses with the new abstraction. This method massively reduces the risk and stress of replacing existing code, and is useful either when code needs to be changed in a lot of places, or when the affected code is particularly important.
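In Python, for instance, the transition period can be made visible by having the old abstraction emit a deprecation warning, so every remaining caller surfaces in logs and test runs until the count reaches zero (the class name here is hypothetical):

```python
import warnings


class ExperimentChecker:
    # The old abstraction keeps working during the transition, but each
    # remaining caller now triggers a DeprecationWarning, which makes the
    # migration easy to track.
    def __init__(self, config):
        warnings.warn(
            "ExperimentChecker is deprecated; new code should use the "
            "replacement experiment API instead",
            DeprecationWarning,
            stacklevel=2,  # attribute the warning to the caller, not this line
        )
        self.config = config
```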
Designing the new replacement is the most fun part of all of this. Inventing an abstraction is possibly the purest act of creativity a person can engage in. It’s a huge subject in its own right, so there’s limited advice I can offer in a single blog post on how to approach it. It’s more art than science, and something you mostly get better at through experience. My favorite book on the subject is A Philosophy of Software Design, which provides great guidance on how to improve this skill.
For abstractions with a small enough number of usages (say, single digits) there’s another approach you can take if you’re having trouble designing a replacement. It’s a suggestion that comes from Sandi Metz: kill the abstraction before making a replacement for it. Undo the creation of a function by bringing its code back inline in every place where it’s used. Then remove the parts that are irrelevant to each call site, and examine the new code to see what patterns are actually shared between them. Seeing the code unwound like this can really help generate ideas for what kinds of abstractions are actually needed. But even if it doesn’t, you’ve already improved the code just by removing the old abstraction. It’s better to have no abstraction than a bad one.
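As a tiny, made-up illustration of that technique: suppose a helper strains to serve two call sites through a flag. Inlining it and deleting the branch each site never used often reveals that there was less shared behavior than the helper implied:

```python
# Before: one helper serves two call sites via a boolean flag.
def format_user(user: dict, short: bool) -> str:
    if short:
        return user["name"]
    return f"{user['name']} <{user['email']}>"


# After inlining, each call site keeps only the code it actually used,
# and the accidental coupling between the two disappears.
def greeting(user: dict) -> str:
    return f"Hello, {user['name']}!"          # only ever needed the name


def email_header(user: dict) -> str:
    return f"To: {user['name']} <{user['email']}>"
```

With the code unwound like this, it becomes much easier to see whether the two sites share anything worth re-abstracting, or whether they were never really the same thing at all.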
Conclusion
Existing code is not sacred. It’s a tool. When it’s not useful anymore, don’t just keep working around it. Instead, recognize when code no longer serves its original purpose. The longer you wait, the longer it has to metastasize and harden the code around it. The sooner you get rid of it, the sooner you can replace it with something better.
It takes effort to overcome the psychological hurdles to be able to kill your darlings. But it’s an invaluable practice to keep code maintainable and ensure your project’s longevity.
1. This is called the sunk cost fallacy: the idea that when you abandon something, you lose everything you’ve invested in it. It’s a fallacy because time, effort, and resources are spent the moment they’re invested, so they shouldn’t influence decisions beyond that point.
2. I can’t remember where I first heard this idea, but it’s pretty widespread. Here’s a decent article I found that explains it pretty well.