Colin Hamilton's Blog

Kill Your Darlings

When making a change, software developers usually seem to try to achieve the result with the smallest amount of code possible. We hesitate to modify things too much, like eliminating or reworking an important class or function. It’s as if we see established code as something sacred, or something fragile that needs to be handled with care. As if there’s a universal voice in the back of our heads, warning us not to throw away all the effort that already went into it1.

There’s a positive feedback loop to this kind of mindset. When all developers think this way, small workarounds get peppered throughout the code. As they accumulate, the code gets harder and harder to understand. That decreases developers’ confidence in their understanding of the code and increases the difficulty of large changes — meaning developers get even more scared to make large changes, and the cycle continues.

Code evolves much more naturally when developers feel empowered, not fearful. When they feel like they really understand the code, and can cement that understanding by playing around and experimenting with it. When they can make important decisions about the code’s evolution, rather than feel stuck with other past or present developers’ decisions.

Nurturing such a culture takes some effort. A good set of tests are a prerequisite to give developers confidence that they have a safety net and can’t break anything too badly. But tests aren’t sufficient. You also need to fight back against developers’ inclination to leave existing code alone. One way to challenge that mindset is by embracing a principle that is completely opposed to it: “Kill Your Darlings”.

Taking Inspiration from Writing

“Kill Your Darlings” is a phrase invented in the context of writing2, meant to suggest that authors should be ruthless about cutting things out of the stories they write. Characters they’ve lovingly crafted, plot lines they’ve spent hours revising, details of the world that they’re excited about. It’s natural to grow an attachment to such things, but that’s why this phrase exists: to remind authors that this attachment clouds their judgement. As much as they may have grown to love their own creations, the final work will suffer if they don’t eliminate absolutely everything that weighs the story down.

Software development is writing. It’s not exactly the same as, say, writing a story, but it’s still all about communication. Developers may not create characters and plots and settings, exactly, but we have classes and functions and modules which we craft with just as much care and attention. We’re susceptible to the same biases as any other author, and want to keep our creations around. But for the good of the code, once an abstraction outlives its usefulness, someone should kill it.

When to Let Go

The point of each abstraction we make — classes, functions, modules, and even variables — is to simplify code: to break down impossibly complex systems into pieces we can comprehend. But over time, abstractions that were once useful may start to get in the way. As the code changes around them, the service they once provided may no longer be enough, and developers may find themselves working around the abstraction. Or they may modify the abstraction to support new requirements, which often makes it less cohesive and more complex.

The benchmark I use for any abstraction — class, function, module, or variable — is, “how accurately, and how easily, can you describe how it’s supposed to behave, without referencing the implementation?” If this is hard, then you can say, at least as a rule of thumb, that you’re using the wrong abstraction. For the good of the code, the abstraction should be reworked — or, more likely, it should be replaced completely.
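
To make the benchmark concrete, here’s a contrived pair of functions (invented for this post, not from any real codebase): one whose behavior is easy to state without peeking at the body, and one that fails the test.

```python
def median(values: list[float]) -> float:
    """The middle value of `values`, or the mean of the two middle values
    when the count is even; raises ValueError on empty input.

    That one sentence fully specifies the behavior without mentioning the
    implementation, so this abstraction passes the benchmark.
    """
    if not values:
        raise ValueError("median of empty list")
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2


def process(data: list, flag: bool, mode: int = 2) -> list:
    """Depending on `flag` and `mode`, sorts and/or slices `data`.

    There's no way to describe this except by restating the body, so it
    fails the benchmark: the name and signature tell you almost nothing.
    """
    if flag:
        data = sorted(data)
    return data[:mode] if mode else data
```

The test isn’t about size or cleverness; `process` is shorter than `median`, but its behavior can’t be summarized without reading it.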

It can be helpful to keep in mind that this doesn’t mean the abstraction is or was bad. It was written in the past, by developers dealing with their own requirements and pressures. They faced different conditions from what you’re dealing with now. Under those conditions, it was probably useful. Now conditions are different. Choosing to change or get rid of it now is no criticism of those past developers; it’s just recognition of the changing environment.

I saw this transition happen at my previous company. Soon after I’d joined, we started increasing the number of experiments we wanted to run, testing different features with different users. But each one was built completely ad hoc. To the extent there was a standard practice, it had just grown organically from developers trying to find the quickest way to implement each new experiment. Eventually, I noticed a lot of repeated patterns in the code for each one, so I decided to make an abstraction to reduce that duplication and hopefully make new experiments easier to write.

My abstraction was a simple one that I wrote over the course of one or two days, maybe a hundred lines of code. It was a helper class that took an experiment configuration in its constructor, and provided three boolean methods for the three most common checks: whether a user is eligible for the experiment, whether they’re sampled for the test group, and whether they’ve already been assigned to the test group.
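
A sketch of what such a class might have looked like — all names and details here are hypothetical reconstructions for illustration, not the actual code:

```python
import hashlib
from dataclasses import dataclass


@dataclass
class ExperimentConfig:
    name: str
    sample_rate: float  # fraction of eligible users to sample, 0.0-1.0


class ExperimentHelper:
    """Wraps one experiment's config and answers the three common checks."""

    def __init__(self, config: ExperimentConfig, assigned_ids: set[str]):
        self.config = config
        self.assigned_ids = assigned_ids  # users already in the test group

    def is_eligible(self, user_id: str) -> bool:
        # Real eligibility rules would live here; this sketch accepts everyone.
        return True

    def is_sampled(self, user_id: str) -> bool:
        # Deterministic hash of (experiment, user), so a given user always
        # falls on the same side of the sampling line.
        digest = hashlib.sha256(f"{self.config.name}:{user_id}".encode()).digest()
        return digest[0] / 256 < self.config.sample_rate

    def is_assigned(self, user_id: str) -> bool:
        return user_id in self.assigned_ids
```

Taking the configuration in the constructor meant each experiment’s call sites only had to pass a user ID around, which was most of what made the class an improvement over the ad hoc code it replaced.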

This new class was an improvement, and it started to be used widely. But at the same time, we began to change how we ran experiments. Soon we started wanting to distinguish three groups of users for each experiment (sampled, not sampled, and not participating), and for that, the boolean methods started to get in the way. They also weren’t well suited to handling experiments with more than two kinds of variants. And the class didn’t actually handle assigning users to groups. It just returned booleans, while developers still had to check each of those booleans and then manually assign users to each group, a process that was known to be very error-prone.
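
The call-site ritual that made this error-prone might have looked roughly like the following — a stub stands in for the helper, and everything here is illustrative rather than the real code:

```python
class BooleanHelper:
    """Stub of the three-boolean helper, just enough to show the call site."""

    def __init__(self):
        self.assigned = set()

    def is_eligible(self, user_id: str) -> bool:
        return not user_id.startswith("bot-")

    def is_sampled(self, user_id: str) -> bool:
        return user_id.endswith(("1", "3", "5", "7", "9"))

    def is_assigned(self, user_id: str) -> bool:
        return user_id in self.assigned


def bucket_user(helper: BooleanHelper, user_id: str) -> str:
    # Every experiment repeated this dance by hand. Forgetting the manual
    # `assigned.add` call, or getting one branch wrong, silently put users
    # in the wrong group.
    if not helper.is_eligible(user_id):
        return "not_participating"
    if helper.is_assigned(user_id) or helper.is_sampled(user_id):
        helper.assigned.add(user_id)  # bookkeeping the class didn't do
        return "test"
    return "control"
```

The helper answered three yes/no questions, but the decision and the record-keeping still lived in every caller — exactly the duplication the abstraction was supposed to remove.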

When these issues started becoming apparent, it was a sign that the abstraction we were using was not the right one for the task. We needed a new abstraction that took these new requirements and learnings into account.

How to Let Go

I had moved out of that area of the code by the time these issues became clear, but I pointed them out when I saw developers adding ad hoc workarounds with each new experiment. I realized that my abstraction had become as much hindrance as help, so I suggested it be updated or replaced with a new system that better matched our actual requirements. I wanted my work to be killed. But I was fascinated to find that no one seemed to want to touch it. Instead of trying to solve these issues for all experiments, they made new workarounds every time.

I wonder if they felt that undoing my work would be disrespectful, or didn’t feel up to the task of improving it, or were so caught up in day-to-day urgency that using an imperfect existing abstraction just seemed easier than making a new one. But when your abstraction is bad — especially if it’s commonly used — it’s best to kill it as soon as possible. Be ruthless. The longer you wait, the more its costs build up.

This doesn’t have to be a risky process; you don’t have to replace everything all at once. Instead, you can create a new abstraction that should be used by all new code, without touching existing code at first. Then, when the team has gained confidence in it, you can begin a transition period, where the old abstraction is marked as deprecated and you gradually replace its uses with the new abstraction. This method massively reduces the risk and stress of replacing existing code, and is useful either when code needs to be changed in a lot of places, or when the affected code is particularly important.

Designing the replacement is the most fun part of all of this. Inventing an abstraction is possibly the purest act of creativity a person can engage in. It’s a huge subject in its own right, so there’s only limited advice I can offer in a single blog post. It’s more art than science, and something you mostly get better at through experience. My favorite book on the subject is A Philosophy of Software Design, which provides great guidance on how to improve this skill.

For abstractions with a small enough number of usages — say, single digits — there’s another approach you can take if you’re having trouble designing a replacement. It’s a suggestion that comes from Sandi Metz: kill the abstraction before making a replacement for it. Undo the creation of a function by bringing its code back inline in every place where it’s used. Then remove the parts that are irrelevant to each call site, and examine the new code to see what patterns are actually shared between them. Seeing the code unwound like this can really help generate ideas for what kinds of abstractions are actually needed. But even if it doesn’t, you’ve already improved the code just by removing the old abstraction. It’s better to have no abstraction than a bad one.
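
A minimal before-and-after of that technique, with names invented for this post: a flag-laden helper that two call sites had outgrown gets inlined, and each site sheds the parts it never used.

```python
# Before: one helper serving two unrelated call sites via flags.
def format_label(name: str, uppercase: bool, truncate: bool) -> str:
    text = name[:10] if truncate else name
    return text.upper() if uppercase else text


# After inlining and pruning: each call site keeps only the code path it
# actually exercised. Any genuinely shared pattern is now plainly visible
# before you decide what new abstraction, if any, to extract.
def menu_label(name: str) -> str:
    return name[:10]        # this site always truncated, never uppercased


def header_label(name: str) -> str:
    return name.upper()     # this site always uppercased, never truncated
```

Here the "shared pattern" turns out to be almost nothing, which is itself an answer: the two sites never needed a common function in the first place.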

Conclusion

Existing code is not sacred. It’s a tool. Recognize when it no longer serves its purpose, and when it doesn’t, don’t just keep working around it. The longer you wait, the longer it has to metastasize and harden the code around it. The sooner you get rid of it, the sooner you can replace it with something better.

It takes effort to overcome the psychological hurdles to be able to kill your darlings. But it’s an invaluable practice to keep code maintainable and ensure your project’s longevity.

  1. This is called the sunk cost fallacy: the feeling that abandoning something means losing everything you've invested in it. In truth, the time, effort, and resources are lost the moment they're invested, so they should not influence decisions beyond that point.↩

  2. I can't remember where I first heard this idea, but it's pretty widespread. Here's a decent article I found that explains it pretty well.↩