I have a pet peeve: people describing entirely predictable failures of a system as unforeseeable. When you miss those risks, you end up wasting resources on the wrong things and suffering worse outcomes to boot.

There’s a way out, though: understand the system, and understand who might make choices that lead to undesirable outcomes.

I frequently see articles where the journalist writes something like “Who knew that anyone would find this giant loophole?!?”

And the answer is that perhaps a lay reader wouldn’t have expected that result... But ask any of a thousand experts in the field, including many who created the system, and they would have predicted it with near certainty. They may even have written memos about all the likely negative consequences of leaving the loophole open.

If you’re a parent, you’ve experienced this in a very tangible way. Your kid asks if they can watch TV, and you say no. Minutes later you find them playing video games and are incredulous. They point out that you only said no to TV, and so they didn’t watch TV; they played video games. Totally different.

You make a note… fool me once, shame on you… But next time they ask for TV, you’re going to close that loophole.

Waiting for someone to find a way to exploit each loophole is one path to closing them in the future. But we don’t always want to start with a broken system just to figure out how to make a better one.

And so, I have a strategy for predicting unintended consequences before we see them in action. A mindset that means you don’t have to be an expert in a field to know whether someone dropped the ball, or whether this was just something we had to wait to discover.

My cheat code: Just ask yourself if there are any loopholes. That’s it. If you create this system, or make this rule, how could someone misinterpret it to benefit themselves?

The trickiest part of implementing this mindset is thinking carefully about all the different parties who might see an opportunity in manipulating your system. When you’re creating rules for your kids, it’s easy to see who you’re designing for.

In the larger world it can be a little harder to plan for all those potential loophole-finders. Still, it’s only human to put our own interests first. Is there a way to manipulate your system to make more money? To gain more fame? To have an easier life? If so, are you ok with the world those loophole-exploiters will create?

If you aren’t, you don’t need to wait for those inevitable failures to design them out of existence. For example, we have a rule in the US that donations to charities are tax-deductible, but donations to political campaigns aren’t. We like a world where people are incentivized to give to others, but not one where they give only for their own benefit.

With today’s technology, it pays to treat the algorithms themselves as one more potential loophole-finder. Overlooking them is a massive blind spot I’ve written about before, because finding loopholes is exactly what computers are best at.

In 2019 I wrote that “computers are always going to take advantage of the smallest oversights in your problem statement to win at the wrong task”.

That’s still true. If you leave the tiniest hole in even a massive system, algorithms will find a way to exploit it.
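To make that concrete, here’s a toy sketch (the scenario and numbers are my own invention, not from the original article). Suppose you want an agent to walk to a goal, and you specify the reward as “+1 for every step that gets closer to the goal” — forgetting to penalize backtracking. Even a dumb brute-force optimizer discovers that stepping backward for free and re-earning the forward reward beats just walking to the goal:

```python
from itertools import product

GOAL = 3  # hypothetical 1-D goal position

def reward(actions):
    """Intended: reward reaching the goal quickly.
    Actual spec: +1 for each step that reduces distance to the goal,
    0 otherwise -- backtracking costs nothing (the loophole)."""
    pos, total = 0, 0
    for a in actions:  # each action is +1 (forward) or -1 (backward)
        new = pos + a
        if abs(GOAL - new) < abs(GOAL - pos):
            total += 1
        pos = new
    return total

# A crude "optimizer": exhaustively search all 8-step plans.
best = max(product([1, -1], repeat=8), key=reward)
honest = (1, 1, 1, 1, 1, 1, 1, 1)  # just walk straight to the goal

# The search finds plans like (1, 1, 1, -1, 1, -1, 1, -1): reach the
# goal, then shuffle back and forth to farm the "getting closer" bonus.
assert reward(best) > reward(honest)
```

Nothing here is clever; the exploit falls out of blind search over a tiny action space. That’s the point: the loophole was sitting in the problem statement, and any optimizer strong enough to matter will find it.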
