We have all seen those cute pictures of a duckling that thinks the family dog is its mother. It happens because the dog was the first moving thing the duckling saw after hatching, so it imprints on the dog as its caregiver.
When it comes to data, we humans often act like ducklings.
There’s a well-known cognitive bias called the “anchoring effect” that can lead to bad predictions and decisions. The basic premise is that the first piece of data you hear “anchors” you to that number. So a $300 mug is exorbitant, but a $1,000 mug discounted to $300 is a steal.
This bias is why I am mindful of which information I take in first. If you start from the wrong anchor, you’ll make worse decisions as a result. Like buying a $300 mug.
In daily life I try to dodge the anchoring effect by asking myself what something would be worth to me before I look at the price tag. Sometimes you need to adjust your opinion after looking at the tag, but at least it’s a conscious decision.
I take the same approach when using AI. If I’m deciding how to do something, I come up with my initial plan first. Then I can choose to ask an AI model what it would do.
I see coming up with my own starting point as important because machine learning models tend to miss things. They lack context, and they struggle to understand nuance. Either one can completely change the right approach.
So, if you just let an AI model recommend what to do, you’ll end up anchored to a solution which might have a massive flaw due to the context or nuance the model ignored.
The downside of my mindset is that AI no longer looks like an easy solution. While some people are thrilled that it saves them time and energy, it usually costs me time: I have to do all my thinking up front, then ask the AI, then maybe put in substantial extra effort to compare the two outputs.
Usually I decide I’m happy enough with the plan I crafted without AI and just skip it entirely.
There are exceptions where I do find AI tools save me time. I’m planning a trip to Europe for my family. I did the initial planning to decide which cities we might want to visit, then let AI help me fine-tune that list based on actual train and plane schedules. It saved me more than 10 hours of trying to figure out a reasonable itinerary.
The other time I reach for AI is when I’m looking for the right word or term. I’ve thought about what I’m trying to convey, but it’s hard to search the entirety of the English language in my head to condense those concepts into something succinct. I usually ask the model for 20 examples to see which conveys the heart of my point best.
These are just a couple of examples. I’m sure there are other scenarios where these tools can be useful, especially if you can code something once and reuse it many times in the future.
But when it comes to one-off thinking tasks, I believe it’s important to do the thinking first. That’s because the AI will almost certainly miss the context and nuance that make something a bad idea.
Otherwise, you might think that a $300 mug is a great purchase. Or worse, you might discover you’re a duckling.