Cognitive Biases in Design

verb: bias; 3rd person present: biases; past tense: biased; past participle: biased; gerund or present participle: biasing

1. cause to feel or show inclination or prejudice for or against someone or something.

As designers of everyday products, we easily fall victim to our own cognition without ever realizing what is going on; our brains are excellent at manipulating trains of thought to fit our perspective of what makes sense in the world.

Biases are very active actors in our unconscious, leading us to make decisions that directly affect our end deliverables and that can tarnish experiences for users if we are not able to spot them during our design process and know how to avoid falling victim to them. There are a shocking 175 cognitive biases that may arise in every phase of our work, from creating to simply being part of a team. As designers, countering them effectively should be part of our skillset.

To understand how biases work, we must first understand how the mind works

In 2011, Daniel Kahneman wrote a fantastic book called ‘Thinking, Fast and Slow’ in which he breaks down the human mind and explains the characteristics that naturally make us biased as humans; Daniel divides the brain into two ‘systems’…

System one:

System one is where our cognition resides; it naturally works very fast to make everyday decisions in an almost automatic sense (e.g., turning the water knob to the right to turn the hot water on). However, because it is so fast, it is very vulnerable to errors off the back of reactive decisions.

System two:

System two, in comparison, works a lot slower than system one and requires a lot more effort to make reliable, complex decisions. The more we use system two, the more conscious we are when making decisions and reacting to situations.

So, where do biases sit & how do I tackle them as a designer? 🧐

If you haven’t guessed, our biases reside in system one; Daniel Kahneman makes a strong case that we need to utilize system two regularly to counteract biases and make unbiased decisions.

Working as a product designer over my career, I have found that some biases arise more often than others in a design process (so don’t worry, you don’t need to know all 175!), and I’ve broken down the critical biases into four phases:

  • While creating
  • During testing
  • When analyzing data
  • As part of a team

Let’s kick it off with the first phase: while creating 🚀

Confirmation bias

This is where we, as humans, confirm our perspectives at the first glimpse of validation while ignoring other vital pieces of information, without considering how valid they are or how they might invalidate our viewpoint. This massively impacts our design decisions because we risk dismissing an opportunity that delivers real value; we may think we know what users want, but without considering everything, we could miss the chance to add real value for the sake of a gut feeling.

Luckily for us, we can avoid falling prey to this bias by simply digging deeper to find the truth and continuously revalidating our initial thoughts and insights. Every idea we have should be run as an experiment to confirm our decisions, so we do not validate them ourselves but have them validated by our users.

Example of how we looked further into data to understand why users struggle to find articles on our webpage

Curse of Knowledge

This one sounds a little weird, as we never associate knowledge with being a flaw. Still, this bias tricks us into thinking we have all the answers, which leads us to overestimate ourselves and jump to conclusions in moments of decision. A lot of these biased assumptions come from higher management, and as designers, we may have experienced this before.

David Cameron fell victim to this bias with his assumptions around Brexit, thinking he knew what the people of the United Kingdom wanted.

This bias can be overcome when kicking off a project by laying out all of our assumptions and turning them into hypotheses to be validated as a team, with no design decision made until they are tested. We must also ensure we have quality insights from all of our defined user types to get an accurate understanding from their perspective and not just our own.

Pro-Innovation

We all love to innovate, and there’s no reason why we should stop innovating, unless we begin to assume new things are better and believe something new will definitely provide instant value. I’m sorry to say but NOPE – that is bullshit (new is not always better!). We tend to justify that to move forward we must change and innovate our products without questioning, “do we need to?”

The easiest way for us to overcome this bias is to stop, take a breather, and ask these questions:

  • Do we need to do it?
  • What if we don’t do it?
  • How else could we do this with less effort?
  • What KPIs do we have for this?

System Justification

‘It’s just how things are’ is a phrase a lot of us designers have heard at some point in our careers, and it is usually a result of system justification bias. It is the counter-bias to pro-innovation, which I have respectfully called ‘the bullshit bias.’ We unconsciously grow comfortable doing things the way they are done, leading us to believe that if a certain way of delivering something works, then that must be the way we continue to do it. This leads us to defend systems or processes at any cost, which can do the opposite and harm our deliverables.

By spotting this bias early, we can avoid being a victim of it by questioning everything as part of our process (this nicely becomes a team culture). Pilot schemes work great to tackle this bias, as they allow us to try out new methods, tools, and processes to determine what we should be doing and using; if something new works better, then great, adopt it, then see where it fails and repeat.

Biases during testing 🧪

Observer Expectancy

Our unconscious behaviors tend to screw up our participant interviews without us even realizing that we are doing it. These behaviors can be how we express ourselves through body language (e.g., flaring our nostrils) or how we verbally communicate with participants (e.g., sighing).

These reactions can influence our participants, leading them to develop biases on top of their own while testing our products, which ultimately undermines our results.

When running tests with participants, I always find it best to get someone who is not so attached to the design or product to do the interviews, so that no unconscious traits get picked up by participants. However, that’s not always possible, and sometimes we have to do the tests ourselves. In these cases, we should always ensure we lead with questions rather than prompting our side of the conversation, and as a bonus tip – keep your hands on the desk to avoid pointing unconsciously.

Anchoring Bias

When taking in new information, our minds will unconsciously compare pieces of information to make sense of them, even if we don’t mean to anchor on anything.

If you are shown fruit A and then shown fruit B, your mind will automatically compare the two, even if fruit A is an orange and fruit B is a banana; this is what we call anchoring bias. It can invalidate our user testing results when we present participants with content that can easily be compared, which can lead to participants being biased toward test A over test B even though test B may be better.

With this in mind, it’s best practice for us never to do head-to-head testing with the same participant and instead score each task on the prototypes with different participants. However, there may be times when we are restricted in participant numbers; in these scenarios, we should aim to present the critical features we’re testing in different places so that they don’t seem comparable to the participant.
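
To make that concrete, here is a minimal sketch of a between-subjects split (not from the original post; the participant IDs and prototype names are made-up assumptions): each participant is randomly assigned to a single prototype and scored independently, rather than comparing A against B head-to-head.

```python
import random

# Hypothetical participant IDs and prototypes - illustrative names only.
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
prototypes = ["prototype_A", "prototype_B"]

random.shuffle(participants)

# Between-subjects split: each participant sees exactly one prototype,
# so their task scores can't be anchored by a side-by-side comparison.
assignments = {
    prototype: participants[i::len(prototypes)]
    for i, prototype in enumerate(prototypes)
}

print(assignments)
# e.g. {'prototype_A': ['P04', 'P02', 'P06'], 'prototype_B': ['P01', 'P05', 'P03']}
```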

Selection Bias

Biases can also have a massive impact when we are selecting participants, leading us to not get a true representation of the market we are targeting. This usually occurs when we don’t do an appropriate job of selecting participants and end up with a pool of people who share similar biases and opinions, which gives us concentrated feedback that may not accurately represent the market we are looking to target.

To avoid this, we need to randomize our participant selection for testing to ensure we get the right balance of feedback on what we are trying to learn. We must ensure we recruit participants with different ideologies, backgrounds, etc., as defined in our target user groups.

Example of using participants from within the organization, even though we thought they would have no biases since they were unaware of the project.
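
As a hedged illustration of randomizing recruitment across defined user groups (the group names and counts below are assumptions for the example, not from the post), the idea is simply to sample from every group rather than from whoever is closest to hand.

```python
import random

# Hypothetical candidate pool, tagged with the user groups defined up front.
candidates = {
    "new_users":     ["N1", "N2", "N3", "N4", "N5"],
    "power_users":   ["U1", "U2", "U3", "U4"],
    "non_customers": ["C1", "C2", "C3", "C4", "C5", "C6"],
}

def recruit(pool, per_group):
    """Randomly sample the same number of people from every group so the
    final pool isn't concentrated around one set of opinions."""
    selected = []
    for group, people in pool.items():
        selected += random.sample(people, min(per_group, len(people)))
    return selected

print(recruit(candidates, per_group=3))
```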

Biases when analyzing data 🔬

Clustering Illusions

It’s so easy for us designers to spot patterns (I mean, that is our job lol). Still, sometimes without realizing it, we can overestimate the importance of patterns found in large pools of data while underestimating the importance of the grander picture at play, and this is what smarter people than I like to call the ‘clustering illusion.’

The clustering illusion ties in a lot with confirmation bias, in what I like to call ‘hand in hand bants’: instead of seeing the bigger picture, we focus on a small pattern we found, which may not be the whole story of why a product is succeeding or failing.

We need to ensure we assess the areas with no patterns and ask, “why are there no patterns here?”, view the broader plane, and get more data if needed to understand the problem as a whole.
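
As a small, hedged illustration of why this matters (a made-up simulation, not data from the post): even in completely random behaviour, streaks that look like meaningful patterns show up all the time, so an apparent cluster on its own proves very little.

```python
import random

random.seed(7)

# Simulate 200 page visits where a feature is clicked purely at random (50/50).
clicks = [random.choice([0, 1]) for _ in range(200)]

# Count runs of 5+ identical outcomes in a row - they look like "patterns",
# yet they appear reliably in random noise.
streaks, run = 0, 1
for prev, cur in zip(clicks, clicks[1:]):
    run = run + 1 if cur == prev else 1
    if run == 5:
        streaks += 1

print(f"Runs of 5+ identical outcomes in random data: {streaks}")
```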

Working as an unbiased design team 💪

Diversity is key.

I’ve talked a lot about diversity in regard to the participants we test our prototypes with, but way before that, we have to make sure we have variety in the talent we hire into our design teams so that we collectively have a broader perspective on problems, social challenges, and strategy.

Designers with backgrounds in:

  • content
  • research
  • visual arts, etc.

Feedback is key.

Feedback is a great way to avoid falling victim to our biases on a day-to-day basis while creating great products. For example, within the design team I currently work in, we do a design critique once a week where we all open ourselves up to feedback from everyone on:

  • What we’re working on
  • What everyone’s thoughts are
  • What we could try, etc.

These sessions help us challenge every decision we make and ensure that we avoid the classic bandwagon effect.

After shitting on biases so much, I have to say biases aren’t all bad, and we can utilize and leverage some to aid our product development and help us achieve our end-user goal 🤷‍♂️

For example,

The ‘recency bias’ is a great one to be aware of when we want users to recall recent information that helps influence actions going forward; in the digital space we sometimes call this ‘priming.’ Stock traders experience this bias daily, as they usually base their market expectations on how the market has been performing recently, which influences their next investment.

Another great one is the ‘IKEA effect,’ a cognitive bias where users place a disproportionately high value on products they created themselves. You can guess where the name comes from, and the same effect we get from building a cupboard with our own hands has a place in the digital world, like when configuring a car online or creating your own deal.

Empower yourself with knowledge – I’ve only covered 11/175 biases in this blog post

The Take-Aways

  • Be aware of cognitive biases at play in your work and slow down to see them – (there are so many more!)
  • Question everything (even yourself!) and source the truth that you might not notice at first
  • Consult people around you to broaden your perspectives
  • Know how users may use your products with their own biases

“The emotional tail wags the rational dog.”

Jonathan Haidt