Designing amidst uncertainty
Feb 8, 2024

How to approach work in the midst of exponential growth

It's obvious (trite?) but true: The world of AI is changing at an astonishing clip. It’s classically difficult for the human mind to comprehend exponential growth, even when we're witnessing it month by month.

An intuition pump for exponential growth (helpfully suggested by Claude): Place 1 grain of rice on the first square of a chessboard, then 2 on the second square, 4 on the third, and so on, doubling each time through the 64th square. How many grains of rice will there be? (Answer: about 18 quintillion, more rice than has ever existed on planet Earth.) Witness both the power of exponential growth, and your mind's inability to keep up.
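If you want to check that arithmetic for yourself, it's a one-liner (a quick sketch, nothing more):

```python
# 1 grain on square 1, doubling on each of the 64 squares: 2^0 + 2^1 + ... + 2^63
total = sum(2**square for square in range(64))

print(f"{total:,}")    # 18,446,744,073,709,551,615
print(f"{total:.1e}")  # ~1.8e+19, i.e. roughly 18 quintillion
```

The sum collapses to 2^64 − 1, which is why a single doubling chain on a mere 64 squares gets you to a 20-digit number.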

As practitioners, we're somewhere in the first rank of the chessboard with a few grains of rice, tasked with imagining the world on later ranks, piled high with rice. What will people want? How will they engage with this technology? What will the challenges and opportunities be? These questions are nearly impossible to answer, and yet this is exactly what designers are supposed to do: understand where the world is going, and have a point of view about how it should look. So how should we do this in the world of AI?

My guiding principle is extreme humility. I try to avoid anchoring my ideas to any specific timing or sequence of events. Instead I think about possibilities & dependencies, and how to stay agile and reactive. This shapes the way I suggest working in this world of extreme uncertainty:

  1. Work in short, iterative cycles to test specific hypotheses. Launch ideas quickly, then pause to re-evaluate based on how the world has changed. If you'll forgive a sports analogy, it's like a football playbook: You tie plays to specific situations ("If it's 3rd and short, we'll run up the middle"), then re-evaluate field position after each play. What has changed? What from your playbook feels right in this situation?
  2. Keep ideas flexible and modular. Avoid complex concepts with too many dependencies and requirements. Instead, build loosely coupled explorations that can thrive under a variety of potential futures. You probably have dozens of ideas, and you're right about some subset of them. But it's too hard to figure out a priori whether the winning combination is A+B+C or A+B+D, and so on. So don't make D reliant on having done C first.
  3. Stay obsessively close to users, on a very short cycle time. Really listen to their lived experience with your product, and with others'. What was true just a month ago can quickly become outdated by the exponential pace of AI.

A final note: You're probably reading this list and thinking "Right, yes, thank you Joel, this has been best practice for a decade, congratulations." I agree! We should have been doing this all along. But I'd ask: Have we really been working like this? Have we kept true humility about what will work? Have we talked to users constantly and actually listened? Or... have we convinced ourselves that our 18-month roadmap is pretty likely to work, and plowed ahead without really challenging our preconceptions? Have we looked for confirmatory evidence in our user research, metrics, and company growth?

In the comparatively stable technical landscape of the last decade, it was easy to drift from these best practices. The extreme environment of AI demands that we return to them with far greater rigor.
