Steve Krug’s classic book “Don’t Make Me Think” has been an invaluable resource for UX Designers for decades and still feels relevant today. The title alone is a useful mantra for any designer working towards creating usable experiences that pose as little user friction as possible. Emphasizing intuitive and effortless interfaces, Krug advocates for minimizing users’ cognitive effort. He stresses the importance of clear labeling, simple navigation, and the removal of unnecessary distractions in digital experiences.
Krug’s principles remain pertinent in today’s ever-evolving AI design landscape: clarity and clear interface design are as important as ever. AI tools are new to everyone – including those designing them. The sooner we can get to common design patterns that reduce the ambiguity of what AI tools can offer, the sooner we’ll be in a place where users won’t have to think quite as much to get the outcomes they are looking for.
Many of the principles outlined in Krug’s book follow the classic laws of UX – a series of guidelines backed by psychology studies and research, which provide a framework to guide UX design and ensure that the user experience is effective and enjoyable. Of these laws there are a few in particular that seem relevant in our current evolution of the AI tool interface: Fitts’s Law, Jakob’s Law, and Hick’s Law.
The idea behind Fitts’s Law is the relationship between the size and distance of a target and the time it takes to reach it. In UX design, this suggests that larger and more easily accessible interactive elements, such as buttons or links, can enhance user efficiency and reduce errors.
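For the curious, Fitts’s Law is commonly written in its Shannon formulation as T = a + b · log2(D/W + 1), where D is the distance to the target and W is its width. The sketch below uses illustrative placeholder constants (a and b are normally measured empirically per device) just to show why large, nearby targets win:

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Estimate movement time (seconds) using the Shannon formulation
    of Fitts's Law: T = a + b * log2(D/W + 1).

    a and b are empirically determined, device-specific constants;
    the defaults here are illustrative placeholders, not measured data.
    """
    return a + b * math.log2(distance / width + 1)

# A large, nearby button is quicker to hit than a small, distant one.
near_large = fitts_time(distance=100, width=80)
far_small = fitts_time(distance=800, width=20)
print(near_large < far_small)  # True
```

The exact constants don’t matter for the design lesson; what matters is that predicted time grows as targets get smaller and farther away.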
The prevailing AI feature that is booming right now is the chatbot, exemplified by the pioneer and arguably most popular AI tool, ChatGPT. In theory, you would expect the ChatGPT interface to crush Fitts’s Law: it’s quick and simple to get from prompt to output from a single, giant input box. But in order to gain efficiency in the task, the user is required to know exactly what they need to ask and how to ask it within a prompt. All of a sudden the “distance” to the target feels miles away.
Some things that could alleviate this “distance” would be to incorporate UI/UX design patterns that users are more accustomed to. Google, for instance, lets you type short, often cryptic words and phrases into its search box, and what you get in return is pages and pages of potential answers that you simply have to click on. This is what crushing Fitts’s Law looks like: less typing, less thinking, and more simple clicking.
Jakob’s Law suggests that users’ expectations are shaped by their past experiences with similar products. As mentioned earlier, common design patterns and conventions that make users feel comfortable and confident when interacting with a new interface should always be favored.
The problem here is that there isn’t much to pull from in past experiences with AI tools. We’re a little bit in the wild west with new ideas, conventions, and patterns emerging daily. It’s similar to the early days of the internet where there weren’t common ways to perform basic tasks like website navigation and UI/UX experimentation was rampant. The good news is that this will all eventually settle down and best practices for UX in AI will fall into place. In the meantime, all we can do is keep the best interests of our users in mind and try not to fall for incorporating trendy UI/UX tactics that may not be ready for prime time.
Hick’s Law states that the time it takes for a person to make a decision increases with the number and complexity of choices they have. In UX design, reducing cognitive load and providing clear, concise options can improve decision-making and user satisfaction.
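Hick’s Law is commonly expressed as T = b · log2(n + 1), where n is the number of equally likely choices and b is an empirical constant. The sketch below uses an illustrative placeholder for b simply to show that decision time grows logarithmically, which is also why an open-ended prompt box, where n is effectively unbounded, is so demanding:

```python
import math

def hick_time(num_choices, b=0.5):
    """Estimate decision time via Hick's Law: T = b * log2(n + 1).

    b is an empirical constant; 0.5 is an illustrative placeholder,
    not a measured value.
    """
    return b * math.log2(num_choices + 1)

# Decision time grows logarithmically, not linearly, with choices:
print(hick_time(3))   # 1.0  (log2(4)  = 2, times b = 0.5)
print(hick_time(31))  # 2.5  (log2(32) = 5, times b = 0.5)
```

Ten times the options does not mean ten times the decision time, but the cost still climbs, and a blank prompt offers no n at all for the user to reason over.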
With regard to AI tools, we run into some of the same problems we discussed with Fitts’s Law. If our interface is reliant on the quality of user prompts where the options are infinite, then we’re in trouble. Frustration will build quickly if users feel like the AI tool is not understanding them, providing answers that are off-base, too obvious, or downright false. And if the AI tool is not there to catch this frustration and offer a way back, it’s a recipe for rage-typing.
This is when it becomes really important to utilize all the tried and true UX techniques we’ve relied on for ages: journey maps, empathy maps, personas, etc. In order to craft a great experience with AI we first need to fully understand what that could look like. What are the user’s pains, gains, and jobs? What are common use cases for interacting with your tool and what do those journeys look like? What is the user feeling at each stage of the journey? And most importantly, we need to understand what good outputs look like.
Admittedly, there is too much variation in the outputs of AI to reliably craft an exact experience, but that doesn’t mean you and your team shouldn’t have a full understanding of what a good output should look like. While you can’t fully tame the AI brain, there are enough levers to guide it in the right direction.
It’s a scary and exciting time now that AI has disrupted pretty much everything. We have so few reference points to guide us in designing new AI tools, but it’s also exciting because the possibilities are endless. The good news is that the UX practices that got us here can still help us. The laws of UX don’t only apply to UIs on screens; they are basic psychology principles that will continue to be relevant as long as there are humans interacting with technology. And if all else fails, just keep repeating Steve Krug’s timeless mantra: “Don’t make me think.”