
Case Study
Checkout Redesign
The Problem
When I worked at O’Reilly Auto Parts, I was presented with an opportunity: we had high bounce rates in our checkout step. That’s not unusual in an e-commerce setting, but it was an invitation to explore. We had competitors, and if our checkout process caused too much friction, we could easily lose customers to a better-designed site.
Exploring Data
First things first: understand the problem. I pulled data from ForeSee covering the previous several months, including demographics, behavior, preferred fulfillment method, customer familiarity with car parts and with shopping online, site familiarity, and more. That gave us a picture of the types of users we were working with.
First Design Attempt & Usability Test
The first design saw a few improvements:
Streamlined shipping selection
Added a “Use Address” button for address validation
Removed the required fulfillment method selection on the product landing page (PLP)
Removed the review step, replacing it with an expandable cart summary at the top of the checkout page
Altered the payment inputs, including gift cards
Obviously, any UXer worth their salt will want to test their design. However, as our department was brand new and the company was unfamiliar with UX practices, we had zero budget. But that wasn’t going to stop me. I decided to do some guerrilla usability testing. I wrote a scenario and got permission to recruit employees from areas completely unrelated to IT or marketing, mostly finance and sourcing. Since I was both the designer and the scenario writer, I pulled in a more junior designer with an interest in UX research to conduct the interviews so I wouldn’t skew the results. We chose six participants with the goal of identifying the most glaring usability issues.
To test this version, we also used the simplest possible cart. There would be only one item, which meant fulfillment options and delivery dates would be simplified. We found a couple of potential usability hiccups to correct.
Second Redesign
The second design incorporated the changes identified in the first round of testing: slightly taller buttons, some adjustments to the cart summary, and other small tweaks along those lines.
This usability test was similar to the first: same scenario, same number of participants (though not the same participants). It went very smoothly, which was exactly what we hoped for, since this was the simplest scenario a potential customer could encounter. With the base design hammered out, it was time to pressure-test it.
Third Round of Testing – Pressure Testing
This round incorporated one of the more difficult patterns a potential customer may encounter: split shipping. That is, some items can be delivered, but the order may include an item like a car battery which, due to the nature of the item, can only be picked up in store. It went fairly smoothly, though the tests took longer due to the complexity of the task. We gathered data on the design itself, as well as on what participants would do and expect next (wait for an email, print the order confirmation, etc.). We collected Single Ease Question (SEQ) scores and asked participants about their impressions of the process’s strengths and their confidence in knowing when items could be picked up, what next steps to take, and so on.
Presenting Our Findings
Presenting this was one of the most rewarding experiences of my early career. I got to share my work in front of a room of tech and marketing managers and introduce them to the company’s first-ever usability research. And I think they immediately saw the value.
We also shared additional areas of opportunity identified by our usability tests that were outside the scope of the checkout redesign, such as better product suggestions, improvements to the rewards integration, and surfacing the product pickup time earlier in the process than the cart page.
You can read the full, final presentation I gave on the topic here.