I want it that way: online produce shopping

Summary While online produce shopping provides delivery convenience, it lacks the sensory experience that shoppers take for granted in brick-and-mortar stores. In this project, my teammates and I researched the domain of grocery shopping and proposed additions to AmazonFresh's existing interface.
* This was a semester-long class project and is in no way affiliated with AmazonFresh.

Contributions As a UX researcher, I helped draft research plans, facilitate user study sessions, and create and analyze surveys. Additionally, I took the lead on creating interactive desktop and mobile prototypes.

Team Members Meghan Galanif, James Hallam, Ashok Krishna, Sijia Xiao

Our process

1: Exploring the problem space. Starting with the landscape of grocery shopping

The increasing supply of grocery delivery services piqued our interest in understanding this space. We asked two broad questions to start:

1.1 Conducting secondary research to look at the products being purchased during grocery shopping.

Particularly, we considered the categories and characteristics of groceries. Two things stood out:

  1. Produce varies from item to item. Pick any two bags of Hot Cheetos and they probably taste the same. That's not the case with produce - two apples in the same bin can differ widely in appearance, ripeness, and taste.
  2. Produce often isn't durable. A single smash or drop can mean the item has to be thrown away.

How might these product characteristics influence the buying journey? We realized how notoriously selective Americans are about their produce. In fact, around 60 million tonnes of produce are rejected per year, partly due to our "obsession with the aesthetic quality of food." This raised an interesting question: how does this selectiveness play a role in produce shopping? We set out to understand this by considering two contexts in which a majority of produce shopping happens: in-store and online.

1.2 Conducting observations to understand the contexts and how they influence shopper behavior.

We conducted observations and analyses of how shoppers purchased produce in-store and online. For in-store: we spent some time in a few grocery stores (disguising ourselves as indecisive customers when, in fact, we were looking not at the produce but at the shoppers). For online: we went through some websites and conducted task analyses. Additionally, we took notes on the environments (store layouts and UI/IA) so that we could compare and contrast them.

How the collected data evolved from sketches to refined artifacts and high-level takeaways.
Affinity mapping helped guide us from one stage to the next.

Looking at the context comparison summaries, we noted that a major drawback of online produce shopping is that customers don't get to interact with produce items the way they would in a physical store. No scrutinizing of cosmetic blemishes, no squeezing. Further, customers wouldn't know what they'd be receiving until the package arrived at their doorsteps. What if the bananas were too ripe? What if the strawberries were too tiny? These seemed at odds with our desire to pick and choose certain produce items. Our question, then, was this: online produce shopping lacks the sensory experience customers get in stores. What does the buying experience look like when customers don't have control over what they receive?

2: Taking a deeper dive. Addressing specific research questions

Once we had a better understanding of the problem space, we brainstormed different ways to address the high-level question. In particular, we decided to focus on AmazonFresh, as its market saturation piqued our interest. We considered what we knew and what more we wanted to know, then broke it down into three smaller questions. Here's a summary before I dive into each:

2.1 What influenced people to shop online?

I led the effort on administering a survey and analyzing its 16 responses. We decided on a survey here because it allowed us to get data from more participants within a shorter time period. The survey consisted of ranking and multiple-choice questions that focused on shopper preferences. Our prior research gave us enough knowledge to craft answer choices, and these closed-ended questions were easier to respond to.

Our key findings included the following:

2.2 How did people buy from AmazonFresh?

While we had conducted task analyses and outlined the journey, we wanted to hear from customers as they were making a purchase. We decided to run contextual inquiries, as they allowed us to observe participants in their natural habitat and ask follow-up questions in real time.

The 3 sessions resulted in 49 notes that were grouped into 12 thematic buckets.

Our key findings included the following:

2.3 How did customers feel about their purchase?

This was a final push to cast a wider net in terms of data collection, and we decided to utilize what was already out there - AmazonFresh reviews! They'd provide us with authentic and timely customer reactions. Sijia took the lead on sampling 100 reviews across 25 products, manually coding each review with keywords and sentiments.

See this example:
"The bananas came the next day from Amazon Fresh in good conditions and very tasty. No brown spots inside or outside. I recommend. I'll buy again. 4 stars :)"

We coded this review with taste-positive and appearance-positive. We then compared the ratio of positive to negative sentiments for each keyword. What stood out to us were the significant percentages of negative sentiment about the received products (shaded in red below):

Positive vs. negative sentiments across seven characteristics of AmazonFresh products
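As a rough illustration of the tallying step, here's a minimal Python sketch of how per-keyword positive/negative ratios like the ones charted above can be computed from coded reviews. The reviews and labels below are made-up placeholders, not our actual sample, and this isn't the exact process we used - just the idea.

```python
from collections import Counter

# Each review gets one or more "<keyword>-<sentiment>" codes, e.g. the banana
# review above was coded taste-positive and appearance-positive.
# These entries are hypothetical placeholders, not our real data.
coded_reviews = [
    ["taste-positive", "appearance-positive"],
    ["ripeness-negative", "packaging-positive"],
    ["appearance-negative", "taste-negative"],
]

counts = Counter()
for codes in coded_reviews:
    for code in codes:
        keyword, sentiment = code.rsplit("-", 1)
        counts[(keyword, sentiment)] += 1

# Share of negative mentions for each keyword.
for keyword in sorted({kw for kw, _ in counts}):
    pos = counts[(keyword, "positive")]
    neg = counts[(keyword, "negative")]
    print(f"{keyword}: {neg / (pos + neg):.0%} negative ({neg} of {pos + neg} mentions)")
```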
2.4 Synthesizing research results into design ideas

Preference specification
Both primary and secondary research suggested that shoppers had certain expectations of their produce, which might not align with those of AmazonFresh employees. How might we give shoppers a way to indicate these preferences?

Reviews
We compiled the review-browsing patterns we had observed during the contextual inquiry sessions into the following action items: first, allow filtering and sorting of reviews based on locality; second, provide keyword summaries of all reviews.

Problem reporting
Given the overwhelming number of negative reviews, we wondered if customers were using reviews as a way to report problems. How might we design the user flow so that it prompts them to report a problem instead of leaving a negative review?

3: Design iterations. Sketches, prototypes, and evaluations

3.1 Paper prototype

We created sketches to demonstrate our solution ideas and quickly tested them with 4 participants. The primary goal was to see how participants interacted with the added features.

The sessions revealed some issues with the sketches. For example, the flow was designed so that when a user left a one-star review, they would be prompted to report the problem. Participants mentioned that this flow wasn't transparent enough and were confused as to why there was no option to report a problem directly; to address this, we surfaced problem reporting as its own feature in the digital prototypes.

Paper prototype
3.2 Digital prototypes

The paper prototype worked great as a proof of concept. We then moved on to Sketch to improve the fidelity, which allowed us to get more targeted feedback. I led the effort on creating these screens. Once we had a clear idea of what the final interface might look like, I created interactive prototypes using Justinmind. You can view our desktop prototype here and the mobile prototype here.

Mobile interface
3.3 Prototype evaluation

Expert evaluations
Expert evaluations helped us check the design against interface and interaction standards. We conducted 3 cognitive walkthrough sessions; each expert participant was given the tasks of selecting a specific type of banana, using the review features, writing a review, and reporting a problem. The sessions didn't reveal major issues with the prototypes, so we decided it was time to put them in front of users.

Moderated user testing
We conducted 5 moderated user testing sessions and administered the System Usability Scale (SUS) questionnaire. We wanted these participants to emulate what actual users would do, so we gave them a looser set of instructions. Each participant was also asked to think aloud as they interacted with the prototype.

This time, we discovered more bugs: certain interface elements weren't updating on click, there were caching issues, and so on. It was fascinating to see the robustness of the prototype being challenged. Despite these bugs, all participants rated both the desktop and mobile prototypes as passing by SUS standards. Our guess is that since the interface closely mimicked Amazon's, participants felt comfortable and confident navigating the prototype.
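For context, here's a quick sketch of how standard SUS scores are conventionally computed. The scoring formula itself is the usual one; the example responses below are invented for illustration, not our participants' data.

```python
def sus_score(responses):
    """Standard SUS scoring: ten items rated 1-5; odd items contribute
    (response - 1), even items contribute (5 - response); the sum is
    scaled by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Invented example, not real participant data.
# Scores above roughly 68 are commonly treated as above average.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```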

Additionally, participants were quite happy with the ripeness selector, describing it as "most useful" and a feature that other produce delivery services lacked.

4: Conclusion & lessons learned

In this project, we applied many different research methods to dive deep into the domain of grocery shopping. It was interesting to "study" an activity we were all so familiar with. By noting the differences between the online and offline contexts, we worked to compensate for the lack of sensory experience shoppers get when they shop on AmazonFresh. Finally, here are some reflections:

Piloting surveys is a must
After double- and triple-checking the survey on Qualtrics, I thought it was ready to go. The first person who took it immediately told me that he thought by "produce" we meant "product," so he was unsure how to answer some questions. To fix this, we added a brief definition/explanation to avoid any confusion.

Using mixed methods is valuable
I feel that we couldn't have completed the project without doing so; plus, the results we got from different methods - such as the combination of cognitive walkthroughs and moderated user testing - complemented one another.

Building team rapport is important
Towards the beginning of the project, I had the tendency to jump to project discussions immediately. James didn't - he'd always take the time to ask each person how we were doing. I grew to appreciate this and started doing it as well; looking back, working with this team was probably the turning point. (I also miss doing coffee and pizza runs with them).

P.S. We have a 149-page write-up report and a presentation deck. Feel free to reach out if interested.

