GalleryPal: Self-Guided Tour Assistant

Make the most out of your art trip

UX / UI   |   Design Sprint

Problem Space

GalleryPal is a startup that aims to improve the experience of viewing art at galleries and museums. They want to develop a mobile application that is easily accessible to viewers as they move through the exhibition space and helps them get the most out of their visit.

Goal

Design a mobile application that helps visitors learn more about the art while having a smooth and pleasant viewing experience.

Role

I was brought on board as a UX designer to complete a one-week design sprint for the product.

Time

November 2022

Day 1: Understanding

“I really enjoy looking at art, but sometimes I feel like I’m missing out on the full experience by not knowing any background information or context.” — anonymous interviewee

When consolidating the user research materials, I found that the biggest pain point is that people feel they didn’t make the most of their art trip because they knew so little about the artist’s background or intentions while viewing the work. The most readily available solution is to Google the artist, but this often leads to a tedious, lengthy read that gets in the way of the in-person viewing experience.

I created an empathy map to flesh out the persona in more detail.

(Empathy map)

After carefully studying the persona and interview footage, I listed three How Might We (HMW) questions to guide the design process:

1. How might we provide visitors with useful information without hindering their viewing experience?

2. How might we personalize their viewing experience?

3. How might we keep them engaged throughout their trip?

Reflecting on these questions, I then sketched a user map to visually represent the journey a visitor would take through the app.

(User map)

In addition to defining the problem statements, I also listed some challenges and ways things might go wrong:

• Giving too much information about an artist or a piece could distract the viewer from actually experiencing the piece.

• If the app is too complicated to use, viewers might simply give up on it.

• Accessibility is key. The app needs to accommodate people with different abilities and needs.

Day 2: Sketching

To start the day, I conducted a lightning demo, looking at existing products that use a mobile or web app to enable self-guided tours and enhance the viewing experience. I found 3 direct competitors that address the same audience – visitors at museums and galleries – and 3 indirect competitors that provide similar services for other spaces, such as national parks, historical sites, and nature reserves.

I wrote down the features and highlights of each product on sticky notes, then made an affinity map to find patterns and trends. All of the notes fell into four categories – Navigation, Audio, Multimedia, and Extra Features.

Here are the main takeaways from this step:

• Most solutions take an audio-first approach to the tour design

• There are different ways to help visitors navigate the space, which fall into two broad types: curated routes and spontaneous routes

• The audio can be triggered in different ways: by searching for an item number, scanning a QR code, or using location tracking or proximity sensors (a sketch of these trigger types follows this list).

• To give visitors more information and increase accessibility, some products use multimedia in the tour description: images, video, transcriptions, animated illustrations, etc.
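To make these trigger patterns concrete, here is a minimal TypeScript sketch of how a tour app might model the different audio triggers and dispatch on them. All names, the endpoint, and the response shape are hypothetical illustrations, not details from any of the products reviewed.

```typescript
// Hypothetical model of the audio-trigger methods surfaced by the lightning demo.
type LookupTrigger =
  | { kind: "itemNumber"; value: string }          // typed in by the visitor
  | { kind: "qrCode"; payload: string }            // scanned next to the piece
  | { kind: "location"; lat: number; lng: number } // GPS / indoor positioning
  | { kind: "proximity"; beaconId: string };       // e.g. a BLE beacon near the piece

interface TourStop {
  id: string;
  title: string;
  audioUrl: string;
  transcript?: string; // multimedia extras double as accessibility aids
}

// Resolve a trigger to a tour stop; each branch builds a different query
// against a single assumed endpoint (api.example.com is a placeholder).
async function resolveStop(trigger: LookupTrigger): Promise<TourStop | null> {
  const query =
    trigger.kind === "itemNumber" ? `number=${trigger.value}` :
    trigger.kind === "qrCode"     ? `qr=${encodeURIComponent(trigger.payload)}` :
    trigger.kind === "location"   ? `lat=${trigger.lat}&lng=${trigger.lng}` :
                                    `beacon=${trigger.beaconId}`;
  const res = await fetch(`https://api.example.com/stops?${query}`);
  return res.ok ? ((await res.json()) as TourStop) : null;
}
```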

Next, I looked back at the user map I created on Day 1 and chose the most critical step to be my main screen: the “Look it up” step, in which the visitor looks up an item they find interesting in the app to learn more about it. Then, I used the Crazy 8s method to create eight quick sketches (eight different versions) of the main screen.

In each sketch, the visitor identifies an art piece and triggers the audio description in a different way: searching by category/gallery/artist, looking up the item number, or using image recognition, a proximity sensor, or GPS tracking.

Out of the 8 screens, I decided on the one that uses image recognition to look up an art piece. In a crowded gallery, it can be difficult to get close enough to a piece to find the item number next to it on the wall, so visitors need a solution that lets them identify an item from a distance.

Moreover, being able to look up an item without getting close also gives visitors the mobility and freedom to walk around and view the art from whatever angles and distances they wish.
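As a rough sketch of how this could work on the client, here is a hypothetical image-lookup flow in TypeScript. The /recognize endpoint, the ArtPiece shape, and the function name are assumptions for illustration; the real flow would depend on the recognition service chosen.

```typescript
// Sketch of looking up a piece from a photo taken across the room.
interface ArtPiece {
  id: string;
  title: string;
  artist: string;
  audioUrl: string;
}

async function lookUpByPhoto(photo: Blob): Promise<ArtPiece | null> {
  const form = new FormData();
  form.append("image", photo, "capture.jpg");
  // POST the photo to an assumed recognition endpoint.
  const res = await fetch("https://api.example.com/recognize", {
    method: "POST",
    body: form,
  });
  if (!res.ok) return null; // no confident match found
  return (await res.json()) as ArtPiece;
}
```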

Day 3: Storyboarding

On the third day, I revisited the ideation sketches from Day 2 and iterated on the main screen a few times to work out all the necessary information and UI elements it should include.

Then, I expanded on the main screen and created a 9-panel storyboard illustrating all the steps the user needs to take to complete the primary task – looking up an item they find interesting and learning more about it in the app.

I then cut out all the screens so I could move them around to form the narrative, drawing out the interactions from screen to screen. This became the basis for the clickable digital prototype I developed the next day.

Day 5: Testing

On the last day of the design sprint, I conducted think-aloud user tests with 5 participants, all of whom are interested in art and design and are frequent visitors to galleries and museums in New York City.

“Why would I use any other search functions when I can simply take a photo of the piece and find it right away?” – user testing participant

Overall, I received a lot of positive feedback on the image search function, which participants considered the most convenient way to look up an item compared to traditional methods like keyword search or entering item numbers. At the same time, they appreciated that alternative search methods exist, pointing out that people less familiar with this newer technology might find the traditional methods more intuitive.

However, it took some participants a while to discover the image recognition feature. One important reason is that the “Look Up” tab shares the same magnifier icon as the text search bar on the home screen, leading people to assume the two serve the same function and see no need to tap the “Look Up” tab.

(Usability issue)

Another issue is that when searching for an item through image recognition, only the “Closest match” appears. “What if this isn’t what I’m looking for and I want to see other options?” one participant asked. In my next iteration, I addressed this by also showing alternative results, leaving room for recognition errors (sketched below).

(Usability issue)
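One way to sketch this fix, assuming the recognition backend can return ranked candidates with confidence scores rather than a single hit; the shape and threshold below are illustrative, not from the actual design:

```typescript
// A match candidate as it might come back from the recognition backend (assumed shape).
interface Match {
  pieceId: string;
  title: string;
  confidence: number; // 0..1, as scored by the recognition model
}

// Split ranked results into a “Closest match” plus visible alternatives,
// so a wrong top hit no longer dead-ends the visitor.
function presentMatches(matches: Match[], minConfidence = 0.3) {
  const usable = matches
    .filter((m) => m.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence);
  return {
    closest: usable[0] ?? null,
    alternatives: usable.slice(1, 4), // show up to three fallback options
  };
}
```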

“The text is so long. I don’t want to read it. Let me listen to the audio instead.” – user testing participant

All participants had the same reaction when they tapped the “Read More” button: they didn’t want to read it. “It’s good to know that it’s there, though,” one participant said, “but I’d rather listen to the audio than read the long text.” While this validates the assumption that people prefer audio over text while viewing art in a gallery, it also suggests that the text that remains should be balanced with other visual elements.

(Usability issue)

In short, the core idea of the design solution proved successful – people enjoy using image search to look up a piece. What can still be improved is the information architecture of the app, so that the image search function is more obvious to users.

Day 6: Revising

Based on the feedback from the user tests and my own reflections, I decided to add another day to the design sprint to redesign the screens that needed improvement.

Reflection

It’s fascinating to see how much one can learn from rapid user testing, and how that feedback can drastically improve a design in a short time, especially at the early stage of product development. However, given the low-fidelity nature of the prototypes, it’s crucial to decide which aspects of the design to focus on when conducting the tests, so that neither the designer nor the participants get distracted by underdeveloped UI elements or preliminary features that will naturally improve as the product moves to the next stage.

See the full project in Figma →