Zeekr HMI platform EU usability evaluation
Usability Evaluation of Zeekr 001 CSD (HMI 1.5) with real customers
Project overview
Assess the usability of the HMI 1.5 platform by understanding how car owners navigate and interact with it. The study will involve Zeekr 001 owners, featuring a semi-structured interview and open discussion to uncover potential usability issues. The goal is to gather user feedback for future improvements.
Responsibility: User Research, Qualitative/Quantitative Analysis
Keywords: User research
Introduction
This project focuses on evaluating the usability of the HMI 1.5 platform (currently equipped on the Zeekr 001) to ensure that car owners can effectively navigate and understand its structure. By engaging real Zeekr 001 customers in a semi-structured interview, we aim to identify potential usability challenges and gather valuable user insights. The ultimate goal is to uncover areas for improvement and open a discussion about enhancing the user experience. With a participant group balanced in gender and spanning a variety of age ranges, we will gain a comprehensive understanding of how different demographics interact with the system. The findings from this study will guide future updates and improvements for the Zeekr 001 HMI.
Guidelines we follow
- Preparations: Template
- Defining the problem, objectives & scope
- Identifying your participants
- Survey question formats
- Designing the survey
- Reviewing the survey: Best practices
- Pre-launch recommendation
- Analyzing the data
- Findings & recommendations
- Communicating results
- Iterating & refining
Main goals of UX research at Zeekr Technology EU:
- Meet Users' Needs: Tailored to address specific requirements and preferences of our audience.
- Deliver a Positive User Experience: Ensure seamless and enjoyable interactions.
- Provide Significant Value: Offer clear benefits that justify their cost.
- Achieve High User Satisfaction: Minimize complaints and enhance overall contentment.
- Exceed User Expectations: Stay ahead of market trends and evolving consumer demands.
- Adapt to Consumer Behavior Shifts: Align with changes in market dynamics to remain relevant.
- Fit the Market Successfully: Ensure our offerings are well-suited to current market conditions.
- Maintain a Competitive Edge: Stay ahead of industry competitors through continuous innovation and insight.
Defining the problem, objectives & scope
We need to run a usability evaluation of HMI 1.5 (equipped on the Zeekr 001) to assess its usability. Beyond the general assessment, we also need to focus on the HMI improvements we have proposed in order to validate them. If they hold up, the test will give SNC more credibility as to why we should implement the improvements.
Goals
- To find out the issues on the Zeekr 001 CSD (HMI 1.5) that should be improved.
- To have an open discussion to get user insights from car owners.
Scope
- Zeekr 001 CSD (HMI 1.5) and DIM -> cover the most basic use cases
Setup
- Duration: 1 h
- Introduction & GDPR info + recording permission
- Go through the semi-structured interview outline + open discussion
Two test leaders
- Moderator
- Note taker
Identifying participants
Define the participants to ensure they represent those most likely to experience the problem. We need to segment our target audience into groups based on factors like ownership, roles, demographics, and archetypes, understanding that these groups may overlap.
By ownership
- External Zeekr owners: Independent Zeekr vehicle owners, harder to reach but less biased.
- Internal Zeekr owners: Zeekr vehicle owners with company ties (e.g., employees, families), easier to reach but potentially biased.
- EV owners: Electric vehicle owners who can offer insights into general challenges or needs.
- Potential EV buyers: Individuals considering buying an EV.
- Car owners: Anyone with a car, regardless of type, as issues like insurance or crash management apply to all.
By roles
- Drivers & passengers: All car users who may experience general car-related issues.
- Other car-related roles: Includes service technicians, insurance providers, etc.
By demographic
- Based on age, gender, income, location, and lifestyle.
By archetype
- Focus on attitudes, values, and preferences, such as interest in environmentally friendly features.
Considering practical constraints and available resources, we recruited 6 participants in Gothenburg, which allowed us to visit them and conduct the interviews in person. The age range is fairly even, and all participants are external Zeekr owners who have owned their car from 2 weeks to 6 months.
❓ Why is it reliable to base the usability evaluation on only 6 participants?
Research result from Nielsen
The best results come from testing no more than 5 users and running as many small tests as you can afford.
Research by Jakob Nielsen and Thomas K. Landauer, published in 2000: https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
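The cited article rests on a simple problem-discovery model; as a sketch, with N the total number of usability problems in the interface and L the share a single test user uncovers (about 31% on average in Nielsen and Landauer's data):

```latex
\text{found}(n) = N\,\bigl(1 - (1 - L)^{n}\bigr), \qquad L \approx 0.31
% n = 5: 1 - 0.69^5 \approx 0.84  (about 85% of problems)
% n = 6: 1 - 0.69^6 \approx 0.89  (about 89% of problems)
```

On this model, our 6 participants would be expected to surface roughly nine out of ten usability problems.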
Designing the survey
To create a problem validation survey, we structure questions carefully and align them with our objectives. This helps create a survey that validates the problem and informs both the product design and the value it delivers to our customers. We decided to conduct both qualitative and quantitative research, including semi-structured interviews, task-completion surveys, and subjective questionnaires. During the process, we designed four types of problem-impact questions:
Problem identification questions
These are designed to explore and validate whether a specific problem exists for the target audience.
- Frequency
- Specific aspects
- Severity
- Open-ended
Be specific:
- Instead of "Do you face challenges when interacting with CSD?" we ask "What challenges do you face when interacting with CSD?"
- Instead of "How bad is your experience on DIM?" ask "How often do you experience DIM issues while driving?"
- Avoid assumptions and use multiple formats: Use a scale to measure severity and an open-ended question to understand specific challenges.
- Contextualize the problem: Instead of "How do you plan to charge your car?" ask "During your last long trip, how did you plan to charge?"
Impact questions
These are designed to assess how a specific problem affects the target audience.
- Behavioral
- Emotional
- Productivity
- Financial
- Decision-making
Be specific:
- Instead of "How much does this problem affect you?" ask "How much does range anxiety...?"
- Use quantifiable measures: "How much time do you spend planning trips due to low battery?" with specific time intervals.
- Allow for different levels of impact: "On a scale of 1-10, how stressed do you feel when you can’t find a charging station?"
- Use real-life scenarios: "During your last long trip, how much did concerns about battery range affect your plan?"
Current solutions questions
These are designed to gather information on how the target audience is currently dealing with the problem or challenge that the product or feature aims to solve.
- Usage
- Effectiveness
- Satisfaction
- Challenge
- Willingness to switch
Be specific:
- Instead of "How do you manage your journey statistics?" ask "What apps do you use to monitor journey statistics?"
- Pair questions about satisfaction with questions about challenges to get a fuller picture.
- Cover multiple aspects: "Do you use other methods to manage charging?" with an open-ended response option.
- Gauge willingness to switch and allow for alternatives: "How likely are you to switch to a new solution if it could save you time?"
- Use real-life scenarios: "When planning a long trip, how do you currently decide where to stop for charging?"
Feature preference questions
These are designed to identify which features or aspects of a product or service are most important to the target audience.
- Ranking
- Rating scale
- Forced choice
- Conjoint analysis
- Importance-satisfaction matrix (sketched below)
Be clear and specific:
- Instead of "How important is navigation?" ask "How important is real-time traffic information in your navigation system?"
- Use a variety of question types: Use a ranking question to prioritize features, a rating question to assess importance, and an open-ended question for additional insights.
- Avoid overloading with choices: Limit ranking questions to 5-7 features at a time.
- Contextualize the features: "Imagine you are on a long road trip. How important would it be for your multimedia and entertainment system to keep you entertained?"
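To make the last format concrete, here is a minimal sketch of how an importance-satisfaction matrix can be collapsed into a priority list; the feature names and ratings below are illustrative, not our survey data:

```python
# Importance-satisfaction gap analysis (illustrative data, not survey results).
features = {
    # feature: (mean importance 1-5, mean satisfaction 1-5)
    "Real-time traffic info": (4.6, 3.1),
    "Voice assistant": (3.8, 2.4),
    "Third-party apps": (2.9, 3.5),
}

# Features rated important but unsatisfying (large gap) are prime
# candidates for improvement; a negative gap suggests over-serving.
for name, (imp, sat) in sorted(features.items(),
                               key=lambda kv: kv[1][0] - kv[1][1],
                               reverse=True):
    print(f"{name}: gap = {imp - sat:+.1f}")
```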
Review the survey
We designed the survey using a structured approach based on four types of problem-impact questions and targeted investigation areas. The process begins with a semi-structured interview, starting with demographic questions to gather background information, followed by questions related to general CSD usage scenarios. Key areas explored include climate control, settings, voice assistant functionality, Android Auto & Apple CarPlay integration, timers, charging features, third-party apps, and DIM/HUD concepts.
For each area, we included Problem identification questions and Current solution questions to better understand specific user behaviors, pain points, and challenges in routine driving contexts. After the initial questioning, participants are asked to complete a series of tasks to demonstrate their interaction flow with the system. If difficulties arise, we follow up with Impact questions to assess the severity and how it influences the overall user experience.
Once the task-based interview is complete, participants are given a subjective questionnaire to rate the system on a broader level. This is followed by an open discussion, aimed at identifying which features or aspects of the system are most critical or valuable to the users. This comprehensive approach allows us to understand user preferences, pinpoint key areas for improvement, and prioritize future design iterations.
Question structure
- What is your general experience with the XXX?
- What functions do you often use while driving?
- What do you think about the organisation and presentation of all elements in the XXX?
- Is there something in XXX that you would like to have, or that is missing for you?
- Have you edited/personalized XXX?
[TASK X] Can you show us how you use XXX?
Easy - Some difficulty - Did not manage - Never used before
[TASK Y] Can you show us how you use XXX?
Easy - Some difficulty - Did not manage - Never used before
[TASK Z] Can you show us how you use XXX?
Easy - Some difficulty - Did not manage - Never used before
- What did you think of them?
- Lastly, are there any other functions or features that you've found particularly useful and use often?
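As a companion to the outline above, here is a hypothetical sketch of how the task-outcome labels could be coded for later aggregation; the codes and function are illustrative assumptions, not our actual tooling:

```python
# Map the observed task-outcome labels (see the scale above) to ordinal codes.
TASK_OUTCOME_CODES = {
    "Easy": 3,
    "Some difficulty": 2,
    "Did not manage": 1,
    "Never used before": None,  # excluded from difficulty averages
}

def code_task_results(raw_results: dict[str, str]) -> dict[str, int | None]:
    """Translate one participant's outcome labels into ordinal codes."""
    return {task: TASK_OUTCOME_CODES[label] for task, label in raw_results.items()}

# Example: one participant's observed outcomes for tasks X, Y, Z.
print(code_task_results({"X": "Easy", "Y": "Some difficulty", "Z": "Never used before"}))
# -> {'X': 3, 'Y': 2, 'Z': None}
```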
⭐ Subjective rating questionnaire
Pre-launch recommendations
Before launching the survey, it's essential to ensure the results will meet our needs and be valuable to stakeholders. Preparing result pages in advance helps us visualize outcomes and refine questions if needed before investing resources.
We created our own. Here's how we proceed:
- Create one page for each objective defined at the start.
- On each page, include the questions in your survey that are designed to validate each objective.
- Define the format for presenting the results of each question to ensure it aligns with the objective of the page.
Before we launch the survey, we hold a review meeting with key stakeholders to present the objectives, questions, and potential outcomes, and gather their feedback. In our case, the key stakeholders are the Zeekr sales team and our internal UX researchers.
Involving stakeholders early ensures the survey aligns with business goals and captures meaningful insights. Their input helps refine the clarity and relevance of questions while identifying any gaps. Moreover, stakeholder buy-in fosters consensus, making it easier to act on the survey results.
Analyzing the data
After we performed the customer interviews, we collected a large amount of data and began data cleaning and preparation. Effective data analysis is critical in understanding user behavior, identifying key pain points, and informing future design decisions. We followed a comprehensive approach to cleaning, preparing, and analyzing both quantitative and qualitative data gathered from user research. The process begins with ensuring data accuracy through cleaning, followed by detailed analyses of responses, both structured and open-ended. By employing techniques such as cross-tabulation, thematic analysis, and severity assessments, we can uncover patterns, prioritize problems, and measure the market potential of solutions. Ultimately, this data-driven approach helps prioritize features and solutions that address the most impactful user challenges.
Analyze quantitative data (closed-ended questions):
- Collect and clean data: Ensure all survey responses are complete, removing any invalid or inconsistent data. Exclude unclear or missing data points.
- Categorize open-ended responses: Organize detailed responses into themes for easier analysis.
- Frequency and Likert-scale questions: Calculate mean scores, convert response counts into percentages, and segment by demographics to identify significant differences.
- Choice questions: Convert response counts to percentages to gauge preferences and identify trends.
- Rating and ranking questions: Prioritize items based on average scores.
- Cross-tabulation: Analyze variables together to find trends or correlations, such as linking frequency of use with frustration levels. (For example, cross-tabulate "How often do you...?" with "How frustrating is it when...?" to identify patterns such as higher frustration among more frequent users; see the sketch below.)
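A minimal sketch of these quantitative steps, assuming responses are collected in a pandas DataFrame; the column names and values are illustrative, not our survey data:

```python
import pandas as pd

# Illustrative responses (not real survey data).
df = pd.DataFrame({
    "usage_frequency": ["Daily", "Weekly", "Daily", "Monthly", "Daily", "Weekly"],
    "frustration_1to5": [4, 2, 5, 1, 4, 3],
})

# Likert-scale question: mean score and share of responses per level.
print(df["frustration_1to5"].mean())
print(df["frustration_1to5"].value_counts(normalize=True).mul(100).round(1))

# Cross-tabulation: frequency of use against frustration level.
print(pd.crosstab(df["usage_frequency"], df["frustration_1to5"]))
```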
Analyze qualitative data (open-ended questions):
- Thematic analysis: Group similar responses into themes (e.g., "navigation issues" or "safety concerns"). Count occurrences to identify common problems; a counting sketch follows below.
- Summarize ideal solutions: Identify recurring features or missing elements from responses to questions about ideal solutions.
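A small sketch of the counting step, assuming the open-ended answers have already been manually coded into themes (the theme labels are illustrative):

```python
from collections import Counter

# Each inner list holds the themes coded for one participant's answers.
coded_responses = [
    ["navigation issues", "widget layout"],
    ["navigation issues"],
    ["safety concerns", "widget layout"],
]

theme_counts = Counter(theme for themes in coded_responses for theme in themes)
print(theme_counts.most_common())  # most frequently mentioned problems first
```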
Analyze user behavior and problem severity:
- Determine the most common and severe issues based on user behavior and severity ratings.
- Prioritize problems to solve: Focus on high-severity problems that impact frequent behaviors, addressing these issues first; one way to score this is sketched below.
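One simple way to operationalize this prioritization, sketched under the assumption that each problem carries a mean severity rating and the share of participants it affects (the problem names and numbers are illustrative):

```python
# Weight severity by how widespread each problem is (illustrative data).
problems = [
    {"name": "locked home-screen widgets", "severity": 4.2, "affected": 0.66},
    {"name": "voice assistant misfires", "severity": 3.1, "affected": 0.33},
]

# Highest severity-times-reach first: widespread, painful problems lead.
for p in sorted(problems, key=lambda p: p["severity"] * p["affected"], reverse=True):
    print(p["name"], round(p["severity"] * p["affected"], 2))
```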
Measure impact and market potential:
- Assess impact: Evaluate how much a problem affects respondents and their willingness to pay for a solution to gauge market demand.
- Example: High stress from commute delays indicates a significant problem; willingness to pay for a fitness app shows market potential.
Prioritize solutions and features
- Feature prioritization: Focus on features that received the highest value ratings, and analyze open-ended feedback for new, unanticipated needs.
Based on the analysis methods above, we uncovered patterns in our interview results, grouped and prioritized the problems, and synced them with the work plan to proceed with solutions.
Findings & recommendations
After identifying key trends from the survey, we summarize the most common problems, how users address them, preferred solutions, and suggestions. In the recommendation, we highlight correlations or patterns, such as frustration linked to poor decision-making, and focus on the main insights for actionable conclusions.
- Be concise: Focus on the main points—what was the problem, how significant was it, and what action should be taken.
- Prioritize: Highlight the most important and actionable findings first, and then mention secondary insights.
- Quantify: Use numbers and percentages to make the findings more impactful.
Example of key finding summary:
- Prevalence: "66% of respondents found the widgets on the home screen useless and said they block the space."
- Impact: "Non-informative widgets take too much space and limit the flexibility of use."
- Design opportunity: "50% of respondents indicated they would like to edit the home screen and add informative widgets freely."
Before making recommendations, we revisited our survey’s original objectives—validating a problem, exploring opportunities, or understanding user preferences. We made sure the recommendations were actionable by linking them to findings, prioritizing based on impact and feasibility, and aligning with business goals, and kept them clear, data-driven, and targeted at the right teams for effective implementation.
To present the recommendations clearly, we used a format aimed at decision-makers: present the recommendations in a clear, actionable, and concise manner for stakeholders, and include the finding, the recommended action, the potential benefit, and the timeframe and necessary resources.
Example:
- Finding: 66% of respondents found the widgets on the home screen useless and said they block the space.
- Recommendation: Allow users to adjust the size of blocks on the screen. It's crucial to let users personalize the home screen more freely. Blocks such as EVA and weather should not be locked; users should be able to remove them as desired. Widgets like Media should not take so much space. The Navi widget should display a map while driving.
- Benefit: This will improve user satisfaction with the home-screen editing feature.
- Timeline: Implement in Q3, 2 developers.
Communicating results
With the findings and recommendations complete, the next step is presenting the research and translating user data into actionable strategies. From summarizing key takeaways to grouping insights into themes, the goal is to make complex data digestible while emphasizing the most impactful findings. Since our audience and stakeholder groups have different functions, tailoring the depth of detail to each audience, prioritizing key metrics, and providing opportunities for interaction are essential for engaging them. Finally, concluding with next steps, follow-up actions, and progress tracking ensures that the recommendations lead to real-world outcomes.
We utilized two approaches to communicate the results:
- For internal communication with in-house designers and the Chinese team, we share results: provide a clear, concise, and comprehensive document with all necessary details. This is the full study and takes about 1.5 hours to present fully.
- For management-level communication, we present results: focus on key highlights and valuable insights, keep the talk to 20 minutes or less, and have the full results document available for deeper exploration after the presentation.
To make the results more digestible, we organized our findings into different functional area categories, based on feature topics such as the CSD concept, the climate feature, the Media feature, and third-party apps.
In the presentation, we opened with the most important insights or key takeaways from our survey: short, concise points that highlight the major findings and any recommended actions. After presenting the findings, we allowed time for questions, feedback, and discussion, which helps clarify any points and ensure that all stakeholders are aligned. We then concluded the presentation or report with a clear outline of the next steps, including timelines, assigned responsibilities, and any follow-up actions needed.
Iterating and refining
Iterating and refining are crucial after analyzing and presenting survey results. By gathering feedback, we can continuously improve the survey and the actions taken based on its findings.
After communicating the results, we gathered feedback from stakeholders. This feedback can reveal new perspectives, highlight any missed areas, or suggest alternative approaches; we use it to refine our analysis and recommendations.
Key Areas for Improving the User Research Process
- Initiating the Research Plan: Improvements could be made in more clearly defining the hypothesis and research questions early in the process, ensuring alignment with the project's overall goals.
- Target Group Identification: While the target group was effectively identified, refining the selection criteria to include more diverse customer segments could provide more comprehensive insights.
- Scope and Methodology: The scope and methodology, including qualitative and quantitative approaches, could benefit from more structured alignment to ensure a balanced combination of data types is collected. Clarifying the tools and methods used for each part of the process would streamline the preparation phase.
- Tools and Setup: While effective tools such as GoPro, Microsoft Teams, and phone recordings were used, standardizing recording setups and ensuring backups for all equipment could improve the consistency of data collection. Enhancing the legal documentation process, including GDPR compliance and privacy considerations, is crucial for smoother execution.
- Participant Recruitment: Collaborating with internal stakeholders like the user researchers and the sales team for participant recruitment was effective, but the communication process with participants could be improved. Simplifying participant outreach, offering clearer rescheduling options, and ensuring more personalized touchpoints could enhance participant engagement.
- Interview Process: The on-site interview process could be optimized by standardizing how tasks are observed and recorded. Ensuring that all subjective ratings are digitized and organized immediately could improve efficiency.
- Incentives and Follow-Up: Offering personalized gifts tailored to participants' preferences or needs could increase engagement. Additionally, improving the follow-up process with participants post-interview would help maintain a positive relationship for future studies.
By addressing these areas, the user research process could become more efficient, participant-friendly, and aligned with the project's overall goals.
Key Areas for Improving the Results Analysis Process
- Data Organization: The current use of tools like Figma and ChatGPT for organizing and summarizing data is effective, but streamlining the transcription process could improve efficiency. Automating the categorization of topics and enhancing integration between Figma and other tools would ensure a smoother workflow.
- Result Coding: While coding task completion results and subjective ratings is important, establishing a standardized coding framework would ensure consistency across analyses. Automating parts of this process could reduce manual effort and increase accuracy.
- Video Editing: Using tools like iMovie and CapCut for video editing is efficient, but ensuring that all videos are edited for clarity, with accurate AI-generated subtitles, could further enhance the presentation of user feedback. Improving video editing workflows may also reduce time spent on revisions.
- Report Creation: Following Ursula’s user research template provides structure, but there is room for improvement in visual presentation and issue highlighting. Ensuring consistency in how similar issues are documented across participants could lead to more actionable insights.
- Feedback Loop: Incorporating feedback from the HMI team on visual presentation and planning could be more structured. A formalized feedback process from the HMI PO regarding recommendations and roadmap planning would help align insights with future development goals.
- Presentation of Findings: To optimize the presentation of findings to the UX team, a clear and accessible location for sharing the report is crucial. Additionally, addressing data anonymity and storage concerns, such as removing personal data like original videos and recordings by the end of 2024, is essential for compliance with data protection standards.
- Data Anonymity and Storage: Deciding on the long-term storage of anonymized material and ensuring proper data handling policies are followed will help maintain ethical standards. Clarifying where the report and data can be stored for easy access, while ensuring compliance with privacy regulations, is a critical area for improvement.
By refining these aspects, the results analysis process can become more efficient, compliant, and aligned with best practices for user research reporting.
Iteration is an ongoing process. Each cycle of analysis, communication, and refinement brings us closer to optimal results, improving the accuracy of our data, the quality of our solutions, and the overall effectiveness of our surveys.