Usability Testing and eLearning Design
The purpose of usability testing is to ensure the final product meets high standards. Testing confirms that the module functions properly and is easy to use, so learners do not get tripped up on elements of the course. It also shows whether the eLearning product effectively teaches the information learners need to understand and apply immediately upon completing the module. Finally, testing reveals whether the module was engaging and provides feedback that can improve the learning experience.
In this blog I will explain different aspects of usability testing as it relates to my project example from previous posts. The analysis of the case study for “QVR Logistics” identified a need to improve skills with three communication tools. To meet that need, I proposed an eLearning module that allows for asynchronous, flexible, and customized learning about the basics of those tools. The module is titled “Comparing Communication Tools” and mixes audio and video explanations with hands-on practice in all three tools.
Scope of Testing
Scope refers to the specific goals, deliverables, tasks, and boundaries of a project. It defines what is included in the project and what is not, and helps to set expectations for stakeholders.
The test will be completed on the 10-screen module “Comparing Communication Tools.” It will focus on several areas. The first is the functionality of all buttons, including buttons within each screen and the navigation buttons at the bottom right of each screen; testers will be encouraged to click every button to confirm it works. Different devices will be used during the test to ensure the layout and functionality work on both a PC and a mobile device. The test will also examine whether the accessibility features, such as the audio and video controls, are helpful and easy to use. Finally, the test will include a review of the module's analytics: the length of time spent on each slide, each assessment, and the entire course; learner scores on the summative assessment (did they learn the content needed); and the end-of-course survey results. The survey will ask how engaging the content was and whether learners feel confident they learned the information they needed from the module.
Test Objectives
Test Objectives are created to define the purpose and goals of a testing effort. Test objectives are specific, measurable targets that describe what needs to be accomplished through testing.
Understand how learners will navigate the module (focus on use of the pre-assessment to tailor the course and skipping to different sections)
Determine functionality of all buttons/videos/audio on different devices
Determine whether the accessibility features are helpful and easy to use
Determine how long it takes learners to complete each individual page
Determine how long it takes learners to complete the 10-screen module
Determine whether the module successfully teaches the required information
Understand whether learners feel the content and delivery of content is engaging
Understand learner confidence in the learning objective after completion of the module
Test Methodology
A test methodology is defined to establish a structured, systematic approach to testing. It provides the guidelines, processes, and procedures followed during testing to ensure the work is thorough, consistent, and reliable.
Components of the Test Type
The test will be remote to mirror the actual learner environment, since learners currently spend most of their working time remote.
The test will be unmoderated, again mirroring the experience of the module's target learners. Since the module is meant for asynchronous, autonomous learning, I want testers to experience the course that way and give feedback on that aspect of its design.
This will be an assessment-type test focused on satisfaction with the learning experience as well as the effectiveness of the module. Testers will be asked to click every button and attempt all assessments; at the end of the module they will complete a survey about the module and its design in relation to the information they were expected to learn.
About the Participants
Number of participants: 10
Requirements for eligibility: prior experience with Microsoft Outlook, Microsoft Teams, and Salesforce.
Qualifications of participants: current employee of QVR Logistics (preferably a former sales agent) who uses these communication tools in their work and is comfortable working and communicating remotely.
Current skills of participants: basic computer skills, comfort with taking an eLearning course, good written communication skills for the survey, and the ability to critically review a product or service.
Participant Training
Participants will be given a brief outline of the usability test describing the tasks they are being asked to evaluate critically. They will be asked to try every clickable area of each screen to ensure the items function and are appealing, and to test the accessibility and navigation features. Participants will be asked to work through the course as they would any other eLearning course, but also to try navigating back to previous sections. They will also see a copy of the end-of-test survey in advance so they can focus on those areas while working through the module.
Test Procedures
This test is remote and unmoderated; however, the initial explanation of the test will be done synchronously to ensure all participants understand its tasks and objectives. Participants will be asked to complete the test tasks within the eLearning module and then immediately complete the survey. They will be given a completion deadline along with instructions for reporting any major errors or issues to me. Surveys and module completions will be reviewed daily until the due date and compiled at the end.
Roles
Roles are defined for the collaborators supporting the project undergoing testing.
Data Recorder/Observer: This person receives messages from participants throughout the testing period. Because the test is unmoderated, observations will come from the analytics gathered during testing, the surveys at the end, and any participant comments along the way. This person compiles the information obtained through surveys, assessments, and documented comments, functioning like an observer given the unmoderated, remote nature of the testing.
Facilitator: This person delivers the participant training and relays any comments to the Observer throughout the process. They also send reminders to those who have not completed the testing as the due date approaches and work with managers to schedule testing time if needed. The person in this role also facilitates an end-of-test meeting.
Trainer: This is the person at QVR Logistics who manages the LMS and can upload the course for testing as well as send the course analytics to the Data Recorder to compile.
Usability Tasks
Usability tasks are specific activities or scenarios that users are asked to perform during a usability test to evaluate a product's usability. They are designed to simulate real-world situations users might encounter when using the product and to test its usability in those situations.
Usability test scenario: You are a sales agent at QVR Logistics, and you want to improve your communication with customers, coworkers, and management using the three communication tools available.
Navigate the eLearning module by following the course in whatever way feels most comfortable to you
Use the navigation buttons to go back to sections that were skipped because the pre-assessment tailored the experience, and reflect on whether that information should have been skipped (survey)
Click every button on screen to ensure functionality of pop-ups and video/audio features
Use the accessibility features and reflect on helpfulness and ease of use
Submit the summative assessment at the end of the module and review your score
Submit the test survey about how engaging the content and delivery of content feels
Submit the survey about your confidence in the learning objective after reviewing the module and assessment results
Usability Metrics
Two types of metrics will be gathered during the test to measure the test objectives listed above. The metrics are organized below into those two categories along with the objectives each metric supports.
Behavioral metrics include the time spent completing the course, whether participants used the accessibility features, navigation, and buttons within the module, and how often learners let the pre-assessment tailor the course. These metrics support the following objectives:
Understand how learners will navigate the Module (focus on use of the pre-assessment to tailor the course and skipping to different sections)
Determine functionality of all buttons/videos/audio on different devices
Determine whether the accessibility features are helpful and easy to use
Determine how long it takes learners to complete each individual page
Determine how long it takes learners to complete the 10-screen module
Attitudinal metrics capture how participants feel about their overall satisfaction and engagement with the course. Users will complete a survey measuring their feelings about whether the module met their learning objectives and how easy it was to use. These metrics support the following objectives (a sketch of how both metric types could be computed follows this list):
Determine whether the module successfully teaches the required information
Understand whether learners feel the content and delivery of content is engaging
Understand learner confidence in the learning objective after completion of the module
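To make these metrics concrete, here is a minimal sketch of how the behavioral and attitudinal summaries could be computed from exported data. It assumes the LMS can export per-slide timing and the survey tool can export scores as CSV files; every file name and column name here (lms_export.csv, survey.csv, seconds_on_slide, and so on) is hypothetical.

```python
import csv
from statistics import mean

# Hypothetical LMS export: one row per participant per slide, with columns
# participant_id, slide, seconds_on_slide, used_accessibility (yes/no).
def summarize_behavioral(path="lms_export.csv"):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    per_participant = {}
    for row in rows:
        per_participant.setdefault(row["participant_id"], []).append(
            float(row["seconds_on_slide"])
        )
    accessibility_users = {
        r["participant_id"] for r in rows if r["used_accessibility"] == "yes"
    }
    return {
        "avg_course_minutes": mean(sum(t) / 60 for t in per_participant.values()),
        "avg_slide_minutes": mean(float(r["seconds_on_slide"]) / 60 for r in rows),
        "pct_used_accessibility": 100 * len(accessibility_users) / len(per_participant),
    }

# Hypothetical survey export: one row per participant, scores out of 10 in
# columns alignment_score, engagement_score, confidence_score.
def summarize_attitudinal(path="survey.csv"):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {
        col: mean(float(r[col]) for r in rows)
        for col in ("alignment_score", "engagement_score", "confidence_score")
    }
```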
Usability Performance Goals
The performance goals align directly with the objectives and metrics for the usability test. Goals were chosen to encourage high performance and are a mixture of objective and subjective measurements; a sketch of how results could be checked against these goals follows the list.
All participants will navigate the course by taking the pre-assessment and navigating back to skipped slides (all slides viewed).
90% of learners will open pop-ups and videos without erroneous clicks.
100% of learners will open the accessibility features without erroneous clicks.
90% of learners will complete the full module within 1 hour.
90% of learners will spend no more than 10 minutes on any single slide.
The average learner response to the survey question about content alignment will rate the course at least 8/10.
The average learner response to the survey question about engagement will rate the course at least 8/10.
The average learner response to the survey question about confidence in the content after reviewing the assessment scores will rate the course at least 8/10.
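As a simple way to report against these goals, the thresholds above can be encoded and compared to the summarized results. This is a minimal sketch; the metric names and the example results dictionary are hypothetical, chosen to match the summaries sketched earlier.

```python
# Stated performance goals, keyed by hypothetical metric names.
GOALS = {
    "pct_all_slides_viewed": 100,       # every participant views all slides
    "pct_popups_no_error": 90,          # pop-ups/videos opened without erroneous clicks
    "pct_accessibility_no_error": 100,  # accessibility features opened cleanly
    "pct_completed_within_60_min": 90,  # full module finished within 1 hour
    "avg_alignment_score": 8,           # survey items scored out of 10
    "avg_engagement_score": 8,
    "avg_confidence_score": 8,
}

def evaluate_goals(results: dict) -> None:
    """Print each goal alongside the observed value and whether it was met."""
    for metric, target in GOALS.items():
        actual = results.get(metric)
        met = actual is not None and actual >= target
        print(f"{metric}: {actual} (target {target}) -> {'MET' if met else 'NOT MET'}")

# Example with placeholder results, not real data:
evaluate_goals({"pct_popups_no_error": 80, "avg_engagement_score": 8.3})
```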
Plan for Error Reporting, Frequency, and Severity
To categorize errors and target the most important ones first, I will use a simple three-level ranking system. The levels are labeled by the severity of the impact an error could have on the learner experience: Low, Medium, and High. The levels are defined along two dimensions (a sketch of ranking errors this way follows the definitions):
Degree to which the error affects the completion of the module
Low – the error does not affect completion of the module
Medium – the error causes the module to take longer than average
High – the error makes it impossible to complete the module
How often the problem occurs
Low – a one-time occurrence
Medium – Multiple occurrences but not all participants experience the error
High – Problem occurs for all participants (100% occurrence of error)
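Combining the two dimensions gives a simple way to order the error log so the worst problems surface first. This is a minimal sketch; the error records, field names, and example entries are all hypothetical, standing in for reports gathered from the survey, emails, and LMS analytics.

```python
# Numeric weight for each of the three levels defined above.
SEVERITY_RANK = {"Low": 1, "Medium": 2, "High": 3}

def frequency_level(affected: int, total_participants: int) -> str:
    if affected >= total_participants:
        return "High"      # problem occurs for all participants
    if affected > 1:
        return "Medium"    # multiple, but not all, participants
    return "Low"           # one-time occurrence

def rank_errors(errors: list[dict], total_participants: int = 10) -> list[dict]:
    # Sort so the highest-severity, most frequent errors are addressed first.
    for e in errors:
        e["frequency"] = frequency_level(e["affected"], total_participants)
    return sorted(
        errors,
        key=lambda e: (SEVERITY_RANK[e["severity"]], SEVERITY_RANK[e["frequency"]]),
        reverse=True,
    )

# Example: an error that blocks completion for every participant rises to the top.
reported = [
    {"name": "Video caption button unresponsive", "severity": "Medium", "affected": 3},
    {"name": "Quiz pop-up freezes module", "severity": "High", "affected": 10},
]
for e in rank_errors(reported):
    print(e["name"], e["severity"], e["frequency"])
```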
Summative Assessment Process
• Reporting/describing the results: A usability report will be created from the information documented above along with the information gathered by the Data Recorder, Facilitator, and Trainer throughout the test. The results of the qualitative analysis (survey results, comments made by participants during testing, etc.) will be summarized and ranked by how frequently those comments occurred. The quantitative data will be used to create charts and graphs that better visualize the information (a sketch of one such chart follows this list).
• Evaluating the metrics/goals: Each metric described above will be placed into a chart and analyzed according to the type of data collected to support it. For the attitudinal metrics, survey results will be analyzed along with any comments participants make during testing. Behavioral metrics will be analyzed using the data received through the LMS and participant error reporting; the time spent on a specific slide, the time to complete the entire course, and the scores on post-assessments can all be analyzed by reviewing the LMS reports. Errors users experience while navigating or opening videos and pop-ups will be evaluated based on participant reports, both on the survey and through any other channel used during testing (email, phone calls, etc.). Once everything is reviewed, the metrics/goals will be ranked by how closely the test results came to the stated performance goals.
• Discussing the subjective findings: Surveys and discussions will be the main sources of subjective data. To analyze this data, survey responses will be reviewed and categorized by the metrics they relate to. To avoid bias in interpreting the results, I will follow up with participants when a comment needs clarification. Once the information is clear, the findings will be categorized as positive or negative, highlighting feelings that were common among participants.
• Making recommendations to address noted problems: After reviewing the error results, evaluating the metrics/goals, and summarizing the subjective findings, a plan to address problems will be initiated. High-level errors will take priority because they affect the usability of the module and could undermine the validity of the metrics and survey results. After all high-level errors are addressed, the metrics/goals that fell furthest short of their stated performance goals will be tackled alongside any subjective findings about areas for improvement. These remaining items will be prioritized by severity and by how impactful addressing them would be.
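As an illustration of the quantitative reporting step, here is a minimal sketch of one possible chart comparing average survey scores against the 8/10 performance goal, using matplotlib. The scores are placeholders, not real results.

```python
import matplotlib.pyplot as plt

# Placeholder averages from the end-of-course survey, not real results.
questions = ["Content alignment", "Engagement", "Confidence"]
avg_scores = [8.4, 7.6, 8.1]

fig, ax = plt.subplots()
ax.bar(questions, avg_scores)
ax.axhline(8, linestyle="--", color="gray", label="Performance goal (8/10)")
ax.set_ylim(0, 10)
ax.set_ylabel("Average score (out of 10)")
ax.set_title("End-of-course survey results vs. goal")
ax.legend()
plt.show()
```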