Once we have the majority of functionality defined, our team does remote usability tests. Sometimes we use a development build of the application; other times we use my design prototype. The choice depends on how difficult a testing environment is to set up, how functional the prototype is, and how much realistic data the prototype contains.
Usability testing is a team effort; otherwise it wouldn't be doable within an iteration or two. Making it a team effort also creates buy-in from all of the disciplines on the team. I identify the type of employees we want to test. Business representatives work with regional and branch managers to identify operation-level employees across the world to invite to testing. Our business analyst or program manager sends out invitations and sets up meeting rooms. During this time I'm identifying our goals, writing scripts, and communicating with our participants. If we're using a development build of the application, development and QA are setting up the environment and either pulling in production data or creating test data.
When we do the testing, everyone is present. Development and QA are required to attend one to two sessions, depending on the total number of sessions. Our business analyst, program manager, most business representatives, and at least one developer, who is responsible for answering any technical questions and helping work through any bugs we might hit, attend all the sessions. Directly after each session we document everything we saw and heard in a debrief meeting.
Because I moderate the walkthroughs, I watch the video of each session afterwards and take notes on all issues, then combine them with the team's debrief notes to determine what needs to be fixed, what can be left as is, and which business rules may need to be supported, along with recommendations for how issues should be fixed. As a team we look at the results and decide what will be fixed.
Depending on the complexity of the application's back end, the development team determines whether we need an alpha and/or beta release of the application. In that case I work with our business analyst to identify the issues employees are seeing in the business rules, performance, and the UI. Once issues are identified, changes are made for the next release.
System Usability Scale (SUS)
Once the application release has been out and in use for three to six months, I send a System Usability Scale survey to the employees who have been using the application, along with three free-form questions about their experience in the system. This gives us both an idea of how usable the new work is according to our employees globally and what may be causing any low scores. We'll often find out where training has failed and which smaller pieces of functionality need to be re-emphasized. We also uncover major issues that may not be surfacing through our business representatives, as well as functionality employees really need to make the application better.
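Scoring the SUS responses is standardized, so it's easy to automate when the survey results come back. As a minimal sketch (the function name is mine, not part of any survey tool): each of the ten items is rated 1 to 5; odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is multiplied by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from 10 ratings (1-5, in question order)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 ratings between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is question 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example: a fairly positive respondent
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

Averaging these per-respondent scores across all employees gives the single usability number to track from release to release; scores in the high 60s are roughly average, so anything well below that is a flag worth digging into with the free-form answers.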