In the early 2000s, Wily Technology focused on enabling Global 2000 companies to manage their mission-critical Java-based systems in real time in production, so that they could avoid, detect, and resolve problems quickly and efficiently.
Years after our first round of venture funding, with well over a hundred employees, offices and partners around the world, and hundreds of production deployments, we undertook an effort to take everything we knew and, from a blank slate, develop a new user-facing design for how the software should work.
We chose to work from a blank slate on new designs because we wanted to come up with significant innovations to provide more value at less cost. We did not want to start from our shipping software, which in many ways was little changed from our collective second guess at what people needed and could use, a guess we made well before anyone was using anything like our software to manage large environments. We favored flexibility and exposing options in the early days, since we didn't know what would actually prove useful.
First we focused on defining the priorities for the development, including prioritizing and describing the targeted users of the software (described in my Describing users story).
Later we described the desired user-facing form and behavior of the system, first quite broadly (see the Framework section of my redesign story) then at progressively finer and finer levels of detail (described in the Detailed Design and Visual Design sections of my redesign story).
After the design work was substantially complete, we began to take the vision (embodied in the priorities and designs) and fulfill it by improving the existing code base. I participated in the follow-on effort, as I describe in my story about being part of an Agile team.
The result of that effort was a major release which is used widely in production today, and which has since been further improved upon.
What about usability or user testing?
What about testing designs with actual users? Where was that in the plan? Wasn’t its lack a contributor to the failure of the project as chartered?
We knew we were taking a number of risks as we chartered then worked on the project, and we worked under a number of constraints and made a number of considered trade-offs.
There was great time pressure to ship working software, and we were confident that, as long as it worked as designed, the shortcomings in the existing software were significant enough that even if the new software had usability problems, they would be less severe than the problems of the old. We also knew that, with our team, the time to complete the feedback loop (from software good enough to user test, through user testing and revised designs, to revised software ready to ship) simply would not fit in the constraints we were working under.
We decided that, given a certain amount of time, money, and people, we would make much larger investments in design than in testing. We believed (and I still do) that if you have a dollar to spend and you need to choose whether to spend it on design or user testing, you should spend it on design. Once you've made this decision a number of times and have made significant investments in design, the marginal value of spending dollars on testing becomes greater than the marginal value of spending dollars on design. Another way to state this is: design first, then user test. Yet another (perhaps obnoxious) way to say something along the same lines is that it's more efficient to design your way to a good design than to user test your way to a good design.
We believed the most important and difficult problems were tied up in enabling people with a different skill set than was then required to use the software effectively in the environments we were working in. We also understood from experience that people in these situations could work effectively in their own environments but had difficulty imagining working in unfamiliar environments. That meant that to get the most useful user test information, we would actually need to do user testing in environments that were their own or looked substantially like their own. Both of these paths were costly and difficult, and alongside the other considerations, we decided we would start getting usability feedback when we had software working well enough to do alpha or beta deployments in user environments — in other words late in the development cycle, after first version detailed interaction and visual design had been completed.
Lastly, the software is actually used by a number of company employees in their work on site at production deployments, alongside many different people. These employees were used to change and could imagine how things in designs or prototypes would work in production environments with users like those they had worked with. We had confidence that we could rely on them for useful criticism and validation of the designs throughout the process.
My comments and what I learned
I have deliberately chosen to leave out details of the arguments for and against making any of the decisions that were made in this story, largely for the sake of brevity. Below I list some of the lessons that I learned from the experience, focusing on those lessons that I believe are most pertinent to my interaction design work.
On predictable process from needs to designs
It is possible to predictably go from a rich understanding of users and their goals to detailed designs at increasing levels of detail. The people at Cooper showed us one impressive way to create designs in the form of their Goal-Directed Design approach. There may be other ways, but having read about others, tried to do the work myself, and finally spent months working in Cooper's way, I find it hard to imagine how to be more effective and efficient.
On scenarios
Scenarios are the most effective way I know to describe the behavior of a system and how a person can interact with it. I'd read and heard about this before, and tried it informally or unconsciously. But after focusing so much on scenarios, I came to realize that most of the time when someone I was describing something to had trouble understanding what I wanted to get across, it was because I was speaking generally or in terms of functionality, rather than starting with one specific user with one specific goal in one specific situation, and what that user and the system do to achieve the goal. Almost universally, as soon as I used a scenario, people understood the central point and could then proceed to discuss variations of users, goals, situations, functionality, and so on.
On shared project understanding and priorities
It's crucial to have a clear shared understanding of the project's goals and priorities so that effective decisions can be made about the timing and amount of investment in work. The shared understanding is best built both with artifacts (text and graphics in our case) and discussions (answering questions, responding to challenges). It's also useful to state "negative goals" explicitly ("we are not going to address this user or that need now") so that they can be a clear part of the shared understanding early on. We found that with this shared understanding of goals and priorities, many people working separately were able to make effective decisions about their own work that were consistent with others', and we had few late challenges to decisions or priorities as development work was going on.
On risks of ignorance of users
In the absence of a rich understanding of users and their goals, there's a very large risk (I suppose some would say certainty) of ending up with software that has major usability problems, if it is even useful at all. Our software before this project was clear evidence of that. (This is similar to the argument made at length in Alan Cooper's The Inmates Are Running the Asylum.)
On “problem” and “solution” space
It’s useful to be vigilant about distinguishing between what we called “problem space” and “solution space”. Sometimes we would be focusing on what we were trying to accomplish and prioritizing (problem) and people would start discussing what we should do about it (solution). Other times we would be talking about trade-offs between different approaches (solution) and questions would come up about what was more important (problem). Being able to quickly label things that came up as in “problem” or “solution space” helped us keep focus on what we were working on at any time, and address issues in the other area as soon as appropriate by the right people.
On top-down and bottom-up
It’s also useful to be conscious of doing things top-down or bottom-up. When working top-down, we found it useful to be aware that at any point in time we only had a certain amount of specificity in our thinking or designs and that some questions we could answer based on the work we had done and other questions we could not yet, because we hadn’t gotten to that level of detail. In cases where there were urgent and important questions at finer levels of detail than we’d gotten to working top-down, we would prioritize work in those areas and be careful of decisions we’d make in one area affecting others we hadn’t yet addressed.
On working in pairs
Working in pairs and small teams brings great benefits: better results sooner, with greater reliability and less re-work than working alone. I'm quite confident this is true for me, based on my collaborations. I won't go so far as to say it is universally true, but I do believe that, more often than not, people doing creative technical work do better in pairs or threes than alone.
On dispersed teams
You can do good design work with geographically separated teams with the right tools. One of the people in the design team was located in a different city two time zones away over 75% of the time while working on this project, but with good conference room phones, whiteboard sharing software, group and resource calendaring software, e-mail, shared document repository and a wiki, we were able to work quite effectively and efficiently.
On multiple personas
For our system, it was not very useful to have more than one persona for any of the roles that we were targeting. For us it was sufficient to prioritize work and design form and behavior with one persona per role, or even to use role names and persona names interchangeably. Occasionally we would need to deal with questions or situations where we needed to describe what would happen if one person in the real world had two different roles, but I don't remember any case in which that ever affected our designs, since by designing for each of our target personas/roles individually we also met multi-role personas' needs.
On risks in development efforts
Even with clear requirements, development efforts carry risk and uncertainty, even when skilled, experienced, and smart software engineers claim confidence in their estimates. In other situations I have heard development groups complain that they can't make their schedules because the requirements are unclear and change underneath them. In this project, I heard very little along the lines of "we don't know what you want" or "you changed your mind", and instead heard things like "I understand how that should look and work and why that's useful, and we should be able to do that." Based on my observations, conversations, and reading about development work on this and other projects I've been around, other factors that I (with a non-software engineer's background) suspect increase risk and uncertainty are: teams that haven't worked together before, technology that hasn't been used by team members (or many people at all) before, creating a new code base that hasn't been vetted in the real world, lots of rapid change in the code base, limited ability to test the code, and pressure to meet deadlines.