After we finished development on FactFinder 1.0 (which I described here), we began making improvements based on users' experience with the product.
First, we provided a simplified "App View" that aggregated individual operating-system processes into a single element in the map. Our development team, which used FactFinder to understand its own behavior, took the initiative to build this view and make its own work easier.
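As an illustration of the kind of grouping App View performs, here is a minimal sketch that collapses per-process records into one map element per application. The record shape, process names, and application names are hypothetical, not FactFinder's actual data model.

```python
from collections import defaultdict

# Hypothetical process records: (pid, process_name, application)
# tuples discovered on a monitored host.
processes = [
    (1021, "httpd", "WebStore"),
    (1022, "httpd", "WebStore"),
    (2310, "java", "WebStore"),
    (4501, "oracle", "Orders DB"),
]

def aggregate_app_view(processes):
    """Collapse individual OS processes into one element per application."""
    apps = defaultdict(list)
    for pid, name, app in processes:
        apps[app].append((pid, name))
    return dict(apps)

for app, procs in aggregate_app_view(processes).items():
    print(f"{app}: {len(procs)} processes")
```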
Second, we wanted clearly "task-oriented" features in our 1.1 release: features and deliverables we could point to that obviously and usefully lined up with top user goals, such as understanding overall dependencies, overall performance, and changes across monitoring intervals.
These manifested as bundles of reports built from collected data, using terms familiar and relevant to users as they work on these tasks.
Our beta testing and demos led us to believe that some central use cases were difficult to complete quickly and successfully in the software as it stood. We ran a user-testing activity with realistic data to find out systematically what caused the most difficulty. We summarized the results on a whiteboard, prioritized them, and focused on a few, including these:
We worked to address the highlighted items together by making the software embody a systems approach and calculate for users the response-time contribution of each layer in an application's dependency stack.
After several iterations at the whiteboard, we implemented the server-contribution view by December:
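One way to read "response-time contribution per layer" is as each layer's exclusive time: the time measured at that layer minus the time measured at the layer beneath it. Here is a minimal sketch under that assumption; the layer names and timings are invented for illustration, and FactFinder's actual calculation may well have differed.

```python
# Hypothetical per-layer timings for one request path, top of the
# dependency stack first. "inclusive" is the time (ms) measured at
# that layer, which includes all the layers beneath it.
stack = [
    ("web server", 120.0),
    ("app server", 95.0),
    ("database", 40.0),
]

def layer_contributions(stack):
    """Exclusive contribution of each layer: its inclusive time
    minus the inclusive time of the layer directly below it."""
    contributions = []
    for i, (layer, inclusive) in enumerate(stack):
        below = stack[i + 1][1] if i + 1 < len(stack) else 0.0
        contributions.append((layer, inclusive - below))
    return contributions

total = stack[0][1]
for layer, ms in layer_contributions(stack):
    print(f"{layer}: {ms:.0f} ms ({ms / total:.0%} of response time)")
```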
Another high priority was to enable easy performance comparisons: for example, between two staging runs, or between staging and production.
A sample page would look like this (with a different color coding):
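Behind such a page, the comparison itself amounts to lining up the same measurements from two runs and computing deltas. A minimal sketch, with invented transaction names and timings:

```python
# Hypothetical average response times (ms) per transaction, keyed the
# same way in both runs.
staging = {"login": 180.0, "search": 420.0, "checkout": 950.0}
production = {"login": 210.0, "search": 380.0, "checkout": 1400.0}

def compare_runs(baseline, candidate):
    """Side-by-side deltas for transactions present in both runs."""
    rows = []
    for txn in sorted(baseline.keys() & candidate.keys()):
        delta = candidate[txn] - baseline[txn]
        rows.append((txn, baseline[txn], candidate[txn], delta))
    return rows

for txn, base, cand, delta in compare_runs(staging, production):
    flag = "WORSE" if delta > 0 else "better"
    print(f"{txn:10s} {base:7.0f} {cand:7.0f} {delta:+7.0f}  {flag}")
```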
These changes were all made in the course of about three months, so we were able to ship the 1.1 point release in early 2009.