Project: ACR DASHBOARD • 2013 • Role: UX Lead
Automatic Content Recognition (ACR) powered the interactive episodes of shows like Anderson Cooper 360, NBA Tonight, and Big Bang Theory, broadcast on the CNN, TBS, and TNT networks.
To ensure that all systems were ready, several internal servers monitored the applications and streams. A single engineer was responsible for processing and monitoring every show, using a spreadsheet and email. He was pretty stressed out.
I served as a UX Lead on this project, managing a UX Designer and coordinating between our team, product and engineering.
The project took about 1 month.
At the kick-off, our client presented a detailed explanation of how each application-monitoring system worked. He then presented his desired interface and asked if we could simply apply a nice skin to it.
I asked why this was his desired layout, and he replied, “That’s the way we’ve always done it.” He wanted a grid because traditional TV engineers and executives were accustomed to looking at the day as a grid. While this domain knowledge was useful, I dug a little deeper and learned that the real underlying need was to understand: “Is the team going to be working late tonight due to errors or dependencies that need to be resolved before airtime?”
While the current system of red and green checks was simple to understand, it didn’t communicate the nuance of the situation, such as urgency or cascading dependencies.
In the earliest wireframe, a three-column layout cascaded from left to right, allowing the user to toggle networks and then display the impacted episodes. The broadcasts requiring the most attention were automatically sorted to the top by severity (critical checks). In the third column, problematic servers and incomplete steps were ordered by dependency. Completed steps were presented in green to confirm readiness.
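The sorting described above can be sketched roughly as follows. This is a minimal illustration, not the actual system's code; the class names, severity labels, and ranking values are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Illustrative severity ranks (lower sorts first); the real system's
# check names and statuses differed.
SEVERITY_RANK = {"critical": 0, "warning": 1, "ok": 2}

@dataclass
class Step:
    name: str
    order: int        # position in the dependency sequence
    status: str       # "critical", "warning", or "ok"

@dataclass
class Episode:
    title: str
    steps: list = field(default_factory=list)

    def severity(self) -> str:
        # An episode is only as healthy as its worst step.
        return min((s.status for s in self.steps),
                   key=SEVERITY_RANK.get, default="ok")

def sort_episodes(episodes):
    # Broadcasts needing the most attention rise to the top.
    return sorted(episodes, key=lambda e: SEVERITY_RANK[e.severity()])

def ordered_steps(episode):
    # Third column: steps shown in dependency order.
    return sorted(episode.steps, key=lambda s: s.order)
```

An episode with any critical check sorts above fully green episodes, and its remaining steps read top to bottom in the order they must be resolved.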
We included a status bar for the servers at the top of the screen. A notes/data log was added so the responsible engineer could quickly see any useful context related to server status. Rather than cluttering the task list, the visibility of the notes could be toggled on or off.
As we refined the designs, we experimented with a stoplight (red, yellow, green) metaphor and began removing unnecessary alerts and repetitive data. We explored dynamic copywriting that reflected the current state.
Working with the engineers who would use the tool, we shifted the program status categories from “Critical” and “Non-Critical” to “Needs Attention” and “In Progress.” The distinction is subtle; however, we wanted the top-level view to reflect the number of shows that needed attention while displaying critical issues individually within each show.
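One plausible reading of that roll-up rule, sketched here purely for illustration (the data shape and the exact bucketing logic are assumptions, not the shipped behavior):

```python
def summarize(shows):
    """Roll shows up into top-level status buckets.

    `shows` maps a show title to a list of step dicts like
    {"step": str, "critical": bool, "resolved": bool}.
    Criticality is surfaced per step inside each show; the top-level
    view only counts which shows need an engineer's attention.
    """
    needs_attention, in_progress = [], []
    for title, steps in shows.items():
        open_steps = [s for s in steps if not s["resolved"]]
        if not open_steps:
            continue  # show is ready; no top-level entry
        if any(s["critical"] for s in open_steps):
            needs_attention.append(title)
        else:
            in_progress.append(title)
    return {"Needs Attention": needs_attention,
            "In Progress": in_progress}
```

The design choice this mirrors: the headline number answers “how many shows need me tonight?”, while the severity of individual checks lives one level down.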
In the final iteration, we clearly articulated the engineer’s original problem: what was the scope of work for upcoming interactive episodes? While many items might need attention, only a subset were truly critical. In the example above, multiple shows airing soon had service issues that needed to be resolved. Other episodes might also need attention, but the engineers considered those problems minor, even if their air dates were closer.
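That triage rule (severity outranks proximity to airtime) amounts to a compound sort key. A tiny sketch, with assumed severity labels and invented show data:

```python
from datetime import datetime

# Assumed severity ranks; lower sorts first.
RANK = {"critical": 0, "minor": 1}

def priority_key(show):
    # Severity dominates; air time only breaks ties
    # within the same severity level.
    return (RANK[show["severity"]], show["air_time"])

shows = [
    {"title": "Minor, airs sooner", "severity": "minor",
     "air_time": datetime(2013, 5, 1, 19, 0)},
    {"title": "Critical, airs later", "severity": "critical",
     "air_time": datetime(2013, 5, 1, 21, 0)},
]
ordered = sorted(shows, key=priority_key)
# The critical show outranks the minor one despite airing later.
```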
Our engineering team and stakeholders worked with us to confirm that the logic made sense and would enable the responsible parties to understand the urgency of preparing interactive episodes. They complimented the experience on its clarity and organization.