“Above all else show the data,” wrote visualization guru Edward Tufte back in 1983. While Tufte was referring to visualization, his concern extends naturally to the metrics we use to “show the data” as well. Defining the right metrics upfront, ones that help dashboard users derive truly actionable insights from the data, is a treacherous task unless you have prepared beforehand and brought the right gear with you. Tableau, a company that produces interactive data visualization products and a leader in the 2014 Magic Quadrant for Business Intelligence, included ‘choosing metrics that matter’ as the first tenet in its whitepaper, 5 Best Practices for Creating Effective Dashboards.
While this might seem fairly obvious or self-explanatory, this blog entry fleshes out the premise and presents a simple example that illuminates the difference between a dashboard that contains the right metrics and one that does not.
Design thinking is a human-centered approach that focuses on users and their needs rather than on a specific technology. With that in mind, here is a quick checklist for arriving at the right metrics:
- Have you identified the dashboard's users and understood their specific objectives for it?
- Are the metrics meaningful to the users? Do they understand the business implications?
- Is there an overall strategy within the company/team driven by the results of the dashboard?
- If so, do your metrics and accompanying visualizations immediately point to the steps to be taken?
We worked on a Sales Effectiveness project for a Fortune 100 technology client where we needed to understand the ingredients of top-notch sales performance. As we spoke with potential users of the dashboard, the first issue that arose was ambiguity in how different managers in the sales organization defined sales performance. Some looked at overall revenue as the indicator of great performance, while others viewed revenue in comparison to the initial sales plan. The business impact of this was immediate and compelling: did a manager award bonuses to a sales team because it had outperformed the company average, or because it had met its sales plan? Which information percolated up the system? We worked with the users to define metrics that were standardized, immediately understood across the organization, and actionable in the context of their overall sales effectiveness strategy.
Let’s look at a simplified test scenario loosely based on the above assignment. The dashboard below analyzes the sales effectiveness of 19 sales teams (A1-A19), covering Sales Areas 1-19 respectively. Is the dashboard interesting? Maybe, but does it really present actionable insights? Not really:
- The Area-wise Deal Conversion pie charts show the number of deals converted as a percentage of the total number of deals planned for each Sales Area. Not only are there no percentage values or legends to convey the information clearly, but the charts also overlap with the bar chart on the left, No. of Deals.
- The visualizations themselves are poor at conveying the right information. For example, there is no way to read revenue from any of the charts, and the axes are not labelled.
- An essential connection is also missing: how does training affect sales performance?
A much simpler and far more useful way to view the same data is shown below, where Sales Preparedness measures the average number of training modules that the employees of a Sales Area team in a district have completed, as a percentage of the total modules available.
Using this, a district sales manager can immediately:
- Identify area sales teams that are less prepared yet achieve a high percentage of their sales targets, then train them further to pursue higher targets, or even equip them with new skills to increase the average new deal size. The same can be done for well-prepared, high quota attainment teams.
- Identify area sales teams that are relatively underprepared and not meeting their targets; these teams can be given basic sales training.
- Identify area sales teams that have been trained and yet underperform, and mark them for further investigation into the underlying cause.
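The triage above amounts to a simple two-axis rule: compare each team's Sales Preparedness against its quota attainment and recommend a next step. Here is a minimal Python sketch of that logic; the team names, numbers, and the 50% thresholds are invented for illustration and are not from the actual client dashboard.

```python
def preparedness(avg_modules_completed, total_modules):
    """Sales Preparedness: average training modules completed by a team,
    as a percentage of the total modules available."""
    return 100.0 * avg_modules_completed / total_modules

def recommend_action(prep_pct, quota_pct, threshold=50.0):
    """Map a team's (preparedness %, quota attainment %) to a next step,
    following the three cases described above. Thresholds are assumptions."""
    if prep_pct < threshold and quota_pct >= threshold:
        return "advanced training"   # underprepared but hitting targets
    if prep_pct < threshold and quota_pct < threshold:
        return "basic sales training"  # underprepared and missing targets
    if prep_pct >= threshold and quota_pct < threshold:
        return "investigate underlying cause"  # trained yet underperforming
    return "advanced training"       # prepared and hitting targets

# Hypothetical teams: (preparedness %, quota attainment %)
teams = {
    "A1": (preparedness(3, 10), 80.0),
    "A2": (preparedness(2, 10), 35.0),
    "A3": (preparedness(8.5, 10), 40.0),
}

for team, (prep, quota) in sorted(teams.items()):
    print(f"{team}: {recommend_action(prep, quota)}")
```

The same bucketing can of course be expressed directly in a BI tool as a calculated field; the point is that the metric pair, not the chart type, is what makes each bucket actionable.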
By focusing on metrics that matter and showing the right data, the dashboard becomes much more meaningful. But how does one choose the right visualizations once the metrics are in place?
One example is the “Show Me” functionality in Tableau Desktop. It distills best practices for choosing the right visualization based on the metrics defined, and it points lay business users, and even developers without an advanced background in visualization theory or design, in the right direction.
While this nifty feature does not help in deciding the right metrics, it can be used to arrive at interesting and powerful visualizations if one chooses to use Tableau. Once a few dimensions and measures are selected, Show Me automatically generates a visualization that uses all of the selected fields. If that is not satisfactory, one can hover over the built-in styles and pick one that better suits the specific requirement.
This blog is authored by Alagiri Samy and Farid Jalal, Business Analytics Experts at BRIDGEi2i.
BRIDGEi2i provides Business Analytics Solutions to enterprises globally, enabling them to achieve accelerated business impact by harnessing the power of data. Our analytics services and technology solutions enable business managers to consume more meaningful information from big data, generate actionable insights from complex business problems, and make data-driven decisions across pan-enterprise processes to create sustainable business impact. To know more, visit www.bridgei2i.com.
The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official position or viewpoint of BRIDGEi2i.