Although there is considerable value in producing measures, greater rewards come from connecting data with other data. When we compare two variables, or the same variable at two different points in time, we gain perspective. Put simply, understanding which things go together simplifies the world around us. By joining things and comparing them, we can form explanations, create typologies, and refine or improve our actions.
The two aspects of association we need to be most concerned with are comparison and dependence.
Comparison provides perspective by exploring whether a variable is equal to, greater than, less than, or within an expected distance of another variable. Comparison typically involves scales or benchmarks. In school, most of us received grades between 0 and 100 on assignments. We could easily interpret our individual scores based on the range of available values and where our score fell within it. We all recognized that the closer to 100, the better the score. Moreover, in most schools, grades tended to be distributed in a predictable pattern (remember all the talk about the “curve”). When dealing with HR analytics, the scale may be known, yet the probability of various levels of attainment may not be. That is where benchmarking comes into play. Benchmarking is the process of comparing your own metrics to the distribution, as well as the best scores, in your comparison group or industry.
A good example of comparison relates to assessing the meaning of the results from measurement or metrics. Let’s assume that our average time to hire is 60 days. How do we interpret our performance? Assuming all things are equal, if the average in our industry is 30 days, we have considerable room for improvement. Conversely, if the performance leaders in our industry average 65 days, we are performing well. As a result, the comparison provides as much value as the metric, since it is the basis of interpretation. It is important to keep in mind that benchmarking provides more value than simple measurement, but it does not address efficiency of resources, relevance of situations, or other factors critical to success. It is typically a first step in the analytics journey.
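The time-to-hire scenario can be expressed in a few lines of code. The sketch below is purely illustrative: the benchmark figures come from the example above, and the helper function `interpret_metric` is a hypothetical name, not part of any HR toolkit.

```python
# A minimal sketch of benchmark comparison, assuming hypothetical figures.

def interpret_metric(ours: float, benchmark: float, lower_is_better: bool = True) -> str:
    """Compare our metric to a single benchmark and return a plain-English reading."""
    gap = ours - benchmark
    if lower_is_better:
        return "ahead of benchmark" if gap <= 0 else f"behind benchmark by {gap:g} days"
    return "ahead of benchmark" if gap >= 0 else f"behind benchmark by {-gap:g} days"

# Scenario 1: industry average is 30 days -> considerable room for improvement.
print(interpret_metric(ours=60, benchmark=30))   # behind benchmark by 30 days

# Scenario 2: performance leaders average 65 days -> we are performing well.
print(interpret_metric(ours=60, benchmark=65))   # ahead of benchmark
```

The same 60-day figure reads very differently against the two benchmarks, which is the point: the comparison, not the raw metric, drives the interpretation.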
The rise of scorecards demonstrates the evolution of comparison. Most organizations today not only track metrics over time and compare them, but also report them in an accessible format for business decision-making. In 2001, Dave Ulrich and his coauthors captured the essence of scorecards for human resources in The HR Scorecard: Linking People, Strategy, and Performance. Drawing on the scorecard revolution initiated by Kaplan and Norton, they described how a human resource scorecard serves as a mechanism for describing and measuring how people and people management systems create value in organizations, as well as for communicating key organizational objectives to the workforce. The factors of comparison include:
• Workforce Success – Has the workforce accomplished the key strategic objectives for the business?
• Right HR Costs – Is our total investment in the workforce (not just the HR function) appropriate (not just minimized)?
• Right Types of HR Alignment – Are our HR practices aligned with the business strategy and differentiated across positions, where appropriate?
• Right HR Practices – Have we designed and implemented world-class HR management policies and practices throughout the business?
• Right HR Professionals – Do our HR professionals have the skills they need to design and implement a world-class HR management system?
Although the HR scorecard presents measurement with a strong conceptual focus, grounded in a strategy map that visualizes causal relationships within an organization, it only refined our understanding of association, not causation.
Dependence pertains to any situation where multiple variables fail to meet the mathematical idea of probabilistic independence. Independence means that the occurrence of one event does not affect the probability of another event. In plain English, dependence occurs when knowing the value of one variable tells us something about the likely value of another. Most of the time, we look for a linear relationship between two variables, but other types of relationships can be tested as well. Keep in mind that one event may not cause the other; the two may simply vary together. In other words, association or correlation does not guarantee causality. Almost every introductory statistics course includes the example of the sharks and the ice cream. Vendors sell more ice cream at the beach during the summer, and there are also more shark attacks. The two would appear to be associated, but it would be hard to blame the shark attacks on the ice cream, even if we assumed that sharks cannot resist the flavor of a recently “ice creamed” human. There is a confounding variable in the mix: summer. Both increase because it is summer and more people are at the beach.
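The shark-and-ice-cream example is easy to reproduce with simulated data. The sketch below is purely illustrative (the numbers and variable names are invented): both series are driven by how many people are at the beach, so they correlate even though neither causes the other.

```python
# A minimal sketch of spurious association via a confounder, using simulated data.
import numpy as np

rng = np.random.default_rng(42)
beach_visitors = rng.normal(loc=1000, scale=300, size=365)           # driven by season
ice_cream_sales = 0.8 * beach_visitors + rng.normal(0, 50, 365)      # depends on visitors
shark_attacks = 0.002 * beach_visitors + rng.normal(0, 0.5, 365)     # also depends on visitors

# Ice cream sales and shark attacks are positively correlated...
print(np.corrcoef(ice_cream_sales, shark_attacks)[0, 1])
# ...but the association comes from the shared driver (people at the beach),
# not from one causing the other.
```

Controlling for the confounder (here, the number of beach visitors) would make the apparent relationship between the two outcome variables largely disappear.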
How might we use association? If we build employee profiles that join measurements together, we might find that employees who work with numbers also exhibit a lack of people skills. This would not surprise anyone who has worked with different types of people. However, by further examining the associations, it might become apparent that those lacking people skills tend to prepare less for advancement, yet seek it at a rate similar to other, more prepared employees. Based on these related variables, we might alter our training programs for those employees.
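A simple way to explore such associations is a correlation table over the joined profile data. The sketch below is hypothetical: the column names and values are invented solely to illustrate the idea.

```python
# A minimal sketch of exploring associations across joined employee measures.
# Column names and data are hypothetical, invented for illustration only.
import pandas as pd

profiles = pd.DataFrame({
    "numeracy_score":     [88, 92, 75, 60, 95, 70],
    "people_skills":      [55, 60, 80, 85, 50, 78],
    "advancement_prep":   [2,  1,  6,  5,  1,  4],   # e.g., development courses completed
    "sought_advancement": [1,  1,  1,  0,  1,  1],   # applied for promotion (0/1)
})

# Pairwise correlations show which measures tend to move together;
# they describe association only, not causation.
print(profiles.corr().round(2))
```

Patterns like these would be the starting point for a deeper look, not a conclusion in themselves.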
In the next post, we will move on to higher-value tools related to causation.