
Data interpretation: what is it and how to get value out of it? Part 2

If you haven't read part 1 of this article yet, you can find it here!

Common Data Analysis And Interpretation Problems

The oft-repeated mantra of those who fear data advancements in the digital age is “big data equals big trouble.” While that statement is not accurate, it is safe to say that certain data interpretation problems or “pitfalls” exist and can occur when analyzing data, especially at the speed of thought. Let’s identify some of the most common data misinterpretation risks and shed some light on how they can be avoided:

1) Correlation mistaken for causation: our first misinterpretation of data refers to the tendency of data analysts to conflate correlation with the cause of a phenomenon. It is the assumption that because two actions occurred together, one caused the other. This is not accurate, as actions can occur together without any cause-and-effect relationship.

  • Digital age example: assuming that increased revenue is the result of increased social media followers. There might be a definite correlation between the two, especially with today’s multi-channel purchasing experiences, but that does not mean an increase in followers is the direct cause of increased revenue. There could be a common cause behind both, or the causality could be indirect.
  • Remedy: attempt to control for or eliminate the variable you believe to be causing the phenomenon, and check whether the relationship persists (see the sketch below).
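
To make this remedy concrete, here is a minimal Python sketch with entirely invented numbers: a hidden common cause (a hypothetical marketing budget) drives both follower growth and revenue, producing a strong correlation between the two that largely disappears once the confounder is controlled for.

```python
# Minimal sketch of a spurious correlation, assuming invented data:
# "marketing_spend" is a hidden common cause of followers and revenue.
import numpy as np

rng = np.random.default_rng(42)

marketing_spend = rng.uniform(1_000, 10_000, size=200)  # hidden common cause
followers = 0.5 * marketing_spend + rng.normal(0, 500, 200)
revenue = 2.0 * marketing_spend + rng.normal(0, 2_000, 200)

# The raw correlation looks impressive...
print(f"corr(followers, revenue): {np.corrcoef(followers, revenue)[0, 1]:.2f}")

# ...but regressing out the common cause (a simple form of "eliminating
# the variable you believe to be the cause") makes it largely vanish.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_followers = residuals(followers, marketing_spend)
r_revenue = residuals(revenue, marketing_spend)
print(f"corr controlling for spend: {np.corrcoef(r_followers, r_revenue)[0, 1]:.2f}")
```

If followers truly caused revenue, the correlation would survive after controlling for the shared driver; here it collapses, which is the tell-tale sign of a common cause.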

2) Confirmation bias: our second data interpretation problem occurs when you have a theory or hypothesis in mind but are intent on discovering only the data patterns that support it while rejecting those that do not.

  • Digital age example: your boss asks you to analyze the success of a recent multi-platform social media marketing campaign. While analyzing the potential data variables from the campaign (one that you ran and believe performed well), you see that the share rate for Facebook posts was great, while the share rate for Twitter Tweets was not. Using only the Facebook posts to prove your hypothesis that the campaign was successful would be a perfect manifestation of confirmation bias.
  • Remedy: as this pitfall is often rooted in subjective desires, one remedy is to analyze data with a team of objective individuals. If this is not possible, another solution is to resist the urge to draw a conclusion before data exploration has been completed. Remember to always try to disprove a hypothesis, not prove it (a minimal sketch of this idea follows below).
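
To illustrate, here is a hypothetical Python sketch, with invented numbers, of judging the campaign against a single success metric agreed on before looking at the data, rather than against whichever channel happens to look best:

```python
# Hypothetical sketch: evaluate a campaign against a metric defined up
# front (overall share rate across ALL channels), not the best channel.
campaign = {
    "facebook": {"shares": 480, "posts": 60},
    "twitter":  {"shares": 45,  "posts": 60},
}

TARGET_SHARE_RATE = 5.0  # shares per post, fixed before the review

for name, stats in campaign.items():
    print(f"{name}: {stats['shares'] / stats['posts']:.2f} shares/post")

total_shares = sum(s["shares"] for s in campaign.values())
total_posts = sum(s["posts"] for s in campaign.values())
overall = total_shares / total_posts
verdict = "success" if overall >= TARGET_SHARE_RATE else "below target"
print(f"overall: {overall:.2f} shares/post -> {verdict}")
```

In this made-up example, Facebook alone (8.00 shares/post) would “prove” success, but the pre-committed overall metric (4.38 shares/post) says otherwise; fixing the metric in advance removes the room for cherry-picking.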

3) Irrelevant data: the third data misinterpretation pitfall is especially important in the digital age. As big data is no longer centrally stored and continues to be analyzed at the speed of thought, it is inevitable that analysts will sometimes focus on data that is irrelevant to the problem they are trying to solve.

  • Digital age example: in attempting to gauge the success of an email lead generation campaign, you notice that the number of homepage views directly resulting from the campaign increased, but the number of monthly newsletter subscribers did not. Based on the number of homepage views, you decide the campaign was a success when really it generated zero leads.
  • Remedy: proactively and clearly frame any data analysis variables and KPIs prior to engaging in a data review. If the metric you are using to measure the success of a lead generation campaign is newsletter subscribers, there is no need to review the number of homepage visits. Be sure to focus on the data variable that answers your question or solves your problem and not on irrelevant data.

4) Truncating an axis: When creating a graph to start interpreting the results of your analysis, it is important to keep the axes truthful and avoid generating misleading visualizations. Starting an axis at a value that doesn’t reflect the actual range of the data can lead to false conclusions.

  • Digital age example: A widely circulated Fox News graph had a Y-axis starting at 34%, making the difference between a 35% and a 39.6% tax rate look far larger than it actually is. This could lead to a misinterpretation of the tax rate changes.
  • Remedy: Be careful with the way your data is visualized. Be honest and realistic with your axes to avoid misinterpretation of your data (see the side-by-side sketch below).
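
As an illustration, here is a minimal matplotlib sketch (using the tax-rate figures from the example above) that contrasts a truncated Y-axis with one that starts at zero:

```python
# Minimal sketch: the same two values plotted with a truncated Y-axis
# versus a full one. The figures mirror the Fox News example above.
import matplotlib.pyplot as plt

labels = ["Now", "Jan 1, 2013"]
rates = [35.0, 39.6]  # top tax rate, in percent

fig, (ax_truncated, ax_full) = plt.subplots(1, 2, figsize=(8, 4))

ax_truncated.bar(labels, rates)
ax_truncated.set_ylim(34, 42)   # starts at 34%: the gap looks dramatic
ax_truncated.set_title("Truncated axis (misleading)")

ax_full.bar(labels, rates)
ax_full.set_ylim(0, 42)         # starts at zero: honest proportions
ax_full.set_title("Full axis (honest)")

for ax in (ax_truncated, ax_full):
    ax.set_ylabel("Top tax rate (%)")

plt.tight_layout()
plt.show()
```

Both panels show the same data; only the axis range changes, which is exactly why a truncated axis can quietly mislead.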

5) (Small) sample size: Another common data analysis and interpretation problem is the use of a small sample size. Logically, the bigger the sample size, the more accurate and reliable the results. However, the required size also depends on the effect size under study. For example, the sample size needed for a survey about the quality of education will not be the same as for one about people doing outdoor sports in a specific area.

  • Digital age example: Imagine you ask 30 people a question and 29 answer “yes”, roughly 97% of the total. Now imagine you ask the same question to 1,000 people and 970 of them answer “yes”, again roughly 97%. While these percentages look identical, they do not mean the same thing: a sample of 30 people is not large enough to establish a trustworthy conclusion.
  • Remedy: Researchers say that in order to determine the correct sample size for truthful and meaningful results, it is necessary to define a margin of error representing the maximum amount they are willing to let the results deviate from the statistical mean. Paired with this, they need to define a confidence level, typically between 90% and 99%. With these two values in hand, researchers can calculate an accurate sample size for their studies (a short sketch of this calculation follows below).
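
As a rough sketch of that calculation, the snippet below uses Cochran’s formula for a proportion, a standard textbook approach rather than anything prescribed by this article, and also shows why the 30-person sample from the example above carries a much wider margin of error than the 1,000-person one:

```python
# Sketch of the standard sample-size calculation for a proportion:
# Cochran's formula n = z^2 * p * (1 - p) / e^2, where e is the margin
# of error and z is the z-score of the chosen confidence level.
import math

Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def required_sample_size(margin_of_error, confidence=0.95, p=0.5):
    # p = 0.5 is the conservative default: it maximizes the required n.
    z = Z_SCORES[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

def implied_margin_of_error(n, confidence=0.95, p=0.5):
    # The margin of error implicitly accepted with a sample of size n.
    z = Z_SCORES[confidence]
    return z * math.sqrt(p * (1 - p) / n)

print(required_sample_size(0.05))              # ±5% at 95% confidence -> 385
print(f"{implied_margin_of_error(30):.1%}")    # n = 30    -> about ±17.9%
print(f"{implied_margin_of_error(1000):.1%}")  # n = 1,000 -> about ±3.1%
```

The same 97% “yes” rate is therefore a much softer claim at n = 30 (plus or minus nearly 18 points) than at n = 1,000 (about plus or minus 3 points).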

6) Reliability, subjectivity, and generalizability: When performing qualitative analysis, researchers must consider practical and theoretical limitations when interpreting the data. In some cases, qualitative research can be considered unreliable because of uncontrolled factors that might or might not affect the results. This is paired with the fact that the researcher has a primary role in the interpretation process, meaning he or she decides what is relevant and what is not, and as we know, interpretations can be very subjective.

Generalizability is also an issue that researchers face when dealing with qualitative analysis. As mentioned in the point about small sample size, it is difficult to draw conclusions that are 100% representative because the results might be biased or unrepresentative of a wider population. 

While these factors are mostly present in qualitative research, they can also affect quantitative analysis. For example, when choosing which KPIs to portray and how to portray them, analysts can also be biased and represent them in a way that benefits their analysis.

  • Digital age example: Biased questions in a survey are a great example of reliability and subjectivity issues. Imagine you send a survey to your clients to gauge their satisfaction with your customer service, using this question: “how amazing was your experience with our customer service team?”. The question clearly influences the response by planting the word “amazing” in it.
  • Remedy: A solution to avoid these issues is to keep your research honest and neutral. Keep the wording of the questions as objective as possible, for example: “on a scale of 1-10, how satisfied were you with our customer service team?”. This does not lead the respondent toward any specific answer, meaning the results of your survey will be more reliable.

Data Interpretation Techniques and Methods

Data analysis and interpretation are critical to developing sound conclusions and making better-informed decisions. As we have seen throughout this article, there is an art and a science to the interpretation of data. To help you with this, here we list a few relevant data interpretation techniques, methods, and tricks you can implement for a successful data management process.

As mentioned at the beginning of this post, the first step to interpreting data successfully is to identify the type of analysis you will perform and apply the methods accordingly. Clearly differentiate between qualitative analysis (observing, documenting, and interviewing; collecting and reflecting on non-numerical information) and quantitative analysis (research driven by large amounts of numerical data analyzed through various statistical methods).

1) Ask the right data interpretation questions

The first data interpretation technique is to define a clear baseline for your work. This can be done by answering some critical questions that will serve as a useful guideline to start. Some of them include: what are the goals and objectives of my analysis? What type of data interpretation method will I use? Who will use this data in the future? And most importantly, what general question am I trying to answer?

Once all this information has been defined, you will be ready to collect your data. As mentioned at the beginning of the post, your methods for data collection will vary depending on what type of analysis you use (qualitative or quantitative). With all the needed information in hand, you are ready to start the interpretation process, but first, you need to visualize your data. 

2) Use the right data visualization type 

Data visualizations such as business graphs, charts, and tables are fundamental to successfully interpreting data. This is because the visualization of data via interactive charts and graphs makes the information more understandable and accessible. As you might be aware, there are different types of visualizations, but not all of them are suitable for every analysis purpose. Using the wrong graph can lead to misinterpretation of your data, so it’s very important to carefully pick the right visual. Let’s look at some use cases of common data visualizations; a short sketch after the list puts them side by side.

  • Bar chart: One of the most used chart types, the bar chart uses rectangular bars to show the relationship between two or more variables. There are different types of bar charts for different interpretations, including the horizontal bar chart, column bar chart, and stacked bar chart.
  • Line chart: Most commonly used to show trends, accelerations or decelerations, and volatility, the line chart shows how data changes over a period of time, for example, sales over a year. A few tips to keep this chart ready for interpretation: don’t plot so many variables that they overcrowd the graph, and keep your axis scale close to the highest data point to avoid making the information hard to read.
  • Pie chart: Although it doesn’t do a lot in terms of analysis due to its simple nature, the pie chart is widely used to show the proportional composition of a variable. Visually speaking, showing a percentage in a bar chart is far less intuitive than showing it in a pie chart. However, this also depends on the number of variables you are comparing: if your pie chart would need to be divided into ten portions, it is better to use a bar chart instead.
  • Tables: While they are not a specific type of chart, tables are widely used when interpreting data. Tables are especially useful when you want to portray data in its raw format. They give you the freedom to easily look up or compare individual values while also displaying grand totals.
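
The hypothetical matplotlib sketch below (all figures invented) puts these four use cases side by side: bars for comparing categories, a line for a trend over time, a pie for a simple composition, and a table for raw values.

```python
# Minimal sketch matching the use cases above: bar for comparison,
# line for a trend, pie for composition, table for raw values.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [120, 135, 128, 150, 162, 170]   # a trend -> line chart
channels = ["Email", "Social", "Organic"]
leads = [340, 210, 450]                  # a comparison -> bar chart
share = [45, 30, 25]                     # a composition -> pie chart

fig, axes = plt.subplots(2, 2, figsize=(10, 7))

axes[0, 0].bar(channels, leads)
axes[0, 0].set_title("Bar: leads by channel")

axes[0, 1].plot(months, sales, marker="o")
axes[0, 1].set_title("Line: sales over the year")

axes[1, 0].pie(share, labels=channels, autopct="%1.0f%%")
axes[1, 0].set_title("Pie: share of traffic")

axes[1, 1].axis("off")
axes[1, 1].table(cellText=[[m, str(s)] for m, s in zip(months, sales)],
                 colLabels=["Month", "Sales"], loc="center")
axes[1, 1].set_title("Table: raw values")

plt.tight_layout()
plt.show()
```

Swapping any of these panels, say, plotting the six-month sales trend as a pie chart, would make the same data much harder to interpret, which is the whole point of picking the right visual.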

With the use of data visualizations becoming more and more critical for businesses’ analytical success, many tools have emerged to help users visualize their data in a cohesive and interactive way. One of the most popular is the BI dashboard. These visual tools provide a centralized view of various graphs and charts that paint a bigger picture about a topic. We will discuss the power of dashboards for efficient data interpretation further in the next portion of this post. If you want to learn more about different types of data visualizations, take a look at our complete guide on the topic.

3) Keep your interpretation objective

As mentioned above, keeping your interpretation objective is a fundamental part of the process. Because you are the person closest to the investigation, it is easy to become subjective when looking for answers in the data. A good way to stay objective is to show the information to other people related to the study, for example, research partners or even the people who will use your findings once they are done. This can help you avoid confirmation bias and any reliability issues with your interpretation.

4) Mark your findings and draw conclusions

Findings are the observations you extract from your data. They are the facts that will help you drive deeper conclusions about your research. For example, findings can be trends and patterns you uncovered during your interpretation process. To put your findings into perspective, you can compare them with other resources that used similar methods, using them as benchmarks.

Reflect on your own thinking and reasoning, and be aware of the many pitfalls data analysis and interpretation carry: correlation mistaken for causation, subjective bias, false or inaccurate information, and so on. Once you are comfortable with your interpretation of the data, you will be ready to develop conclusions, see whether your initial questions were answered, and suggest recommendations based on them.

Interpretation of Data: The Use of Dashboards Bridging The Gap

As we have seen, quantitative and qualitative methods are distinct types of data analyses. Both offer a varying degree of return on investment (ROI) regarding data investigation, testing, and decision-making. Because of their differences, it is important to understand how dashboards can be implemented to bridge the quantitative and qualitative information gap. How are digital data dashboard solutions playing a key role in merging the data disconnect? Here are a few of the ways:

1) Connecting and blending data. With today’s pace of innovation, it is no longer feasible (nor desirable) to have bulk data centrally located. As businesses continue to globalize and borders continue to dissolve, it will become increasingly important for businesses to possess the capability to run diverse data analyses absent the limitations of location. Data dashboards decentralize data without compromising on the necessary speed of thought while blending both quantitative and qualitative data. Whether you want to measure customer trends or organizational performance, you now have the capability to do both without having to choose between the two.

2) Mobile data. Related to the notion of “connected and blended data” is that of mobile data. In today’s digital world, employees are spending less time at their desks while producing more. This is made possible by the fact that mobile solutions for analytical tools are no longer standalone. Today, mobile analysis applications seamlessly integrate with everyday business tools. In turn, both quantitative and qualitative data are now available on demand where they’re needed, when they’re needed, and how they’re needed via interactive online dashboards.

3) Visualization. Data dashboards are merging the data gap between qualitative and quantitative methods of interpretation of data, through the science of visualization. Dashboard solutions come “out of the box” well-equipped to create easy-to-understand data demonstrations. Modern online data visualization tools provide a variety of color and filter patterns, encourage user interaction, and are engineered to help enhance future trend predictability. All of these visual characteristics make for an easy transition among data methods – you only need to find the right types of data visualization to tell your data story the best way possible.

To Conclude…

As we reach the end of this post about data interpretation and analysis, we hope you now have a clear understanding of the topic. We've covered the definition of data interpretation and shared examples, techniques, and methods to perform a successful interpretation process.

The importance of data interpretation is undeniable. Dashboards not only bridge the information gap between traditional data interpretation methods and technology, but they can help remedy and prevent the major pitfalls of interpretation. As a digital age solution, they combine the best of the past and the present to allow for informed decision-making with maximum data interpretation ROI.

Author: Bernardita Calzon

Source: Datapine