Decision Making Based On Data or Serendipity: Why You Should Analyze Data

  • By Zakhar Yung
  • 26-10-2020
  • Data Science

Most products are in a constant state of evolution. They gain new UI elements, features, functionality, and so on. But how do development teams decide which additions their product deserves and which should be discarded? Some collect tons of data: feedback, reviews, user inquiries, keyword analysis, and more. They analyze this data to arrive at a verdict.

Others also use data (you’ll hardly find anyone who doesn’t) but do not rely on it entirely. Their decision-making process includes serendipity as an essential factor. Serendipity is an unplanned yet fortunate discovery, which, for some dev teams, is more valuable than data analysis.

Which approach should you take? Let’s find out.

Serendipity is not when magic happens
Behind serendipity (you may also call it intuition or gut feeling) lies a certain background that pushes a specific idea to the top. Nothing comes out of nowhere. Nevertheless, your sixth sense may serve as a fine filter that narrows many possible conclusions down to one. But this filter is trustworthy only when you use it right.

When you can trust serendipity
The reliability of intuition rests on three pillars: expertise, problem structure, and time.

Expertise
Expertise is prior knowledge of the area in question. It is the fundamental condition that defines whether or not you should trust serendipity. If you’ve never worked with UI design, your gut-feeling decision will fail in 90% of cases. The other 10% is pure luck, and that’s beyond your control.

On the other hand, a substantial amount of topic-specific expertise (10 years, according to Rational Intuition by Cambridge University Press) boosts the accuracy of your serendipity-driven conclusions.

Problem structure
Serendipity works better with unstructured problems. This means the problem you’re trying to solve should have few or no intrinsic criteria that lead to a decision. The clearer the decision rules, the fewer the chances that intuition will prevail.

You can trust serendipity if you don’t have abundant data for analysis and there is space for intuitive reasoning.

Time
A lack of time is a favorable condition for serendipity: when you need to make a decision fast, there is no chance to carry out a sophisticated data analysis. Quite often, intuition-based decisions are successful. However, this doesn’t mean that you should postpone your decision making until the night before a deadline.

So, the formula for trustworthy serendipity is the following:
(10+ years of expertise) + (no clear decision rules) + (time constraint)

How to develop serendipity
Serendipity is a pre-conscious analysis of the historical data stored in your brain, triggered by a current environmental cue. It happens so fast that you don’t perceive it as an analytical activity. However, it is still the result of your brain’s work, which means you can train your gut feeling to make it trustworthy.

The key here is to synchronize your hemispheres. The left hemisphere is considered the analytical center, where rational thinking prevails. The right hemisphere is the shelter of your intuition and holistic thinking. When you integrate both, you can achieve great results in decision making. Meditation is one of the best-known ways to do this. If that sounds ridiculous, check out the experience of the dozens of CEOs who meditate at work, including Padmasree Warrior (Fable Group Inc.) and Marc Benioff (Salesforce).

Any examples of successful intuition-based decisions?
Well, if you Google this, you’ll likely find stories about how Henry Ford doubled his employees' wages or how Boeing’s Bill Allen bet $16 million on a commercial jet. Most likely, there are many more successful cases, but few of them are known. Why? Many people prefer not to acknowledge that some key decisions were made on gut feeling rather than robust analysis.

The prevailing view is that serendipity is just a smile of fortune, and strategy cannot (or must not) rest on it.

Data-driven decision-making is essential
Data-driven decision-making (DDDM) means that a conclusion is based on data analysis. Serendipity or gut feeling is not considered: only pure facts and decision criteria. Let’s say you’re going to update the UI of your product and need to choose a color scheme. A sound DDDM process would include the following data analysis steps:

- Collecting user feedback
- Exploring competitor color schemes
- Preparing drafts
- Selecting the best drafts within the team
- Introducing drafts for existing users to get feedback

Multiple data sets are analyzed and pave the way to a well-grounded conclusion. The more data you take into account, the higher the accuracy of your final decision.

Data-driven decision-making always works...right?
Keep in mind that DDDM is not 100% successful! For example, Toggl recently introduced a bold new color scheme as part of its rebranding. The classic tomato red was replaced with pink and purple...and it is horrible. Users complained here and there, which pushed the Toggl team to reconsider their initial decision. We’ll see how they handle this, but they have acknowledged that “It should have been caught before the launch.”

This likely happened because the Toggl team failed to analyze some data set that could have revealed the pitfall at an early stage of development. So, to increase the success rate of your decisions, you need to collect and analyze data in the right way. Below, you’ll find some best practices for this.

How to analyze data for decision making in the right way
Step 1: Define the problem
Any decision is a solution to a specific problem, and the problem must be clearly defined. To do this, ask questions that help you understand the essence of the problem and pave the way to a future solution.

For example, your product is experiencing a huge churn of users. The churn is not the problem itself, but a consequence of a problem. The following questions will help you figure out the cause:

- When did the churn begin?
- How big is the churn compared to other months, weeks, etc.?
- Which categories of users (by subscription plan) comprise most of the churn?
- What changes were implemented before the churn began?

And so on.

These questions are meant to narrow the search for possible churn triggers. This is what defines your problem.
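For instance, the question “How big is the churn compared to other months?” boils down to a simple churn-rate calculation. Here is a minimal sketch in Python, using made-up numbers:

```python
# Churn rate = users lost during a period / users at the start of it
users_at_month_start = 10_000   # hypothetical active users on day 1
users_churned = 450             # hypothetical cancellations that month

churn_rate = users_churned / users_at_month_start
print(f"Monthly churn rate: {churn_rate:.1%}")  # -> Monthly churn rate: 4.5%
```

Repeat the calculation per month, and you can answer the comparison question at a glance.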

Step 2: Identify and collect data
Once you have a set of questions that define your problem, you need to identify and collect the data that can answer them. Each question refers to a specific data set. For example, the answer to "What changes were implemented before the churn?" can be found in a project management app like Jira; the data to answer "How big is the churn compared to other months?" lives in your database, and so on.

So, you or your data analyst will need to pull data from Jira; if you use BigQuery as a database, you can easily import query results from BigQuery and other sources into a spreadsheet for analysis. Google Sheets and Excel are the most popular spreadsheet apps for this, and it’s up to you to choose between them.
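If you prefer to script the export instead, the official BigQuery client for Python can pull a query result straight into a DataFrame. A minimal sketch, assuming a hypothetical analytics.subscriptions table and Google Cloud credentials that are already configured:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumes credentials are set via GOOGLE_APPLICATION_CREDENTIALS
client = bigquery.Client()

query = """
    SELECT user_id, subscription_plan, churned_at
    FROM `my-project.analytics.subscriptions`  -- hypothetical table name
    WHERE churned_at IS NOT NULL
"""

# to_dataframe() requires the pandas extras (pip install pandas pyarrow)
df = client.query(query).to_dataframe()
print(df.head())
```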

Step 3: Manipulate your data
This is when the data analysis begins. You need to manipulate your data to find correlations or dependencies, plot the data out, filter by specific criteria, and so on. Your data analysis should be tailored to find the best answer to the questions set in Step 1.

For example, you may need to visualize your data to compare churn rates across periods. To identify the user categories with the biggest churn, filter users by subscription plan, as in the sketch below.
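Here is what that might look like in Python with pandas; the sample data and column names are invented for illustration:

```python
import pandas as pd

# A hypothetical export of churn events: one row per churned user
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "subscription_plan": ["free", "free", "paid", "free", "paid", "free"],
    "churned_at": pd.to_datetime([
        "2020-08-15", "2020-09-02", "2020-09-10",
        "2020-09-21", "2020-10-01", "2020-10-05",
    ]),
})

# Count churned users per month, split by subscription plan
monthly_churn = (
    df.groupby([df["churned_at"].dt.to_period("M"), "subscription_plan"])
      .size()
      .unstack(fill_value=0)
)
print(monthly_churn)
```

The resulting table shows, month by month, how churn splits between plans, which feeds directly into the next step.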

Step 4: Create a hypothesis
During this step, you’ll need to interpret the results of the data analysis and create a hypothesis. You won’t arrive at the actual decision yet, but you will come to a reasonable conclusion about what causes the problem.

For example, in our case with the increased churn rate, we discovered that after the last product update the churn affected free users most. With this information, we can form a hypothesis: the recent implementations somehow constrained the flow of free users, while paid users were unaffected.
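A quick numeric comparison makes the hypothesis concrete. The rates below are invented for illustration; yours would come from the analysis in Step 3:

```python
# Hypothetical monthly churn rates before and after the product update
churn_rate = {
    "free": {"before": 0.024, "after": 0.062},
    "paid": {"before": 0.018, "after": 0.019},
}

for plan, rates in churn_rate.items():
    change = rates["after"] - rates["before"]
    print(f"{plan}: {rates['before']:.1%} -> {rates['after']:.1%} ({change:+.1%})")

# Free-user churn jumped while paid churn stayed flat, which supports
# the hypothesis that the update hurt the free-user experience
```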

Step 5: Decision
A hypothesis is not a decision but an explanation that rests on the findings of the data analysis. It may be true or false, whereas the decision is an attempt to either prove or disprove the hypothesis. For example, in our case, the product manager can roll back the update to see whether this stops the churn growth. That’s the simplest approach. However, if the hypothesis is false, the rollback won’t bear any fruit and the churn will keep growing.

Here, the better approach is to dive deeper into the problem and answer the question raised by the data analysis: how exactly did the recent implementations constrain the flow of free users?

This can prove or disprove the initial hypothesis without having to validate it in production.
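One way to back such a verdict with numbers is a statistical check on the before/after churn counts. A minimal sketch using statsmodels, with made-up figures:

```python
from statsmodels.stats.proportion import proportions_ztest  # pip install statsmodels

# Hypothetical counts: churned free users before vs. after the update
churned = [120, 310]       # churned free users in each period
at_risk = [5000, 5000]     # free users at the start of each period

stat, p_value = proportions_ztest(count=churned, nobs=at_risk)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# A tiny p-value means the jump in free-user churn after the update
# is very unlikely to be a coincidence, supporting the hypothesis
```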

An optimum balance between serendipity and data-driven decision making
Data-driven decision making is a sustainable approach, but you can't dedicate a long and detailed analysis to every single decision. So, should you make smaller decisions guided by intuition? Not quite. 

As we defined above, trustworthy serendipity requires robust expertise that is saturated with the experience of data-driven decisions. However, even with 10+ years of expertise, you should not ignore the data in favor of your intuition.

Many successful companies, such as Amazon and Netflix, have incorporated data into their decision making. They track various data points and react to any change in a timely manner. In this way, they develop their serendipity and polish the accuracy of their decision making.

The key takeaway is that you should always follow the data. It will make your intuition more data-driven and boost the accuracy of your decisions.


Author

Zakhar Yung

Zakhar Yung is a technical content writer at Coupler.io, a product for importing data into Google Sheets from different data sources. Before joining the IT industry, Zakhar gained experience in industrial facility trade and nuclear engineering (he participated in the construction of the Baltic NPP and Akkuyu NPP).