The intersection of mathematics and common sense presents both opportunities and challenges. As industries grapple with the exponential growth of data, reliance on sophisticated analytical tools has surged, heralding a new era of specialized roles such as Data Scientist and Analytics Manager. Yet, amidst this technological prowess, a critical question emerges: are organizations inadvertently sacrificing business acumen at the altar of statistical sophistication?
It’s hard enough to find a needle in a haystack; it’s rendered impossible if you don’t know you’re looking for a needle to begin with.
The need to analyze ever larger and more complex datasets over the last 5–10 years has led to the development of exceptionally powerful and sophisticated analytical tools and specialization in their usage. Roles such as Data Scientist, Machine Learning Expert, and Analytics Manager as well as Business Intelligence and Analytics departments have continually become more commonplace (and important) within large companies, especially within tech.
However, there may be a significant risk to this trend. As organizations strive for more complexity in their analytics, they have come to rely increasingly on data specialists. The result is that they unwittingly trade business fundamentals for statistical sophistication instead of building sophistication on top of solid business fundamentals.
While advanced analytical tools are useful – even necessary – for uncovering essential insights in large datasets, the cornerstone of any business analysis or discovery should always rest on a deep understanding of the business and industry, combined with problem-solving skills and creativity. To draw a parallel, it's like driving a car: speed is irrelevant if you lack a clear destination.
I’ve worked for five years in the tech space, specifically online food delivery. While the organizations I belonged to were indeed data-driven and had great data to work with in terms of quantity, quality, and variety, they were not immune to this issue.
One of the main reasons is the misalignment between how business analysis typically works and the way large companies’ org charts are structured: with business strategy & operations and analytics as separate departments. Business analysis requires subject-matter expertise and the agility to explore many possibilities in relatively short periods of time; data science profiles are usually trained to solve very specific tasks over longer periods of time (e.g., fraud prevention). Moreover, operations and strategy teams often don’t “speak the same language” as data scientists, leading to analyses that take months to complete and are often left unused or turn out to be misleading (unintentionally, of course).
Let’s take, for example, user acquisition:
• For the longest time, the goal was to maximize user acquisition while minimizing cost, and complex mathematics were employed to better optimize this process (Fig. 1).
• However, acquiring users should be a proxy for “acquiring future income”: you need to acquire users who will, on average, consistently transact profitably on your platform (Fig. 2.1).
Let's assume that Figures 1 and 2.1 illustrate the acquisition behavior for a specific month in the past for a tech company, such as all users acquired in January of the previous year. Both figures display a normal, healthy pattern: in terms of efficiency, diminishing returns as the cost of acquisition increases; in terms of financials, a curve that starts negative due to acquisition costs but eventually shows a positive long-term ROI.
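The cohort-level pattern just described can be sketched numerically. The following is a minimal illustration (all figures are invented for the example, not real company data): a cohort's cumulative ROI starts at -100% of the acquisition cost and climbs as monthly contribution margin accrues.

```python
def cumulative_roi(acquisition_cost, monthly_margins):
    """Cumulative ROI per month for an acquisition cohort:
    starts negative (the acquisition cost) and recovers as
    the per-user contribution margin accrues month by month."""
    roi = []
    cumulative = -acquisition_cost
    for margin in monthly_margins:
        cumulative += margin
        roi.append(cumulative / acquisition_cost)
    return roi

# Illustrative numbers: $10 acquisition cost, slowly decaying monthly margin
margins = [2.0, 1.8, 1.6, 1.5, 1.4, 1.3, 1.2, 1.2, 1.1, 1.1, 1.0, 1.0]
curve = cumulative_roi(10.0, margins)
breakeven = next(i for i, r in enumerate(curve) if r >= 0)
print(f"break-even month: {breakeven}, 12-month ROI: {curve[-1]:.0%}")
```

With these toy inputs the cohort crosses break-even in month 6 and ends the year with a positive ROI, mirroring the healthy aggregate curve of Fig. 2.1.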
However, again, business knowledge aided by advanced analytical tools can help us see another story; we can further divide the acquisition cohort to understand if, for example, a relationship exists between a new user’s long-term ROI and the way the user was acquired:
One way to do this (and I will get a bit technical now) is a user-level profitability analysis. For the sake of simplicity, the analysis only determined whether a user was profitable one year after acquisition (i.e., a binary “profitable” or “not profitable” outcome, with profitable defined as the sum of all the user’s transactions’ profit or loss being > $0). To understand how the user was acquired, characteristics of the user’s first order were used. For example: acquisition channel, discount amount, ticket amount, restaurant, estimated delivery time, actual delivery time, etc., for every user’s first order on the platform.
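As a rough sketch of this kind of analysis (the function name and toy data are my own assumptions, not the original study's), one can label each user as profitable or not from their one-year transaction P&L, then compare profitability rates between users acquired with and without a first-order discount:

```python
from collections import defaultdict

def profitability_by_first_order(transactions, first_orders):
    """Label each user 'profitable' if their 1-year transaction P&L
    sums to > $0, then compare profitability rates between users
    acquired with vs. without a first-order discount."""
    pnl = defaultdict(float)
    for user_id, profit in transactions:  # each item: (user_id, profit_or_loss)
        pnl[user_id] += profit
    rates = {}
    for discounted in (True, False):
        users = [u for u, d in first_orders.items() if d is discounted]
        profitable = sum(1 for u in users if pnl[u] > 0)
        rates[discounted] = profitable / len(users) if users else 0.0
    return rates

# Toy data: first_orders maps user -> acquired-with-discount flag
first_orders = {"u1": True, "u2": True, "u3": False, "u4": False}
transactions = [("u1", -3.0), ("u1", 1.0), ("u2", -2.5),
                ("u3", 4.0), ("u3", 2.0), ("u4", 1.5)]
print(profitability_by_first_order(transactions, first_orders))
```

In a real setting the binary label would be modeled against all the first-order features listed above (channel, ticket amount, delivery times, etc.), but even this two-way split captures the shape of the finding.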
The results were very conclusive: the probability of a user being profitable is extremely low if the user was acquired with a discount, and extremely high if the user was acquired without one (Fig. 2.2); the segmented Acquisition ROI looks quite different from the aggregated one (Fig. 2.3).
Despite the apparent clarity of this analysis and its results in hindsight, a significant number –if not the majority– of tech organizations persist in directing their acquisition analytics efforts towards maximizing volume, often overlooking the potential future profitability derived from such volume.
Whether the root cause lies in Data Science departments lacking domain expertise or agility, or Operations and Strategy teams inadequately leveraging sophisticated tools, the reality remains unchanged: countless millions of dollars have been and will likely continue to be invested by tech companies in endeavors that yield negligible returns. This could have been easily prevented with organizational structures that foster agile utilization of data science. Optimal efficiency is achieved when Operations and Strategy teams possess moderate data science training to lead business-related analyses, while a separate analytics team handles more complex tasks requiring advanced statistical knowledge.