How to evolve your DTC startup’s data strategy and identify critical metrics

Plus: 2 common data mistakes to avoid

Image Credits: deepblue4you / Getty Images

Michael Perez

Contributor

Michael Perez is director of growth and data at M13.

Direct-to-consumer companies generate a wealth of raw transactional data that needs to be refined into metrics and dimensions that founders and operators can interpret on a dashboard.

If you’re the founder of an e-commerce startup, there’s a pretty good chance you’re using a platform like Shopify, BigCommerce or WooCommerce, and one of the dozens of analytics extensions like RetentionX, Sensai metrics or ProfitWell that provide off-the-shelf reporting.

At a high level, these tools are excellent for helping you understand what’s happening in your business. But in our experience, you’ll inevitably find yourself asking questions that off-the-shelf extensions simply can’t answer.

Here are a couple of common problems that you or your data team may encounter with off-the-shelf dashboards:

  • Charts are typically based on a few standard dimensions and don’t provide enough flexibility to examine a given segment from different angles and fully understand it.
  • Dashboards have calculation errors that are impossible to fix. It’s not uncommon for such dashboards to report the pre-discounted retail amount for orders in which a customer used a promo code at checkout (see the sketch after this list). In the worst cases, this can lead founders to drastically overestimate their customer lifetime value (LTV) and overspend on marketing campaigns.
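
As a rough illustration of that second problem, here’s what discount-aware revenue might look like in SQL. This is a minimal sketch assuming a hypothetical order_lines table with retail_amount and discount_amount columns; the names are illustrative, not any particular platform’s schema:

```sql
-- Hypothetical schema: order_lines(order_id, retail_amount, discount_amount).
-- Summing retail_amount alone reproduces the dashboard bug: it overstates
-- revenue for any order where a promo code was applied at checkout.
SELECT
    order_id,
    SUM(retail_amount)                   AS pre_discount_revenue, -- the buggy number
    SUM(retail_amount - discount_amount) AS gross_revenue         -- what the customer actually paid
FROM order_lines
GROUP BY order_id;
```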

Even when founders are fully aware of the shortcomings of their data, they can find it difficult to take decisive action with confidence.

We’re generally big fans of plug-and-play business intelligence tools, but they won’t scale with your business. Don’t rely on them after you’ve outgrown them.

Evolving your startup’s data strategy

Building a data stack costs much less than it did a decade ago. As a result, many businesses are building one and harnessing the compounding value of the resulting insights earlier in their journey.

But it’s no trivial task. For early-stage founders, the opportunity cost of any big project is immense. Many early-stage companies find themselves in an uncomfortable situation — they feel paralyzed by a lack of high-fidelity data. They need better business intelligence (BI) to become data driven, but they don’t have the resources to manage and execute the project.

This leaves founders with a few options:

  • Hire a seasoned data leader.
  • Hire a junior data professional and supplement them with experienced consultants.
  • Hire and manage experienced consultants directly.

All of these options have merits and drawbacks, and any of them can be executed well or poorly. Many companies delay building a data warehouse because of the cost of getting it right — or the fear of messing it up. Both are valid concerns!

Start by identifying your critical metrics

Our retail modeling checklist is a simple but effective resource to help stakeholders agree on the definitions of critical enterprise metrics, such as net revenue and gross margin. The checklist should guide early discussions and discovery into edge cases that materially affect the critical metrics.

Even if you’re starting from scratch without any SQL templates, this checklist can help ensure that your SQL developer is aligned with the stakeholders who will be consuming the data.

Defining enterprise metrics is a critical step that’s often overlooked because the definitions seem obvious. Most of your employees may be familiar with these enterprise metrics at a surface level, but that doesn’t mean they fully understand them. The details matter.

If you asked your employees these questions, would they all give the same answer?

  • Is the price of shipping included in gross revenue?
  • When do gift card sales count toward revenue — at the time of sale or redemption?

In many organizations, employees don’t give consistent answers because:

  • The metrics have never been explicitly defined.
  • There was no concerted effort to educate employees on the definitions.
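
Writing the definition down as code addresses both causes at once: the decision gets made explicitly, and there’s a single artifact to educate employees with. A minimal sketch, assuming hypothetical orders and order_lines tables:

```sql
-- Hypothetical tables: orders(order_id, shipping_amount),
-- order_lines(order_id, product_type, amount).
SELECT
    o.order_id,
    SUM(CASE WHEN l.product_type <> 'gift_card'  -- DECISION: gift card sales count
             THEN l.amount ELSE 0 END)           -- at redemption, not at time of sale
      + MAX(o.shipping_amount)                   -- DECISION: shipping is included
    AS gross_revenue
FROM orders o
JOIN order_lines l ON l.order_id = o.order_id
GROUP BY o.order_id;
```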

Data literacy is often conflated with analytical aptitude, but they aren’t the same thing. Even analytics-savvy employees are likely to have data literacy challenges that they aren’t aware of. In many cases, your data-driven employees will be most affected by data illiteracy, because they’ll be the ones consuming data, generating insights and making decisions, all without realizing what they don’t know.

Why data details matter

These blind spots in data literacy cause errors in interpretation. They’re small at first, so it can be tempting to sweep them under the rug.

For example, a known error might be ignored because it only causes a couple of percentage points of error at an aggregate level. This reasoning overlooks the fact that errors are rarely distributed evenly. Any error is bound to affect some customers, products or geographies more than others. It’s common to have small errors in both directions that can wash out on average but amplify each other at a more granular level.

Most important operational decisions are made on a relative basis — not an aggregate level. When deciding which products to prune, or which marketing strategies to double down on, you’re generally making comparisons between dozens or hundreds of observations, not thousands.

Questions that deal in relative comparisons are subject to greater error:

  • How well is product A selling relative to B?
  • How much higher was LTV for customer segment C versus segment D last month?

Be aware of these subtle risks:

  1. The finer you slice your data, the greater the error grows relative to the signal.
  2. The more comparisons you make, the more likely it is that the biggest differences are being amplified by noise rather than a true signal.

Noise is only part of the problem. There are also cases where the error creates sustained bias. Organizations that gloss over the details of their enterprise metrics risk making egregiously bad decisions without even realizing it.

Imagine an e-commerce company that failed to consider gift card purchases in its definition of gross revenue. If gift card purchases are treated the same as other purchases, they’ll typically be double-counted toward revenue. Off-the-shelf BI tools, and ad platforms such as Facebook and Google Ads, typically count the gift card purchase as revenue. Then, when the gift card is redeemed as a payment option, the same dollars count toward revenue again, resulting in inflated LTVs and unrealistic cost-per-action (CPA) figures.
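
One way to avoid the double count is to recognize gift card value exactly once and treat redemptions as a form of tender. A sketch under assumed, hypothetical tables (order_lines for what was bought, payments for how it was paid), recognizing the value at the time of sale:

```sql
-- Hypothetical tables: order_lines(order_id, amount),
-- payments(order_id, method, amount).
-- Policy: recognize gift card value once, at the time of sale. The card
-- purchase already counted toward revenue, so the amount redeemed later is
-- treated as tender and backed out instead of being counted a second time.
SELECT
    l.order_id,
    l.merch_amount - COALESCE(p.gift_card_paid, 0) AS gross_revenue
FROM (
    SELECT order_id, SUM(amount) AS merch_amount
    FROM order_lines
    GROUP BY order_id
) l
LEFT JOIN (
    SELECT order_id, SUM(amount) AS gift_card_paid
    FROM payments
    WHERE method = 'gift_card'
    GROUP BY order_id
) p ON p.order_id = l.order_id;
```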

Even companies that have correctly anticipated this issue can fall victim to more subtle problems. Many companies don’t recognize revenue for gift card purchases until the gift cards are redeemed. If a marketer uses gross revenue to measure the results of a holiday gifting campaign that yielded a large uptick in gift card purchases, they may write the campaign off as a failure prematurely. The same marketer may have an inflated opinion of the lower-funnel paid marketing campaigns that ran in January, when the gift card recipients spent their balances.

Not all issues are so subtle. Weak data governance and low data literacy can cause avoidable headaches for your organization’s leaders.

You don’t want to end up in a situation where your finance dashboards and your e-commerce dashboards show inconsistent week-over-week revenue growth, and your senior leaders don’t know why. These misunderstandings cause friction in the form of wasted time and loss of trust. They also make it more difficult for employees across teams to collaborate effectively.

If this sounds eerily familiar, you’re not alone. Seasoned data leaders should have strong opinions on the best ways to mitigate these issues, but they’ll need buy-in from founders and other leaders to invest in the organizational overhead required to create a data-driven company.

Two common data mistakes

We’ve seen many companies embark on their first big data project only to skip some critically important steps and immediately begin accruing tech debt. Often, these projects start with an innocuous request like, “replicate this RetentionX dashboard in Looker.”

Many novice engineers or contractors make the mistake of focusing on short-term deliverables at the expense of scalable architecture.

We’ve seen a few versions of this mistake. Generally, the issues begin when:

  1. The metrics or dimensions are created too coarsely.
  2. The metrics are created too far downstream.

What happens when metrics are created at the wrong level of granularity (i.e., grain)

The grain matters because it’s always possible to aggregate a metric to a higher level (i.e., a coarser grain) downstream, but you can’t split it into a lower level (i.e., a finer grain) than the one at which it was originally created.

If you create gross revenue at an order grain, it’s very easy to aggregate it to a customer grain and measure average gross revenue LTV by cohort, but it’ll be impossible to measure what percentage of gross revenue is attributable to any given product. That’s because products exist at the order line grain, which is a finer grain than order.
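
Concretely, here’s a sketch of a gross revenue metric built at the order line grain, using hypothetical table and column names; every coarser grain is then a simple rollup of the same definition:

```sql
-- Hypothetical base model at the finest useful grain: one row per order line.
CREATE VIEW order_line_revenue AS
SELECT
    order_line_id,
    order_id,
    customer_id,
    product_id,
    amount - discount_amount AS gross_revenue
FROM order_lines;

-- Coarser grains are trivial rollups of the one definition:
SELECT customer_id, SUM(gross_revenue) AS ltv_revenue
FROM order_line_revenue GROUP BY customer_id;   -- customer grain

SELECT product_id, SUM(gross_revenue) AS product_revenue
FROM order_line_revenue GROUP BY product_id;    -- product grain
-- A metric created at the order grain could never be split back out by product.
```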

Many companies make this mistake once, then “fix” the issue by copying and pasting their gross revenue calculation in multiple places, repeating it throughout their codebase. This is an anti-pattern that’s guaranteed to cause bugs down the road because metric definitions are never set in stone. They’re constantly being reevaluated and updated based on changes to the business.

Imagine that your company starts taking backorders for products that are out of stock, and you need to update your gross revenue definition with new logic. An engineer will struggle to make this update if their architecture has gross revenue calculations copied and pasted multiple times throughout their codebase and their BI tool. They’ll also find it difficult to test each occurrence for accuracy.
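
With a single upstream definition, the same update is a one-line edit that every downstream rollup inherits. Continuing the hypothetical order_line_revenue sketch from above (in a dbt project, this would be one model that everything downstream references), and assuming an illustrative fulfillment_status flag:

```sql
-- The new rule is added once, in the single source-of-truth model:
CREATE OR REPLACE VIEW order_line_revenue AS
SELECT
    order_line_id,
    order_id,
    customer_id,
    product_id,
    amount - discount_amount AS gross_revenue
FROM order_lines
WHERE fulfillment_status <> 'backordered'; -- hypothetical flag: exclude backorders until they ship

-- Every dashboard and rollup that selects from order_line_revenue picks up
-- the change with no further edits, and there is one place to test.
```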

What happens when an engineer creates metrics too far downstream

BI tools like Looker make it easy to reference your raw data directly and start creating metrics and dimensions using their proprietary web user interfaces (UIs) and languages. But just because you can use LookML, Looker’s proprietary language, to create your enterprise business metrics doesn’t mean you should.

Examples of diagrams of databases and data destinations. Image Credits: M13

When you create critical metrics in Looker and make that your source of truth, it’s hard to get the truth out. Data warehouses support a robust set of integrations, but BI tools don’t. Teams that make this mistake typically create data silos or brittle integrations. Avoid this by creating your enterprise metrics in your data warehouse and sending them to your BI tools and other applications.

In general, you’ll want enterprise metrics to be defined as far upstream as possible so they can be referenced by any software application, vendor or internal use case.
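
In practice, this can be as simple as maintaining a reporting layer in the warehouse that every consumer reads from. A minimal sketch with hypothetical names, building on the order_line_revenue view above:

```sql
-- Enterprise metrics live in the warehouse, not in LookML or another
-- proprietary layer. Looker, reverse-ETL jobs and internal tools all read
-- from the same view, so there is exactly one source of truth.
CREATE SCHEMA IF NOT EXISTS reporting;

CREATE VIEW reporting.enterprise_metrics AS
SELECT
    order_id,
    customer_id,
    product_id,
    gross_revenue
FROM order_line_revenue;
```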

You also don’t want to be stuck in a situation where your enterprise metrics are defined in someone else’s proprietary coding language, leaving you very little leverage when negotiating your next contract.

Takeaways and next steps

If we can leave you with one takeaway, it’s that many of the common issues that lead to tech debt are avoidable with the right resources and practices. Non-technical early-stage founders can’t be expected to foresee every potential issue and should seek advice from experienced practitioners. Advisers can be mentors, employees, former colleagues or even investors.
