
Why the Ability to Scale is Transformative for Data Teams

February 11, 2022

In my previous blog, I talked about quick wins. One of the criteria for a good quick win is that it can be built on or repeated at a later date. That later date has now arrived.

Your data team’s ability to replicate the value achieved in one win with many similar wins across the business offers a potentially huge return on investment. But be careful: taking on a myriad of unrelated projects is the fastest way to become overwhelmed.

It’s counter-intuitive that starting small helps you achieve scale quicker, but it does: limiting your early scope means you learn quickly and show tangible results. If what you build works, you can turn that success into widespread board support and funding for a more ambitious project based on the same principles.

If you’ve not experienced that yet, trust me: it feels good. Very good.

Tony Robbins, an influential author, motivational speaker and philanthropist, says: “In business, the definition of ‘scale’ is to increase revenue at a faster rate than costs. Businesses achieve this in a number of ways, from adopting new technologies to finding ‘gaps’ in their operations that can be streamlined. Businesses that are able to add revenue and increase operational demands while maintaining the same costs – or even lowering costs – will be able to scale successfully.” (Robbins Research International, 2021)

This is the thinking that underpins scaling. So you’re ready to scale? There are, effectively, two ways to do this.

Scale Up: Replicate a Defined Project

Some projects make this the natural choice. At an energy company I worked for, which ran petrol stations and oil supplies in 23 African countries, our goal was to use data to make the retail offer more efficient. Although Mali is not the same as Burkina Faso, the principle is the same: we planned to capture customer data and use it to find what else a customer might want in their basket, based on what most people buy or on seasonal demand. That’s scale: you do one thing and then do it across all 23 countries.
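To make that concrete, here is a minimal sketch of the kind of basket-affinity count such a project might start from. The item names, the data shape and the support threshold are illustrative assumptions, not details of the actual project; production work would run proper association-rule mining over millions of transactions.

```python
from collections import Counter
from itertools import combinations

# Illustrative baskets: each is the set of items one customer bought together.
baskets = [
    {"fuel", "coffee", "snack"},
    {"fuel", "coffee"},
    {"fuel", "car_wash"},
    {"fuel", "coffee", "snack"},
]

# Count how often each pair of items is bought together across all baskets.
pair_counts = Counter(
    pair
    for basket in baskets
    for pair in combinations(sorted(basket), 2)
)

# Keep pairs seen in at least half of the baskets (an arbitrary support threshold).
min_support = len(baskets) / 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}

for (a, b), n in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: together in {n} of {len(baskets)} baskets")
```

Because the logic depends only on the shape of the transaction data, the same script could in principle run against each country’s baskets unchanged – which is exactly what makes this kind of project a candidate for scaling up.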

The success of this relies not only on the ability to create quality, trusted data but also on the management task of getting all parts of the business to follow the same processes and apply the same standards.

The risk of failure lies in the absence of those qualities. Variations will occur between the different parts of the business where the project is implemented: data acquisition may differ, as may management capability, and this can create governance challenges. The laws governing the acquisition, sharing and use of data may also vary from country to country, as may local habits and norms.

Scale Out: Find an Adjacent Activity 

This may be more challenging. The data capability you have built grants an opportunity to innovate across the business. A Single Customer View (SCV) is useful for deepening existing customer relationships – encouraging customers to buy the same things more often – but also for developing new products and services for them to buy. The potential is that, in data terms, you can use the same SCV in an entirely new way.
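For readers unfamiliar with the term, here is a minimal sketch of what an SCV boils down to, assuming two hypothetical source tables keyed by email. A real SCV draws on many more systems and needs identity matching, deduplication and governance far beyond this.

```python
import pandas as pd

# Hypothetical source extracts; real builds pull from many more systems.
crm = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "name": ["Ana", "Ben"],
})
transactions = pd.DataFrame({
    "email": ["a@example.com", "a@example.com", "b@example.com"],
    "amount": [12.0, 8.5, 30.0],
})

# Summarize behaviour per customer, then attach profile attributes:
# one row per customer is the essence of a Single Customer View.
spend = (
    transactions.groupby("email")["amount"]
    .agg(total_spend="sum", purchases="count")
    .reset_index()
)
scv = crm.merge(spend, on="email", how="left")
print(scv)
```

The result is one row per customer combining profile and behaviour – the single asset that different parts of the business can then reuse.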

There are two caveats. The first is that this may require new, or more, data – for example, external data that tells us more about the customer’s habits when they are not transacting with our business (a sketch follows below). The second is that it may require new ways of working: cooperation between two business units that have until now worked separately, or discipline in executing a marketing plan based on knowledge rather than activity. For this, you rely on the willingness of the managers in those parts of the business to evangelize data and change ingrained habits. So it’s not entirely straightforward, but it is a huge step towards establishing a data-driven culture, which is our ultimate goal.
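The first caveat can be pictured by extending the toy SCV from above. The external feed and its commutes_by_car attribute are purely hypothetical, standing in for whatever third-party data an adjacent team might bring.

```python
import pandas as pd

# Continuing the illustration above: a minimal SCV plus a hypothetical
# external feed describing customer habits outside our own channels.
scv = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "total_spend": [20.5, 30.0],
})
external = pd.DataFrame({
    "email": ["a@example.com"],
    "commutes_by_car": [True],  # assumed third-party attribute, purely illustrative
})

# A left join keeps every customer and adds external attributes where known;
# the shared key is what lets an adjacent team reuse the same SCV for a new purpose.
enriched = scv.merge(external, on="email", how="left")
print(enriched)
```

The design point is the shared key: as long as new data can be joined to the same customer identifier, the SCV built for one purpose serves the next one too.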

What to Do Once You Scale Successfully

Now that you’ve led a successful data transformation project, you will have no shortage of people who want to support you, because success attracts support. That presents a problem: you have limited time in your day and too many things to do already. Scaling up, then, isn’t about suddenly doing everything, everywhere, all the time, for everyone who sends you an email. Scaling is about putting in place ways to achieve the maximum return on the data team’s investment.

This is an important aspect of scaling up. From the outside, everyone can see the similarities between what you have just done and what they want for their bit of the business. Look closer, though, and they often overestimate those similarities. If something is only a bit like what you’ve done before – if only a few aspects of one project carry over into the next – it’s not scaling; it’s an entirely new piece of work.

If you’re going through this, you probably have requests to replicate projects on your desk already. Evaluate each one for the need it addresses, its prospect of success, and the constraints on delivery you will encounter. You have to learn to say no, and to have the confidence and backing to do so. Achieve this and it will be a major breakthrough for your data team.

Summary

  • Once you have created success in one part of the business, the ability to replicate it in other areas or processes offers a potentially enormous return on investment.
  • The knowledge that the data team has of these processes is an important factor in applying them quickly and effectively.
  • You can crowdsource ideas for scaling using techniques like hackathons, bringing together your whole data community.
  • These also help to establish the importance of the data transformation, because scaling will make your work more visible.
  • Implementation requires the cooperation of the business, and especially of allies inside it if it is to happen at speed.