
Data Warehouse Development

April 16, 2020

Dr Barry Devlin is among the foremost authorities on business insight and one of the founders of data warehousing, having published the first architectural paper on the topic in 1988. Today he is a leading consultant and speaker on data warehouse development.

Barry has published a number of articles for WhereScape. Below you will find a synopsis and introduction to some of these; each links to the full blog post so you can explore that subject in more detail.

Designing a Data Warehouse

Always keep in mind the basic goal of your project: to deliver a cross-functional, long-lived foundation for data provision and decision support. Data warehouse development projects vary, and requirements will continue to change over time in ways you cannot predict now; your data warehouse must continue to provide accurate data throughout this evolution.

This blog explains how to:

  • Use templates to save time and money rather than building from scratch (a brief sketch of this idea follows below)
  • Define and refine the logical structure of relational tables
  • Choose which data modelling approach is best for you – 3NF, Star Schema, Data Vault, etc.

Read the full blog here.
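
To make the template idea a little more concrete, here is a minimal Python sketch of how a simple star schema might be generated from a declarative definition rather than hand-written DDL. It is illustrative only, not code from the blog or from WhereScape, and all table and column names are invented:

    # A small, declarative definition of a star schema; in a template-driven
    # approach this definition is the design artefact and the DDL is generated.
    STAR_SCHEMA = {
        "dim_customer": [
            "customer_key INTEGER PRIMARY KEY",
            "customer_name VARCHAR(100)",
            "region VARCHAR(50)",
        ],
        "dim_date": [
            "date_key INTEGER PRIMARY KEY",
            "calendar_date DATE",
            "fiscal_quarter VARCHAR(10)",
        ],
        "fact_sales": [
            "customer_key INTEGER REFERENCES dim_customer(customer_key)",
            "date_key INTEGER REFERENCES dim_date(date_key)",
            "sales_amount DECIMAL(18, 2)",
        ],
    }

    def render_ddl(schema: dict) -> str:
        """Render CREATE TABLE statements from the declarative schema definition."""
        statements = []
        for table, columns in schema.items():
            cols = ",\n    ".join(columns)
            statements.append(f"CREATE TABLE {table} (\n    {cols}\n);")
        return "\n\n".join(statements)

    if __name__ == "__main__":
        print(render_ddl(STAR_SCHEMA))

The same declarative definition could be re-rendered for a different target database, or restructured for a different modelling style, which is the time-saving point of working from templates rather than building each table by hand.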

Building a Data Warehouse

This blog explains how every design is only as good as the reality of its source systems, with their missing data and poorly defined data structures. The finished design is always a balance between the vision for the model and the constraints of the sources; the full article covers how to strike that balance.

Read the full blog here.
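
Before that balancing act can start, the reality of each source has to be measured. The following is a small Python sketch, not taken from the article, of a basic source profile that counts missing values per column in a flat-file extract; the file name and columns are hypothetical:

    import csv
    from collections import Counter

    def profile_source(path: str) -> dict:
        """Count total and empty/missing values per column in a CSV extract."""
        stats = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for column, value in row.items():
                    counter = stats.setdefault(column, Counter())
                    counter["rows"] += 1
                    if value is None or value.strip() == "":
                        counter["missing"] += 1
        return stats

    if __name__ == "__main__":
        # "customer_extract.csv" is a hypothetical source file used for illustration.
        for column, counter in profile_source("customer_extract.csv").items():
            pct = 100.0 * counter["missing"] / max(counter["rows"], 1)
            print(f"{column}: {counter['missing']} of {counter['rows']} values missing ({pct:.1f}%)")

A profile like this makes the gap between the model's vision and the source's reality visible before the design is committed.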

Operating a Data Warehouse

This blog explains how to deliver your data warehouse successfully to the business and run it smoothly on a daily basis. We must avoid the problems of past ad hoc data warehouse development approaches that combined manual and semi-automated methods, and instead adopt advanced data management and automation practices. Find out how to:

  • Treat deployment as a long-term, monogamous relationship
  • Address issues such as packaging and installation of the code
  • Bundle sets of objects and transport them from development to QA and through to production (see the sketch below)
  • Handle interdependencies between the data warehouse, data marts and the data lake
  • Automate the historical information that tracks performance over time

Read the full blog here.
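
To illustrate the bundling idea mentioned above, here is a minimal Python sketch of a versioned deployment bundle that can be promoted unchanged from development to QA and on to production. It is an assumption-laden illustration, not WhereScape's actual deployment mechanism, and the object names are invented:

    import hashlib
    import json
    from dataclasses import dataclass, field

    @dataclass
    class Bundle:
        """A named, versioned set of warehouse objects to deploy together."""
        name: str
        version: str
        objects: list = field(default_factory=list)  # e.g. table and procedure scripts

        def checksum(self) -> str:
            """Hash the bundle contents so QA and production can verify they
            received exactly what was built in development."""
            payload = json.dumps(
                {"name": self.name, "version": self.version, "objects": sorted(self.objects)},
                sort_keys=True,
            )
            return hashlib.sha256(payload.encode()).hexdigest()

    def promote(bundle: Bundle, source_env: str, target_env: str) -> None:
        """Record the promotion of a bundle between environments (illustrative only)."""
        print(f"Promoting {bundle.name} {bundle.version} "
              f"({bundle.checksum()[:12]}) from {source_env} to {target_env}")

    if __name__ == "__main__":
        release = Bundle("sales_mart", "2020.04.1",
                         ["dim_customer.sql", "fact_sales.sql", "load_sales_proc.sql"])
        promote(release, "dev", "qa")
        promote(release, "qa", "prod")

Because the same checksummed bundle moves through every environment, the long-term relationship between development and production stays auditable rather than ad hoc.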

Maintaining a Data Warehouse

In some development projects, once a piece of software is up and running it needs only minor bug-fixing, but maintaining a data warehouse demands more attention than that. The nature of creative decision support is that users are continuously discovering new business requirements, changing their minds about what data they need, and thus demanding new data elements and structures on a weekly or monthly basis. Indeed, in some cases, the demands may arrive daily! Read this blog to find out:

  • What a data lake should and shouldn’t be used for
  • Why and how a Data Vault gives more agility in the maintenance phase
  • The role of metadata in data warehouse maintenance
  • How to predict the downstream impact of changes from automated documentation (see the sketch below)

Read the full blog here.
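
To give a flavour of that last point, here is a minimal Python sketch, assuming a tiny, invented set of dependency metadata, of how recorded "which object feeds which" relationships can be walked to predict the downstream impact of a change before it is made. It illustrates the general idea, not WhereScape's actual metadata model:

    from collections import defaultdict, deque

    # Invented dependency metadata: each object maps to the objects that consume it.
    DEPENDENCIES = {
        "stage_orders": ["hub_order", "sat_order_detail"],
        "hub_order": ["dim_order"],
        "sat_order_detail": ["dim_order"],
        "dim_order": ["sales_mart_report"],
    }

    def downstream_impact(changed_object: str) -> list:
        """Breadth-first walk of the dependency metadata, returning everything
        that could be affected by a change to the given object."""
        graph = defaultdict(list, DEPENDENCIES)
        seen = set()
        queue = deque(graph[changed_object])
        while queue:
            obj = queue.popleft()
            if obj not in seen:
                seen.add(obj)
                queue.extend(graph[obj])
        return sorted(seen)

    if __name__ == "__main__":
        print("Changing stage_orders affects:", downstream_impact("stage_orders"))

With dependency metadata captured automatically as the warehouse is built, this kind of impact list comes from documentation rather than from someone's memory.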
