
New in 3D 9.0.6.1: The ‘Source Aware’ Release

December 4, 2025

When your sources shift beneath you, the fastest teams adapt at the metadata layer. WhereScape 3D 9.0.6.1 focuses on precisely that: making your modeling, conversion rules and catalog imports more aware of where data comes from and how it should be treated in-flight. Fewer manual edits. Smarter defaults. Cleaner lineage.

We are calling this the ‘Source Aware’ Release – read on for what it unlocks.

Source-Aware Type Mappings

What changed: new %source_table_name and %source_column_name variables are now available inside data type mapping transform codes.

Why it matters: time and date conversions, locale quirks, or source-specific exceptions often force manual edits. Injecting the source table and column into your transformation logic lets you drive consistent behavior by rule rather than by hand.

Practical impact:

  • Apply time-zone or locale rules by source without branching templates.
  • Standardize tricky data types when upstream teams rename columns.
  • Reduce one-off fixes when promoting across environments.
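As an illustration of the idea (the `%source_table_name` and `%source_column_name` variable names come from this release; the rule template and the `resolve_transform` helper below are hypothetical, not WhereScape APIs), a source-aware transform might be resolved like this:

```python
# Hypothetical sketch: substituting the new source-aware variables into a
# data type mapping transform template. The template syntax and helper are
# illustrative only.

def resolve_transform(template: str, source_table: str, source_column: str) -> str:
    """Substitute the source-aware variables into a transform code template."""
    return (template
            .replace("%source_table_name", source_table)
            .replace("%source_column_name", source_column))

# One rule now covers every source, because the source context is injected
# at mapping time instead of being hard-coded per column.
template = "CONVERT_TIMEZONE('UTC', tz_for('%source_table_name'), %source_column_name)"

sql = resolve_transform(template, "orders_eu", "created_at")
print(sql)
# CONVERT_TIMEZONE('UTC', tz_for('orders_eu'), created_at)
```

The point is the shape of the rule: because the source table and column arrive as variables, a single mapping can branch on source (here, a hypothetical `tz_for` lookup) without maintaining per-source template copies.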

Purview Imports That Speak Column Names

What changed: CSV and TXT imports from Microsoft Purview now bring in column names rather than position numbers.

Why it matters: catalogs are for humans and machines. Naming-level fidelity improves lineage, matching, and model generation.

Practical impact:

  • Stronger alignment between governance and design.
  • Fewer mapping edits before conversion rules can run.
  • Smoother handoffs to automation and downstream documentation.
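A minimal sketch of why name-based imports matter (this is generic CSV handling in Python, not Purview or 3D code): when columns are keyed by name, a reordered upstream extract still resolves to the same fields, where positional mapping would silently swap them.

```python
import csv
import io

# Illustrative only: the same logical extract, with columns reordered upstream.
extract_v1 = "customer_id,email\n42,a@example.com\n"
extract_v2 = "email,customer_id\na@example.com,42\n"

def read_by_name(text: str) -> list[str]:
    """Name-based read: robust to column reordering."""
    return [row["customer_id"] for row in csv.DictReader(io.StringIO(text))]

# Both extracts resolve identically when keyed by column name; a
# position-based reader would return the email for extract_v2.
print(read_by_name(extract_v1))  # ['42']
print(read_by_name(extract_v2))  # ['42']
```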

Conversion Sets: Seven Upgrades For Everyday Velocity

We updated the default conversion sets shipped with 3D to reduce manual work and sharpen Data Vault ergonomics. 

Highlights include:

  • Consistent Link hash ordering: updated Hash Key Generation ensures predictable link key construction.
  • PIT from Views – target-aware: PIT generation now includes target-platform-specific transformations.
  • Multi-active satellite IDs: automatic sequence generation when no multi-active natural key exists.
  • Create Loads – smarter reuse: supports one source column feeding multiple DV targets cleanly.
  • Business Vault generation: improved naming fidelity during BV creation.
  • Artificial keys cleanup: removes superseded business key relationships when surrogate keys replace them.
  • RED export preparation: broader invalid character removal for cleaner round-trips.
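To illustrate the first item above (this sketch is not the shipped conversion-set logic; the column names and MD5 choice are assumptions), consistent Link hash ordering means the participating business keys are put into a canonical order before hashing, so the link key does not depend on the order in which hubs were declared:

```python
import hashlib

# Illustrative sketch of deterministic link hash key construction.
# Sorting the business keys by name before concatenation makes the
# hash independent of declaration order.

def link_hash_key(business_keys: dict[str, str]) -> str:
    """Build a deterministic link hash from hub business key values."""
    canonical = "||".join(value for _, value in sorted(business_keys.items()))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest().upper()

# Same keys declared in different orders produce the same link hash.
a = link_hash_key({"customer_id": "42", "order_id": "1001"})
b = link_hash_key({"order_id": "1001", "customer_id": "42"})
assert a == b
print(a)
```

Without a canonical ordering, two models of the same link could emit different keys for identical data, which is exactly the class of surprise predictable link key construction removes.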

Small Fixes That Remove Big Friction

Quality improvements that smooth daily work:

  • View and copy WhereScape-provided workflow sets in Workflow Manager.
  • PostgreSQL discovery correctly identifies a table’s database.
  • Word documentation diagrams generate reliably.
  • XML import handles encoding reliably.
  • Case sensitivity filter behaves as expected.
  • Data type mapping names now have stronger validation to prevent broken configs.
  • Entity and Attribute Ratings are now copied to new categories during the advanced copy operation, which is especially useful for sensitive data.

Platform-Aware Mappings Out of the Box

We added missing transformations across common routes so you do not have to:

  • SQL Server to Snowflake
  • SQL Server to Teradata
  • SQL Server to Generic

A Day-Zero Scenario: From Catalog To Conversion

  1. Import sources from Purview – 3D now reads column names directly.
  2. Profile and discover – PostgreSQL and other sources resolve accurately.
  3. Apply your standards – conversion sets handle hashing, PIT, multi-active satellites, and BV generation with fewer edits.
  4. Export to automation – RED export preparation cleans naming and characters for deployment.

Result: a shorter path from governed metadata to executable design.

Why It Matters

  • Less manual intervention: source-aware mappings cut repetitive transforms.
  • Governed by default: Purview column-name imports tighten catalog alignment.
  • Faster Data Vault delivery: upgraded conversion sets accelerate raw vault, business vault, and PIT patterns.
  • Fewer surprises in CI: validation and naming fixes prevent run-time breakage.

Recap: Release Highlights

  • New %source_table_name and %source_column_name variables in data type mappings.
  • Purview CSV and TXT imports now use column names.
  • Seven conversion set upgrades for hashing, PIT, multi-active satellites, and BV generation.
  • Workflow set visibility and copy in Workflow Manager.
  • Discovery, documentation, encoding, and validation fixes.
  • Added transformations for SQL Server to Snowflake, Teradata, and Generic.

Ready to try 3D 9.0.6.1 ‘Source Aware’ features in your environment? Reach out for a demo, or contact us for a walkthrough of the new patterns in action.
