Our speakers for today are Matt Dyer, Account Executive at WhereScape; Andrew Flower, Managing Director of Right Triangle Consulting; and Brian Thomas, Senior Solutions Architect at WhereScape. As a reminder, this webinar is being recorded and you'll receive an emailed copy. Thank you again for joining us today. I am now delighted to introduce you to Matt Dyer.
Hello everybody, and thank you so much for joining us today. Thank you to Right Triangle for inviting WhereScape here to present the WhereScape data warehouse automation tools. Before we get started with the presentation, I just want to spend a couple of minutes giving you a high-level overview of WhereScape as an organization and introducing the tools to you. My name is Matt Dyer. I'm an Enterprise Account Manager here at WhereScape, and I would certainly love to continue the conversation with you after today's presentation.
If you'd like to reach out to me directly, you can email me at matt [email protected], and we can certainly set you up with a more formal, personalized demonstration of the product after today if you'd like to learn more about the tools.
For those of you who aren't familiar with WhereScape, we offer tools to automate the entire data warehouse lifecycle from end to end, crucially shortening data warehouse projects' time to completion, and our customers tell us that we increase their developer output by 10x. We do this by automating the routine steps that typically slow developers down. Our code is all standardized and template-driven, and you'll notice during the presentation and the demonstration that we do a lot of dragging and dropping. Again, that code is standardized, and it eliminates the typical coding errors that can slow projects down. WhereScape has well over 1,200 customers across every vertical; we offer tools to Fortune 100 companies all the way down to smaller organizations. WhereScape has now been around 20 years. It was founded by a group of consultants out of Auckland, New Zealand, who were looking for automation tools, couldn't find them in the marketplace, and decided to develop their own.
Ultimately that caught on very quickly; a lot of the organizations they were consulting for wanted to purchase the software, and thus WhereScape was born. We are now headquartered in Austin, Texas, with offices in Europe and Asia as well as Auckland, New Zealand. This is just a brief snapshot of some of the global organizations that trust WhereScape to automate the development of their data warehouses. We have some amazing case studies out there; if that's the type of thing you're into, reach out to me directly and I can provide those for you. As you can see here, regardless of the vertical, we have some great case studies. Or you can go to WhereScape.com and use the search bar to find them; a lot of these are easy to access and provide some great information.
WhereScape comprises a couple of different tools: we offer WhereScape 3D, which is our design automation tool, as well as WhereScape Red, which is our development automation tool.
For those of you who are interested in Dan Linstedt's Data Vault 2.0, WhereScape partnered very closely with Dan to create Data Vault Express. Data Vault Express is really a combination of both of those tools. The key benefit here is that we've built all of Dan's methodologies and standards right into the templates. Whether you're building out your hubs, links, or satellites, WhereScape knows that, and we automate that process for you. Most of you are very familiar with the traditional ETL tools out there, and they're very good at what they do: moving the data and transforming the data. But that's essentially where they stop. WhereScape gives you that same functionality but takes it a step further: we automate the entire process.
Whether you're creating dimensions, fact tables, models, or views, we automate that process for you. Finally, we document everything at the touch of a button. That documentation is both technical and user documentation, all of it automated through the WhereScape tools. Many of you may be familiar with this diagram here; in fact, this may look exactly like your environment. The first thing that jumps out at us is the sheer number of different tools being utilized. Obviously, having so many tools makes for slow delivery and messy change management. In fact, one thing we don't talk about often is the cost to support and maintain all of these different tools. WhereScape eliminates the need for these separate solutions and gives you that functionality back with just two tools. Here's the same environment utilizing WhereScape.
As you can see, we've eliminated all the different tools here, just using WhereScape Red to automate the entire data warehouse lifecycle end to end, increasing developer output by 10x; many of our customers report back that it's as much as 20x. It gives you iterative, agile change management. If for any reason you're not delivering the data the business is looking for, you can sit down with those folks in a conference room, and in minutes you can make those changes and deliver what they're looking for. It really takes that process from months down to minutes. Automation is the key to all of that, and that's what WhereScape provides. Most of the folks on the call today know who this gentleman is.
This is Dan Linstedt, creator of Data Vault 2.0. If you are utilizing Data Vault today or considering it, I'm sure you've met Dan, talked with Dan, or perhaps seen him speak at one of the many events he holds.
WhereScape has a very close partnership with Dan Linstedt. We make sure that all of his methodologies and standards are built right into our templates. One thing Dan says about WhereScape is that Data Vault Express reduces the complexity and costs associated with building and updating data vaults, dramatically shortening the learning curve for teams new to Data Vault 2.0. How we do that, again, is by including all of his methodology and standards in our templates and doing a lot of dragging and dropping. Whether you're building out your hubs, links, and satellites, WhereScape knows all that, and we automate that entire process for you. You don't necessarily have to be Data Vault 2.0 certified or a practitioner to get the most out of Data Vault Express; eliminating all the coding and automating all of it makes you very productive right out of the gate.
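To make the hub, link, and satellite vocabulary concrete, here is a minimal hand-written sketch of what Data Vault 2.0 structures typically look like. The table and column names are illustrative assumptions, not WhereScape's generated output:

```sql
-- Illustrative Data Vault 2.0 structures (hand-written sketch, not
-- WhereScape-generated code; all names are hypothetical).
CREATE TABLE hub_customer (
    hub_customer_key CHAR(32)    NOT NULL,  -- MD5 hash of the business key
    customer_id      VARCHAR(20) NOT NULL,  -- the business key itself
    load_dts         TIMESTAMP   NOT NULL,
    record_source    VARCHAR(50) NOT NULL
);

CREATE TABLE link_order_customer (
    link_order_customer_key CHAR(32)    NOT NULL,  -- hash of the combined keys
    hub_order_key           CHAR(32)    NOT NULL,
    hub_customer_key        CHAR(32)    NOT NULL,
    load_dts                TIMESTAMP   NOT NULL,
    record_source           VARCHAR(50) NOT NULL
);

CREATE TABLE sat_customer_details (
    hub_customer_key CHAR(32)     NOT NULL,
    load_dts         TIMESTAMP    NOT NULL,  -- part of the key: insert-only history
    hash_diff        CHAR(32)     NOT NULL,  -- change hash over the attributes
    customer_name    VARCHAR(100),
    city             VARCHAR(50),
    record_source    VARCHAR(50)  NOT NULL
);
```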
For those of you who are utilizing Snowflake as a target platform or considering it, you probably know Kent Graziano as well. He's Snowflake's Chief Technical Evangelist and a big fan of WhereScape, and he really understands the benefits we bring to folks utilizing Snowflake as a target environment. What Kent says is that WhereScape automation for Snowflake makes it possible for organizations to start using Snowflake faster and more easily by simplifying and automating the design, build, and population of data warehouse tables. If you ever have a chance to talk with Kent and you're interested in WhereScape, I'm sure he would be very happy to talk with you about the advantages of using WhereScape along with Snowflake. Here's a brief overview slide of WhereScape Red. The takeaway is that WhereScape automates the entire process, everything from what you see on the left to what you see on the right.
We extract from your source into your target platform, whether that's Snowflake, Synapse, or what have you. We prepare that data using stage tables and then move it into your foundation layer; most of you would probably be looking at building out a data vault in this case. From there, we allow you to make your business transformations and build out your star schemas or your business vault, which is what you see here on the right. If you need to build out any views, you can do that as well. Down below we have a couple of different layers. Everything that we do here is captured in our metadata, which allows us to provide you with unbelievable documentation. I know we're going to touch on that in the presentation, so I won't steal Brian Thomas's thunder, but essentially everything that's captured in the metadata is used to automatically generate the documentation after every deployment cycle, whether it's business or technical.
You'll have all of that documented. That can take weeks, if not months, to do after a project is completed; with us, it's automatically done at the touch of a button, and you're provided with all of that documentation. With that, I'll conclude. I appreciate your time today, and if you'd like to reach out to me directly, you can certainly do that at matt [email protected] or through the Right Triangle folks. We really value their partnership, appreciate your time today, and look forward to continuing the conversation with you after today's presentation. Thank you so much.
Great. Now we'll hand it over to Andrew Flower.
Thank you, Matt. I also have Travis Irvin on the call here. He was our lead architect on the projects we've been executing, specifically the one we're talking about today, so he's available to answer questions. The challenge we were presented with was to find a way to replace an on-prem data warehouse, a very traditional one. You can see in the picture here the kind of thing we've all been building for 20 years: extract, transform, integrate into an EDW, trying to manage everyone's requirements on one platform. Over 20 years, this organization had grown their data warehouse environment to 9,000 tables, over 900 ETL jobs, and over $3 million of annual support labor. They asked us to help them reduce that cost, provide greater flexibility and agility, and help them get to the cloud. We at Right Triangle are big fans of both Snowflake and WhereScape. We're partners with them, and we immediately knew which direction we needed to go in: get them into a Data Vault architecture on Snowflake, leveraging WhereScape. Next slide, please.
Looks like we need just one moment. While we're getting that back up: the client we were working with decided that the first thing they wanted to take advantage of was the computing capacity they had in Snowflake. Thank you. Instead of investing the effort into building out the data vault right away, even though we encouraged them to go that route, they wanted to get data into Snowflake as fast as possible. The short-term plan was to do a lift and shift, and we leveraged WhereScape for this as well. We were already working with WhereScape to begin using 3D and Red to build the data vault. We said, well, we need a replication solution, and WhereScape assisted us in getting the right licensing so that we could get started right away. The concept was: let's not replace all of the existing ETL immediately, but let's replicate the data from the on-prem database into Snowflake, including the data marts, so we could begin moving the applications that use those data marts to the Snowflake platform.
Out of all those 9,000 tables, we replicated 400 core EDW tables to Snowflake, as well as the data marts themselves. That allowed us to begin learning the Snowflake platform and taking advantage of that elastic compute. As we did, we started to recognize that we could replace the physical data marts with virtual data marts using the elastic compute capacity of Snowflake. We were already beginning to transform the environment so that we could reduce, and ultimately eliminate, the amount of data we were replicating. Next slide, please. Once that was established and people were happy with being able to leverage the Snowflake platform, we set out to begin the Data Vault development. We wanted to replace the ODS and the EDW from that original picture with the raw vault and the business vault. We leveraged 3D to design that data vault using the 2.0 templates Matt was talking about to accelerate the development.
We leveraged the 3D model automation, which helped us generate all of that Red ELT and all of the documentation that was shown earlier. We did this in an agile method.
One of the things we really embrace about the Data Vault 2.0 method is that it lends itself to being agile. Traditional data warehouses require everything to be loaded in dependency order because of the hard referential integrity. With Data Vault, once you begin to understand it, you can build things incrementally and add additional data much more easily than you can in a traditional third-normal-form or even a Kimball-style star schema data warehouse. We were able to break the project down into sprints and begin delivering incremental value with Red, ultimately realizing the full replacement of the data warehouse
in Snowflake using 3D and Red from WhereScape. Next slide, please. Finally, what were some of the benefits of going in this direction? I mentioned that, just in maintenance, there was over $3 million spent in labor to support the existing data warehouse. Through Right Triangle's own estimates and computation, we were able to reduce that, covering both the build and the ongoing maintenance, to just over a million dollars, and that'll decrease over time. Leveraging Data Vault 2.0, specifically with WhereScape, maintenance cost dropped dramatically. The reason for that is that we created agility and elasticity in this architecture, which allows us to add, change, and replace data sets much more easily than you can in a traditional EDW, where you have to tear down both code and data structures and rebuild them.
This architecture is additive: new data is added without destroying what's already there, and new data sets create new code that is added rather than replacing the code that's already there. So that agility is terrific. We eliminated a lot of the ETL that went from the data warehouse to the data marts, and we leveraged the virtual capabilities and the elasticity of Snowflake to deliver those solutions to the analytic applications. We do that by separating the application requirements from the data warehouse. The Data Vault allows us to focus on the integration of the data rather than the specific needs of each application; we're able to reproject specific structures out of the data vault for specific use cases. This also allows us to easily manage schema drift: as new data comes in and source data changes, the Data Vault 2.0 automation templates within WhereScape and the ELT code generation allow us to manage schema drift without having to tear down, rebuild, and reload data.
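As a rough illustration of what "additive" means here, assuming a hub_customer table already exists as in the earlier sketch (names are hypothetical, not the project's actual schema): a newly arrived source attribute simply becomes a new satellite hanging off the same hub, and nothing existing is dropped or reloaded.

```sql
-- Hypothetical sketch: absorbing a new source attribute additively.
-- The existing hub and satellites are untouched; the new column
-- becomes a new satellite on the same hub.
CREATE TABLE sat_customer_loyalty (
    hub_customer_key CHAR(32)    NOT NULL,
    load_dts         TIMESTAMP   NOT NULL,
    hash_diff        CHAR(32)    NOT NULL,
    loyalty_tier     VARCHAR(20),          -- the newly arrived attribute
    record_source    VARCHAR(50) NOT NULL
);
-- No existing table is dropped or reloaded; existing load code keeps running.
```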
Now we will go into our demo and we'll have Brian take over.
Well, thank you, I appreciate it. Thanks, Andrew, for the good information. Let me share my screen. I'm not going to go into too much detail because Matt did a good job going over this diagram; I just wanted to point out a couple of things on the data sources. We can extract from pretty much any kind of data source over here on the left-hand side: a traditional DBMS platform, XML, JSON, even web services that we can grab the data from.
We help you with load and stage tables and with building your raw data vault in your data foundation layer. We also help you build out a business vault and a star schema from that raw data vault, and that's where we stop: your dashboard users would connect to the data warehouse and pull the data from there. Of course, while we're doing all of that, we capture everything in metadata, and we use that metadata to create wonderful documentation.
We also have a scheduler that can automate everything. That's our WhereScape environment and how your data moves through it. Now, since we don't have a whole lot of time, I'm going to go to our 3D tool. As Matt said, with Data Vault Express you end up with a package of WhereScape Red, which is the development tool, and WhereScape 3D, which is the design tool, with the Data Vault templates and hooks and everything like that in it. I'm going to go over to WhereScape 3D. In fact, I think Andrew mentioned this as well: from the design standpoint, I really think WhereScape 3D is the secret to implementing a Data Vault design; it really helps you do that. I'm going to walk through this real quick. I don't have a lot of time, but we're going to start off with just a tour.
On the left-hand side, this is our repository, and these are the steps. The first thing you do is create a connection, and then a source. I've already reverse engineered a source, a very simple little orders database. I'm going to take this and create a data vault design from it, which is a conceptual design for a data vault. I will turn that into a data vault, create the load and stage tables to support it, and then export it to Red. If I have time, I'll come back and build out the business vault, which is the second part. The first real value in this is doing that data vault design in an iterative, agile manner. The second is not having to worry about those supporting tables and how you get the data there; all of that's done from the 3D tool.
What I'm going to do is display this source that I reverse engineered. This is a very simple database with 14 tables, and I'm lucky: I have relationships. We have the ability to reverse engineer the relationships, and if they don't exist, we have a nice little GUI you can go through to derive those foreign keys as well. So luckily, I have all of this; I have this nice system in place. Down here I have a bunch of what I call easy buttons. I can create that conceptual design of your data vault. We're going to call this Sales, and I'm going to call it 1.0 from reverse engineering, just to keep it simple so you can keep track of it. Now, the first thing that pops up is attributes. Data Vault 2.0 has a method of identifying the attributes: whether you want them in a low-volatility satellite, what the business key is, or whether you need a multi-active satellite.
You can even create your own attributes; if you want to flag PII or HIPAA data, something like that, you can easily do that. The next step is, if you are a data vault expert and you understand what your design is going to be, you can go ahead and put your hubs in. I'm going to skip this and really show you where the power comes in. I'm just going to take a few of these items, customers and orders, and drag them into the middle, along with order details, products, and employees, so that we have five tables here. It brought across the relationships. Now there are two things we need to do. We need to turn this into a data vault, and it does that based on those attributes: the satellites, the business keys, and all of that.
Now we can come in to each entity and I can say, oh, Customer ID, that's a business key, and the rest of these are medium-volatility satellites. I can do that, and it assigns colors. Again, I have an easy button for that: if you have hundreds and hundreds of tables, you don't want to go into each entity and each attribute and select it, so we have automation built in for that. I just did customers, so I can leave it out, but I'm going to do the other four tables. There are flags and switches to make it match your system exactly, but for the most part, out of the box, it gives you a first shot at that data vault design. I'm just going to click Next, and it goes through and runs some routines to set those. Then you can go and look, and we can say, okay, so orders:
It says Customer ID is a linked business key, Employee ID is a linked business key, and Order ID is a linked business key and a business key. The rest are set to medium volatility. Then maybe I look at something like products and I think, my unit price, because of the changes that have happened recently, that's high volatility; we have price changes all the time because of pandemic shortages and things like that. Units in stock, that's also high volatility, and so is units on order. I'm going to pull those out and put them into a high-volatility satellite; we've had upheaval in everything going on in the last year. Now we look at this and you can see there are lots of different colors signifying different things.
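For readers following along, here is a hand-written sketch of the satellite split being described, using hypothetical Northwind-style names rather than 3D's actual generated DDL:

```sql
-- Illustration of splitting satellites by rate of change
-- (hypothetical names, not 3D's generated output).
CREATE TABLE sat_product_details (        -- medium rate of change
    hub_product_key   CHAR(32)     NOT NULL,
    load_dts          TIMESTAMP    NOT NULL,
    hash_diff         CHAR(32)     NOT NULL,
    product_name      VARCHAR(100),
    quantity_per_unit VARCHAR(50),
    record_source     VARCHAR(50)  NOT NULL
);

CREATE TABLE sat_product_pricing (        -- high rate of change
    hub_product_key  CHAR(32)      NOT NULL,
    load_dts         TIMESTAMP     NOT NULL,
    hash_diff        CHAR(32)      NOT NULL,
    unit_price       DECIMAL(10,2),
    units_in_stock   INTEGER,
    units_on_order   INTEGER,
    record_source    VARCHAR(50)   NOT NULL
);
```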
You can always go in and make modifications. Now I'm going to come over to my easy buttons, where I can do things like add source mappings: if I have more than one table that feeds these products, I can add a source mapping and merge those together. I can go ahead and generate my data vault now, though; that's what I'm going to do. I'm going to call this sales from MCR, and we're going to do that.
Now we have some naming standards and things like that, but this page is the most important. The secret behind 3D and WhereScape is that everything is automated based on model conversions in 3D. Model conversion routines are small routines that do the things that need to be done to take that logical, conceptual model of a data vault and turn it into a real data vault: creating the satellites, handling hash key generation, and so on.
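Conceptually, the hash generation those routines automate looks something like the following Snowflake-style expressions. This is an illustrative sketch with assumed names, not WhereScape's exact template output:

```sql
-- Conceptual sketch of the hash-generation pattern the templates
-- automate (Snowflake-style SQL; expressions and names are illustrative).
SELECT
    -- hub hash key: hash of the normalized business key
    UPPER(MD5(UPPER(TRIM(COALESCE(product_id::VARCHAR, ''))))) AS hub_product_key,
    -- change hash (hash diff): hash over the satellite's attributes
    UPPER(MD5(
        UPPER(TRIM(COALESCE(unit_price::VARCHAR,     ''))) || '||' ||
        UPPER(TRIM(COALESCE(units_in_stock::VARCHAR, ''))) || '||' ||
        UPPER(TRIM(COALESCE(units_on_order::VARCHAR, '')))
    ))                                                          AS sat_pricing_hash_diff,
    s.*
FROM stage_products s;
```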
These model conversion routines run based on templates, and you can modify both the model conversion routine and the template underneath. That's what you get with WhereScape Data Vault Express: the Data Vault templates and the model conversion routines to run them. As we go through this, you can see it's running a bunch of small steps, doing things like copying the hub hash keys to the links and the satellites and splitting satellite transaction attributes out. It's doing a lot of things that you would normally have to do if you were coding this yourself. We end up with a Data Vault data model: the blue is hubs, the red is links, and the yellow is satellites. If we come down here and look at the products, we can see the high-rate-of-change satellite has our unit price, units in stock, and units on order pulled out.
The rest of the product information is in the medium-rate-of-change satellite. It's very easy to go back and look at this. The next thing you can do is say, well, that's not quite right, let me come in here, and I can do a quick copy and show you some of the agile properties. I make a quick copy of it and display this new model. I'm looking at it and I realize I forgot to add the categories to the products. So I'm going to go up and drag the category name in; I don't need the category ID because I just wanted the category name. Now, it does not have that attribute set yet.
I need to go in, find the category name, and set the attribute to the medium-volatility satellite. I can then generate another data vault from this, which I'm going to call 1.1, and we go through and run those same model conversion routines. Just like that, I have a new data model. At this point it doesn't look that different, but it has some differences. At the next step, we have the employees and the products, and we can see under the medium rate of change that we now have that category name added, and it actually comes from a different table; the source columns are listed here. That's the beauty of this. Now, if we remember from over here, the next step is to create those load and stage tables. With WhereScape Red, if I bring back up that model, what you do is create the load tables, then the stage tables, and then from the stage tables you create your raw data vault.
So Red works bottom-up: load, then stage, then your raw data vault. With WhereScape 3D, we're doing a top-down approach: we create the data vault first, and then I push a button and it automatically creates the load and stage tables to support my data model. I'm going to do that and call this Sales 1.1 Load and Stage. You can call it anything; it's not a version, it's just for you to separate things out. Maybe you call it sprint one, sprint two, sprint three, that sort of thing. Now we have another group of model conversion routines that understand things like hub hashes and change hashes, and it's going to create all of that for us. This is going to take a few more steps. Remember, originally we had five source tables, and we turned that into 13 or 14 Data Vault tables.
We added an additional source table, and now we have 26 tables. This is really where the power of WhereScape comes in. You really need WhereScape, and Right Triangle, to help you tame the beast of Data Vault: it's a wonderful methodology, but it can get large and hard to manage without a tool and without a consulting company to show you how to do it. Now you can see we have our load tables. Here's that categories table that we pulled in separately, suffixed with the source system; that's just a default, and you can set it up however you want. We also have the stage tables to build out our Data Vault. Now, the next step: if you'll notice, this says TGT. This is a target-system-agnostic model, what I used to call, in my data modeling days, a logical model.
We need to make this a physical model, tied to a particular data warehouse, and we can do that with another easy button. (If you just need to generate the DDL, there's a button for that as well.) So the first thing we do is make this a physical model. I'm going to call it Sales 1.1 Red, and I can set the target to SQL Server because that's what I'm going to use in my metadata.
You could just as easily create Snowflake DDL, or Redshift, Postgres, Google Cloud, Teradata. We're going to run through and create that. The next step is to manage your target locations; this is, again, a place you can automate, but I'm showing it so you can get an idea. We can take our load tables and put those in a staging schema, and do the same thing with the stage tables. The hubs, links, and satellites I'm going to put in my data warehouse schema. The next thing is groups: WhereScape Red has groups and projects. You can create a group to organize tables by business unit, by subject area, or even by sprints. I can create a group called Sprint One and put all of these tables into it so they'll show up together. This is a logical grouping, like a label in Gmail; a table can be in multiple groups at the same time. We're going to do that, and then we run one last template which prepares this for Red. Now we're done. If we go and look at the model, you can see it has SQL Server date and time types as opposed to the generic ones. The last step is to actually export this to Red, which is done via XML.
We create XML that gets loaded into our metadata for Red. At this point the design is done; now I need to point it at the metadata. You can have separate metadata for dev, test, and prod, whichever environments you have; I'm going to point at the development environment I'm working in. It reads my metadata first and says, oh, there are 25 new objects, do you want to create these? Yes, I want to create these objects, and then we go ahead and load it. The first thing it does is update that metadata with all the new tables, and then it goes off and creates the tables.
Now, you could have instructed it to just create the DDL and check it into your change management system if you need to, but since this is a development environment, I'm just going to push it directly in.
Now we've updated the metadata, created the tables, and we're generating the code. Like Matt and Andrew said, it really is generating lines and lines of code, weeks' worth of code that you would otherwise be writing; it's creating it for you.
Because the target is SQL Server, it's creating the constraints and unique indexes and so on, and then finally it compiles everything for you. If I go back into Red, I can refresh and we can see that Sprint One has shown up; all of our tables are pushed over there. They're not loaded yet, so we would need to load them, and I can easily do that on a sprint-by-sprint basis: I can build a job that loads this. While that's running and loading, I can go back to my data vault design, where there's another easy button down here that says generate the business vault.
One of the key aspects of Data Vault is that it's an insert-only methodology, so you end up with outdated rows that are not current. We have the ability, with a click of a button, to create those current views. We're going to call this 2.0 Business Vault. It creates the current views, the point-in-time (PIT) tables, and, if you marked any bridge tables, those as well. It puts in the ghost records for you and runs through a group of model conversion routines; again, that's our secret. It creates the PITs, bridges, current views, everything that you need. In just a few moments we have done it. Here's our current view of the employees table. Again, hundreds of lines of code were written in just a few minutes.
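As a minimal sketch of what a "current view" over an insert-only satellite can look like in Snowflake (assumed names; the generated code also handles ghost records, PIT tables, and the rest):

```sql
-- Minimal current view over an insert-only satellite (hand-written
-- sketch with hypothetical names, not the generated code).
CREATE VIEW cur_sat_employee_details AS
SELECT *
FROM sat_employee_details s
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY s.hub_employee_key   -- one row per hub key
    ORDER BY s.load_dts DESC          -- keep only the latest version
) = 1;
```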
Now we have the ability to push that back to Red, but at this point I really would like to show you the documentation; we just have a few minutes left, but I wanted to show you how easy it is to create your data vault and your business vault. So, the documentation: as Matt said, it's a click of a button. I'm going to create it for all objects. It creates HTML, so you can put it on your intranet site. One thing I always like to point out on this page is this line right here: do you want to include shadows on your diagram boxes? Yes, it was that important, they put it in the GUI. Now we create the documentation, and then I can either display it in a browser or directly in the application.
Let me just display that. We create two kinds of documentation. There's the user documentation: what's in the data warehouse, what are your hubs, your satellites, your links. We also include a glossary that has every single column with comments, which you can load yourself via Excel spreadsheets. Then there's the technical documentation. If we look at that same sprint, it includes the stage and load tables. If I had gotten to the point where we created a fact table, you could see where you can put some of that business information in, like the grain and the purpose of your fact table, and you can even see the procedures: this is the code that was created to update this fact table. It's very interesting how quickly we can build this documentation. I know when I used to do data warehouses, we ran around with our hair on fire, and documentation was an afterthought for most.
We would spend a week or two after the project was implemented creating the documentation; now it takes five minutes and a couple of clicks. Hopefully that gives you a good idea of what WhereScape can do for you as far as Data Vault goes: creating your raw vault, your business vault, and the star schema that sits on top of it. I'll turn it back over to Lauren if we have any questions.
Great. Thank you so much, Brian. Yes. Please start entering your questions into the chat window and we'll address as many as possible. So go ahead and submit those there. Let's see. Andrew and Travis, I think we have a few that came through for you too. Why did you start with Lift and Shift and not go straight to building the Data Vault?
I guess I'll address that one. As I mentioned in the presentation, we wanted to start getting value out of the Snowflake investment as quickly as possible. Our desire was to begin the Data Vault right away, but I think what this demonstrated is that both WhereScape and Right Triangle were able to adjust: do the replication, begin showing the value, and that hooked the client. Okay, we have a platform now; they started to understand the elastic capabilities, and that actually made adopting the concepts of the data vault a little easier. It was a big change to go from where they were to where they were going, so this was a nice bridge step. Although we would prefer, I think, to really start moving toward the Data Vault right away; I think that ultimately saves you money and gets you where you need to be a little faster.
I'll add one more thing to that. The lift and shift also gave our client the ability to start moving their BI environment over to Snowflake, and that helped them get a return on their investment right away.
Great, thank you. And one more related to this: what were the challenges you faced in converting from the lift-and-shift data warehouse once you started building the Data Vault?
I'll give my perspective and let Travis jump in. They had begun converting their BI environment over to the virtual reprojections we were building from the existing warehouse. So when we started building the data vault, and as you've seen in Brian's demonstration, you have the business vault, it was really about connecting through the business vault to those target BI environments: making sure the plumbing lined up well and connected, so that we reduced or eliminated as much rework as possible on the BI side.
Right. With the agile methodology, we were able to target the specific areas of the data warehouse that they wanted to replace first, make sure we sourced those and had them rolling through the data vault into the target environment, and let them turn off the maintenance on some of their more costly ETL jobs at the beginning, then work through each area based on the highest-value targets on the data warehouse side.
Looks like we had a question come through for Brian. Brian, I don't know if you want to respond to this one live; it looks like you might be typing it. The question is: what hash function do you typically use in Snowflake for hash keys and change hash keys?
Well, that's a great question. I was just about to send a message because I wasn't sure whether we were out of time. By default, we create the MD5 hash, but we support all of them; that's just the default. You can go in and set it up to use SHA-256 and everything else. I know with MD5, if you have a very large implementation, you might end up with some hash collisions, so we support all of them.
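For reference, both hash functions are native in Snowflake, so swapping the default is essentially a one-expression change; a minimal illustration with a made-up key value:

```sql
-- Both functions exist natively in Snowflake; the key value is made up.
SELECT
    MD5('CUST-1001')       AS md5_key,     -- 128-bit default
    SHA2('CUST-1001', 256) AS sha256_key;  -- lower collision risk on large key sets
```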
We had one more come through; I think we can probably answer these last two. Let's see: how long did the project take, going back to Andrew and Travis, to build the data vault and the business vault to the point that the client could use it for analytics?
I'll hedge a bit. To get to the point where they could use it for analytics, we could do it in three sprints, and our sprints were four weeks long, so about three months. Then, incrementally over the course of about six or seven sprints, we got the bulk of the core data in there, so we were able to cover the analytic applications more robustly. As Travis said, using the agile approach we were able to target specific needs earlier, in the first handful of sprints. Six or seven sprints to get the core data in is light years faster than a traditional data warehouse, which would take 18-plus months to build; the original took 20 years to get where it was.
Great. We'll just answer this last question here, again for Brian: does the generated Snowflake code utilize Snowpipe and streams for loading the various data layers?
I was going to just type that in because I know we're over, but we'll make this our last question. As of right now, out of the box, we only create SnowSQL; we don't support Snowpipe and streams out of the box in an automated way, but support for those is on our roadmap. Of course, because our application is so flexible, you can script anything you need to: you can change those templates by hand right now and have them utilize that functionality. That might be a place where Right Triangle would come in to assist you in making those modifications to the templates to take advantage of it; I'm sure they could really help you with that.
What Brian didn't also say is that you can change the automated templates so they will generate those automatically on a consistent basis. If we figured out a repeatable Snowpipe pattern, or a type of ETL or ELT that we wanted to use to help load the data, and we could make it repeatable, then we could change the templates so they would generate those Snowpipe commands as part of our ELT.
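As a hedged sketch of the Snowpipe-plus-streams pattern being described, assuming an external stage named landing_stage with cloud notifications configured and illustrative table names (this is hand-written, not WhereScape template output):

```sql
-- Snowpipe continuously lands files into the load layer
-- (AUTO_INGEST requires an external stage with event notifications).
CREATE PIPE load_orders_pipe AUTO_INGEST = TRUE AS
    COPY INTO load_orders
    FROM @landing_stage/orders/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

-- A stream tracks the rows Snowpipe lands, so downstream ELT
-- consumes only the changes.
CREATE STREAM load_orders_stream ON TABLE load_orders;

-- Downstream step: move just the new rows into the stage layer;
-- reading the stream in DML advances its offset.
INSERT INTO stage_orders (order_id, customer_id, order_date)
SELECT order_id, customer_id, order_date
FROM load_orders_stream
WHERE METADATA$ACTION = 'INSERT';
```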