Ecosystem

Design and build your own data warehouse ecosystem

With the SEPIDATA data warehouse platform you can manage large parts of your data warehouse ecosystem. The platform focuses on the creation and delivery of the analytical data model used to derive business intelligence. By exposing metadata APIs, the platform can integrate with data management solutions to exchange information about models, lineage, interfaces, source systems and more. Add a data warehouse front end (BI) on top of that and the ecosystem is complete.
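To make the metadata exchange concrete, here is a minimal sketch of what a lineage payload for such an API could look like. The field names and object names are illustrative assumptions, not the actual SEPIDATA API contract.

```python
import json

# Hypothetical lineage record for one target object, as it might be
# exchanged with a data catalog or data management solution.
lineage_payload = {
    "model": "Sales",
    "object": "FactInvoice",
    "sources": [
        {"system": "ERP", "interface": "invoice_export", "object": "INVOICE"},
    ],
    "columns": [
        {"name": "InvoiceAmount", "derived_from": "INVOICE.AMOUNT"},
    ],
}

# Serialize to JSON for exchange over the (assumed) metadata API.
print(json.dumps(lineage_payload, indent=2))
```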

Prepare for your data warehouse

Requirements

Get a clear overview of the information requirements: the target consumer types and their tooling, whether history needs to be captured, which regulations apply and which non-functional requirements apply.

Backlog

Set up the product backlog, the team and the sprint configuration. Start creating epics and (data) stories and assign them to future sprints. Define the initial team velocity and prepare prioritization, estimation and sprint planning.

Architecture & governance

Define your data warehouse solution architecture, data architecture and data governance, and select the DWH solution building blocks that best fit the requirements. Set up data governance within the organization.

Build and configure the environment(s)

Runtime environment

Install database server(s) from development through production (depending on sizing and volumes, a separate application server is recommended). Install the SEPIDATA runtime agent on the application server or on the database server (depending on sizing).

Developer environment

Make sure that developer machines have Visual Studio and SQL Server Developer Edition installed. Download and install the SEPIDATA Visual Studio extension on the developer machines.

Source control

Decide on the source control solution and the branching & merging strategy. Create an empty DWH solution in Visual Studio with our extension and add it to source control. Add the required DWH project types (based on your solution architecture) to the solution.

Incremental information delivery

Data discovery & staging

Discover data sources (databases or flat files) manually or via a discovery wizard. Set up the staging area and configure the history settings. Run data profiles on the raw data. Everything is then ready to ingest the source data into a new or existing staging area.
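A data profile of the kind mentioned above typically captures at least row counts, null counts and distinct counts per column. The sketch below shows one minimal way to compute such a profile; it is an illustration, not the profiling logic the discovery wizard actually uses.

```python
def profile(rows, columns):
    """Compute a minimal data profile: total row count plus null
    count and distinct count per column."""
    stats = {c: {"nulls": 0, "distinct": set()} for c in columns}
    total = 0
    for row in rows:
        total += 1
        for c in columns:
            value = row.get(c)
            if value is None:
                stats[c]["nulls"] += 1
            else:
                stats[c]["distinct"].add(value)
    return {
        "rows": total,
        "columns": {
            c: {"nulls": s["nulls"], "distinct": len(s["distinct"])}
            for c, s in stats.items()
        },
    }

# Illustrative raw staging rows.
raw = [
    {"customer_id": 1, "country": "NL"},
    {"customer_id": 2, "country": None},
    {"customer_id": 3, "country": "NL"},
]
print(profile(raw, ["customer_id", "country"]))
```

A null-heavy or low-cardinality column spotted this early often changes the history settings you choose for the staging area.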

Design and build target model

Based on the chosen methodology (for example dimensional or Data Vault), design the target data model that the data will be loaded into. Add the objects to the project via wizards or manually.
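As an illustration of methodology-driven target objects, the sketch below derives a Data Vault hub and satellite from a source entity. The naming conventions and technical columns are assumptions for the example, not SEPIDATA defaults.

```python
def data_vault_objects(entity, business_key, attributes):
    """Sketch: derive Data Vault target objects (hub + satellite)
    for one source entity, using assumed naming conventions."""
    hub = {
        "name": f"HUB_{entity.upper()}",
        # Hash key, business key and standard audit columns.
        "columns": [f"{entity}_HK", business_key, "LOAD_DTS", "RECORD_SOURCE"],
    }
    sat = {
        "name": f"SAT_{entity.upper()}",
        # Hash key back to the hub, load timestamp, change hash
        # and the descriptive attributes.
        "columns": [f"{entity}_HK", "LOAD_DTS", "HASH_DIFF", *attributes],
    }
    return hub, sat

hub, sat = data_vault_objects("customer", "customer_number", ["name", "country"])
print(hub["name"], sat["name"])
```

In a dimensional design the same step would instead yield dimension and fact tables; the point is that the methodology dictates the shape of the generated target objects.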

Build the transformation layer

Build the source-to-target mappings (transformation SQL statements). If applicable, do this for every source that needs to be integrated into the target object (multi-source strategy). Connect the target model (depending on the methodology) to the transformation layer output.
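A multi-source mapping ultimately resolves to SQL that aligns each source with the target columns. The sketch below generates such a statement as a UNION ALL of per-source SELECTs; the table and column names are made up for the example, and real transformations would add data type casts and business rules.

```python
def build_transformation_sql(target_columns, source_mappings):
    """Sketch: generate a multi-source transformation statement as a
    UNION ALL of per-source SELECTs mapped onto the target columns."""
    selects = []
    for source_table, column_map in source_mappings:
        # Alias each source column to its target column name.
        exprs = ", ".join(f"{column_map[c]} AS {c}" for c in target_columns)
        selects.append(f"SELECT {exprs} FROM {source_table}")
    return "\nUNION ALL\n".join(selects)

sql = build_transformation_sql(
    ["CustomerNumber", "CustomerName"],
    [
        ("stg_erp.CUSTOMER", {"CustomerNumber": "CUST_NO",
                              "CustomerName": "CUST_NAME"}),
        ("stg_crm.Account", {"CustomerNumber": "AccountId",
                             "CustomerName": "AccountName"}),
    ],
)
print(sql)
```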

Test and load

Build and compile your solution and resolve any compilation errors and warnings. Test and load the solution on your local developer machine and fix bugs or loading issues. Validate the early prototype with data stewards / business users and correct it based on their feedback.

Publish, package and deploy

Publish the code changes to the main branch. Create a deployment package within Visual Studio or deploy via a build server / pipeline. Deploy your solution to the acceptance environment. Run the data warehouse load and validate the output with business stakeholders (business acceptance test).

Business Intelligence

Publish the new DWH release to the SEPIDATA production application server and start building the BI front end (reports / dashboards) or outgoing interfaces based on the new release of the information model.