I will provide this service in the following steps:
1. Data IQ:
– This is where we build an understanding of your data specifics:
how the data is maintained, how regularly the data model is refreshed,
how the data is fetched from different sources, how it is maintained
across the various stores, and so on.
2. Data sources:
– Here, we take a close look at your data sources and finalise the data we need.
– The sources can be your ERP/EAP systems or SQL Servers.
– These data sources act as the data lake for us, from which we extract the required data.
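As a sketch of the extraction step, assuming a relational source (sqlite3 stands in for a SQL Server connection here, and the `orders` table and its columns are hypothetical):

```python
import sqlite3

# Stand-in for a SQL Server source; in practice the connection would
# point at the client's ERP database (the "orders" table is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "EU"), (2, 80.5, "US"), (3, 45.0, "EU")],
)

# Pull only the columns the reporting layer actually needs, not the whole table.
rows = conn.execute("SELECT id, amount FROM orders").fetchall()
print(rows)  # [(1, 120.0), (2, 80.5), (3, 45.0)]
```

Selecting a narrow column list at the source keeps the extract small and avoids moving data the dashboards will never use.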
3. Data Points:
– Data sources hold the data; data points are the endpoints from which we set up
our connections.
– A data point is a temporary source (for example, a staging table or view) that you
create and expose to us for fetching.
– The benefit of creating a data point is isolation: in case of any technical failure
during extraction, the source data is never affected, which also limits the exposure
of your production systems to breaches or attacks.
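A minimal sketch of the data-point idea, using two sqlite3 databases to stand in for the production source and the staging copy (all table names are illustrative):

```python
import sqlite3

source = sqlite3.connect(":memory:")   # stands in for the production source
staging = sqlite3.connect(":memory:")  # the "data point" exposed for extraction

source.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
source.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

# Materialise a snapshot in the staging database; extraction jobs only ever
# touch this copy, so a failed or misbehaving job cannot corrupt the source.
staging.execute("CREATE TABLE sales_snapshot (id INTEGER, amount REAL)")
staging.executemany(
    "INSERT INTO sales_snapshot VALUES (?, ?)",
    source.execute("SELECT id, amount FROM sales").fetchall(),
)

snapshot_count = staging.execute(
    "SELECT COUNT(*) FROM sales_snapshot").fetchone()[0]
print(snapshot_count)  # 2
```

In practice the snapshot would be refreshed on a schedule, and the extraction credentials would only have access to the staging database, never the source.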
4. Data Quality and Maintenance:
– After fetching the data, all loading and transformation takes place on Azure.
– We will set up Azure Data Factory pipelines that fetch the raw data from the data
points and land it in Azure storage.
– The raw data is then transformed and cleaned in a Databricks notebook.
– The loading and transformation steps are wrapped in a pipeline that is triggered
at regular intervals.
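The cleaning step above can be sketched in plain Python (the record shapes and the `clean` helper are illustrative, not a Databricks or Data Factory API):

```python
# Raw extract as it might arrive from a data point: retried loads can produce
# duplicates, and source records can be incomplete.
raw_rows = [
    {"id": 1, "amount": 120.0},
    {"id": 1, "amount": 120.0},  # duplicate from a retried extract
    {"id": 2, "amount": None},   # incomplete record
    {"id": 3, "amount": 45.0},
]

def clean(rows):
    """Drop incomplete records, then de-duplicate on id."""
    seen, out = set(), []
    for row in rows:
        if row["amount"] is None:
            continue
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        out.append(row)
    return out

cleaned = clean(raw_rows)
print([row["id"] for row in cleaned])  # [1, 3]
```

The same dedupe-and-drop logic would run inside the scheduled pipeline, so every refresh delivers a consistent, cleaned dataset downstream.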
– Once we have a clear view of the data and all the required sources are in place,
we will work with your team on planning the KPIs.
– These KPIs are the parameters that help drive business-critical decisions.
– Well-chosen KPIs make the dashboards more productive.
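As an illustration of a KPI computed from cleaned data (the field names and the metric itself are hypothetical, chosen with your team in practice), average order value per region:

```python
# Cleaned order data as it would arrive from the transformation step.
orders = [
    {"region": "EU", "amount": 120.0},
    {"region": "EU", "amount": 45.0},
    {"region": "US", "amount": 80.0},
]

def avg_order_value(rows):
    """Aggregate amount totals and counts per region, then divide."""
    totals = {}
    for row in rows:
        t = totals.setdefault(row["region"], [0.0, 0])
        t[0] += row["amount"]
        t[1] += 1
    return {region: total / count for region, (total, count) in totals.items()}

kpi = avg_order_value(orders)
print(kpi)  # {'EU': 82.5, 'US': 80.0}
```

A dashboard tile would display exactly this per-region figure, refreshed each time the pipeline runs.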