Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.



Question

I NEED THIS IN 30 MINUTES. ORIGINAL WORK PLEASE.

1. Presume I were to ask you this question today, right now as you are taking this exam: “If we decrease prices by 5% for all of our weight loss meal programs starting next Monday, how will that affect market share over the next 18 months?” This is an example of which “tell me” type (or “flavor”) of business intelligence, according to our lecture material?

2. What is an example of a common internal-facing BI function in the retail industry?

3. What is an example of a common customer-facing BI function in the auto insurance industry?

4. Please describe the major differences between an initial ETL load, run right before a data warehouse goes live, and regular incremental/refresh loads (e.g., the weekly or daily ETL runs that occur after the data warehouse is operational).

Your answer must be detailed enough to demonstrate that you fully understand the differences. Your answer must address, at a minimum:

i. data selection and filtering for initial vs. incremental/refresh loads;

ii. relative data volumes of initial vs. incremental/refresh loads;

iii. expected time duration of ETL runs for initial vs. incremental/refresh loads.

Explanation / Answer

Whether you want to deploy a business intelligence system or analyze historical records, data quality is one of the top concerns. For a corporation to thrive in today's market, data consolidation and analysis are vital, quite literally. This is where the ETL process comes into the picture. ETL is a three-step process in data migration and data warehousing:
Extract the data from various sources into one repository, which may be homogeneous or heterogeneous. Those data sources are often relational databases, spreadsheets, XML files, CSV files, etc.
Transform the data into the required schema. This might include various mapping functions to cleanse the data before it is moved into the new system, filtering the data to make it concise and ready to use, or changing the data format using certain rules or lookup tables (since the format of the old system may differ from the new one).
Load the transformed data into the destination database, or the warehouse.
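The three steps above can be sketched in a few lines. This is a minimal illustration, not a production tool: the CSV source, the `orders` table, and the cleansing rules (trimming names, converting dollar amounts to cents) are all invented for the example.

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: read raw rows from a CSV source into dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cleanse rows to match the target schema
    (normalize customer names, convert dollar amounts to integer cents)."""
    return [
        {
            "customer": r["customer"].strip().title(),
            "amount_cents": int(round(float(r["amount"]) * 100)),
        }
        for r in rows
    ]

def load(rows, conn):
    """Load: insert the transformed rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (customer TEXT, amount_cents INTEGER)")
    conn.executemany(
        "INSERT INTO orders VALUES (:customer, :amount_cents)", rows)
    conn.commit()

# A tiny end-to-end run against an in-memory warehouse.
source = "customer,amount\n alice ,19.99\nBOB,5.00\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(source)), conn)
print(conn.execute(
    "SELECT customer, amount_cents FROM orders ORDER BY customer").fetchall())
# prints [('Alice', 1999), ('Bob', 500)]
```

Real ETL tools add much more (logging, error handling, parallelism), but the extract/transform/load shape stays the same.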
Let's take an example for better understanding. Suppose you have an online order processing system that handles the orders customers place at your eCommerce site. The system stores each order even after it has shipped (with the status "completed"), which can clog your database with a huge volume of old orders. You might want to move those completed orders to a new system that contains just those orders, for better management, data mining, and analysis.

So, to consolidate the historical data from all the disparate sources, you set up an ETL system that moves the data from the smaller operational databases into the larger, long-term warehouse.
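This order example also shows the difference the exam question asks about between the initial load and later incremental loads. A hypothetical sketch, assuming each order record carries a status and an `updated_at` timestamp (both field names are invented): the initial load selects the entire completed-order history, while each incremental run selects only what changed since the last run, so its volume and duration are far smaller.

```python
from datetime import datetime

# Invented sample source data for the eCommerce example.
ORDERS = [
    {"id": 1, "status": "completed", "updated_at": datetime(2023, 1, 5)},
    {"id": 2, "status": "pending",   "updated_at": datetime(2023, 1, 6)},
    {"id": 3, "status": "completed", "updated_at": datetime(2023, 2, 1)},
]

def initial_load(orders):
    """Initial load: select the entire history of completed orders,
    typically a huge volume and a long-running one-time job."""
    return [o for o in orders if o["status"] == "completed"]

def incremental_load(orders, last_run):
    """Incremental/refresh load: select only completed orders changed
    since the last run (a 'watermark'), so each run is small and fast."""
    return [o for o in orders
            if o["status"] == "completed" and o["updated_at"] > last_run]
```

For example, an incremental run with a last-run watermark of January 31 picks up only order 3, while the initial load takes orders 1 and 3.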

Based on the complexity of your working environment, you can buy a suitable ETL tool, or you can even build one of your own using an appropriate programming language! Speaking of complexity, how exactly do we know how complex our environment is? First, determine how many source systems feed our ETL system. Next, determine what kind of transformation the data requires and how difficult it is to apply that transformation to fit the existing data into the target system. Lastly, set up a routine that continually checks for errors and discrepancies in the ETL system. There you go! You now have your own Extract, Transform, and Load.
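That last checking routine can be as simple as comparing counts after each run. A hypothetical sketch (the function name and message format are invented); a real system would also check sums, keys, and data types.

```python
def rowcount_check(source_rows, target_rows):
    """Post-load sanity check: flag a discrepancy if the number of rows
    loaded into the target differs from the number extracted from the
    source. Returns a list of problem messages; empty means a clean run."""
    problems = []
    if len(source_rows) != len(target_rows):
        problems.append(
            f"row count mismatch: source={len(source_rows)} "
            f"target={len(target_rows)}")
    return problems
```

Running such a check after every ETL run is how discrepancies get caught early instead of accumulating in the warehouse.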