70-463 Training Kit


You do not need these label-only columns for pivoting. If that data is not present in a DW, you will need to get it from an LOB database, probably with a distributed query. It is much simpler to store this data in your data warehouse.

In addition, queries that use this data perform better, because the queries do not have to include data from LOB databases. Columns used in reports as labels only, not for pivoting, are called member properties. You can have naming and member property columns in multiple languages in your dimension tables, providing the translation for each language you need to support. SSAS can use your translations automatically.

For reports from a data warehouse, you need to manually select columns with appropriate language translation. In addition to the types of dimension columns already defined for identifying, naming, pivoting, and labeling on a report, you can have columns for lineage information, as you saw in the previous lesson.

Pivoting on MaritalStatus, for example, is unrelated to pivoting on YearlyIncome. None of these columns have any functional dependency between them, and there is no natural drill-down path through these attributes. Now look at the DimDate columns, as shown in the figure. There is a functional dependency among them, so they break third normal form.

They form a hierarchy. Hierarchies are particularly useful for pivoting and OLAP analyses—they provide a natural drill-down path. You perform divide-and-conquer analyses through hierarchies. Hierarchies have levels. When drilling down, you move from a parent level to a child level.

At each level, you have members. This is why dimension columns used in reports for labels are called member properties. In a Snowflake schema, lookup tables show you levels of hierarchies. In a Star schema, you need to extract natural hierarchies from the names and content of columns.
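As an illustrative sketch, lookup tables in a Snowflake schema might expose the levels of a product hierarchy like this. The table and column names are hypothetical, not taken from the book's sample database:

```sql
-- Each level of the Category -> Subcategory -> Product hierarchy
-- is its own lookup table; the foreign key points to the parent level.
CREATE TABLE dbo.Category (
    CategoryKey  INT          NOT NULL PRIMARY KEY,
    CategoryName NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.Subcategory (
    SubcategoryKey  INT          NOT NULL PRIMARY KEY,
    SubcategoryName NVARCHAR(50) NOT NULL,
    CategoryKey     INT          NOT NULL
        REFERENCES dbo.Category (CategoryKey)        -- parent level
);

CREATE TABLE dbo.Product (
    ProductKey     INT          NOT NULL PRIMARY KEY,
    ProductName    NVARCHAR(50) NOT NULL,
    SubcategoryKey INT          NOT NULL
        REFERENCES dbo.Subcategory (SubcategoryKey)  -- parent level
);
```

Following the chain of foreign keys from Product up to Category gives you the drill-down path directly from the schema.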

Nevertheless, because drilling down through natural hierarchies is so useful and welcomed by end users, you should use them as much as possible.

Note also that attribute names are used for labels of row and column groups in a pivot table. Therefore, a good naming convention is crucial for a data warehouse. You should always use meaningful and descriptive names for dimensions and attributes.

Slowly Changing Dimensions

There is one common problem with dimensions in a data warehouse: the data in the dimension changes over time. This is usually not a problem in an OLTP application; when a piece of data changes, you just update it.

However, in a DW, you have to maintain history. The question that arises is how to maintain it. Do you want to update only the changed data, as in an OLTP application, and pretend that the value was always the last value, or do you want to maintain both the first and intermediate values? The problem is best explained with an example. Imagine a customer from Vienna, Austria. Now imagine that the customer moves to Ljubljana, Slovenia.

The fact that this customer contributed to sales in Vienna and in Austria in the past would have disappeared. You could use the same key, such as the business key, for your Customer dimension. You could update the City column when you get a change notification from the OLTP system, and thus overwrite the history. To recapitulate, Type 1 means overwriting the history for an attribute and for all higher levels of hierarchies to which that attribute belongs. But you might prefer to maintain the history, to capture the fact that the customer contributed to sales in another city and country or region.
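A Type 1 change can be sketched as a plain overwrite. The table, columns, and business key value below are hypothetical:

```sql
-- Type 1 SCD: overwrite the attribute and lose the history.
-- The customer (business key 17) moves from Vienna to Ljubljana.
UPDATE dbo.DimCustomer
SET City    = N'Ljubljana',
    Country = N'Slovenia'
WHERE CustomerBusinessKey = 17;   -- business key from the source system
```

After this statement runs, no trace remains of the fact that the customer ever lived in Vienna.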

In that case, you cannot just overwrite the data; you have to insert a new row containing new data instead. Of course, the values of other columns that do not change remain the same.

However, that creates a new problem. If you simply add a new row for the customer with the same key value, the key would no longer be unique. In fact, if you tried to use a primary key or unique constraint as the key, the constraint would reject such an insert. Therefore, you have to do something with the key. You should not modify the business key, because you need a connection with the source system.

The solution is to introduce a new key, a data warehouse key. In DW terminology, this kind of key is called a surrogate key. When you implement Type 2 SCD, for the sake of simpler querying, you typically also add a flag to denote which row is current for a dimension member. Alternatively, you could add two columns showing the interval of validity of a value.

The data type of the two columns should be Date, and the columns should show the values Valid From and Valid To. Table shows an example of the flag version of Type 2 SCD handling. For example, in Table , you might want to maintain the history for the City column but overwrite the history for the Occupation column.
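A minimal sketch of Type 2 handling, combining the surrogate key, the current-row flag, and the validity interval, might look like this. All names, dates, and key values are hypothetical:

```sql
-- Hypothetical Type 2 dimension: surrogate key, current-row flag,
-- and validity interval columns.
CREATE TABLE dbo.DimCustomer (
    CustomerKey         INT IDENTITY(1, 1) PRIMARY KEY,  -- surrogate key
    CustomerBusinessKey INT          NOT NULL,           -- key from the source system
    City                NVARCHAR(50) NOT NULL,
    IsCurrent           BIT          NOT NULL,
    ValidFrom           DATE         NOT NULL,
    ValidTo             DATE         NOT NULL
);

-- A Type 2 change: close the current row, then insert a new one.
UPDATE dbo.DimCustomer
SET IsCurrent = 0, ValidTo = '20250601'
WHERE CustomerBusinessKey = 17 AND IsCurrent = 1;

INSERT INTO dbo.DimCustomer
    (CustomerBusinessKey, City, IsCurrent, ValidFrom, ValidTo)
VALUES
    (17, N'Ljubljana', 1, '20250601', '99991231');
```

Queries that only need current data can then filter on IsCurrent = 1, while historical analyses join facts to the row whose validity interval contains the fact date.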

That raises yet another issue. When you want to update the Occupation column, you may find that there are two and maybe more rows for the same customer.

The question is, do you want to update the last row only, or all the rows? Table shows a version that updates the last current row only, whereas Table shows all of the rows being updated. Especially well-known is Type 3 SCD, in which you manage a limited amount of history through additional historical columns. Table shows Type 3 handling for the City column. Which solution should you implement? You should discuss this with end users and subject matter experts (SMEs).
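Type 3 handling for the City column could be sketched as follows; the table and column names are again hypothetical:

```sql
-- Type 3 SCD: keep a limited history in an additional column.
ALTER TABLE dbo.DimCustomer ADD PreviousCity NVARCHAR(50) NULL;

-- On a change, shift the current value into the history column first.
UPDATE dbo.DimCustomer
SET PreviousCity = City,
    City         = N'Ljubljana'
WHERE CustomerBusinessKey = 17;
```

Note that this design can remember only one previous value per attribute; a second move would overwrite the Vienna entry.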

They should decide for which attributes to maintain the history, and for which ones to overwrite the history. You should then choose a solution that uses Type 2, Type 1, or a mixture of Types 1 and 2, as appropriate. However, there is an important caveat: one attribute should never change, and that attribute is the original business key. In an OLTP database, business keys should not change. Business keys should also not change if you are merging data from multiple sources.

For merged data, you usually have to implement a new, surrogate key, because business keys from different sources can have the same value for different entities. However, business keys should not change; otherwise you lose the connection with the OLTP system. Using surrogate keys in a data warehouse, at least for the most common dimensions (those representing customers, products, and similar important data), is considered a best practice. Not changing OLTP keys is a best practice as well. In this practice, you will explore some of these concepts.

Try to figure out whether the tables in the diagram are prepared for a Type 2 SCD change. Add the DimSalesReason table to the diagram. Try to figure out whether there is some natural hierarchy between attributes of the DimSalesReason dimension. Your diagram should look like the figure. For example, Color and Size are such attributes. In a Star schema, it is more difficult to spot natural hierarchies. Though you can simply follow the lookup tables in a Snowflake schema and find levels of hierarchies, you have to recognize hierarchies from attribute names in a Star schema.

If you cannot extract hierarchies from column names, you could also check the data.

1. You implement a Type 2 solution for an SCD problem for a specific column. What do you actually do when you get a changed value for the column from the source system?

A. Add a column for the previous value to the table. Move the current value of the updated column to the new column. Update the current value with the new value from the source system.
B. Insert a new row for the same dimension member with the new value for the updated column. Use a surrogate key, because the business key is now duplicated. Add a flag that denotes which row is current for a member.
C. Do nothing, because in a DW, you maintain history; you do not update dimension data.
D. Update the value of the column just as it was updated in the source system.

2. Which kind of a column is not a part of a dimension?

A. Attribute
B. Measure
C. Key
D. Member property
E. Name

3. How can you spot natural hierarchies in a Snowflake schema?

A. You need to analyze the content of the attributes of each dimension.
B. Lookup tables for each dimension provide natural hierarchies.
C. A Snowflake schema does not support hierarchies.
D. You should convert the Snowflake schema to the Star schema, and then you would spot the natural hierarchies immediately.

Lesson 3: Designing Fact Tables

Fact tables, like dimensions, have specific types of columns that limit the actions that can be taken with them.

Queries from a DW aggregate data; depending on the particular type of column, there are some limitations on which aggregate functions you can use. Many-to-many relationships in a DW can be implemented differently than in a normalized relational schema.

You store measurements in columns. Logically, this type of column is called a measure. Measures are the essence of a fact table. They are usually numeric and can be aggregated. They store values that are of interest to the business, such as sales amount, order quantity, and discount amount. In Lesson 1 of this chapter, you saw that a fact table includes foreign keys from all dimensions.

These foreign keys are the second type of column in a fact table. All foreign keys together usually uniquely identify each row and can be used as a composite primary key. You often include an additional key that is shorter and consists of only one or two columns. This shorter key is usually the business key from the table that was used as the primary source for the fact table.

For example, suppose you start building a sales fact table from an order details table in a source system, and then add foreign keys that pertain to the order as a whole from the Order Header table in the source system.

Tables , , and illustrate an example of such a design process. Table shows a simplified example of an Orders Header source table. The OrderId column is the primary key for this table. The CustomerId column is a foreign key from the Customers table. The OrderDate column is not a foreign key in the source table; however, it becomes a foreign key in the DW fact table, for the relationship with the explicit date dimension.

Note, however, that foreign keys in a fact table can be, and usually are, replaced with DW surrogate keys of DW dimensions. In addition, the source Order Details table has the ProductId foreign key column. The Quantity column is the measure. The Order Details table was the primary source for this fact table. The CustomerId and OrderDate columns come from the source Orders Header table; these columns pertain to orders, not order details.

However, you should keep the OrderId and LineItemId columns to make quick controls and comparisons with source data possible. In addition, if you were to use them as the primary key, then the primary key would be shorter than one composed from all foreign keys.
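A sales fact table along these lines might be sketched as follows. The exact names, data types, and referenced dimension tables are hypothetical:

```sql
-- Hypothetical sales fact table: dimension foreign keys, measures,
-- and the business keys kept for checks against the source system.
CREATE TABLE dbo.FactSales (
    CustomerKey INT            NOT NULL REFERENCES dbo.DimCustomer (CustomerKey),
    ProductKey  INT            NOT NULL REFERENCES dbo.DimProduct  (ProductKey),
    DateKey     INT            NOT NULL REFERENCES dbo.DimDate     (DateKey),
    OrderId     INT            NOT NULL,   -- business key from Orders Header
    LineItemId  INT            NOT NULL,   -- business key from Order Details
    Quantity    INT            NOT NULL,   -- measure
    SalesAmount DECIMAL(18, 2) NOT NULL,   -- measure
    CONSTRAINT PK_FactSales PRIMARY KEY (OrderId, LineItemId)
);
```

Here the two-column business key from the primary source serves as the primary key, which is shorter than a key composed from all three dimension foreign keys.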

The last column type used in a fact table is the lineage type, if you implement lineage. Just as with dimensions, you never expose the lineage information to end users. Turning to the additivity of measures, you should consider which aggregate functions you will use in reports for which measures, and which aggregate functions you will use when aggregating over which dimension.

The simplest types of measures are those that can be aggregated with the SUM aggregate function across all dimensions, such as amounts or quantities. Measures that can be summarized across all dimensions are called additive measures.

Some measures are not additive over any dimension. Examples include prices and percentages, such as a discount percentage. Such measures are called non-additive measures. Often, you can sum additive measures and then calculate non-additive measures from the additive aggregations. For example, you can calculate the sum of sales amount and then divide that value by the sum of the order quantity to get the average price.
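The following query sketches this calculation; the table and column names are hypothetical:

```sql
-- Derive a non-additive measure (average price) from additive aggregates.
SELECT
    ProductKey,
    SUM(SalesAmount)                           AS TotalSalesAmount,
    SUM(Quantity)                              AS TotalQuantity,
    SUM(SalesAmount) / NULLIF(SUM(Quantity), 0) AS AvgPrice  -- NULLIF guards against division by zero
FROM dbo.FactSales
GROUP BY ProductKey;
```

Because the average is computed from the sums, it stays correct at any level of aggregation; storing a precomputed average per row and summing it would not.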

This way, you can simplify queries. For some measures, you can use the SUM aggregate function over all dimensions but time. Some examples include levels and balances. Such measures are called semi-additive measures. You should take care how you aggregate such measures in a report. Over time, you can calculate the average value or use the last value as the aggregate. Suppose your measures are debit, credit, and balance.
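The last-value aggregation over time can be sketched as follows, assuming a hypothetical FactAccountBalance table with CustomerKey, DateKey, and Balance columns:

```sql
-- Semi-additive measure: sum the balance across customers,
-- but take the last known value over time.
WITH LastDate AS
(
    SELECT CustomerKey, MAX(DateKey) AS LastDateKey
    FROM dbo.FactAccountBalance
    GROUP BY CustomerKey
)
SELECT SUM(f.Balance) AS TotalBalance     -- additive across customers
FROM dbo.FactAccountBalance AS f
INNER JOIN LastDate AS l
    ON  f.CustomerKey = l.CustomerKey
    AND f.DateKey     = l.LastDateKey;    -- last value over time
```

Summing Balance over the Date dimension itself would double-count the same money for every day it sat in the account, which is why the time dimension needs this special treatment.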

What is the additivity of each measure? SSAS has support for semi-additive and non-additive measures. SSAS supports two types of models: multidimensional and tabular. The multidimensional model more properly represents a cube.

However, the multidimensional model includes even more metadata than the tabular model. For example, SSAS offers the LastNonEmpty aggregate function, which properly uses the SUM aggregate function across all dimensions but time, and defines the last known value as the aggregate over time. The DAX language includes functions that let you build semi-additive expressions quite quickly as well.

Many-to-Many Relationships

In a relational database, the many-to-many relationship between two tables is resolved through a third intermediate table. For example, in the AdventureWorksDW database, every Internet sale can be associated with multiple reasons for the sale, and every reason can be associated with multiple sales.

However, SSAS has problems with this model. For reports from a DW, it is you, the developer, who writes queries. In contrast, reporting from SSAS databases is done by using client tools that read the schema and only afterwards build a user interface (UI) for selecting measures and attributes.

To create the queries and build the UI properly, the tools rely on standard Star or Snowflake schemas. One solution is to add an intermediate dimension between the fact table and the intermediate table; you create it from the primary key of the FactInternetSales table. However, you have to realize that the relationship between the FactInternetSales and the new DimFactInternetSales dimension is de facto one to one.

In addition, when you create such a dimension, you can expose it to end users for reporting. However, a dimension containing key columns only is not very useful for reporting. To make it more useful, you can add additional attributes that form a hierarchy. Date variations, such as year, quarter, month, and day, are very handy for drilling down. The figure shows a many-to-many relationship with an additional intermediate dimension. In this practice, you are going to review one such relationship. Note that you have to infer these details from the names of the measure columns; in a real-life project, you should check the content of the columns as well.
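The intermediate dimension and the bridge between sales and sales reasons could be sketched as follows. The column names follow the AdventureWorksDW style, but treat the exact definitions as an assumption, not the sample database's actual DDL:

```sql
-- Intermediate dimension created from the primary key of FactInternetSales;
-- its relationship to the fact table is de facto one to one.
CREATE TABLE dbo.DimFactInternetSales (
    SalesOrderNumber     NVARCHAR(20) NOT NULL,
    SalesOrderLineNumber TINYINT      NOT NULL,
    CONSTRAINT PK_DimFactInternetSales
        PRIMARY KEY (SalesOrderNumber, SalesOrderLineNumber)
);

-- Bridge (factless fact) table connecting sales to sales reasons;
-- one row per sale/reason combination resolves the many-to-many.
CREATE TABLE dbo.FactInternetSalesReason (
    SalesOrderNumber     NVARCHAR(20) NOT NULL,
    SalesOrderLineNumber TINYINT      NOT NULL,
    SalesReasonKey       INT          NOT NULL
        REFERENCES dbo.DimSalesReason (SalesReasonKey),
    CONSTRAINT PK_FactInternetSalesReason
        PRIMARY KEY (SalesOrderNumber, SalesOrderLineNumber, SalesReasonKey)
);
```

Client tools can then treat DimFactInternetSales as an ordinary dimension on both the sales fact table and the bridge, restoring a standard Star-like shape.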

Using the SUM aggregate function for these two columns is reasonable for aggregations over any dimension. Summing it over any dimension does not make sense.

You can use the SUM aggregate function over any dimension but time. Close the diagram and exit SSMS.

1. Over which dimension can you not use the SUM aggregate function for semi-additive measures?

A. Customer
B. Product
C. Date
D. Employee

2. Which measures would you expect to be non-additive?

A. Price
B. Debit
C. SalesAmount
D. DiscountPct
E. UnitBalance

3. Which kind of a column is not part of a fact table?

A. Lineage
B.

You have to prepare the schema for sales data. What kind of schema would you use? What would the dimensions of your schema be?

Do you expect additive measures only? In fact, the business would like to extend the project to a real, long-term data warehouse.




