Data Warehouse

Data-warehousing Basics


Informatica greatly simplifies data warehouse design and numerous routine tasks related to data transformation and migration (ETL - Extract, Transform, and Load), as well as day-to-day maintenance and management.


ETL is a process to extract data, mostly from different types of systems, transform it into a structure that is more appropriate for reporting and analysis, and finally load it into the database. The figure below displays these ETL steps.
Figure: ETL architecture and steps - an overview of a data warehouse and ETL architecture showing what ETL is.

But today, ETL is much more than that. It also covers data profiling, data quality control, monitoring and cleansing, real-time and on-demand data integration in a service-oriented architecture (SOA), and metadata management.
1. ETL - Extract from source

In this step we extract data from different internal and external sources, structured and/or unstructured. Plain queries are sent to the source systems, using native connections, message queuing, ODBC or OLE-DB middleware. The data is put in a so-called Staging Area (SA), usually with the same structure as the source. In some cases we only want the data that is new or has changed; then the queries return only the changes. Some ETL tools can do this automatically, providing a changed data capture (CDC) mechanism.
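A minimal sketch of such an incremental extract, assuming a hypothetical source table orders with a last_modified column and a staging table stg_orders of the same structure (sqlite3 merely stands in for the real source and staging databases; none of this is Informatica-specific):

    # Incremental (CDC-style) extract: copy only rows changed since the last run
    # from a source table into a staging table of the same structure.
    import sqlite3

    source = sqlite3.connect(":memory:")
    staging = sqlite3.connect(":memory:")

    source.execute("CREATE TABLE orders (id INTEGER, amount REAL, last_modified TEXT)")
    source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                       [(1, 100.0, "2024-01-01"), (2, 250.0, "2024-02-15")])
    staging.execute("CREATE TABLE stg_orders (id INTEGER, amount REAL, last_modified TEXT)")

    last_run = "2024-01-31"  # high-water mark remembered from the previous load

    # Plain query against the source: only rows that are new or changed since last_run
    changed = source.execute(
        "SELECT id, amount, last_modified FROM orders WHERE last_modified > ?",
        (last_run,)).fetchall()

    staging.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", changed)
    staging.commit()
    print(staging.execute("SELECT * FROM stg_orders").fetchall())  # only the changed row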
2. ETL - Transform the data

Once the data is available in the Staging Area, it is all on one platform and in one database. So we can easily join and union tables, filter and sort the data using specific attributes, pivot to another structure and make business calculations. In this step of the ETL process, we can also check data quality and cleanse the data if necessary. After having all the data prepared, we can choose to implement slowly changing dimensions. In that case we keep track, in our analysis and reports, of when attributes change over time, for example when a customer moves from one region to another.
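As a rough illustration of the slowly changing dimension idea (Type 2, where history is preserved), here is a small Python sketch; the customer dimension rows and field names are invented for the example:

    from datetime import date

    # Existing customer dimension rows; is_current marks the active version.
    dim_customer = [
        {"customer_id": 42, "region": "North", "valid_from": date(2023, 1, 1),
         "valid_to": None, "is_current": True},
    ]

    def apply_scd2(dim, customer_id, new_region, change_date):
        """Type 2 slowly changing dimension: close the old row, add a new one."""
        for row in dim:
            if row["customer_id"] == customer_id and row["is_current"]:
                if row["region"] == new_region:
                    return  # nothing changed
                row["valid_to"] = change_date      # close the historical version
                row["is_current"] = False
        dim.append({"customer_id": customer_id, "region": new_region,
                    "valid_from": change_date, "valid_to": None, "is_current": True})

    # Customer 42 moves from North to South; both versions are kept for analysis.
    apply_scd2(dim_customer, 42, "South", date(2024, 6, 1))
    for row in dim_customer:
        print(row)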
3. ETL - Load into the data warehouse

Finally, the data is loaded into a data warehouse, usually into fact and dimension tables. From there the data can be combined, aggregated and loaded into data marts or cubes as needed.
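A simplified sketch of this load step, again with invented table and column names: staged rows are resolved against a dimension to get surrogate keys, loaded as fact rows, and then aggregated into a small data mart:

    # Dimension lookup: natural key -> surrogate key (normally queried from the warehouse)
    dim_product = {"SKU-1": 1001, "SKU-2": 1002}

    staged_sales = [
        {"product": "SKU-1", "qty": 3, "amount": 30.0},
        {"product": "SKU-2", "qty": 1, "amount": 25.0},
        {"product": "SKU-1", "qty": 2, "amount": 20.0},
    ]

    # Load: resolve each staged row to a fact row keyed by the dimension's surrogate key
    fact_sales = [{"product_key": dim_product[r["product"]],
                   "qty": r["qty"], "amount": r["amount"]} for r in staged_sales]

    # Aggregate into a data mart: total amount per product
    mart = {}
    for row in fact_sales:
        mart[row["product_key"]] = mart.get(row["product_key"], 0.0) + row["amount"]
    print(mart)  # {1001: 50.0, 1002: 25.0}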
Data profiling and data quality control

Profiling the data gives direct insight into the data quality of the source systems. It can show how many rows have missing or invalid values, or what the distribution of values in a specific column is. Based on this knowledge, one can specify business rules to cleanse the data, or to keep really bad data out of the data warehouse. By doing data profiling before designing your ETL process, you are better able to design a system that is robust and has a clear structure.
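A bare-bones profiling pass over a staged table could look like the following sketch (pure Python, with invented sample data); real profiling tools produce much richer statistics:

    from collections import Counter

    rows = [
        {"customer_id": 1, "region": "North", "email": "a@example.com"},
        {"customer_id": 2, "region": None,    "email": "not-an-email"},
        {"customer_id": 3, "region": "South", "email": None},
    ]

    def profile(rows, column, is_valid=lambda v: True):
        """Count missing and invalid values and build a value distribution."""
        values = [r[column] for r in rows]
        missing = sum(v is None for v in values)
        invalid = sum(v is not None and not is_valid(v) for v in values)
        distribution = Counter(v for v in values if v is not None)
        return {"missing": missing, "invalid": invalid, "distribution": distribution}

    print(profile(rows, "region"))
    print(profile(rows, "email", is_valid=lambda v: "@" in v))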
Metadata management

Information about all the data that is processed, from sources to targets by transformations, is often put into a metadata repository: a database containing all the metadata. The entire ETL process can be 'managed' with metadata management. For example, one can query how a specific target attribute is built up in the ETL process, which is called data lineage. Or you may want to know what the impact of a change will be, for example when the size of the order identifier (id) is changed, and which ETL steps this affects.
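The idea can be shown with a toy metadata structure (the attribute names and mappings below are invented): data lineage walks backwards from a target attribute to its sources, and impact analysis walks forwards from a source attribute to everything derived from it:

    # Toy metadata repository: each target attribute and the source attributes it is built from.
    metadata = {
        "dw.fact_sales.amount":       ["stg.orders.amount"],
        "dw.fact_sales.order_id":     ["stg.orders.id"],
        "mart.sales_by_region.total": ["dw.fact_sales.amount"],
    }

    def lineage(attribute):
        """Data lineage: all upstream attributes a target is built from."""
        upstream = set()
        for src in metadata.get(attribute, []):
            upstream.add(src)
            upstream |= lineage(src)
        return upstream

    def impact(attribute):
        """Impact analysis: all downstream attributes affected by a change."""
        downstream = set()
        for target, sources in metadata.items():
            if attribute in sources:
                downstream.add(target)
                downstream |= impact(target)
        return downstream

    print(lineage("mart.sales_by_region.total"))  # traces back to stg.orders.amount
    print(impact("stg.orders.id"))                # e.g. the order id size is changed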


Informatica has a simple visual interface. You do most of the work by simply dragging and dropping with your mouse in the Designer. This graphical approach also makes it very easy to understand what is going on.

Informatica can communicate with all major databases and can move/transform data between them. It can move huge volumes of data in a very efficient way. It can throttle the transactions (do big updates in small chunks to avoid long locking and filling up the transaction log). It can efficiently do joins between tables in different databases on different servers. The tasks are performed by the Informatica Server (on Unix or MS Windows). You get a client application called "Server Manager" to work with the server.
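The throttling idea (committing a large update in small batches so that locks are held only briefly and the transaction log stays small) can be sketched in plain Python; the chunk size and table are illustrative and sqlite3 merely stands in for a real database:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO orders (id, status) VALUES (?, ?)",
                     [(i, "open") for i in range(1, 10001)])
    conn.commit()

    CHUNK = 1000  # commit every 1000 rows instead of one huge transaction

    while True:
        cur = conn.execute(
            "UPDATE orders SET status = 'archived' "
            "WHERE id IN (SELECT id FROM orders WHERE status = 'open' LIMIT ?)",
            (CHUNK,))
        conn.commit()          # short transaction: locks released, log kept small
        if cur.rowcount == 0:  # nothing left to update
            break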

You design your processes in a client application called "Designer". This is where you specify what the source databases and tables will be, what the targets will be, and how you move/transform the data.

Informatica uses its own database called the "Metadata Repository Database", or simply the Repository. The Repository stores the metadata (rules) needed for data extraction, transformation, loading, and management. You get a client application called "Repository Manager" to work with the repository.

Working with Informatica

Here are the pieces of the puzzle:

* source database(s), target database(s), repository metadata database
* Informatica server
* PC-based client software (Designer, Server Manager, Repository Manager)

Setting everything up is also straightforward. Once the server components are installed and configured, you install the client applications, configure ODBC, and register the Informatica Server in the Server Manager. Then you create a Repository, create users and groups, and edit user profiles. Add source and target definitions, set up mappings between the sources and targets, create a session for each mapping - and run the sessions (resulting in data being written to the targets).

Remaining can be read from