Business Intelligence in the MultiValue Community

Business Intelligence is certainly not a new concept; businesses have used it for almost two decades. But because most Business Intelligence (BI) solutions lack native support for non-first-normal-form database structures, adoption has not been an easy journey for the MultiValue community. Almost all of the top solutions require SQL access via ODBC or JDBC. Rocket® UniVerse, Rocket® UniData, and Rocket® D3 DBMS can all expose their databases as SQL-compliant. However, this comes at a price in performance, maintenance (SQL mapping), and knowledge of a language that is not natural for most MultiValue professionals.

As a result, some MultiValue application providers have taken the strategic approach of using various ETL (extract, transform, and load) processes to migrate the MV application data into a relational data warehouse or data mart. The end-user community then performs its analysis on this derived data source. This approach is valid, and some of our partners have had success with it. On the surface it may seem to mitigate the data access dilemma; however, as most companies soon realize, this approach to Business Intelligence can be costly, because companies must:

  • Purchase and maintain a relational database
  • Add or internally develop expertise in the additional database
  • Add or internally develop a strong knowledge of SQL
  • Increase expenditures on dedicated hardware for the data warehouse or data mart
  • Develop and maintain the ETL processes to normalize the MV data and load it into the relational database (a minimal sketch of the un-nesting step follows this list)
  • Dedicate resources to a single purpose solution
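
To make that ETL bullet concrete, here is a minimal un-nesting sketch in UniVerse BASIC. It is a hypothetical illustration, not part of any product: the ORDERS file name, the field positions, and the output file name are all assumptions. The program explodes each multivalued line item into a flat, pipe-delimited row suitable for a relational staging table.

    * Hypothetical ETL un-nesting sketch (UniVerse BASIC).
    * Explodes multivalued order lines into flat, pipe-delimited rows.
    OPEN '', 'ORDERS' TO ORDERS.F ELSE STOP 'Cannot open ORDERS'
    OPENSEQ 'order_lines.txt' TO OUT.F ELSE
       CREATE OUT.F ELSE STOP 'Cannot create output file'
    END
    SELECT ORDERS.F
    LOOP
       READNEXT ORDER.ID ELSE EXIT
       READ REC FROM ORDERS.F, ORDER.ID THEN
          NLINES = DCOUNT(REC<3>, @VM)      ;* one output row per line item
          FOR V = 1 TO NLINES
             ROW = ORDER.ID : '|' : REC<1>  ;* order ID and customer ID
             ROW := '|' : REC<3,V>          ;* part number
             ROW := '|' : REC<4,V>          ;* quantity
             WRITESEQ ROW TO OUT.F ELSE NULL
          NEXT V
       END
    REPEAT
    CLOSESEQ OUT.F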

Although this is not an exhaustive list of what this deployment architecture requires, it is easy to see that the approach adds considerable expense and complexity to the overall application solution. Another critical impact is that it naturally increases the likelihood that the data being analyzed is too stale for time-critical decision making; how stale depends directly on how frequently the transactional/operational systems are synchronized with the data warehouse or data mart.

Native Data Access in a MultiValue Database

One of the ironies of making an MV database SQL-compliant, whether within the MV database itself or by migrating the data to a relational database, is that you must break apart the natural relationships of the MV design by normalizing the nested structures, only to join them back together via SQL queries. Either the end user must know the table relationships, or highly skilled people must build a metadata layer that hides from the user the complex joins needed to re-assemble the related data into a correct query result.
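
As a hedged illustration (the file, fields, and values are hypothetical), consider an ORDERS record whose line items live in multivalued fields, with value marks shown here as ]:

    ORDERS record '1001'
    001  CUST.ID   5042
    002  ORDER.DT  08/15/2015
    003  PART.NO   A100]B220]C310
    004  QTY       2]1]5

Normalizing this for SQL splits it into an ORDERS table and an ORDER_LINES child table keyed back to the order, and every query that needs line-item detail must join the two back together.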

What if you did not have to deconstruct a database design whose relationships are already defined and mature? What if an end user could simply select a field (attribute) containing related data from another file and get correct results? It is common practice for MV developers to accomplish this with correlatives and translates. Ultimately, a native approach is far more performant than either physically un-nesting the data structures or virtualizing them via an SQL mapping layer. It also means no more flattening of data.
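
For example, here is a hedged sketch of a UniVerse I-descriptor (the file names and field positions are hypothetical) that uses TRANS to pull the customer name from a related CUSTOMERS file, so the end user never has to write a join:

    ED DICT ORDERS CUST.NAME
    001: I
    002: TRANS('CUSTOMERS', @RECORD<1>, 2, 'X')
    003:
    004: Customer Name
    005: 25L
    006: S

With that definition in place, a query as simple as LIST ORDERS CUST.NAME ORDER.DT returns the joined result, resolved natively by the engine with no SQL mapping layer involved.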

Translates and links are resolved at the core level of the database engine, providing the most efficiency. Any dictionary definition that is core to the MV database can be used, including data associations. Natively accessing MV data, as it was meant to be used, is the ideal approach.

This is available with Rocket® CorVu NG for Rocket® U2 and Rocket® D3. CorVu NG provides a robust BI design solution that lets developers create comprehensive dashboards for monitoring critical business metrics. Dashboards support guided root-cause analysis as well as visually rich report content, all driven by direct native queries against your MV database.

Sure, this all sounds great. You simply point an enabled BI solution directly to your MV database and you are golden. Your users will most certainly be happy. Or will they?

Just because you can now directly access your application's data doesn't mean you should do so without proper planning. You need to understand the load that BI consumption by users places on the system.

The main reasons for developing data warehouses are: (a) to centralize heterogeneous data from across the enterprise; (b) to offload heavy queries from the transaction systems; and (c) to optimize the data structure and schema for queries versus transactions.

Data is typically pre-aggregated for metrics/KPIs (Key Performance Indicators), dashboard trend analysis, and high-level analysis, with the ability to drill down to lower levels of detail. Unlike in a transaction system, the data columns (fields) in a warehouse are heavily indexed for fast retrieval. Since this solution makes the warehouse and the transaction system one and the same, indexes must be balanced prudently: transaction write times can be dramatically impacted if too many indexes are maintained. We can't sacrifice the speed of business to improve the speed of analysis.
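
On U2, for instance, secondary indexes are created and built with commands like the following (the file and field names are hypothetical); each index speeds retrieval but adds work to every write, which is why the balance matters:

    CREATE.INDEX ORDERS CUST.ID
    CREATE.INDEX ORDERS ORDER.DT
    BUILD.INDEX ORDERS ALL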

The Higher Value, Multi-Purpose Approach

The point I am driving at is that BI activities differ from normal transaction workflows and can degrade the production transaction system's performance. You are well advised to consider offloading BI activity to a secondary server. Unlike the typical data warehouse approach, a great way to accomplish this with the U2 and D3 databases is to leverage the HADR (U2) or Hot Backup (D3) solutions. The distinct advantage is that these solutions, whose primary focus is disaster prevention, can serve a dual purpose as the primary BI data source.

If you are running a U2 database, there is also a Report Server solution that lets you replicate only the data required for BI analysis; it is not intended for complete duplication of, or failover for, the primary system. This works because we don't need to transform the data into another database format.

The data replication is executed dynamically and rapidly, so the subscribing server lags the transaction system by very little. The result is close to real-time analysis without querying the primary system, and nothing precludes you from running the occasional special "real-time" query against the primary system should the need arise. (Figure 1)

Figure 1

As part of the replication process, you can use write triggers to duplicate selected data into an account dedicated to BI use. This lets you create critical indexes that would not conflict with the transaction system if the secondary system needed to switch roles. Additionally, you could create aggregates in the BI account for higher-level metrics and KPIs, optimizing query performance for dashboards.
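
As a minimal sketch of that idea, a write trigger could call a cataloged UniVerse BASIC subroutine like the hypothetical one below, which mirrors just the fields BI needs into a BI.ORDERS file in the BI account (all names and field positions are assumptions, not a shipping API):

    SUBROUTINE COPY.TO.BI(ORDER.ID, ORDER.REC)
    * Hypothetical trigger helper: mirror selected fields of an ORDERS
    * record into BI.ORDERS for analysis and later aggregation.
       OPEN '', 'BI.ORDERS' TO BI.F ELSE RETURN
       BI.REC = ''
       BI.REC<1> = ORDER.REC<1>   ;* customer ID
       BI.REC<2> = ORDER.REC<2>   ;* order date
       BI.REC<3> = ORDER.REC<7>   ;* order total, feeds the aggregates
       WRITE BI.REC ON BI.F, ORDER.ID
       RETURN
    END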

If you want to learn more about Business Intelligence Best Practices for a MultiValue deployment, I will be giving a session focused on Dashboard Best Practices and a Lab on Business Intelligence at MultiValue University.

Rocket MultiValue University 2015 is October 5-8 at Harrah's Hotel and Casino in Las Vegas. To learn more and to register, visit www.rocketsoftware.com/mvu.

Dorien Gardner

Dorien Gardner is a Rocket Sales Engineer. For the previous 13 years, Dorien worked for SAP as a Principal Solutions Engineer focused on Business Objects for Business Intelligence. Before that, he managed the pre- and post-sales engineers at Pick Systems for five years and was a Research Analyst and SE for Unidata for five years. He continues to be an evangelist for MV-style databases and has worked with numerous relational, in-memory, and OLAP data sources.
