Inmon Versus Kimball

Understanding Inmon Versus Kimball

Terms: Ralph Kimball, Bill Inmon, Data Mart, Data Warehouse

As is well documented, for many years there has been a raging debate between two different philosophies of data warehousing: one proposed by Bill Inmon, who advocated a strongly structured, centralized data warehouse, and the other by Ralph Kimball, who promoted decentralized data marts. At Software Decisions, we do not spend much time on this topic; however, the data warehouse versus data mart question is yet another example of a debate that has been based more upon opinion than upon research. This is pointed out in the paper "Which Data Warehouse Architecture Is More Successful?":

“Considering the importance of the architecture choice, surprisingly little research on the topic exists. The literature tends to either describe the architectures, provide case-study examples, or present survey data about the popularity of the various options. There has been little rigorous, empirical research, and this motivated us to investigate the success of the various architectures.”

Something we found interesting is that, according to the article, the most common data warehousing/data mart platforms in the survey were provided by Oracle, Microsoft, and IBM, none of which are well rated in our Software Selection Packages. Also interesting is that the debate on data warehousing has mirrored so many other debates in that opinions and marketing initiatives have come before research and evidence. It is curious that we have so many professors in so many universities globally, yet so little research into the most contested areas of information technology. This is by no means a comprehensive conclusion; however, the BI vendors currently making the most headway in user adoption are the BI Light vendors, which can connect to many data sources, while the BI Heavy software vendors, many of whom offer data warehousing solutions, are growing much more slowly.

References

Ariyachandra, Thilini, and Hugh J. Watson. "Which Data Warehouse Architecture Is More Successful?" Business Intelligence Journal, vol. 11, no. 1.

Software Functionality

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

This measures how well the application's functionality has the potential to match the business process, as well as how reliable that functionality is. It is itself a composite score, combining one score for functionality quality and one for functionality scope. The scores at the Software Decisions website explain in detail how each application rates on each subcategory of functionality.

While many vendors, particularly the large ones, would prefer that people believe functionality scope trumps functionality quality, this is simply not true. One of the most important lessons from enterprise software is that just because an application "has" functionality in its release notes or marketing literature does not mean that the functionality is on par with that of other vendors. This sounds completely obvious, but in fact many companies behave as if functionality between applications were equal. A perfect example of this is SAP: of all the vendors evaluated by Software Decisions, SAP has the most functionality that either operates poorly, is simply broken, or never worked to begin with. This "kitchen sink" development approach packs the most functionality into the application and will get past a software selection that is more a box-checking exercise than an in-depth evaluation of the software. Determining the application functionality score takes a detailed analysis of the application in terms of the real ability to leverage its functionality. It also means making value judgments about how frequently the functionality can actually be put into action.
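The idea of a composite score that weights quality above scope can be sketched as follows. This is a minimal illustration, not Software Decisions' actual methodology: the weights, scales, and function names are invented for demonstration.

```python
# Hypothetical sketch of a composite functionality score built from two
# subscores, quality and scope, each on a 0-10 scale. The 60/40 weighting
# is an illustrative assumption reflecting the point that listed
# functionality is worth little if it works poorly.

def functionality_score(quality: float, scope: float,
                        quality_weight: float = 0.6) -> float:
    """Combine quality and scope subscores (each 0-10) into one composite."""
    if not (0 <= quality <= 10 and 0 <= scope <= 10):
        raise ValueError("subscores must be between 0 and 10")
    scope_weight = 1.0 - quality_weight
    return quality * quality_weight + scope * scope_weight

# Under this weighting, a broad but unreliable application scores lower
# than a narrower but reliable one.
print(functionality_score(quality=3.0, scope=9.0))  # 5.4
print(functionality_score(quality=8.0, scope=6.0))  # 7.2
```

The point of the weighting is the same one made above: a "kitchen sink" application with broad scope but poor quality should not outscore a narrower application whose functionality actually works.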

Software Implementability

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

Any application can be scored for how easily it can be implemented. Many factors go into this: one is master data parameter maintenance; another is how difficult it is to configure the application. Implementability is a specifically measurable entity. Some of the lowest-scoring applications in terms of implementability are ERP systems and BI Heavy applications. Older applications also tend to be less implementable, while SaaS applications are generally more implementable than those delivered on premises. The more control the software vendor has over the application, the better the implementability, which is why SaaS scores so well in this regard.

Software Usability

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

Users naturally gravitate to applications that rank high in usability. Such applications require less training and are inherently easier to understand and to troubleshoot when things go wrong. Highly usable applications do not need to be forced on users; users naturally want to access them in order to do their jobs more efficiently.

Software Maintainability

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

This score is related to the implementability score but looks at longer-range factors. Applications differ drastically in maintainability, and the maintainability of an application greatly affects its total cost of ownership: according to our TCO analysis database, roughly 60% of the TCO of an application is related to its maintenance costs.
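The arithmetic behind that 60% figure is worth making concrete. The sketch below uses invented cost figures and a seven-year horizon purely for illustration; it is not drawn from the TCO analysis database itself.

```python
# Illustrative arithmetic only: if recurring maintenance dominates TCO,
# then maintainability differences compound over the ownership horizon.
# All dollar amounts below are invented for demonstration.

def total_cost_of_ownership(license_cost: float,
                            implementation_cost: float,
                            annual_maintenance: float,
                            years: int = 7) -> float:
    """Sum one-time acquisition costs and recurring maintenance."""
    return license_cost + implementation_cost + annual_maintenance * years

tco = total_cost_of_ownership(license_cost=500_000,
                              implementation_cost=700_000,
                              annual_maintenance=250_000,
                              years=7)
maintenance_share = (250_000 * 7) / tco
print(f"TCO: ${tco:,.0f}")                       # TCO: $2,950,000
print(f"Maintenance share: {maintenance_share:.0%}")  # Maintenance share: 59%
```

With these hypothetical numbers, maintenance accounts for roughly 60% of TCO over seven years, which is why a less maintainable application can end up far more expensive than one with a higher up-front price.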

Sales Information Quality

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

Enterprise software vendors differ greatly in their sales approach: how they motivate and compensate salespeople, how well their salespeople know the application, how saturated their market is relative to the number of resources they deploy into sales, as well as other factors. All of this directly affects the quality of information that buyers can expect from their sales interactions.

Implementation Capabilities

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

The large software vendors tend to outsource most of their consulting in exchange for being recommended by the major consulting companies. Therefore, the role of the implementation consultant at a large software vendor becomes partially to support the major consulting company's implementation resources, in addition to providing value to the end client. Smaller software vendors tend to staff much more of the overall external implementation team. Many factors determine how effective a software vendor's implementation capabilities are, including how long the consultants have worked for the vendor, their motivation, and the internal fairness of the vendor with respect to how it treats its employees. Another factor, which is greatly overlooked, is how much authority consulting actually has. At many software vendors, the sales division is far too powerful relative to implementation, which means that information provided by the consulting arm is censored to stay in line with earlier false information provided by sales in order to close the sale. This topic is covered in the following article.

Support Capabilities

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

Obviously, support is a very important measurement of any software vendor. Support is the horse's mouth on the software after the implementation goes live; it has resources with many years of experience in the application and will be the go-to source when the buyer's internally trained resources cannot figure out the answer. It must also be understood that poorly designed software cannot be overcome with great support. Poorly designed software is a losing situation even if a great deal is invested in support, because support is so expensive to supply. When an application is well designed, vendor support personnel can figure out what is wrong more quickly.

Internal Efficiency

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

There are enormous differences among software vendors in their level of bureaucracy. As companies grow, they generally become more bureaucratic and their efficiency goes down. They make up for this with market power, but market power really only helps with marketing and gaining acceptance for an application, not with things that actually help development or implementations. However, bureaucracy is not identical for all vendors of a similar size, and some small vendors have a shocking amount of bureaucracy. Bureaucracy imposes a significant cost on customers: questions take longer to get answered, requests get lost, who can actually make important decisions is increasingly in doubt, and politics ends up determining what answers are received rather than what is technically true or false. I consider the bureaucracy level of software vendors to be one of the most underestimated risks and costs in choosing among vendors during software selection. Interestingly, I have never once seen bureaucracy listed as a criterion in any software selection exercise by any major consulting company, perhaps because they themselves rate very highly in bureaucracy.

Current Innovation Level

Usage

Part of the Software Decisions enterprise software risk model, and a criterion of software measurement and part of the Software Selection Package.

Definition

Software vendors go through a lifecycle: they start small and innovative, then tend to calcify into more marketing- and financially driven entities as their development productivity drops significantly. At the end of their lifecycle, they may do almost no innovation and spend most of their energies on marketing, acquisitions, and bureaucracy. The current innovation level is important for corporate buyers because enterprise software is a long-term commitment. Even SaaS application purchases, which hypothetically can be cancelled within a month, still carry significant lock-in and switching costs related to retraining, data migration, becoming comfortable with a new software vendor, etc. An enterprise application will typically be used by a buyer for at least seven years, and the buyer will normally upgrade throughout the lifetime of the application's usage. Therefore, the buyer is buying not only the software in its present state, but also in its future state. This rating gives the buyer an idea of the future potential of the software vendor's applications. As the title states, this is the current level of innovation, not the historical level. Because vendors grow less and less innovative over their lifecycle, a vendor's innovation level in previous years is not relevant to its predicted future innovation level.