Monday, September 29, 2008

The Data Intelligence Gap: Part Two

In part one, I wrote about the evolution of a corporation and how rapid growth leads to a data intelligence gap. It makes sense that a combination of people, process and technology closes the gap, but just what kind of technology can help you cross the divide and connect the needs of the business with the data available in the corporation?

Of course, the technology needed depends on the company’s needs and how mature the company is about managing its data. Many technologies exist to help close the gap, improve information quality and meet the business needs of the organization. Let’s look at them:



Preventative

• Type-Ahead Technology: This technology watches the user type and helps complete the data entry in real time. For example, products like Harte-Hanks Global Address help call center staff and others who enter address data into your systems by speeding up the process and ensuring the data is correct.

• Data Quality Dashboard: Dashboards let business users and IT users keep an eye on data anomalies by constantly checking whether the data meets business specifications. Products like TS Insight even give you attractive charts and graphs on the status of data compliance and the trend of its conformity. Dashboards are also a great way to communicate the importance of closing the data intelligence gap; when your people get smarter about it, they will help you achieve cleaner, more useful information.

Diagnostic and Health

• Data Profiling: Not sure about the health and suitability of a given data set? Profile it with products like TS Discovery, and you’ll begin to understand how much data is missing, which values are outliers, and many other anomalies. Only then will you be able to understand the scope of your data quality project. (A minimal profiling sketch follows this overview.)

• Batch Data Quality: Once the anomalies are discovered, a batch cleansing process can solve many problems with name and address data, supply chain data and more. Some solutions are batch-centric, while others can do both batch cleansing and scalable, enterprise-class data quality (see below).

Infrastructure

• Master Data Management (MDM): Products from mega-vendors like SAP and Oracle, and from smaller specialists like Siperian and Tibco, provide master data management technology. MDM offers, for example, data connectivity between applications and the ability to create a “gold” customer or supply chain record that can be shared between applications in a publish-and-subscribe model.

• Enterprise-Class Data Quality: Products like the Trillium Software System provide real-time data quality to any application in the enterprise, including the MDM solution. Beyond the desktop data quality system, the enterprise-class system should be fast and scalable enough to provide an instant check of information quality in almost any application, with any number of users.

• Data Monitoring: You can often use the same technology to monitor data as you do to profile it. These tools keep track of the quality of the data over time, and unlike data quality dashboards, they let the IT staff dig into the nitty-gritty when necessary.

Enrichment

• Services and Data Sources: Companies like Harte-Hanks offer data sources that can help fill the gaps when mission-critical data is missing. You can buy data and services to segment your database, check customer lists for changes of address, screen customers against the do-not-call list, perform reverse phone number lookups, and more.
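
To make the profiling and dashboard ideas above a little more concrete, here is a minimal sketch, in plain Python, of the kinds of checks those tools automate: completeness, a crude outlier test and conformance to a business rule. The records, the field names and the five-digit ZIP rule are invented for illustration; this is not how TS Discovery or TS Insight work internally.

    # A minimal profiling sketch: completeness, a crude outlier check and a
    # conformance rate for one business rule. Records, field names and the
    # five-digit ZIP rule are invented for illustration.
    from statistics import median

    records = [
        {"customer_id": "C001", "postal_code": "02116", "credit_limit": 5000},
        {"customer_id": "C002", "postal_code": "",      "credit_limit": 4800},
        {"customer_id": "C003", "postal_code": "0211",  "credit_limit": 250000},
        {"customer_id": "C004", "postal_code": "02139", "credit_limit": 5200},
    ]

    def completeness(field):
        """Share of records where the field is populated at all."""
        filled = sum(1 for r in records if str(r.get(field, "")).strip())
        return filled / len(records)

    def outliers(field, factor=3.0):
        """Crude outlier rule: values more than `factor` times the median."""
        values = [r[field] for r in records]
        return [v for v in values if v > factor * median(values)]

    def conformance(field, rule):
        """Share of records whose value passes a business rule."""
        return sum(1 for r in records if rule(r[field])) / len(records)

    print("postal_code completeness:", completeness("postal_code"))  # 0.75
    print("credit_limit outliers:", outliers("credit_limit"))        # [250000]
    # Business rule: US ZIP codes are exactly five digits.
    print("postal_code conformance:",
          conformance("postal_code", lambda v: len(v) == 5 and v.isdigit()))  # 0.5

A dashboard of the kind described above is essentially rules like these run continuously, with the pass rates charted over time so the business can see the trend.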


These are just some of the technologies involved in closing the data intelligence gap. In my next installment of this series, I’ll look at people and process. Stay tuned.

Monday, September 22, 2008

Are There Business Advantages to Poor Data Management?

I have long held the belief, perhaps even religion, that companies who do a good job governing and managing their data will be blessed with so many advantages over those who don’t. This weekend, as I was walking through the garden, the serpent tempted me with an apple. Might there actually be some business advantage in poorly managing your data?

The experience started when I noticed a bubble on the sidewall of my tire. Just a small bubble, but since I was planning on a trip down and back on the lonely Massachusetts Turnpike (Mass Pike) on a Sunday night, I decided to get it checked out. No need to risk a blow-out.

I remembered that I had purchased one of those “road hazard replacement” policies. I called the nearest location of a chain of stores that covers New England. Good news. The manager assured me that I didn’t need my paperwork and that the record would be in the database.

Of course, when I arrived at the tire center, no record of my purchase or my policy could be found. Since I didn’t bring the printed receipt, the tire center manager gave me a few options: 1) drive down the Mass Pike with the bubbly tire and come back on Monday, when they could “access the database in the main office”; 2) drive home, find the paperwork (hmm, not sure where it was) and come back to the store; or 3) buy a new tire at full price.

I opted to buy a new tire and attempt to claim a refund from the corporate office later when I found my receipts. The jury is still out on the success of that strategy.

However, this got me thinking. Could the stores’ inability to maintain more than 18 months of records actually be a business advantage? How many customers lose the paperwork, or even forget about their road hazard policies, and just pay the replacement price? How much additional revenue was this shortcoming actually generating each year? What additional revenue would be possible if the database only stored 12 months of transactions?

Finding fault in the one truth (data management is good) did hurt. However, I realized that any advantage the tire chain gains from its poor data infrastructure is very short-sighted. True, it may lower pay-outs on the road hazard policies in the short term, but eventually this poor customer database implementation has to catch up with them in decreased customer satisfaction and word-of-mouth badwill. With so many tire stores here competing for the same buck, the poor service will eventually cause most good customers to move on.

If you're buying tires soon in New England and want to know which tire chain it was, e-mail me and I'll tell you. But before I tell you all, I'm going to hold out hope for justice... and hope that our foundational beliefs are still intact.

Saturday, September 20, 2008

New Data Governance Books

A couple of new, important books hit the streets this month. I’m adding these books to my recommended reading list.

Data Driven: Profiting from Your Most Important Business Asset is Tom Redman’s new book about making the most of your data to sharpen your company's competitive edge and enhance its profitability. I like how Tom uses real-life metaphors in this book to simplify the concepts of governing your data.

Master Data Management is David Loshin’s new book that provides help for both business and technology managers as they strive to improve data quality. Among the topics covered are strategic planning, managing organizational change and the integration of systems and business processes to achieve better data.

Both Tom and David have written several books on data quality and master data management, and I think their material gets stronger and stronger as they plug in new experiences and reference new strategies.

EDIT: In April of 2009, I also released my own book on data governance called "The Data Governance Imperative".
Check it out.

Monday, August 11, 2008

The Data Intelligence Gap: Part One

There is a huge chasm in many corporations today, one that hurts companies by keeping them from revenue, more profit, and better operating efficiency. The gap, of course, lies in corporate information.

On one side of the gap lies corporate data, which may contain anything from unintelligible garbage to very valuable data. However, it is often very challenging to identify the difference. On the other side of the chasm are business users, ever needing stronger corporate intelligence, longing for ways to stimulate corporate growth and improve efficiency. On this side of the chasm, they know what information is needed to make crucial decisions, but are unsure if the data exists to produce accurate information.
Data needs standardization, organization and intelligence in order to provide for the business.
Companies often find themselves in this position because rapid corporate growth tends to have a negative impact on data quality. As the company grows and expands, new systems are put in place and new data silos form. During rapid growth, corporations rarely consider the impact of data beyond the scope of the current silo. Time marches on and the usefulness of data decays. Employee attrition leads to less and less corporate knowledge about the data, and a wider gap.
So, exactly what is it that the business needs to know that the data can’t provide? Here are some examples:
• What the business wants to know: Can I lower my inventory costs and purchase prices? Can I get discounts on high-volume items purchased?
  Data needed: Reliable inventory data.
  What’s inhibiting peak efficiency: Multiple ERP and SCM systems. Duplicate part numbers. Duplicate inventory items. No standardization of part descriptions and numbers. Global data existing in different code pages and languages.

• What the business wants to know: Are my marketing programs effective? Am I giving customers and prospects every opportunity to love our company?
  Data needed: Customer attrition rates. Results of marketing programs.
  What’s inhibiting peak efficiency: Typos. Lack of standardization of name and address. Multiple CRM systems. Many countries and systems.

• What the business wants to know: Are any customers or prospects “bad guys”? Are we complying with all international laws?
  Data needed: Reliable customer data for comparison to “watch” lists.
  What’s inhibiting peak efficiency: Lack of standards. Missing values. The need to match names that may have slight variations against watch lists (see the matching sketch below).

• What the business wants to know: Am I driving the company in the right direction?
  Data needed: Reliable business metrics. Financial trends.
  What’s inhibiting peak efficiency: Extra effort and time needed to compile sales and finance data, and to cross-check results.

• What the business wants to know: Is the company we’re buying worth it?
  Data needed: Fast comprehension of the reliability of the information provided by the seller.
  What’s inhibiting peak efficiency: The need to quickly check the accuracy of the data, especially the customer lists, inventory level accuracy, financial metrics, and the existence of “bad guys” in the data.
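
Since the watch-list row above is the most algorithmic of these needs, here is a rough sketch of the idea in Python, using the standard library’s difflib module to score name similarity. The names, the normalization and the 0.80 threshold are invented for illustration; real screening engines use far more sophisticated matching and standardization than this.

    # A rough sketch of screening customer names against a watch list despite
    # slight spelling variations. Names, normalization and threshold are
    # illustrative assumptions only.
    from difflib import SequenceMatcher

    watch_list = ["Jonathan Q. Badguy", "Acme Front Company LLC"]
    customers = ["Jon Q Badguy", "Jane Goodcustomer", "ACME Front Co. LLC"]

    def normalize(name):
        """Lowercase and drop punctuation so trivial differences don't matter."""
        return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())

    def similarity(a, b):
        """Fuzzy similarity score between 0.0 and 1.0."""
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

    THRESHOLD = 0.80  # tune against known matches; too low buries you in noise

    for customer in customers:
        for entry in watch_list:
            score = similarity(customer, entry)
            if score >= THRESHOLD:
                print(f"Review: {customer!r} resembles {entry!r} (score {score:.2f})")

Missing values and inconsistent standards, as the table notes, are exactly what break this kind of matching, which is why the cleanup work has to come first.
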
Again, these are some of the many areas where data lacks intelligence and can’t provide for the needs of the corporation. It is across this divide that data quality vendors must build bridges... and it is this chasm that data governance teams must cross.
We’ll cover that in part two of this story, where I’ll look at the kinds of solutions and processes that help the data governance team cross the great data divide and bring data intelligence to their organizations.

Thursday, July 24, 2008

Forget the Data. Eat the Ice Cream.

It’s summer and time for vacations. Even so, it’s difficult for a data-centric guy like me to shut off thoughts of information quality, even during times of rest and relaxation.
Case in point: my family and I just took a road trip from Boston to Burlington, VT to visit the shores of Lake Champlain. We loaded up the mini-van and headed north. Along the way, you drive along beautiful RT 89, which winds its way through the Green Mountains and past the capital, Montpelier.
No trip to western Vermont is complete without a trip to the Ben and Jerry’s ice cream manufacturing plant in Waterbury. They offer a tour of the plant and serve up a sample of the freshly made flavor of the day at the end. The kids were very excited.
However, when I see a manufacturing process, my mind immediately turns to data. As the tour guide spouted off statistics about how much of any given ingredient they use, and which flavor was the most popular (Cherry Garcia), my thoughts turned to the trustworthiness of the data behind it. I wanted him to back it up by telling me what ERP system they used and what data quality processes were in place to ensure the utmost accuracy in the manufacturing process. Inside, I wondered if they had the data to negotiate properly with the ingredients vendors and if they really knew how many Heath bars, for example, they were buying across all of their manufacturing plants. Just having clean data and accurate metrics around their purchasing processes could save them thousands and thousands of dollars.
The tour guide talked about a Jack Daniels flavored ice cream that was now in the “flavor graveyard” mostly because the main ingredient was disappearing from the production floor. I thought about inventory controls and processes that could be put in place to stop employee pilfering.
It went on and on. The psychosis continued until my daughter exclaimed, “Dad. This is the coolest thing ever! That’s how they make Chunky Monkey!” She was right. It was perhaps the coolest thing ever to see how they made something we enjoy nearly every day. It was cool to take a peek inside the corporate culture of Ben and Jerry’s. It popped me back into reality.
Take your vacation this year, but remember that life isn’t only about the data. Remember to eat the ice cream and enjoy.

Tuesday, July 1, 2008

The Soft Costs of Information Quality

Choosing data quality technology simply on price could mean that you end up paying far more than you need to, thanks to the huge differences in how the products solve the problems. While your instinct may tell you to focus solely on the price of your data quality tool, your big costs come in less visible areas – like time to implement, re-usability, time spent preprocessing data so that it reads into the tool, performance and the overall learning curve.

As if it wasn’t confusing enough for the technology buyer having to choose between desktop and enterprise-class technology, local and global solutions, or a built-in solution vs. a universal architecture, now you have to work out soft costs too. But you need to know that there are some huge differences in the way the technologies are implemented and work day-to-day, and those differences will impact your soft costs.

So just what should you look for to limit soft costs when selecting an information quality solution? Here are a few suggestions:

  • Does the data quality solution understand data at the field level only, or can it see the big picture? For example, can you pass it an address that’s a blob of text, or do you need to pass it individual first name, last name, address, city, state and postal code fields? Importance: If the data is misfielded, you’ll have a LOT of work to do to get it ready for a field-level solution.
  • On a similar note, what is the approach to multi-country data? Is there an easy way to pre-process mixed global data, or is it a manual process? Importance: If the data has mixed countries of origin, again you’ll have a lot of preprocessing work to do to get it ready.
  • What is the solution’s approach to complex records like “John and Diane Cougar Mellencamp DBA John Cougar”? Does the solution have the intelligence to understand all of those people in a record or do I have to post-process this name?
  • Despite the look of the user interface, is the product a real application or is it a development environment? Importance: In a real application, an error will be indicated if you pass in some wild and crazy data. In a development environment, even slight data quirks can keep anything from running, and just getting the application to run can be very time-consuming and wasteful.
  • How hard is it to build a process? As a user you’ll need to know how to build an entire end-to-end process with the product. During proof of concept, the data quality vendor may hide that from you. Importance: Whether you’re using it on one project, or across many projects, you’re eventually going to want to build or modify a process. You should know up-front how hard this is. It shouldn’t be a mystery, and you need to follow this during the proof-of-concept.
  • Are web services the only real-time implementation strategy? Importance: Compared to a scalable application server, web services can be slow and actually add costs to the implementation.
  • Does the application actually use its own address correction worldwide or a third party solution? Importance: Understanding how the application solves certain problems will let you understand how much support you’ll get from the company. If something breaks, it’s easier for the program’s originator to fix it. A company using a lot of third party applications may have challenges with this.
  • Does the application have different ways to find duplicates? Importance: During a complex clean-up, you may want to dedupe your records based on, say, e-mail and name for the first pass. But what about the records where e-mail isn’t populated? For those records, you’ll need to go back and match on other attributes. The ability to multi-match allows you to achieve cleaner, more efficient data by using whatever attributes are best in your specific data (see the sketch after this list).
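
To illustrate the multi-pass matching idea in the last bullet, here is a minimal sketch: a first pass keys on e-mail plus last name, and a fallback pass keys on full name plus postal code for records with no e-mail. The records and the match keys are invented for the example and are far simpler than what a real matching engine does.

    # A minimal sketch of multi-pass duplicate detection: pass one keys on
    # e-mail plus last name; pass two falls back to full name plus postal code
    # for records with no e-mail. Records and match keys are illustrative only.
    from collections import defaultdict

    records = [
        {"id": 1, "name": "Diane Mellencamp", "email": "diane@example.com", "postal": "46201"},
        {"id": 2, "name": "Diane Mellencamp", "email": "Diane@Example.com", "postal": "46202"},
        {"id": 3, "name": "John Mellencamp",  "email": "",                  "postal": "46201"},
        {"id": 4, "name": "John Mellencamp",  "email": "",                  "postal": "46201"},
    ]

    def match_key(rec):
        """Pick the best available match key for each record."""
        last_name = rec["name"].split()[-1].lower()
        if rec["email"].strip():
            return ("pass 1: e-mail + last name", rec["email"].strip().lower(), last_name)
        return ("pass 2: name + postal code", rec["name"].lower(), rec["postal"])

    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec["id"])

    for key, ids in groups.items():
        if len(ids) > 1:
            print(f"Probable duplicates {ids} matched on {key[0]}")

The simplification doesn’t change the point: if a tool can only match on one fixed set of attributes, the records with gaps in those attributes never get examined at all.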

I could go on. The point is – there are many technical, in-the-weeds differences between vendors, and those differences have a BIG impact on your ability to deliver information quality. The best way to understand a data quality vendor’s solution is to look over their shoulder during the proof-of-concept. Ask questions. Challenge the steps needed to cleanse your data. Diligence today will save you from having to buy Excedrin tomorrow.

Wednesday, June 25, 2008

Data Quality Events – Powerful and Cozy

For those of you who enjoy hobnobbing with the information quality community, I have a couple of recommendations. These events are your chance to rub elbows with different factions of the community; the crowds are small, but the information is powerful.

MIT Information Quality Symposium
We’re a couple of weeks away from the MIT Information Quality Symposium in Boston. I’ll be sharing the podium with a couple of other data quality vendors in delivering a presentation this year. I’m really looking forward to it.
Dr. Wang and his cohorts from MIT fill a certain niche in information quality with these gatherings. Rather than a heavily sponsored, high-pressure selling event, this one really focuses on concepts and the study of information quality. There are presenters from all over the globe, some who have developed thought-provoking theories on information quality, and others who just want to share the results of a completed information quality project. The majority of the presentations offer smart ways of dissecting and tackling data quality problems that aren’t so much tied to vendor solutions as they are to processes and people.
My presentation this year will discuss the connections between the rate at which a company grows and the degree of poor information in the organization. While a company may have a strong desire to own their market, they may wind up owning chaos and disorder instead, in the form of disparate data. It’s up to data quality vendors to provide solutions to help high-growth companies defeat chaos and regain ownership of their companies.
If you decide to come to the MIT event, please come by the vendor session and introduce yourself.


Information and Data Quality Conference
One event that I’m regrettably going to miss this year is Larry English’s Information and Data Quality Conference (IDQ), taking place September 22-25 in San Antonio, Texas. I’ve been to Larry’s conferences in past years and have always had a great time. What struck me, at least in past years, was the fact that most of the people who went to the IDQ conference really “got it” in terms of the data quality issue. Most of the people I’ve talked with were looking to share advice on taking what they knew to be the truth – that information quality is an important business asset – and making believers out of the rest of their organizations. Larry and the speakers at that conference will definitely make a believer out of you and send you out into the world to proclaim the information quality gospel. Hallelujah!

Thanks
On another topic, I’d like to thank Vince McBurney for the kind words in his blog last week. Vince runs a blog covering IBM Information Server. In his latest installment, Vince has a very good analysis of the new Gartner Magic Quadrant on data quality. Thanks for the mention, Vince.

Disclaimer: The opinions expressed here are my own and don't necessarily reflect the opinion of my employer. The material written here is copyright (c) 2010 by Steve Sarsfield. To request permission to reuse, please e-mail me.