Wednesday, June 25, 2008

Data Quality Events – Powerful and Cozy

For those of you who enjoy hobnobbing with the information quality community, I have a couple of recommendations. These events are your chance to rub elbows with different factions of the community; the crowds are small, but the information is powerful.

MIT Information Quality Symposium
We’re a couple of weeks away from the MIT Information Quality Symposium in Boston. I’ll be sharing the podium with a couple of other data quality vendors in delivering a presentation this year. I’m really looking forward to it.
Dr. Wang and his cohorts from MIT fill a certain niche in information quality with these gatherings. Rather than a heavily sponsored, high-pressure selling event, this one really focuses on concepts and the study of information quality. There are presenters from all over the globe, some who have developed thought-provoking theories on information quality, and others who just want to share the results of a completed information quality project. The majority of the presentations offer smart ways of dissecting and tackling data quality problems that aren’t so much tied to vendor solutions as they are to processes and people.
My presentation this year will discuss the connection between the rate at which a company grows and the amount of poor-quality information in the organization. While a company may have a strong desire to own their market, they may wind up owning chaos and disorder instead, in the form of disparate data. It’s up to data quality vendors to provide solutions that help high-growth companies defeat chaos and regain ownership of their companies.
If you decide to come to the MIT event, please come by the vendor session and introduce yourself.


Information and Data Quality Conference
One event that I’m regrettably going to miss this year is Larry English’s Information and Data Quality Conference (IDQ), taking place September 22-25 in San Antonio, Texas. I’ve been to Larry’s conferences in past years and have always had a great time. What struck me, at least in past years, was that most of the people who went to the IDQ conference really “got it” in terms of the data quality issue. Most of the people I’ve talked with were looking to share advice on taking what they knew to be true – that information quality is an important business asset – and making believers out of the rest of their organizations. Larry and the speakers at that conference will definitely make a believer out of you and send you out into the world to proclaim the information quality gospel. Hallelujah!

Thanks
On another topic, I’d like to thank Vince McBurney for the kind words in his blog last week. Vince runs a blog covering IBM Information Server, and in his latest installment he offers a very good analysis of the new Gartner Magic Quadrant for data quality. Thanks for the mention, Vince.

Monday, June 16, 2008

Get Smart about Your Data Quality Projects

With all due respect to Agent Maxwell Smart, there is a mini battle between good and evil, CONTROL and KAOS, happening in many busy, fast-growing corporations. The battleground, of course, is information quality. Fast-growing companies are more vulnerable to chaos because opening new national and international divisions, expanding through acquisition, manufacturing offshore, and doing all the other things an aggressive company does all lead to more misalignment and more chaotic data.

While a company may have a strong desire to “own the world” or at least their market, they may wind up owning chaos and disorder instead - in the form of disparate data. The challenges include:

  • trying to reconcile technical data quality issues, such as different code pages like ASCII, Unicode and EBCDIC (see the sketch after this list)
  • dealing with different data quality processes across your organization, each of which delivers different results
  • being able to cleanse data from various platforms and applications
  • dealing with global data, including local languages and nuances
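Here is a minimal sketch of what reconciling code pages can look like in practice. It assumes two hypothetical feeds, one in EBCDIC (Python’s cp500 codec) and one in ASCII, and simply normalizes both to Unicode before any comparison or matching takes place; it is an illustration, not any particular product’s approach.

```python
# A minimal sketch (not any particular product's approach): normalize records
# arriving in different code pages to Unicode before comparing or matching them.
# The feeds and encodings are hypothetical, for illustration only.

RAW_FEEDS = {
    "mainframe": ("SMITH".encode("cp500"), "cp500"),   # EBCDIC bytes
    "web_app":   ("Smith".encode("ascii"), "ascii"),   # ASCII bytes
}

def to_unicode(raw: bytes, encoding: str) -> str:
    """Decode raw bytes into Unicode, then standardize case and whitespace."""
    return raw.decode(encoding, errors="replace").strip().upper()

if __name__ == "__main__":
    normalized = {src: to_unicode(raw, enc) for src, (raw, enc) in RAW_FEEDS.items()}
    print(normalized)
    # Both feeds now compare equal despite coming from different code pages.
    assert normalized["mainframe"] == normalized["web_app"]
```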
Agent 99: Sometime I wish you were just an ordinary businessman.
Maxwell Smart: Well, 99, we are what we are. I'm a secret agent, trained to be cold, vicious, and savage... but not enough to be a businessman.

In an aggressive company, as your sphere of influence increases, it’s harder to gather key intelligence. How much did we sell yesterday? What’s the sales pipeline? What do we have in inventory worldwide? Since so many company assets are tied to data, it’s hard to truly own those assets when the data behind them is a jumble.

Not only are decision-making metrics lost, but opportunity for efficiency is lost. With poor data, you may not be able to reach customers effectively. You may be paying too much to suppliers by not understanding your worldwide buying power. You may even be driving your own employees away from innovation, as users begin to avoid new applications because they can’t trust the data in them.
KAOS Agent: Look, I'm a sportsman. I'll let you choose the way you want to die.
Maxwell Smart: All right, how about old age?

So, it’s up to data quality vendors to provide solutions to help high-growth companies “get smart” and defeat chaos (KAOS) to regain ownership of their companies. They can do it with smart data-centric consulting services that help bring together business and IT. They can do it with technology that is easy to use and powerful enough to tackle even the toughest data quality problems. Finally, they can do it with a great team of people, working together to solve data issues.

Agent 99: Oh Max, you're so brave. You're going to get a medal for this.
Maxwell Smart: There's something more important than medals, 99.
Agent 99: What?
Maxwell Smart: It's after six. I get overtime.

Monday, June 9, 2008

Probabilistic Matching: Part Two

Matching algorithms, the functions that allow data quality tools to determine duplicate records and create households, are always a hot topic in the data quality community. In a previous installment of the Data Governance and Data Quality Insider, I wrote about the folly of probabilistic matching and its inability to precisely tune match results.

To recap, decisions for matching records together with probabilistic matchers are based on three things: 1) statistical analysis of the data; 2) a complicated mathematical formula; and 3) a “loose” or “tight” control setting. Statistical analysis is important because under probabilistic matching, data that is more unique in your data set has more weight in determining a pass/fail on the match. In other words, if you have a lot of Smiths in your database, Smith becomes a less important matching criterion for that record. If the record has a unique last name like ‘Afinogenova’, that’ll carry more weight in determining the match.
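To make the weighting idea concrete, here is a toy sketch of frequency-based agreement weights. It captures the intuition behind probabilistic matching rather than any vendor’s actual formula; the sample names, log-based weighting, and threshold are all illustrative assumptions.

```python
import math
from collections import Counter

# Toy illustration of frequency-based weighting: the rarer a value is in the
# data set, the more weight an agreement on that value carries. This is only
# the intuition behind probabilistic matching, not any product's exact math.

last_names = ["SMITH", "SMITH", "SMITH", "SMITH", "JONES", "JONES", "AFINOGENOVA"]
freq = Counter(last_names)
total = len(last_names)

def agreement_weight(value: str) -> float:
    """Higher weight for agreement on rarer values."""
    return -math.log2(freq[value] / total)

print(f"SMITH       -> {agreement_weight('SMITH'):.2f}")        # common, low weight
print(f"AFINOGENOVA -> {agreement_weight('AFINOGENOVA'):.2f}")  # rare, high weight

# A match decision compares the summed weights across fields to a "loose" or
# "tight" threshold; here a single rare surname is enough to clear it.
TIGHT_THRESHOLD = 2.0
print("match" if agreement_weight("AFINOGENOVA") >= TIGHT_THRESHOLD else "no match")
```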

The trouble comes when you don’t like the way records are being matched. Your main course of action is to turn the dial on the loose/tight control to see if you can get the records to match without affecting record matching elsewhere in the process. Little provision is made for precise control over which records match and which don’t. There is always some degree of inaccuracy in the match.

In other forms of matching, like deterministic matching and rules-based matching, you can very precisely control which records come together and which ones don’t. If something isn’t matching properly, you can make a rule for it. The rules are easy to understand. It’s also very easy to perform forensics on the matching and figure out why two records matched, and that comes in handy should you ever have to explain to anyone exactly why you deduped any given record.
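By way of contrast, here is a minimal sketch of rules-based matching. The field names and rules are hypothetical, but the point stands: each rule is explicit, so it’s easy to explain afterward exactly why two records were brought together.

```python
# Minimal sketch of rules-based matching with explicit, explainable rules.
# Field names and rules are hypothetical, for illustration only.

def same(a: dict, b: dict, field: str) -> bool:
    return a.get(field) == b.get(field)

MATCH_RULES = [
    ("same last name and postal code",
     lambda a, b: same(a, b, "last_name") and same(a, b, "postal_code")),
    ("same phone number",
     lambda a, b: same(a, b, "phone")),
]

def match_reason(a: dict, b: dict):
    """Return the name of the first rule that fires, or None if no rule matches."""
    for name, rule in MATCH_RULES:
        if rule(a, b):
            return name
    return None

rec1 = {"last_name": "AFINOGENOVA", "postal_code": "02110", "phone": "617-555-0101"}
rec2 = {"last_name": "AFINOGENOVA", "postal_code": "02110", "phone": "617-555-0199"}
print(match_reason(rec1, rec2))   # -> same last name and postal code
```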

But there is another major folly of probabilistic matching – namely performance. Remember, probabilistic matching relies heavily on statistical analysis of your data. It wants to know how many instances of “John” and “Main Street” are in your data before it can determine if there’s a match.

Consider for a moment a real time implementation, where records are entering the matching system, say once per second. The solution is trying to determine if the new record is almost like a record you already have in your database. For every record entering the system, shouldn’t the solution re-run statistics on the entire data set for the most accurate results? After all, the last new record you accepted into your database is going to change the stats, right? With medium-sized data sets, that’s going to take some time and some significant hardware to accomplish. With large sets of data, forget it.

Many vendors who tout their probabilistic matching secretly have work-arounds for real-time matching performance issues. They recommend that you don’t update the statistics for every single new record. Depending on the real-time volumes, you might update statistics nightly or, say, every 100 records. But it’s safe to say that real-time performance is something you’re going to have to deal with if you go with a probabilistic data quality solution.
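For illustration, here is a sketch of that batched-refresh workaround: frequency statistics are folded in only every N accepted records rather than on every insert. It is a simplified in-memory model under the assumptions above, not how any specific product implements it.

```python
from collections import Counter

# Sketch of the batched-statistics workaround: refresh the frequency counts the
# matcher relies on only every N new records, instead of after every insert.
# Purely illustrative; real products handle this differently.

class BatchedFrequencyStats:
    def __init__(self, refresh_every: int = 100):
        self.refresh_every = refresh_every
        self.pending = []             # records accepted since the last refresh
        self.frequencies = Counter()  # the statistics the matcher actually uses

    def add_record(self, last_name: str) -> None:
        self.pending.append(last_name)
        if len(self.pending) >= self.refresh_every:
            self.refresh()

    def refresh(self) -> None:
        """Fold pending records into the matcher's statistics."""
        self.frequencies.update(self.pending)
        self.pending.clear()

stats = BatchedFrequencyStats(refresh_every=3)
for name in ["SMITH", "JONES", "SMITH", "AFINOGENOVA"]:
    stats.add_record(name)

# AFINOGENOVA is still pending, so the matcher's weights don't reflect it yet.
print(stats.frequencies)   # Counter({'SMITH': 2, 'JONES': 1})
```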

Better yet, you can stay away from probabilistic matching and take a much less complicated and much more accurate approach – using time-tested pre-built business rules supplemented with your own unique business rules to precisely determine matches.

Friday, June 6, 2008

Data Profiling and Big Brown

Big Brown is positioned to win the third leg of the Triple Crown this weekend. In many ways picking a winner for a big thoroughbred race is similar to planning for a data quality project. Now, stay with me on this one.

When making decisions on projects, we need statistics and analysis. With horse racing, we have a nice report that is already compiled for us called the daily racing form. It contains just about all the analysis we need to make a decision. With data intensive projects, you’ve got to do the analysis up front in order to win. We use data profiling tools to gather a wide array of metrics in order to make reasonable decisions. As in the daily racing form, we look for anomalies, trends, and ways to cash in.
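As a concrete illustration of the kind of “racing form” a profiling tool produces, here is a minimal sketch that gathers a few column-level metrics (row counts, nulls, distinct values, top values). The sample records and field names are hypothetical.

```python
from collections import Counter

# Minimal sketch of column-level profiling metrics, the sort of statistics a
# data profiling tool gathers up front. Sample records are hypothetical.

records = [
    {"customer_id": "1001", "state": "MA",    "phone": "617-555-0101"},
    {"customer_id": "1002", "state": "ma",    "phone": None},
    {"customer_id": "1002", "state": "Mass.", "phone": "6175550199"},
]

def profile(rows, column):
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v not in (None, "")]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

for col in ("customer_id", "state", "phone"):
    print(col, profile(records, col))

# The anomalies jump out: a duplicated customer_id, three spellings of the same
# state, inconsistent phone formats, and a missing phone number.
```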

In data governance project planning, where company-wide projects abound, we may even have the opportunity to pick the projects that will deliver the highest return on investment. It’s similar to picking a winner at 10:1 odds. We may decide to bet our strategy on a big winner, and when that horse comes in, we’ll win big for our company.

Needless to say, neither the daily racing form nor the results of data profiling is infallible. For example, Big Brown’s quarter crack in his hoof is something that doesn’t show up in the data. Will it play a factor? Does newcomer Casino Drive, for whom there is very little data available, have a chance to disrupt our Big Brown project? In data intensive projects, we must communicate, bring in business users to understand processes, and study and prepare contingency plans in order to mitigate risks from the unknown.

So, Big Brown is positioned to win the Triple Crown this weekend. Are you positioned to win on your next data intensive IT project? You can better your chances by using the daily racing form for data governance – a data profiling tool.

Tuesday, June 3, 2008

Trillium Software News Items

A few big items hit the news wire today from Trillium Software that are significant for data quality enthusiasts.

Item One:
Trillium Software cleansed and matched the huge database of Loyalty Management Group (LMG), the database company that owns the Nectar and Air Miles customer loyalty schemes in the UK and Europe.
Significance:
LMG has saved £150,000 by using data quality software to cleanse its mailing list, which is the largest in Europe, some 10 million customers strong. I believe this speaks to Trillium Software’s outstanding scalability and global data support. This particular implementation runs on an Oracle database, with Trillium Software providing the data cleansing process.


Item Two:
Trillium Software delivered the latest version of the Trillium Software System, version 11.5. The software now offers expanded cleansing capabilities across a broader range of countries.
Significance:
Again, global data is a key take-away here. Being able to handle all of the cultural challenges you encounter with international data sets is a problem that requires continual improvement from data quality vendors. Here, Trillium is leveraging their parent company’s buyout of Global Address to improve the Trillium technology.


Item Three:
Trillium Software also released a mainframe edition of version 11.5.
Significance:
Trillium Software continues to support data quality processes on the mainframe. Unfortunately, you don’t see other enterprise software companies offering many new mainframe releases these days, despite the fact that the mainframe is still very much a viable and vibrant platform for managing data.

Disclaimer: The opinions expressed here are my own and don't necessarily reflect the opinion of my employer. The material written here is copyright (c) 2010 by Steve Sarsfield. To request permission to reuse, please e-mail me.