Sunday, November 23, 2008

Picking the Boardwalk and Park Place DQ Projects

This weekend, I was playing a game of Monopoly with my kids. Monopoly is the ultimate game of capitalism and a great way to teach a young one about money. (Given the length of the game, a single game can be a weekend-long lesson.) The companies that we work for are also playing the capitalism game, so it’s not a stretch that there are lessons to be learned while playing this game.

As I took in hefty rents from Pacific Avenue, I could see that my daughter was beginning to realize that it’s really tough to win if you buy low-end properties like Baltic and Mediterranean, or any of the properties on that side of the board. Even with a hotel, Baltic will only get you $450. It’s only with the yellow, green and blue properties that you can really make an impression on your fellow players. She got excited when she finally got hold of Boardwalk and Park Place.

Likewise, it’s difficult to win at the data governance game if you pick projects that have limited upside. The tendency might be to fix the data for the business users who complain the loudest, or for whichever group the CEO tells you to fix first. The key is to keep capitalism and the game of Monopoly in mind when you pick projects.

When you begin picking high-value targets with huge upside potential, you’ll begin to win at the data governance game. People will stand up and take notice when you bring in the kind of high-end returns that Boardwalk and Park Place deliver. You’ll get better traction in the organization, and you’ll be able to expand your domain across Ventnor and St. James Place, gathering up other clean-data monopolies.

This is the tactic that I’ve seen so many successful data governance initiatives take at Trillium Software. The most successful project managers are also good marketers, promoting their successes inside the company. And if no one will listen inside the company, they promote them to trade journals, analysts and industry awards. There’s nothing like a little press to make the company sit up and take notice.

So take the $200 you get from passing GO and focus it on high-value, high-impact projects. When you land on Baltic, pass it by, at least at first. By focusing on the high-impact data properties, you’ll get a better payoff in the end.

To hear a few more tips, I recommend the webinar by my friend Jim Orr at Trillium Software. You can listen to it here.

Wednesday, November 19, 2008

What is DIG?

In case you haven’t heard, financial services companies are in crunch time right now. Some say the current stormy conditions are unprecedented; some say it’s a rocky time, but certainly manageable. Either way, financial services companies have to be smarter than ever about managing risk.

That’s what DIG is all about: helping financial services companies manage the risk that comes from their data. It’s a new solution set from Trillium Software.

In Europe, Basel II is standard operating procedure at many financial services companies, and the US is starting to come on board. Basel II is complex, but it includes mandates for increased transparency around key risk measures such as probability of default (PD), loss given default (LGD) and exposure at default (EAD). Strict rules on capital reserve provisions penalize institutions that are highly exposed to risk and those unable to provide ‘provably correct’ analysis of their risk position.
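For readers who haven’t worked with these measures, the standard way they fit together is that expected loss is the product of the three: PD times LGD times EAD. The snippet below is a minimal sketch with made-up numbers, purely for illustration; it is not a regulatory capital model.

def expected_loss(pd, lgd, ead):
    # expected loss = probability of default x loss given default x exposure at default
    return pd * lgd * ead

# Illustrative figures only: a 2% default probability, 45% loss severity, $10M exposure
print(expected_loss(0.02, 0.45, 10_000_000))  # 90000.0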

Clearly, the lack of sound risk calculations had something to do with the situation that banks are in today. Consider all the data it takes to make a risk compliance calculation: customer credit quality measurements, agency debt ratings, accounts receivable, and current market exposures. When this type of data is spread out over multiple systems, it introduces risk that can shake the financial world.

To comply with Basel II, financial services companies and those who issue credit have to be smarter than ever in managing data. Data drives decision-making and risk calculation models. For example, let’s say you’re a bank calculating the risk of your debtors. You enrich your data with Standard & Poor's ratings to understand that risk. But if the data is non-standardized, you may have a hard time matching the Standard & Poor's data to your customer records. If no match is found, a company with an AA- bond rating might default to BB- in your database. After all, it is prudent to be conservative if you don’t know the risk. But that error can cause thousands, even millions, to be set aside unnecessarily, and those additional capital reserves can be a major drag on the company.
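Here’s a small, hypothetical sketch of the matching problem described above. The company names, the normalization rules and the tiny ratings table are all made up for illustration; this is not Trillium Software’s matching engine, just the general idea of standardizing names before matching and falling back to a conservative rating when no match is found.

def standardize(name):
    # crude normalization for illustration: uppercase, drop punctuation,
    # collapse whitespace, and abbreviate one common suffix
    name = name.upper().replace(".", "").replace(",", "")
    name = name.replace("INCORPORATED", "INC")
    return " ".join(name.split())

# a hypothetical extract of an external ratings file
sp_ratings = {standardize("Acme Incorporated"): "AA-"}

def rating_for(customer_name, conservative_default="BB-"):
    # fall back to a conservative rating when no match is found
    return sp_ratings.get(standardize(customer_name), conservative_default)

print(rating_for("ACME, Inc."))     # AA-  (matched after cleanup)
print(rating_for("Acme Holdings"))  # BB-  (no match, conservative default)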

With the Data Intelligence and Governance (DIG) announcement from Trillium Software, we’re starting to leverage our enterprise technology platform to fix the risk rating process, helping institutions become proactive participants in the validation, measurement, and management of all data fed into risk models. The key is to establish a framework for the context of data and best practices for enterprise governance. When we apply our software and services to key data attributes and set up rules to ensure the accuracy of that data, it can save financial services companies a great deal of money.
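To make the idea of “rules on key data attributes” concrete, here is a generic, made-up illustration of what such checks might look like. The rule names, field names and rating list are hypothetical; this is not TS Insight’s actual rule syntax, just a sketch of the pattern.

VALID_RATINGS = {"AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
                 "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-", "B+", "B", "B-"}

rules = {
    "rating_present": lambda r: bool(r.get("sp_rating")),
    "rating_valid": lambda r: r.get("sp_rating") in VALID_RATINGS,
    "exposure_non_negative": lambda r: r.get("ead", 0) >= 0,
}

def failed_rules(record):
    # return the names of the rules this record fails
    return [name for name, rule in rules.items() if not rule(record)]

print(failed_rules({"sp_rating": "AA-", "ead": 10_000_000}))  # []
print(failed_rules({"sp_rating": "", "ead": -5}))             # ['rating_present', 'rating_valid', 'exposure_non_negative']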

To support DIG, we’ve brought on board some additional financial services expertise. We’ve revamped our professional services and are working closely with some of our partners on the DIG initiative. We’ve also been updating our software, like our data quality dashboard, TS Insight, to help meet financial services challenges. For more information, see the DIG page on the Trillium Software web site.

Wednesday, November 12, 2008

The Data Governance Insider - Year in Review

Today is the one-year anniversary of this blog. We’ve covered some interesting ground this year. It’s great to look back and see whether the thoughts in my 48 blog entries made any sense at all. For the most part, I’m proud of what I said this year.


Probably the most controversial entries this year were the ones on probabilistic matching, where I pointed out some of the shortcomings of the probabilistic approach to matching data. Some people read and agreed. Others voiced their dissent.


Visitors seemed to like the entry on approaching data-intensive projects with data quality in mind. It’s a popular white paper on Trilliumsoftware.com, too. We'll have to do more of those nuts-and-bolts articles in the year ahead.


As a data guy, I like reviewing the stats from Google Analytics. In terms of traffic, it was very slow going at first, but as traffic started to build, we were able to eke out 3,506 visits, 2,327 of them unique. That means 1,179 repeat visits: either one person came back 1,179 times, or 1,179 people came back once… or some combination of the two. Maybe my mother just loves reading my stuff.


The visitors came from the places you’d expect. The top ten were the United States, United Kingdom, Canada, Australia, India, Germany, France, Netherlands, Belgium, and Israel. We had a few visitors from unexpected places: one visitor from Kazakhstan apparently liked my entry on the Trillium Software integration with Oracle, but not enough to come back. A visitor from the Cayman Islands took a break from scuba diving to read my story on the successes Trillium Software has had with SAP implementations; there's a nice recorded webinar available there. A visitor from Croatia took time to read my story about data quality on the mainframe. Even outside Croatia, the mainframe is still a viable platform for data management.


I’m looking forward to another year of writing about data governance and data quality. Thanks for all your visits!

Disclaimer: The opinions expressed here are my own and don't necessarily reflect the opinion of my employer. The material written here is copyright (c) 2010 by Steve Sarsfield. To request permission to reuse, please e-mail me.