
Monday, March 20, 2017

Tapping Into Non-Relational Data

Modern BI and Analytics programs use non-relational data management for six key functions. You should be cognizant of these functions when you add non-relational technology to your data architecture.





Kinds of Non-Relational Storage

Most IT professionals are familiar with the RDBMS. Relational databases store data in tables that are defined in advance. The definition specifies the columns that comprise the table, their data types, and so forth. The design is referred to as a data model or schema.

Relational storage is immensely useful, but it is not the only game in town. There are several alternative types of data storage including:
  • Key-value stores store data in associative arrays comprised of sort keys and associated data values. These flexible data structures can be distributed across nodes of commodity hardware, and are manipulated using distributed processing algorithms (map-reduce). Hadoop functions as a key-value store.
  • Document stores track collections of documents that have self-defining structure, often represented in XML or JSON formats. A document store may be built on top of a key-value store. MongoDB is a document store.
  • Graph databases store the connections between things as explicit data structures (similar to pointers). These are stored separately from the things they connect. This contrasts with the RDBMS approach, where relationship information is stored within the things being associated (keys within tables). Neo4j is a graph database. 
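To make the contrast concrete, here is a minimal sketch in Python showing the same customer information held in each of the three styles. The data, field names, and structures are purely illustrative and not tied to any particular product.

```python
# Illustrative only: the same customer information represented three ways.

# 1. Key-value store: a value (often opaque to the store) retrieved by key.
kv_store = {
    "customer:1001": '{"name": "Ada Lopez", "city": "Chicago"}',
}

# 2. Document store: a self-describing document whose structure can vary
#    from one document to the next (shown here as a JSON-like dict).
document = {
    "_id": 1001,
    "name": "Ada Lopez",
    "city": "Chicago",
    "orders": [
        {"order_id": 5001, "total": 42.50},
        {"order_id": 5002, "total": 17.25},
    ],
}

# 3. Graph database: relationships are stored explicitly as edges,
#    separate from the nodes they connect.
nodes = {
    1001: {"label": "Customer", "name": "Ada Lopez"},
    9001: {"label": "Product", "name": "Widget"},
}
edges = [(1001, "PURCHASED", 9001)]
```

In a relational design, by contrast, the same information would be spread across predefined tables, with the relationships carried as keys within those tables.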
Reasons for Non-Relational Storage

In the age of big data, organizations are tapping into these forms of storage for a variety of reasons. Here are just a few:
  • No model: not requiring a predefined schema enables storing new data that has yet to be explored and modeled.
  • Low cost: the cost of data management can be significantly lower than RDBMS storage.
  • Better fit: some business use cases are a natural fit to alternative paradigms.
But don’t simply add a box to your data architecture for non-relational data. You must plan for specific usage paradigms, and make sure your architecture and processes support them.

Uses for Non-Relational Storage

There are six major use cases for non-relational data in modern data architectures:
  • Capture  Non-relational storage facilitates intake of raw data.  New data sets are captured in non-relational data stores, where they become “raw material” for various uses. A non-relational data store such as Hadoop can be used to capture data without having to model it first. This is often referred to as a data lake. I prefer to call it a landing zone.
  • Explore  Non-relational storage facilitates exploration and discovery. Exploration is the search for value in new data sets. Exploration applies analytic methods to captured data, often combining it with existing enterprise data. The goal is to find value in the data, and to identify things that will be worth tracking on a regular basis. 
  • Archive  Non-relational storage serves as an archive. Data for which immediate value has not been identified is moved to an archive. From here it can be fetched for future use. Archiving data helps ensure the data lake does not become the fabled “data swamp.” An alternative to archiving unused data is simply to purge it.
  • Deploy  Non-relational storage supports production analytics. When value is found in data, it is transitioned to a production environment and processes are automated to keep it up to date. Deployments range from simple reports to complex analytic models.
  • Augment  Non-relational storage serves as a staging area for the data warehouse. In many cases, the insights gained from exploration prove valuable enough to track on a regular basis. Augmentation is the process of adding elements to a relational data warehouse that come from non-relational sources or analytic processes. A credit score, for example, might be incorporated into a customer dimension.
  • Extend  Non-relational storage expands what can be maintained in the data warehouse. Sometimes there is lasting value in non-relational data, but it is not appropriate to migrate it to relational storage. In such cases, the non-relational data is moved to a non-relational extension of the data warehouse. Applications can link relational and non-relational data. For example, non-relational XML documents may be made available for “drill down” from a dimensional cube.
In addition to these six primary use cases, non-relational platforms may serve several utility functions. These include staging, data standardization, cleansing, and so forth.
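As a rough sketch of the capture and explore functions described above, the following Python fragment lands raw JSON events in a landing zone without modeling them first, then reads them back and joins them to an extract of enterprise data. The file location, field names, and lookup structure are hypothetical.

```python
import json
from pathlib import Path

LANDING_ZONE = Path("landing_zone/web_events")   # hypothetical location

def capture(raw_events, batch_id):
    """Capture: write the raw events as-is; no schema is imposed up front."""
    LANDING_ZONE.mkdir(parents=True, exist_ok=True)
    path = LANDING_ZONE / f"batch_{batch_id}.json"
    path.write_text("\n".join(json.dumps(event) for event in raw_events))
    return path

def explore(batch_path, customers_by_email):
    """Explore: read the raw data back and combine it with enterprise data
    (here, a simple customer lookup keyed on email) to look for value."""
    events = [json.loads(line) for line in batch_path.read_text().splitlines()]
    return [
        {**event, "customer_id": customers_by_email.get(event.get("email"))}
        for event in events
    ]
```

Only when exploration finds lasting value would the data be deployed to production processes, used to augment the warehouse, or moved to a warehouse extension; otherwise it is archived or purged.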

Learn More

Join me for my course Data Modeling in the Age of Big Data, offered exclusively through TDWI. At the time of this writing, it is offered next at TDWI Chicago on May 11, 2017. You can also bring this course to your site through TDWI Onsite Education. For more information contact me.


Thursday, November 14, 2013

Facebook's Ken Rudin on Analytics

If you are interested in how business analytics impact your BI program, carve out forty-five minutes of time to watch Ken Rudin's recent TDWI keynote: "Big Data, Bigger Impact." The video is embedded below.

Rudin is the director of analytics at Facebook. In his presentation, he discusses several topics that are of interest to readers of this blog. Among them:
  • Big data technology should be used to extend your traditional BI solution, not replace it. Facebook has realized this, and is working to bring in relational technology to answer traditional business questions.
  • Successful analytics programs bring together centrally managed core data metrics with a variety of data that is not centrally managed. Rudin shares different ways he has been able to make this happen.
  • A similar balance can be attained with your organizational structure. Use of "embedded analysts" provides the business benefits of decentralization, while maintaining the efficiencies and scale advantages of a centralized program.
These are just a few of the points made during his talk. If you don't have the time to watch it now, bookmark this page for later.

You'll also want to check out Wayne Eckerson's latest book, Secrets of Analytical Leaders. (Details below.)

Big Data, Bigger Impact
Ken Rudin
TDWI World Conference, Chicago 5/6/2013




Recommended Reading

Wayne Eckerson's excellent book, Secrets of Analytical Leaders, features more insights from Ken Rudin and others.

I highly recommend this book if you are interested in analytics.

Get it from Amazon.com in paperback or Kindle editions.


Wednesday, September 25, 2013

Optimizing warehouse data for business analytics

Business analytics often integrate information from your data warehouse with other sources of data. This post looks at the best practices of warehouse design that make this possible.

I receive a lot of questions regarding the best way to structure warehouse data to support an analytics program. The answer is simple: follow the same best practices you've already learned.

I'll cover these practices from a dimensional modeling perspective. Keep in mind that they apply in any data warehouse, including those modeled in third normal form.


1. Store Granular Facts

Analytic modelers often choose sources external to the data warehouse, even when the warehouse seems to contain relevant data. The number one reason for this is insufficient detail. The warehouse contains summarized data; the analytic model requires detail.

In this situation, the analytic modeler has no choice but to look elsewhere. Worse, she may be forced to build redundant processes to transform source data and compile history. Luckily, this is not a failure of warehouse design principles; it's a failure to follow standard best practices.

Best practices of dimensional design dictate that we set the grain of base fact tables at the lowest level of detail possible. Need a daily summary of sales? Store the individual order lines. Asked to track the cost of trips? Store detail about each leg.

Dimensional solutions can contain summarized data. This takes the form of cubes, aggregates, or derived schemas. But these summaries should be derived exclusively from detailed data that also lives in the warehouse.
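As a small illustration of that principle, the sketch below derives a daily summary from hypothetical order-line facts. The reverse is not possible: once only the summary is stored, the order lines can never be reconstructed.

```python
from collections import defaultdict

# Hypothetical order-line facts, stored at the finest available grain.
order_lines = [
    {"order_date": "2013-09-01", "product": "A", "amount": 20.0},
    {"order_date": "2013-09-01", "product": "A", "amount": 10.0},
    {"order_date": "2013-09-02", "product": "B", "amount": 45.0},
]

# A daily summary is easily derived from the detail...
daily_sales = defaultdict(float)
for line in order_lines:
    daily_sales[(line["order_date"], line["product"])] += line["amount"]

# ...but the detail cannot be recovered from the daily summary.
print(dict(daily_sales))
```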

Like all rules, this rule has exceptions. There are times when the cost/benefit calculus is such that it doesn't make sense to house highly granular data indefinitely. But more often than not, summary data is stored simply because basic best practices were not followed.

2. Build “Wide” Dimensions

The more attributes there are in your reference data (aka dimensions), the more useful source material there is for analytic discovery. So build dimensions that are full of attributes,  as many as you can find.

If the grain of your fact table gives the analytics team “observations” to work on, the dimensions give them “variables.”  And the more variables there are, the better the odds of finding useful associations, correlations, or influences.

Luckily, this too is already a best practice. Unfortunately, it is one that is often misunderstood and violated. Misguided modelers frequently strip dimensions down to the essential attributes only, or model only to specific stated requirements.

3. Track Changes to Reference Data (and Use Effective Dating)

When reference data changes, too many dimensional models simply overwrite the corresponding dimension rows, because it is easier.

For example, suppose your company re-brands a product. It's still the same product, but with a new name. You may be tempted to simply update the reference data in your data warehouse. This is easier than tracking changes.  It may even seem to make business sense, because 90% of your reports require this-year-versus-last comparison by product name.

Unfortunately, some very important analysis may require understanding how consumer behavior correlates with the product name. You've lost this in your data set. Best practices help avoid these problems.

Dimensional models should track the change history of reference data. In dimensional speak, this means application of  type 2 slow changes as a rule. This preserves the historic context of every fact recorded in the fact table.

In addition, every row in a dimension table should carry "effective" and "expiration" dates, as well as a flag that marks the current row. This enables the delivery of type 1 behavior (the current value) even as we apply type 2 changes. From an analytic perspective, it also enables useful "what if" analysis.
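Here is a minimal sketch of a type 2 response with effective dating, using made-up column names. A product re-branding expires the current row and adds a new version with its own surrogate key, effective date, and current flag; none of this reflects any particular tool or product.

```python
from datetime import date

# Hypothetical product dimension. product_key is a surrogate key;
# product_id is the natural key carried over from the source system.
product_dim = [
    {"product_key": 1, "product_id": "P-100", "product_name": "Old Name",
     "effective_date": date(2010, 1, 1),
     "expiration_date": date(9999, 12, 31), "is_current": True},
]

def apply_type2_change(rows, product_id, new_name, change_date):
    """Expire the current row for the product and insert a new version."""
    next_key = max(row["product_key"] for row in rows) + 1
    for row in rows:
        if row["product_id"] == product_id and row["is_current"]:
            row["expiration_date"] = change_date
            row["is_current"] = False
    rows.append({"product_key": next_key, "product_id": product_id,
                 "product_name": new_name, "effective_date": change_date,
                 "expiration_date": date(9999, 12, 31), "is_current": True})

apply_type2_change(product_dim, "P-100", "New Name", date(2013, 9, 25))
```

Facts recorded before the change remain associated with the old surrogate key, preserving historic context, while the current flag still supports reporting on the current name (type 1 behavior).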

As with all rules, again there are exceptions. In some cases, there may be good reason not to respond to changes in reference data by tracking history. But more often than not, type 1 responses are chosen for the wrong reason: because they are easier to implement.

4. Record Identifying Information, Including Alternate Identifiers

Good dimensional models allow us to trace back to the original source data. To do this, include transaction identifiers (real or manufactured) in fact tables, and maintain identifiers from source systems in dimension tables (these are called "natural keys").

Some of this is just plain necessary in order to get a dimensional schema loaded. For example, if we are tracking changes to a product name in a dimension, we may have multiple rows for a given product. The product's identifier is not a unique identifier, but we must have access to it. If we don't, it would become impossible to load a fact into the fact table.

Identifying information is also essential for business analytics. Data from the warehouse is likely to be combined with data that comes from other places. These identifiers are the connectors that allow analytic modelers to do this.  Without them, it may become necessary to bypass the warehouse.

Your analytic efforts, however, may require blending new data with your enterprise data. And that new data may not come with handy identifiers. You have a better chance blending it with enterprise data if your warehouse also includes alternate identifiers, which can be used to do matching. Include things like phone numbers, email addresses, geographic coordinates—anything that will give the analytics effort a fighting chance of linking up data sources.
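As an illustration of why those alternate identifiers matter, the sketch below links an external data set that has no customer numbers to the customer dimension by matching on a normalized email address. The fields and the normalization are hypothetical, and real matching logic would be far more careful.

```python
def normalize_email(value):
    """Crude normalization for matching purposes."""
    return value.strip().lower() if value else None

# Customer dimension rows carry natural keys and alternate identifiers.
customer_dim = [
    {"customer_key": 10, "customer_id": "C-001", "email": "Ada.Lopez@Example.com"},
]

# External data set with no customer numbers at all.
external_rows = [
    {"email": "ada.lopez@example.com", "sentiment_score": 0.82},
]

by_email = {normalize_email(c["email"]): c["customer_key"] for c in customer_dim}
linked = [
    {**row, "customer_key": by_email.get(normalize_email(row["email"]))}
    for row in external_rows
]
```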

Summary

If you've been following the best practices of dimensional modeling, you've produced an asset that maximizes value for analytic modelers:

  • You have granular, detailed event data.
  • You have rich, detailed reference data.
  • You are tracking and time-stamping changes to reference data. 
  • You've got transaction identifiers, business keys, and alternate identifiers.  

It also goes without saying that conformed dimensions are crucial if you hope to sustain a program of business analytics.

Of course, there are other considerations that may cause an analytic modeler to turn her back on the data warehouse. Latency issues, for example, may steer them to operational solutions. Accessibility and procedural issues, too, may get in the way of the analytic process.

But from a database design perspective, the message is simple: follow those best practices!

Further Reading

You can also read more in prior posts, and in my book, Star Schema: The Complete Reference. If you use the links on this page to pick up a copy on Amazon, you will be helping support this blog.

 It covers the best practices of dimensional design in depth. For example:

  • Grain, identifiers, keys and basic slow change techniques are covered in Chapter 3, "Stars and Cubes"
  • The place of summary data is covered in Chapter 14, "Derived Schemas" and Chapter 15, "Aggregates"
  • Conformance is covered in Chapter 5, "Conformed Dimensions"
  • Advanced slow change techniques are explored in Chapter 8, "More Slow Change Techniques"

Wednesday, July 17, 2013

Business Analytics and Dimensional Data

Readers of this blog frequently ask about the relationship of business analytics to the dimensional data that is recorded in data marts and the data warehouse.

Business analytics operate on data that often does not come from the data warehouse. The value of business analytics, however, is measured by its impact on business metrics that are tracked in the data warehouse. 

Business analytics may also help adjust our notion of which metrics matter the most.

The Data Warehouse and Dimensional Data

The dimensional model is the focal point of business information in the data warehouse. It describes how we track business activities and measure business performance. It may also be the foundation for a performance management program that links metrics to business goals.

Dimensional data is the definitive record of what matters to the business about activities and status. Clearly defined performance indicators (facts) are recorded consistently and cross referenced with standardized and conformed reference data (dimensions).

In this post, when I talk about "the data warehouse," I will have this dimensional data in mind.

Business Analytics

Business analytics seek to provide new insight into business activities. Analytics do not always operate on business metrics, and they don't rely exclusively on information from the data warehouse. Dimensional information may be an input, but other sources of data are also drawn upon.

The outputs of business analytics, however, aim directly at the metrics tracked by our dimensional models. Insights from analytics are used by people to move key metrics in the desired directions. These results are called impacts.

Business analytics may also help in another way. Sometimes, they help us determine which metrics are actually the most important.

A great illustration of these dynamics can be found in the business of Major League Baseball. (If you don't follow baseball, don't worry. You don't have to understand baseball to follow this example.)

Metrics in Baseball

Major league baseball has long been in the business of measurement. Followers of the game are familiar with the "box score" that summarizes each game, "standings" that illustrate the relative performance of teams, and numerous statistics that describe the performance of each player.

These metrics have precise definitions and have been recorded consistently for almost 150 years.[1] Like the metrics in your data warehouse, they are tracked systematically. Professional baseball teams can also set goals for these metrics and compare them to results, much like a scorecard in your performance management program.

How does one improve these results? If you run a baseball team, part of the answer lies in how you choose players. In the book Moneyball,[2] Michael Lewis describes how the Oakland Athletics used a set of techniques known as sabermetrics[3] to make smarter choices about which players to add to their roster.

These analytics allowed the A's to make smarter choices with measurable impact--improving performance and reducing costs. Analytics also motivated the A's to change the emphasis given to various metrics.

Business Analytics and the Oakland Athletics

The traditional approach to selecting players was focused on long-held conventional wisdom about what makes a valuable player. For example, offensive value was generally held to derive from the ability to contact the baseball, and with a player's speed. These skills are at least partially evident in some of the standard baseball metrics -- things like the batting average, stolen bases, runs batted in and sacrifices.

The Oakland A's looked to data to refine their notion of what a valuable player looks like. How do the things players do actually contribute to a win or loss? To do this, the A's went beyond the box scores and statistics -- beyond the data warehouse, so to speak.

By studying every action that is a part of the game -- what players are on base, what kind of pitches are thrown, where the ball lands when it is hit, etc -- the A's realized they could be smarter about assessing how a player adds value. These business analytics led to several useful conclusions:
  • Batting averages don't tell the whole story about a player's ability to get on base; for example, they exclude walks.
  • Stolen bases don't always contribute to scoring; much depends on who comes to bat next.
  • Runs batted in tell as much about who hits before a player as they do about the player himself.
  • Sacrifices, where an out is recorded but a runner advances, were found to contribute less to the outcome of a game than conventional wisdom held.
You may or may not understand these conclusions, but here is the important thing: the analytics suggested that the A's could better assess a player's impact on winning games by turning away from conventional wisdom. Contact and speed are not the best predictors of winning games. "Patience at the plate" leads to better outcomes.
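To make the batting-average point concrete, here is a toy calculation with invented numbers and a simplified on-base percentage formula (real OBP also counts hit-by-pitch and sacrifice flies). A player who walks often can look ordinary by batting average but valuable by on-base percentage:

```python
# Invented season line: hits, at-bats, and walks.
hits, at_bats, walks = 130, 500, 90

batting_average = hits / at_bats                     # ignores walks entirely
on_base_pct = (hits + walks) / (at_bats + walks)     # simplified: credits walks

print(f"BA  = {batting_average:.3f}")   # 0.260 -- looks unremarkable
print(f"OBP = {on_base_pct:.3f}")       # 0.373 -- patience at the plate shows up
```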

Impact for the A's

By using these insights to make choices, the A's were able to select less expensive players who could make a more significant contribution to team results. These choices resulted in measurable improvement in many of the standard metrics of baseball--the win/loss ratio in particular. These insights also enabled them to deliver improved financial results.

Analytics also helped the A's in another way: they refined exactly which metrics they should be tracking. For example, in assessing offensive value, on-base percentage should be emphasized over batting average. They also created some of their own metrics to track their performance over time.

The Impact of Analytics

Business analytics tell us what to look for, what works, or what might happen. Examples are signs of impending churn, what makes a web site "sticky", patterns that might indicate fraud, and so forth.

These insights, in turn, are applied in making business decisions. These choices provide valuable impact that can be measured by tracking traditional business metrics. Examples include increased retention rates, reduced costs associated with fraud, and so forth.

These impacts are the desired outcome of the analytic program. If the analytics don't have a demonstrable impact on metrics, they are not providing value.

Business analytics can also help us revise our notion of what to track in our data warehouses, or which metrics to pay closest attention to. Number of calls to the support center, for example, may be less of an indicator of customer satisfaction than the average time to resolve an issue.

Conclusion

As you expand the scope of your BI program to include analytics, remember that your desired outcome is a positive impact on results. Move the needle on business metrics, and the analytics have done their job.

Thanks to my colleague Mark Peco for suggesting that I use Moneyball as a way to explain analytics without revealing the proprietary insights attained by my customers.

Notes

[1] The box score and many of these statistics were established in the mid 1800's by a sports writer named Henry Chadwick.

[2] Moneyball by Michael Lewis (Norton, 2011).

[3] The Oakland A's are a high-profile example of the use of sabermetrics, but did not originate the concept. See wikipedia for more information.



Wednesday, June 5, 2013

In the Era of Big Data, The Dimensional Model is Essential

Don't let the hype around big data lead you to believe your BI program is obsolete. 

I receive a lot of questions about "big data."  Here is one:
We have been doing data warehousing using Kimball method and dimensional modeling for several years and are very successful (thanks for your 3 books, btw). However, these days we hear a lot about Big Data Analytics, and people say that Big Data is the future trend of BI, and that it will replace data warehousing, etc.

Personally I don't believe that Big Data is going to replace Data Warehousing but I guess that it may still bring certain value to BI.  I'm wondering if you could share some thoughts.

"Big data" is the never-ending quest to expand the ways in which our BI programs deliver business value.

As we expand the scope of what we deliver to the business, we must be able to tie our discoveries back to business metrics and measure the impact of our decisions. The dimensional model is the glue that allows us to achieve this.

Unless you plan to stop measuring your business, the dimensional model will remain essential to your BI program. The data warehouse remains relevant as a means to instantiate the information that supports this model. Reports of its death have been greatly exaggerated.

Big Data

"Big Data" is usually defined as a set of data management challenges known as "the three V's" -- volume, velocity and variety. These challenges are not new. Doug Laney first wrote about the three V's in 2001 -- twelve years ago.[1] And even before that, we were dealing with these problems.

Photo from NASA in public domain.
Consider the first edition of The Data Warehouse Toolkit, published by Ralph Kimball in 1996.[2] For many readers, his "grocery store" example provided their first exposure to the star schema. This schema captured aggregated data! The 21 GB fact table was a daily summary of sales, not a detailed record of point-of-sale transactions. Such a data set was presumably too large at the time.

That's volume, the first V, circa 1996.

In the same era, we were also dealing with velocity and variety. Many organizations were moving from monthly, weekly or daily batch loads to real-time or near-real time loads. Some were also working to establish linkages between dimensional data and information stored in document repositories.

New business questions

As technology evolves, we are able to address an ever expanding set of business questions.

Today, it is not unreasonable to expect the grocery store's data warehouse to have a record for every product that moves across the checkout scanner, measured in terabytes rather than gigabytes. With this level of detail, market basket analysis is possible, along with longitudinal study of customer behavior.

But of course, the grocery store is now looking beyond sales to new analytic possibilities. These include tracking the movement of product through the supply and distribution process, capturing interaction behavior of on-line shoppers, and studying consumer sentiment.

We still measure our businesses

What does this mean for the dimensional model? As I've posted before, a dimensional model represents how we measure the business. That's not something we're going to stop doing. Traditional business questions remain relevant, and the information that supports them is the core of our BI solution.

At the same time, we need to be able to link this information to other types of data. For a variety of reasons (V-V-V), some of this information may not be stored in a relational format, and some may not be a part of the data warehouse.

Making sense of all this data requires placing it in the context of our business objectives and activities.

To do this, we must continue to understand and capture business metrics, record transaction identifiers, integrate around conformed dimensions, and maintain associated business keys. These are long established best practices of dimensional modeling.

By applying these dimensional techniques, we can (1) link insights from our analytics to business objectives and (2) measure the impact of resultant business decisions. If we don't do this, our big data analytics become a modern-day equivalent of the stove-pipe data mart.
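As a small sketch of that linkage (hypothetical names and data, not a prescribed design), an analytic result produced outside the warehouse can be tied back to dimensional data as long as both sides carry the conformed business key:

```python
# Output of an analytic process running outside the warehouse,
# keyed by the same customer natural key the warehouse maintains.
churn_scores = {"C-001": 0.91, "C-002": 0.12}

# Conformed customer dimension (abbreviated).
customer_dim = [
    {"customer_key": 10, "customer_id": "C-001", "segment": "Consumer"},
    {"customer_key": 11, "customer_id": "C-002", "segment": "Business"},
]

# Because both sides share the business key, the insight can be joined to
# dimensional data, and its impact measured against tracked metrics.
augmented = [
    {**row, "churn_score": churn_scores.get(row["customer_id"])}
    for row in customer_dim
]
```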

The data warehouse

The function of the data warehouse is to instantiate the data that supports measurement of the business. The dimensional model can be used toward this aim (think: star schema, cube.)

The dimensional model also has other functions. It is used to express information requirements, to guide program scope, and to communicate with the business. Technology may eventually get us to a point where we can jettison the data warehouse on an enterprise scale,[3] but these other functions will remain essential. In fact, their importance becomes elevated.

In any architecture that moves away from physically integrated data, we need a framework that allows us to bring that data together with semantic consistency. This is one of the key functions of the dimensional model.

The dimensional model is the glue that is used to assemble business information from distributed data.

Organizations that leverage a bus architecture already understand this. They routinely bring together information from separate physical data marts, a process supported by the dimensional principle of conformance. Wholesale elimination of the data warehouse takes things one step further.

Notes
  1. Doug Laney's first published treatment of "The Three V's" can be found on his blog.
  2. Now out of print, this discussion appeared in Chapter 2, "The Grocery Store."  Insight into the big data challenges of 1996 can be found in Chapter 17, "The Future."
  3. I think we are a long time away from being able to do this on an enterprise scale. When we do get there, it will be as much due to master data management as it is due to big data or virtualization technologies. I'll discuss virtualization in some future posts.
More reading

Previous posts have dealt with this topic.
  • In Big Data and Dimensional Modeling (4/20/2012) you can see me discuss the impact of new technologies on the data warehouse and the importance of the dimensional model.

Tuesday, April 30, 2013

The Role of the Dimensional Model in Your BI Program

The dimensional model delivers value long before a database is designed or built, and even when no data is ever stored dimensionally. While it is best known as a basis for database design, its other roles may have more important impacts on your BI program.

The dimensional model plays four key roles in Business Intelligence:
  1. The dimensional model is the ideal way to define requirements, because it describes how the business is measured
  2. The dimensional model is ideal for managing scope because it communicates to business people (functionality) and technical people (complexity) 
  3. The dimensional model is ideal as a basis for data mart design because it provides ease of use and high performance
  4. The dimensional model is ideal as a semantic layer because it communicates in business terms
Information Requirements

The dimensional model is best understood as an information model, rather than a data model. It describes business activities the same way people do: as a system of measurement. This makes it the ideal form to express information needs, regardless of how information will be stored.

Image by Gravityx9, licensed under Creative Commons 2.0
A dimensional model defines business metrics or performance indicators in detail, and captures the attendant dimensional context. (For a refresher, see the post What is a Dimensional Model from 4/27/2010.) Metrics are grouped based on shared granularity, cross referenced to shared reference data, and traced to data sources.

This representation is valuable because business questions are constantly changing. If you simply state them, you produce a model with limited shelf life. If you model answers to the question of today, you've provided perishable goods.

A dimensional model establishes information requirements that endure, even as questions change. It provides a strong foundation for multiple facets of BI:
  • Performance management, including dashboards and scorecards
  • Analytic processing, including OLAP and ad hoc analysis
  • Reporting, including both enterprise and operational reports
  • Advanced analytics, including business analytics, data mining and predictive analytics
All these disciplines center on business metrics. It should be no surprise that when Howard Dresner coined the term Business Intelligence, his definition referenced "facts and fact based systems." It's all about measurement.

Program Roadmap and Project Scope

A dimensional model can be used to describe scope because it communicates to two important audiences.
  • Business people: functionality   The dimensional model describes the measurement of a business process, reflecting how the process is evaluated by participants and observers. It communicates business capability.
  • Technical personnel: level of effort A dimensional model has technical implications: it determines the data sources that must be integrated, how information must be integrated and cleansed, and how queries or reports can be built. In this respect, it communicates level of effort. 
These dual perspectives make the dimensional design an ideal centerpiece for managing the roadmap for your BI program. Fully documented and mapped to data sources, a dimensional model can be divided into projects and prioritized. It is a blueprint that can be understood by all interested parties. A simple conformance matrix communicates both intended functionality and technical level of effort for each project.

At the project level, a dimensional design can be used as the basis for progress reporting. It can also serve as an unambiguous arbiter of change requests. Changes that add data sources or impact grain, for example, are considered out of scope. This is particularly useful for organizations that employ iterative methodologies, but its simplicity makes it easy to reconcile with any development methodology.

Database Design

The dimensional model is best known as the basis for database design. The term "star schema" is far more widely recognized than "dimensional model" (a fact that influenced the name of my most recent book).

In fact, the dimensional model is the de facto standard for data mart design, and many organizations use it to shape the entire data warehouse. It has an important place in W.H. Inmon's Corporate Information Factory, in Ralph Kimball's dimensional bus architecture, and even in one-off data marts that lack an enterprise focus.

Implemented in a relational database, the dimensional model becomes known as a star schema or snowflake. Implemented in a multidimensional database, it is known as a cube. These implementations offer numerous benefits. They are:
  • Easily understandable by business people
  • Extraordinarily flexible from a reporting and analysis perspective
  • Adaptable to change
  • Capable of very high performance
Presentation and the Semantic Layer

A dimensional representation is the ideal way to present information to business people, regardless of how it is actually stored. It reflects how people think about the business, so it is used to organize the catalog of items they can call on for analysis.

Many business intelligence tools are architected around this concept, allowing a semantic layer to sit between the user and database tables. The elements with which people can frame questions are categorized as facts and dimensions. One need not know what physical data structures lie beneath.

Even the earliest incarnations of the semantic layer leveraged this notion. Many organizations used these tools to impose a dimensional view directly on top of operational data. Today, semantic layers are commonly linked to dimensional data marts.

A dimensional representation of business activity is the starting point for a variety of BI activities:
  • Building enterprise reports
  • Defining performance dashboards
  • Performing ad hoc analysis
  • Preparing data for an analytic model
The concept of dimensional presentation is receiving renewed attention as federated solutions promise the construction of virtual solutions rather than physical ones.

Further information

I briefly covered these four roles in an interview last year, and many of these themes have been discussed in previous posts.
Although I've touched on these topics before, I wanted to bring them together in a single article. In the coming months, I will refer back to these concepts as I address common questions about big data, agile BI and federation.

In the meantime, please help support this blog by picking up a copy of my latest book.

Friday, September 21, 2012

Business Analytics at Work, Almost

Back in February, business analytics briefly broke into the mainstream consciousness thanks to an article in the New York Times Magazine called How Companies Learn Your Secrets by Charles Duhigg.

When I teach, I frequently use an example from this article. I get lots of email asking me for more information, so I figured I might as well post it here.

Breaking Habits

Duhigg's purpose is to show that human behavior is driven by habit. The key to changing behavior is understanding the underlying habits. 

He tells several stories where businesses use this information to drive positive results.

One of his stories happens to touch on predictive analytics.  

The setup is simple: back in 2002, marketers at Target had realized that shopping habits are disrupted with the arrival of a first-born. Bring expecting parents into stores, and you have an opportunity to cultivate new patterns. The result: new lifetime customers for all manner of products--not just baby items.

The key, then, was to identify people who might be pregnant. With the help of a team of statisticians, Target figured out that they could do just that.

Enter Analytics

Duhigg describes how Target tapped into their vast data reserves, and eventually came up with a market basket profile that was a good predictor of pregnancy. This profile could then be applied to their customer base to identify thousands of people who were likely to be expecting.
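For readers curious what such a profile looks like mechanically, here is a deliberately toy sketch with invented products and weights. It has nothing to do with Target's actual model; it simply shows the idea of scoring a shopper's basket against a propensity profile:

```python
# Invented weights: how strongly each purchase signals the condition of interest.
profile_weights = {
    "prenatal_vitamins": 0.55,
    "unscented_lotion": 0.30,
    "cocoa_butter": 0.25,
    "cotton_balls": 0.10,
}

def propensity(basket):
    """Sum the weights of profile items present in the basket, capped at 1.0."""
    score = sum(profile_weights.get(item, 0.0) for item in basket)
    return min(score, 1.0)

shopper_basket = ["prenatal_vitamins", "unscented_lotion", "bread"]
print(propensity(shopper_basket))   # 0.85 -- flagged for the mailing
```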

They were surprisingly effective. Duhigg tells the story of a gentleman berating a store manager for marketing baby products to his teenage daughter:

“She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”
The analytics, it seems, did their job. But did they really?

Analytics and Business Strategy

This anecdote went viral when the Times published Duhigg's article. Ethical issues notwithstanding, it offered an updated version of the apocryphal  "beer and diapers" tale from the 1990's.

But there's more here than market basket analysis. This story illustrates a key aspect of predictive analytics: it's important to engage with the owner of the business problem. In this case, it seems, that did not happen. Instead, someone made a tactical decision about how to act on analytics.

In this case,

  • A business analyst had already identified something important: pregnancy. 
  • An analytic modeler took things from there, and developed a way to identify pregnant shoppers. 
  • The analyst chose to mail to people identified by the analytic model.

This last step is where the fumble took place. Every analytic initiative must be connected to the owner of the business problem. This person may not be actively engaged in the analytics, but it is her job to decide what to do with the insights. Deciding how to act on analytics is a key part of the process. It involves communication, and requires the attention of executives who are responsible for business objectives.

The Book

Duhigg has now published a book on the same topic. You might want to check it out.  Just remember, this is not a book about analytics -- it's about habits.


The Power of Habit: Why We Do What We Do in Life and Business by Charles Duhigg.  (Random House, 2012)




Friday, April 20, 2012

Big Data and Dimensional Modeling [Video]


I am often asked, "Is the dimensional model obsolete in the world of big data?"

I always respond with a question of my own: "Do you plan to stop measuring your business?"  Of course, the answer is always "no." 

Dimensional modeling is all about measurement, and so it will continue to have a place in our BI programs.

Here, I discuss the topic with Paul Kautza, Director of Education at The Data Warehousing Institute (TDWI).



In this interview, I talk about four ways we use dimensional models:
  1. To capture business requirements (e.g. measurement definitions)
  2. To define and manage a roadmap for a BI program (scope)
  3. As the basis for a database design (star schema, snowflake schema, cube)
  4. As a presentation layer for the business (semantic layer)
Developments in the world of big data may impact the third category, but not always.