Sunday, November 27, 2011

When is 'Big Data' too big for Analytics?

- 'Foreword' Apologies for the lack of recent posts. I've been *very* busy on many Data Mining Analytics projects in my role as a Data Mining Consultant for SAS. The content of my work is usually sensitive and therefore discussing it in any level of detail in public blog posts is difficult.

This specific post is to help promote the launch of the new IAPA website and increase focus on Analytics in Australia (and Sydney, where I am normally based). The topic of this post is something that has been at the forefront of my mind and seems to be a central theme of many of the projects I have been working on recently. It is certainly a current problem for many Marketing/Customer Analytics departments. So here are a few thoughts and comments on 'big data'. Apologies for typos, it is mostly written piecemeal on my iPhone during short 5 minute breaks...


How big is too big (for Analytics)?
I frequently read Analytics blogs and e-magazines that talk about the 'new' explosion of big data. Although I am unconvinced it is new, or will improve anytime soon, I do agree that despite technology advances in analytics, the growth of data generation and storage seems to be outpacing most Analysts' ability to transform data into information and utilize it to greater benefit (both operationally and analytically). The term 'Analysis Paralysis' has never been so relevant!

But from a practical perspective, what conditions cause data to become unwieldy? For example, take a typical customer services based organisation such as a bank, telco, or public department: how can the data (de)-evolve to a state that makes it 'un-analysable' (what a horrible thought...)? Even given mild (by today's standards) numbers of variables and records, certain practices and conditions can lead to bottlenecks, widespread performance problems, and delays that make any delivery of Analytics very challenging.

So, below is a series of recent observations from Analytics projects I have been involved with that either encountered 'big data' problems or involved resolving them:

- Scalable Infrastructure.
Data will grow. Fast. In fact it will probably more than double in the next few years. The CPU capacity of data warehousing and analytics servers needs to improve to match.
As an example, I was working on a telco Social Network Analysis project recently where we were processing weekly summaries of mobile telephone calls for approx 18 million individuals. My role was to analyse the social interactions between all customers and build dozens of propensity scores, using the social influence of others to predict behaviour. In total I was probably processing hundreds of millions of records of data (by a dozen or so variables). This was more than the client typically analysed.
After a week of design and preliminary work I began to consider ways to optimise the performance of my queries and computations, and I asked about the server specifications. I assumed some big server with dozens of processors, but unfortunately what I was connecting to was a dual core 4GB desktop PC under an Analyst's desk...

- Variable Transformations
A common mistake by inexperienced data miners is to ignore or short-cut comprehensive data preparation steps. All data that involves analysis of people is certain to include unusual characteristics. One person's outlier is another's screw-up :)
So, what is the best way to account for outliers, skewed distributions, poor data sparsity, or highly likely erroneous data features? Well, an approach (that I am not keen on) taken by some is to apply several variable transformations indiscriminately to all 'raw' variables and subsequently let a variable selection process pick the best input variables for propensity modeling etc. When combined with data which represents transposed time series (so one variable represents a value in 'month1', the next variable the same value dimension in 'month2', etc.) this can easily generate in excess of 20,000 variables (by, say, 10 million customers...). It is true there are variable selection methods that handle 20,000 variables quite well, but the metadata and processing to create those datasets is often significant, and the whole process frequently incurs excessive costs in terms of time to delivery of results.
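To give a feel for how quickly this blows up, here is a minimal pandas sketch (toy data and made-up column names, not a real client dataset): a modest set of transposed monthly variables plus a handful of blanket transformations already pushes you well into the thousands of columns before any modeling starts.

```python
# Minimal sketch: indiscriminate transformations applied to every raw monthly variable.
# All column names and numbers are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy base table: 1,000 customers x 70 measures x 24 months = 1,680 raw columns.
n_customers, n_measures, n_months = 1_000, 70, 24
raw_cols = [f"var{m:03d}_month{t:02d}" for m in range(n_measures) for t in range(1, n_months + 1)]
df = pd.DataFrame(rng.gamma(2.0, 50.0, size=(n_customers, len(raw_cols))), columns=raw_cols)

# A blanket set of transformations applied to *every* raw column, regardless of need.
transforms = {
    "log": lambda s: np.log1p(s),
    "sqrt": lambda s: np.sqrt(s),
    "rank": lambda s: s.rank(pct=True),
    "cap99": lambda s: s.clip(upper=s.quantile(0.99)),
}
derived = {f"{col}_{name}": fn(df[col]) for col in df.columns for name, fn in transforms.items()}
wide = pd.concat([df, pd.DataFrame(derived)], axis=1)

print(f"raw columns:      {df.shape[1]}")    # 1,680
print(f"after transforms: {wide.shape[1]}")  # 8,400 - and real projects start far wider
```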
An additional problem that arises when you start working with many thousands of variables is that variable naming needs to be easily understood and interpretable. The last thing a data miner wants to do is spend hours working out what those transformed and selected important variables in the propensity model actually mean and represent in the raw data.
Which leads me to my next point..

- Variable / Data Understanding
One of the core skills of a good data miner is the ability to understand and translate complex data in order to solve business problems.
As organisations obtain more data, it is not just about more records; often the data reveals subtle new operational details and customer behaviours not previously known, or completely new sources of data (Facebook, social chat, location based services etc). This in turn often requires extended knowledge of the business and operational systems to enable the correct data warehouse values or variable manipulations and selections to be made.
An analyst is expected to understand most parts of an organisation's data at a level of detail most individuals in the organisation are not concerned with, and this is often a monumental task.
As an example of 'big data' bad practice, I've encountered verbose variable names which immediately require truncation (due to IT / variable name length limits), others which make understanding the value or meaning of the variable difficult, and naming conventions which are undocumented. For example: "number_of_broken_promises" is one of the funniest maximum-length variable names I've seen, whilst others such as "ccxs_ytdspd_m1_pct" can be guessed when you have the business context but definitely require detailed documentation or a key.
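As a trivial illustration of the kind of documentation that helps (Python, and the token expansions below are purely hypothetical guesses, not the real definitions): keep the naming key somewhere machine-readable, so cryptic names can be decoded on demand and undocumented tokens are flagged immediately.

```python
# Tiny, documented naming key for decoding abbreviated variable names.
# The expansions are hypothetical examples, not actual definitions from any source system.
ABBREVIATIONS = {
    "ccxs": "credit card transactions",          # hypothetical guess
    "ytdspd": "year-to-date spend",              # hypothetical guess
    "m1": "month 1 of the observation window",
    "pct": "percentage",
}

def describe(varname: str) -> str:
    """Expand each underscore-delimited token using the documented key."""
    tokens = varname.split("_")
    return " | ".join(ABBREVIATIONS.get(t, f"<undocumented: {t}>") for t in tokens)

print(describe("ccxs_ytdspd_m1_pct"))          # fully documented
print(describe("number_of_broken_promises"))   # undocumented tokens stand out at once
```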

- Diverse Skillsets
'Big data' often requires big warehouse and analytics systems (see point 1), and so an analyst must have an understanding of how these systems work in order to use them properly.
From personal experience, I'm always aware of table indexes on a Teradata system, for example. By default the first column in a warehouse table will be used as the primary index, so if you incorrectly use a poorly managed or highly repetitive variable such as 'gender' or 'end_date' then the technology of a big data system works against you. I've seen this type of user error on temp tables or analytics output tables far too many times. Big data often involves bringing together information from a greater number of sources, so understanding the source systems and data warehouse involved is an important challenge.
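To illustrate the principle with a toy Python simulation (not Teradata's actual hash function, and no real DDL): an MPP warehouse spreads rows across its parallel units (AMPs) by hashing the primary index value, so a near-unique key balances the table evenly, while a two-valued column like 'gender' dumps almost the entire table onto one or two units.

```python
# Toy simulation of data skew caused by a low-cardinality primary index on an MPP system.
# This is illustrative only; it is not Teradata's real hashing algorithm.
import hashlib
import random
from collections import Counter

N_AMPS = 20
random.seed(0)

def amp_for(key: str) -> int:
    """Assign a row to a parallel unit (AMP) by hashing its primary index value."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % N_AMPS

rows = [{"customer_id": str(i), "gender": random.choice(["M", "F"])} for i in range(100_000)]

by_customer_id = Counter(amp_for(r["customer_id"]) for r in rows)
by_gender = Counter(amp_for(r["gender"]) for r in rows)

share = lambda counts: max(counts.values()) / len(rows)
print(f"index on customer_id -> busiest AMP holds {share(by_customer_id):.1%} of rows")  # roughly 1/20
print(f"index on gender      -> busiest AMP holds {share(by_gender):.1%} of rows")       # ~50-100%
```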

I hope this helps. I strongly recommend getting involved with the IAPA and the Sydney Data Miner's Meetup if you are based in Sydney or elsewhere in Australia.
 - Tim

Sunday, February 13, 2011

Just how much do you trust your telco?

Although not one of my favourites, I confess to owning the film ‘Minority Report’. Set a few decades in the future, there is a section of the film in which Tom Cruise walks through a shopping centre and is inundated with targeted offers to buy products and services sold in adjacent retail stores. Is this futuristic scenario really that distant? Of course in the film there are sinister eye scanners and creepy robot spiders to identify the individual, instead of a near field device such as a smart phone (iPhone or Android), but the principle is the same (and let’s ignore the issue of stolen identity for another discussion :)

Many Android handsets now have near field communications (NFC) technology. According to some reputable sources (http://www.bloomberg.com/news/2011-01-25/apple-plans-service-that-lets-iphone-users-pay-with-handsets.html) the 5th generation of iPhone will also include NFC, which amongst other things can allow users to pay for goods and services just like they currently do with their credit card.

Many iPhone users already buy songs and applications from iTunes, which has made it a significant global billing platform and provides a notable proportion of revenue for Apple (4.1% of its total quarterly earnings for Q1, see http://www.fiercemobilecontent.com/story/apples-itunes-revenues-top-11-billion-q1/2011-01-19).

If iPhone, iPad and iPod users adopt widespread use of NFC for the purchase of everyday groceries and general retail goods, then iTunes could quickly carve a huge slice out of the VISA and Mastercard revenue stream.

The use of smart phone applications for communication (such as Facebook and Twitter) has already taken significant chunks out of telcos’ revenues from traditional voice communication. As smart devices and apps further empower users, telcos face the greater danger of becoming a dumb pipe. In my opinion there is an opportunity for NFC to enable telcos to develop a closer relationship with customers and act as the information conduit (rather than Google or Apple).

With varying degrees of success, telcos currently perform a lot of data mining to understand usage patterns, household demographics, forecasting of network demand etc. Much of this analysis is marketing focused, with an objective to gain new customers, retain existing customers, and/or encourage them to spend more. Most importantly for data miners, these marketing activities usually involve intelligently processing very large amounts of data. There are a lot of parallels with the data mining performed by VISA and Mastercard, so you would think that telcos might have the infrastructure and experience to play in the area of credit cards.

Some telcos are able to provide single billing, whereby the entire household has a single bill for multiple mobile services, wireless broadband, fixed/land telephony, cable TV etc. If a telco already has the rating system to charge for usage of high-transaction telephony services, and can also provide a single unified household billing platform, then incorporating the purchase of retail goods via an NFC system should not be a challenge. From my experience I’ve not seen VISA or many retail banks offer a single bill for your household purchases across multiple individuals and products. This capability places telcos head and shoulders above banks and credit card companies in the customer experience stakes.

Most developed countries have 3G or better mobile networks, which when combined with smart phones can easily pinpoint the location of a customer. If telcos used NFC to process and learn each customer’s (or household’s) purchase habits and preferences, then there is no reason why they couldn’t recommend products and offers for shopping centres or stores in your immediate vicinity in real-time. The additional revenue opportunities might even be able to cover the cost of moderate telephony usage, so customers could get a mobile plan subsidised by advertising and purchase revenue. For example, the telco would develop the trusted relationship with the customer, and retailers could pay a commission to target specific customer segments, or individuals in the vicinity who buy similar products. Retailers wouldn’t need to implement their own loyalty cards to identify customers; they could simply get summarised information from the telco that manages the customer relationship about who shops at their stores, how often, share of wallet etc. I would relish the opportunity to analyse *that* kind of data!
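Purely to make the idea concrete (a hypothetical Python sketch with invented stores, coordinates and preference numbers; no real telco data or API): once you have learned spend shares per category and a feed of offers from nearby retailers, the matching itself is straightforward.

```python
# Hypothetical sketch: rank nearby retail offers by a customer's learned category preferences.
# All names, locations and numbers are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Learned from past NFC purchases (toy numbers): share of spend by category.
customer_prefs = {"coffee": 0.5, "books": 0.3, "electronics": 0.2}
customer_location = (-33.8708, 151.2073)  # somewhere in the Sydney CBD

store_offers = [
    {"store": "Cafe A", "category": "coffee", "lat": -33.8712, "lon": 151.2068, "commission": 0.40},
    {"store": "Bookshop B", "category": "books", "lat": -33.8730, "lon": 151.2100, "commission": 0.25},
    {"store": "Outlet C", "category": "electronics", "lat": -33.9500, "lon": 151.1800, "commission": 0.60},
]

def rank_offers(prefs, location, offers, max_km=1.0):
    """Keep offers within walking distance, ranked by preference weight x retailer commission."""
    nearby = [o for o in offers if km_between(*location, o["lat"], o["lon"]) <= max_km]
    return sorted(nearby, key=lambda o: prefs.get(o["category"], 0.0) * o["commission"], reverse=True)

for offer in rank_offers(customer_prefs, customer_location, store_offers):
    print(offer["store"], "->", offer["category"])
```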

Granted there are a lot of challenges, but the fantasy of Minority Report might not be that unrealistic…