Thursday, October 28, 2010

Not your typical financial risk model…

I haven’t done a lot of analysis in the finance industry, and my Google searches didn’t yield helpful insights for similar data mining. I just finished a project and would like some feedback. I’m trying to explain this as a data preparation and analysis approach to solve a specific problem, described as best I can without names or actual data. I also produced a lot of presentation material and extra information for the segments that isn’t described here. If anyone has relevant words of wisdom, or suggestions for a different approach they would have taken, then please describe it! Otherwise, perhaps this will be helpful to others…


The business problem to solve was generating customer insight (businesses with loans), with considerations for each client business’s financial health and business loan repayment risk.

The first thing we concentrated on was tax payments. The data I had access to contained typical finance account monthly summaries (eg. balance at close of month, total $ of transactions etc) but also two years of detailed transactional history of all outgoing and inbound money transfers/payments (eg. including tax payments made by many thousands of businesses). We examined two years of summary data plus, from the detailed history, only those money transfers/payments that involved the account number belonging to the tax man.

The core idea was to understand each business’s tax payments over time in order to get an accurate view of their financial health. Obviously this would have great importance in predicting future loan repayments or the likelihood of future financial problems. One main objective was to understand if tax payment behavior differed significantly between customers, and a secondary consideration was the risk profiles of any subgroups or segments that could be identified.

It was a quick preliminary investigation (less than two weeks work) so I tackled the problem very simplistically to meet deadlines.

For the majority of client businesses tax payments occur quarterly or monthly, so I first summarized the data to a quarterly aggregation, for example;
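As a rough illustration, the aggregation might look like the following pandas sketch (the column names customer_id, txn_date, amount and balance are my assumptions, not the real schema):

```python
import pandas as pd

# Hypothetical input: one row per tax-office transaction per customer,
# with a running account balance. All names are illustrative.
txns = pd.read_csv("tax_transactions.csv", parse_dates=["txn_date"])
txns = txns.sort_values(["customer_id", "txn_date"])
txns["quarter"] = txns["txn_date"].dt.to_period("Q")

quarterly = (
    txns.groupby(["customer_id", "quarter"])
        .agg(
            eoq_balance=("balance", "last"),  # balance at close of quarter
            net_tax_paid=("amount", "sum"),   # net payments to (or from!) the tax man
        )
        .reset_index()
)
```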


As you can see above, each customer could have many records (actually it was a maximum of 8, one for each quarter over a two year period), each record showing the account balance at the end of the quarter and the net sum of payments made to (or from!) the tax man.

Then I created two offset copies of Tax Payments, one being the previous record (Lag) and the other being the subsequent record (Lead) like so;
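Continuing the hypothetical table above, the offsets are just per-customer shifts:

```python
# Lag = previous quarter's tax payment, Lead = the subsequent quarter's,
# shifted within each customer so values never leak across businesses.
g = quarterly.groupby("customer_id")["net_tax_paid"]
quarterly["tax_paid_lag"] = g.shift(1)
quarterly["tax_paid_lead"] = g.shift(-1)
```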


I then simply scaled the data so that everything was between 0 and 1 by using;

(X - (minimum of X)) / ((maximum of X) - (minimum of X))

Obviously, where X is one of the variables representing quarterly account balance or tax payments, and the minimum and maximum are calculated within each Customer ID.
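A minimal sketch of that per-customer min-max rescaling (again with my assumed column names):

```python
# Scale each variable to 0-1 within each customer, so businesses with very
# different dollar magnitudes become directly comparable on one axis.
def minmax(x):
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng else 0.0 * x

for col in ["eoq_balance", "net_tax_paid", "tax_paid_lag", "tax_paid_lead"]:
    quarterly[col + "_scaled"] = quarterly.groupby("customer_id")[col].transform(minmax)
```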

For example, each customer’s raw balance and tax payment values got rescaled to relative values between 0 and 1.
I did all the raw balance and tax payment variable rescaling this way so that I could later run a Pearson’s correlation and k-means clustering, and also graph the data easily on the same axis (directly comparing balance and tax payments). Some business customers had very large account balances, but small tax payments.

For example I could eventually generate a line chart showing a specific business’s relationship between balance (dotted line) and tax payments (bold red line);
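Such a chart could be sketched with matplotlib along these lines (purely illustrative; it picks an arbitrary customer from the hypothetical table):

```python
import matplotlib.pyplot as plt

# One customer's scaled balance and tax payments on the same 0-1 axis.
cust = quarterly[quarterly["customer_id"] == quarterly["customer_id"].iloc[0]]

plt.plot(cust["quarter"].astype(str), cust["eoq_balance_scaled"],
         linestyle=":", color="grey", label="Balance (scaled)")
plt.plot(cust["quarter"].astype(str), cust["net_tax_paid_scaled"],
         linewidth=2.5, color="red", label="Tax payments (scaled)")
plt.legend()
plt.show()
```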


I then ran a simple Pearson’s correlation with the variable ‘Balance’ correlated against the 3 tax payment variables (original, lag, and lead), with the correlation grouped by Customer ID. This output three correlation scores per customer: one for the original (account balance and tax payments in the same quarter), a second for the correlation between current account balance and the previous quarter’s tax payments, and a third for the current account balance and the following quarter’s tax payments.
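There is no direct “correlation with group by” in pandas, so a sketch of the equivalent (using the assumed columns from earlier):

```python
# Three Pearson correlations per customer: balance vs same-quarter,
# previous-quarter (lag) and next-quarter (lead) tax payments.
def corr_scores(g):
    return pd.Series({
        "corr_orig": g["eoq_balance_scaled"].corr(g["net_tax_paid_scaled"]),
        "corr_lag":  g["eoq_balance_scaled"].corr(g["tax_paid_lag_scaled"]),
        "corr_lead": g["eoq_balance_scaled"].corr(g["tax_paid_lead_scaled"]),
    })

corrs = quarterly.groupby("customer_id").apply(corr_scores).reset_index()
```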

My thought process was to use the highest correlation score (along with balance and tax payment amounts as described below) to build k-means clusters to segment the customer base. Hopefully the segments would reflect, amongst other things, the strongest relationship between account balance and tax payments.

I joined the correlation outputs to the data, and then I flipped/transposed and summarized the data so that each quarter became a new column for balance and tax payments, creating a very wide and summarized data set. For example;


…also including the correlation, lag, lead and original value variables in the single record per customer…
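The transpose step is essentially a pivot; a sketch, continuing the same hypothetical names:

```python
# One row per customer, with a balance column and a tax column per quarter,
# then the three per-customer correlation scores joined back on.
wide = quarterly.pivot_table(
    index="customer_id",
    columns="quarter",
    values=["eoq_balance_scaled", "net_tax_paid_scaled"],
)
wide.columns = [f"{var}_{qtr}" for var, qtr in wide.columns]
wide = wide.reset_index().merge(corrs, on="customer_id")
```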

Now I had a dataset that was a nice single record per customer, and I concentrated on representing the growth or decline in tax payments over the 2 year period. I did this quite simply by converting the raw payments into percentages (of the sum of each customer’s payments over the two years). In some cases a high proportion of the customer’s payments occurred many months ago, which represents a decline in recent quarters.
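The percentage conversion might look like this (a sketch; it pivots the raw payments and divides by each customer’s two-year total):

```python
# Each quarter's raw payment as a percentage of the customer's two-year total,
# so a fading payment pattern shows up regardless of business size.
raw = quarterly.pivot_table(index="customer_id", columns="quarter",
                            values="net_tax_paid")
pct = raw.div(raw.sum(axis=1), axis=0) * 100
pct.columns = [f"pct_paid_{qtr}" for qtr in pct.columns]
wide = wide.merge(pct.reset_index(), on="customer_id")
```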

I then built a k-means model using inputs such as (see the sketch after this list);

- the highest correlation score (of the three per customer) and a categorical encoding of the correlations (eg. ‘negative correlation’ / ‘positive correlation’, ‘lag’ / ‘lead’ etc)
- the transformed payment sums (percentages of each customer’s two-year total)
- Variables representing growth or decline in payments over time.
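A scikit-learn sketch of the clustering step under the same assumptions (the derived best_corr and corr_type columns are my illustration of “highest correlation score” and its categorical encoding, not the original recipe):

```python
from sklearn.cluster import KMeans

# Highest of the three correlation scores, plus a label saying which one won.
corr_cols = ["corr_orig", "corr_lag", "corr_lead"]
wide["best_corr"] = wide[corr_cols].max(axis=1)
wide["corr_type"] = wide[corr_cols].idxmax(axis=1)

features = pd.concat(
    [
        wide[["best_corr"]],
        pd.get_dummies(wide["corr_type"]),  # categorical encoding as 0/1 flags
        wide[[c for c in wide.columns if c.startswith("pct_paid_")]],
    ],
    axis=1,
).fillna(0)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
wide["segment"] = kmeans.fit_predict(features)
```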

The segments that were generated proved to perform very well. Many features of the client businesses that were not used in the segmentation (eg number of accounts per client, and risk propensity) could be distinguished quite clearly by segment.

When I examined the incidence of risk (failure or problems repaying a business loan) for a three month period (also with a three month gap) I found some segments had almost double the risk propensity.

The timeline: two years of transaction history for the segmentation, then a three month gap, then the three month window in which risk outcomes were observed.



There were a very small number of risk outcomes (just 204 in three months), but each of these is very high value, so any lift in risk prediction is beneficial. I hate working with such small samples, but sometimes you get given lemons…



Suppose I built five clusters; the example results summarized each cluster’s ‘% Of Client Count’, ‘% Of Total Risk’ and ‘Risk Index’.


Where ‘Risk Index’ is simply calculated as;

(‘% Of Total Risk’ - ‘% Of Client Count’) / ‘% Of Client Count’
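For example (made-up numbers, purely to show the arithmetic): a cluster holding 10% of clients but accounting for 16.79% of the total risk outcomes would score (16.79 - 10) / 10 = 0.679, i.e. roughly 68% over-represented in risk.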


So, this is showing that cluster 5 has a 67.91% higher propensity to be a bad risk than the entire base (well, in the analysis…). Conversely, cluster 2 is much less likely (-70%) to be a bad risk than the average customer.

Maybe not your typical financial risk model….





Wednesday, June 16, 2010

SNA Presentation in Melbourne (IAPA event)

The Institute of Analytics Professionals of Australia (IAPA) requested I give a generic presentation next week on social network analysis and how it can be used for activities such as customer insights, marketing (acquisition, retention, up-sell), fraud detection etc. 

See their current newsletter;
http://alerts.iapa.org.au/

or website;
http://www.iapa.org.au/

I'll make the presentation as vendor neutral and informative as possible (but obviously I can't discuss details of any previous confidential work by myself or SAS).

If you are in Melbourne on Wednesday 23rd June, then feel free to book and attend the presentation.  As with all IAPA events it is free and a great opportunity to 'social network' :) with others interested in analysis and data mining.

I hope to see you there!

Wednesday, April 14, 2010

Personal Changes Are Afoot

My blog posts and contributions to forums may have to take a back seat for a while. There are a multitude of reasons for this, a few being;

 - baby No.2 due in 5 months
 - starting a new job
    - which means lots of work finalising and handing over data mining projects at my current employer (Optus)
    - lots of new stuff to read and learn at the new employer (SAS)

My new job is going to take me away from using Clementine and SPSS software (which will be weird after using it every day for over 10 years…), although I might still be working on some data mining projects.
I’ll try to contribute a post if I'm doing really cool data analysis that I can talk about…

Thursday, March 11, 2010

Breaches of data confidentiality can be costly

In a previous post last year I mentioned a particularly nasty and blatant breach of confidentiality regarding fixed line telephony data. The update is that Optus recently won a Federal Court ruling allowing it to seek damages from Telstra;


http://www.itnews.com.au/News/168876,optus-wins-telstra-confidentiality-breach-ruling.aspx


This news seems to have slipped past the major national newspapers, which is quite surprising as it is likely to involve significant amounts of money. To be honest I’m not concerned with the consequences, but as a data miner it does interest me how data *is* used, and how it *could* be used.


As technology advances I’m certain the general public will see more examples of invasions of personal privacy and breaches of data confidentiality that enable organisations to gain the upper hand (unless or until they are caught).  Keep it honest people!