Simply Salford Blog

Diary of a Data Scientist - Inside the Mind of a Statistician

Posted by Charles Harrison on Wed, Jun 29, 2016 @ 07:00 AM

Cross-post from Diary of a Data Scientist, a first-hand account of the life of a data scientist, sharing the struggles, triumphs, and day-to-day perspective of a technical research professional. Click here to subscribe to the Diary of a Data Scientist blog.

Read More

Topics: data science, predictive modeling, statistician, diary of a data scientist

What Type of Automation Does A Data Scientist Need?

Posted by Salford Systems on Fri, Jun 10, 2016 @ 07:00 AM

Cross-post from Dan Steinberg's Blog, on data mining automation. Dan's article discusses Salford Systems' approach to modeling automation, which is to assist the analyst as much as possible by anticipating the routine stages of model building. The goal is to speed up the decision-making that goes into building a predictive model and to help avoid missing useful test measures and diagnostics. The goal is NOT to replace the data scientist, but to achieve fast and accurate models!

The last thing most data scientists want is a machine that replaces them! The idea that we can build a machine to conduct sophisticated analyses from start to finish has been around for some time now, and new attempts surface every few years. The fully automated data scientist will be attractive to some organizations with no analytics experience whatsoever, but for more sophisticated organizations the promise of such automation is bound to be met with skepticism and worry. Can you imagine visiting a machine-learning-driven medical service, accepting a diagnosis and prescriptions, and even undergoing surgery with no human oversight involved? Today, even though pilots say that airplanes can be flown entirely by computer, few of us are ready to take a pilotless airplane ride, even as the driverless car appears to be making impressive headway.

In our opinion, automation in predictive analytics is not just a luxury or a future hope. It is an essential component of our everyday modeling practice. The automation we develop for ourselves works its way into every release of our Salford Predictive Modeler. We look at this automation as a way to assist the human data scientist by doing what automation has always done best: relieving the data scientist of tedious, repetitive, and fairly simple tasks, such as rerunning a cross-validation many times using different random seeds and summarizing the results so that the learning from the experiment is immediately visible to the analyst. Today, some of our automated pipelines do indeed begin from a rather early stage in data exploration and drive all the way through to the delivery of a candidate deployable predictive model, encompassing on the order of 15 stages of data processing, remodeling, and automated decision-making. We view this as a way to quickly assemble a collection of results that an experienced data scientist can review, critique, modify, and rerun, on the way to arriving at a predictive model (or models) that is vetted by humans and can be trusted.
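To make the cross-validation example concrete, here is a minimal sketch in Python with scikit-learn (not SPM's own automation); the dataset and model are stand-ins chosen only to show the pattern of rerunning cross-validation under many random seeds and summarizing the spread for the analyst.

```python
# Hedged sketch: rerun 5-fold cross-validation under 20 different random seeds
# and summarize the spread of scores, so the analyst sees at a glance how much
# the estimate moves with the fold assignment. Dataset and model are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

scores = []
for seed in range(20):                                   # 20 different fold shuffles
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    model = GradientBoostingClassifier(random_state=seed)
    scores.append(cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean())

scores = np.array(scores)
print(f"ROC AUC across seeds: mean={scores.mean():.3f}, std={scores.std():.3f}, "
      f"min={scores.min():.3f}, max={scores.max():.3f}")
```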

To a large extent, running even a single Random Forests model can be viewed as predictive modeling automation. The user has no need to worry about the issues that plague legacy statisticians, such as missing values, transformations of predictors, possible interaction effects, outliers in the predictors, or multicollinearity. However, without some human oversight there is a genuine risk of what one of my most experienced colleagues refers to as “blunders” that can cause enormous pain if not caught before deployment or before critical decisions are taken.

Data science veterans know well of predictive models that went bad due to a mismatch between the training data and the data to which the models were to be applied. Just today I discussed this issue with a client confronting such a mismatch: the medical training data was gathered in different regions of the world than the regions in which the model is intended to be used. We know that even how the data is collected will differ across parts of the world, and data errors will not be rare or innocuous. The point of the exercise is to save lives, and we cannot accomplish our mission with routine modeling alone.

In developing an automated system to predict sales of products promoted in a network of large grocery stores, we found products that appeared to violate the “law of demand” (higher prices cause lower units sold, everything else being equal). Clearly, our system did not recommend increasing prices during special promotions. If such problems were rare exceptions, we could argue that full-on automation of predictive modeling would be largely safe and effective, and that a few simple rules might help us catch the odd problem cases. In our experience of more than two decades of predictive modeling, an unexpected problem in some part of the process leading from data acquisition to the final deployed model is the rule, not the exception.
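As an aside on why tree ensembles spare the analyst the transformation question in particular, the sketch below (scikit-learn, not Salford's Random Forests engine) fits the same forest on raw and on log-transformed predictors; because splits depend only on the ordering of values, the cross-validated accuracy should be essentially unchanged.

```python
# Hedged illustration: a random forest is insensitive to monotone transformations
# of its predictors, so no feature scaling or transformation step is needed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0)

acc_raw = cross_val_score(rf, X, y, cv=5).mean()
acc_log = cross_val_score(rf, np.log1p(X), y, cv=5).mean()   # monotone transform

print(f"Accuracy on raw features:             {acc_raw:.3f}")
print(f"Accuracy on log-transformed features: {acc_log:.3f}")
```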

By no means am I arguing against a warm embrace of automation in data science and predictive modeling. We have been promoting such automation since we first released a commercial version of the CART decision tree in collaboration with Leo Breiman and his coauthors. (This was before many of today’s data scientists were even born.) We have been building progressively more automation into our SPM product and into the systems we have built for our clients over the years and we will continue to do so. One of our systems retrained itself on new data every six hours, spit out millions of predictions per day, and operated with no downtime for three years before it was retired in favor of more modern technology. The automation we are trying to build is a set of tools that allow data scientists to spend more time thinking about the problems they are trying to solve, to recognize possible problems that can impede their progress or damage the generalization power of their models, and to arrive at the needed results far faster than was ever possible, even a few years ago. However, at least for the present, we see the data scientist as a mandatory participant in the process and our job is to assist them.

Check out Dan Steinberg's blog for more on the Salford Predictive Modeler®, data mining, and predictive analytics.



Read More

Topics: SPM, CART, data mining, data science, predictive modeling, Dan Steinberg, Leo Breiman, Salford Predictive Modeler

Predicting Customer Churn with Gradient Boosting

Posted by Salford Systems on Fri, May 6, 2016 @ 07:00 AM

Customer churn presents a particularly vexing problem for businesses; every company loses clients or customers over time. It's no wonder that companies are pouring money and time into this issue; we've all heard that it's less costly to retain a customer than to attract a new one. Let's take the wireless telecommunications industry as an example. In 2003, the wireless telecom industry saw 20-40% of customers leave their provider in a given year. As once-explosive subscriber growth rates slowed, retaining existing customers became increasingly important to a company's overall profitability. Currently, annual churn rates for telecommunications companies range from 10% to 67%. If the customers who are likely to churn can be identified, the company can target them with retention campaigns, giving them an incentive to stay and preventing loss of revenue.
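As a rough illustration of how such a model is typically set up, here is a hedged sketch using scikit-learn's gradient boosting rather than TreeNet, with a hypothetical churn.csv file containing numeric predictors and a binary churned column; the file name and columns are assumptions made for the example.

```python
# Hedged sketch: score customers by churn risk with a gradient boosting model.
# "churn.csv", the "churned" column, and all-numeric predictors are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("churn.csv")                     # hypothetical dataset
X = df.drop(columns=["churned"])                  # predictors (assumed numeric)
y = df["churned"]                                 # 1 = left, 0 = stayed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05,
                                   max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Rank customers by predicted churn probability so a retention campaign
# can target the riskiest segment first.
risk = pd.Series(model.predict_proba(X_test)[:, 1], index=X_test.index)
print("Test ROC AUC:", roc_auc_score(y_test, risk))
print(risk.sort_values(ascending=False).head(10))
```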

Read More

Topics: TreeNet, stochastic gradient boosting, gradient boosting machine, customer churn, gradient boosting, Customer attrition

How to Interpret Model Performance with Cost Functions

Posted by Salford Systems on Fri, Apr 15, 2016 @ 07:00 AM

Cost functions are used to evaluate model quality, as they are directly related to the performance of machine learning and predictive analytics models. The problem you are trying to solve should dictate which cost function you use to analyze your model. In this ten-part video series, we cover the importance of cost functions, how they are used, and how they relate to model performance.
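As a quick taste of why the choice matters, the sketch below (our own toy numbers, not taken from the video series) scores one set of predicted probabilities under three of the cost functions covered in the series.

```python
# Hedged example: the same predictions evaluated under different cost functions.
import numpy as np
from sklearn.metrics import log_loss, mean_squared_error, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                      # toy labels
y_prob = np.array([0.1, 0.4, 0.8, 0.65, 0.9, 0.3, 0.55, 0.2])    # predicted P(y=1)

print("Negative log-likelihood (log-loss):   ", log_loss(y_true, y_prob))
print("Least squares deviation (Brier score):", mean_squared_error(y_true, y_prob))
print("Area under the ROC curve:             ", roc_auc_score(y_true, y_prob))
```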

Read More

Topics: cost functions, logistic function, binary classification, log-likelihood, recall, least squares deviation, huber estimator, precision, ROC curve, gain and lift charts, multinomial classification, expected cost

The Shape of the Trees in Gradient Boosting Machines

Posted by Salford Systems on Fri, Mar 25, 2016 @ 01:09 PM

Our CEO and founder, Dr. Dan Steinberg, recently wrote about gradient boosting machines. Gradient boosting machines are a powerful machine learning technique and have been deployed with great success over the years in Kaggle competitions.
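For readers who want to see what "the shape of the trees" refers to in practice, here is a hedged sketch in scikit-learn (not TreeNet): the depth of each small tree in the boosted ensemble is the main control on its shape and on the order of interactions the model can capture, and the loop below simply compares a few depths by cross-validation on a synthetic dataset.

```python
# Hedged sketch: vary the depth (shape) of the trees in a gradient boosting
# machine and compare cross-validated performance on a synthetic problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

for depth in (1, 2, 4, 6):          # stumps vs. progressively bushier trees
    gbm = GradientBoostingClassifier(max_depth=depth, n_estimators=300,
                                     learning_rate=0.05, random_state=1)
    score = cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean()
    print(f"max_depth={depth}: cross-validated ROC AUC = {score:.3f}")
```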

Read More

Topics: TreeNet, stochastic gradient boosting, machine learning, gradient boosting machine, Jerome Friedman, gradient boosting, gradient boosting machine learning

Salford Systems' CART Featured in New Predictive Analytics Book

Posted by Salford Systems on Wed, Mar 9, 2016 @ 09:03 AM

Eric Siegel’s Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die is a nontechnical overview of modern analytics with detailed discussion of how machine learning is being deployed across all industries and in all major corporations. Eric is a hugely entertaining writer and brings with him the expertise you would expect of a Columbia University-trained Ph.D. Geoffrey Moore writes that the book is “deeply informative” and Tom Peters calls it “The most readable ‘big data’ book I’ve come across. By far”.

Read More

Topics: CART, classification, predictive modeling, classification trees, decision trees, regression trees, predictive analytics, decision tree

Random Forests: The Machine Learning Algorithm

Posted by Salford Systems on Thu, Mar 3, 2016 @ 10:56 AM

We recently came across the article, "Random Forest---the go-to machine learning algorithm" from TechWorld Australia.

Read More

Topics: RandomForests, Random Forest, Random Forests, bootstrap sampling, classification, Regression, classification trees, machine learning, regression trees

Machine Learning [Visualization]

Posted by Salford Systems on Fri, Feb 26, 2016 @ 08:51 AM

We recently came across a neat interactive visual introduction to machine learning. It's an excellent explanation of how decision trees work, using data about houses to distinguish homes in New York from homes in San Francisco, for technical and non-technical audiences alike.
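To give a flavor of the same idea in code, here is a minimal sketch with invented numbers (the visualization itself uses real listings): a tiny decision tree separates San Francisco homes from New York homes using elevation and price per square foot, features of the kind the visualization works with.

```python
# Hedged toy example: the data points below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# [elevation in feet, price per square foot]; 0 = New York, 1 = San Francisco
X = [[10, 1500], [15, 1300], [5, 2000], [8, 1700],      # NY: low elevation
     [120, 1100], [200, 950], [80, 1400], [150, 1250]]  # SF: higher elevation
y = [0, 0, 0, 0, 1, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["elevation", "price_per_sqft"]))
```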

Read More

Topics: overfitting, machine learning, decision trees, decision tree

Webinar Recap: 3 Ways to Improve Regression, Part 2

Posted by Kaitlin Onthank on Thu, Jan 28, 2016 @ 10:22 AM

Did you miss our webinar yesterday? It's never too late to register to get the recording.

Read More

Topics: stochastic gradient boosting, Nonlinear Regression, Regression Splines, Regression

January Tech Support Cases

Posted by Salford Systems on Tue, Jan 26, 2016 @ 10:00 AM

1. Model Translation to SAS, C, Java, PMML

Read More

Topics: SPM, VM, Mac