Dimensionality and data reduction in telecom churn prediction
Abstract
Purpose
Churn prediction is an important task for successful customer relationship management. In general, churn prediction can be achieved by many data mining techniques. However, dimensionality reduction (or feature selection) and data reduction are two important preprocessing steps before mining: feature selection aims to filter out irrelevant features, while data reduction aims to filter out noisy data samples. The purpose of this paper is to examine how these two preprocessing tasks should be performed so that the mining algorithm produces good-quality prediction results.
Design/methodology/approach
Using a real telecom customer churn data set, seven different preprocessed data sets are produced by performing feature selection and data reduction in different orders (i.e., with different priorities). Each preprocessed data set is then used to train an artificial neural network as the churn prediction model.
Findings
The results show that performing data reduction first by self-organizing maps (SOM) and feature selection second by principal component analysis (PCA) allows the prediction model to provide the highest prediction accuracy. In addition, this ordering makes learning more efficient, since 66 percent of the original features and 62 percent of the original data samples are removed, respectively.
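To make the reported ordering concrete, the sketch below illustrates a pipeline of data reduction by SOM, followed by feature extraction by PCA, followed by an ANN classifier. The abstract does not specify the SOM filtering rule, grid size, network architecture, or thresholds, so everything here (the MiniSom library, the quantization-error cut-off, the 10x10 grid, the MLP settings, and the keep_ratio/var_ratio values chosen only to mirror the quoted reduction percentages) is an illustrative assumption rather than the authors' actual procedure.

import numpy as np
from minisom import MiniSom
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def som_pca_ann(X_train, y_train, X_test, keep_ratio=0.38, var_ratio=0.34):
    # keep_ratio ~ fraction of samples kept (62% removed), var_ratio ~ fraction
    # of features kept (66% removed); both are assumptions mirroring the findings.
    scaler = StandardScaler().fit(X_train)
    Xs = scaler.transform(X_train)

    # Step 1: data reduction with a SOM.
    som = MiniSom(10, 10, Xs.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=42)
    som.train_random(Xs, 1000)
    weights = som.get_weights()
    # Per-sample quantization error: distance to the best-matching unit.
    q_err = np.array([np.linalg.norm(x - weights[som.winner(x)]) for x in Xs])
    # Keep the samples the SOM represents best (an assumed noise filter).
    keep = q_err <= np.quantile(q_err, keep_ratio)
    X_red, y_red = Xs[keep], np.asarray(y_train)[keep]

    # Step 2: feature reduction with PCA.
    n_components = max(1, int(round(Xs.shape[1] * var_ratio)))
    pca = PCA(n_components=n_components).fit(X_red)
    X_red = pca.transform(X_red)

    # Step 3: train the ANN churn predictor on the reduced data.
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=42)
    ann.fit(X_red, y_red)

    # Apply the same scaling and PCA projection to the test data.
    return ann.predict(pca.transform(scaler.transform(X_test)))

Reversing steps 1 and 2, or omitting one of them, yields the other preprocessing orders compared in the paper.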
Originality/value
The contribution of this paper is to identify the better order in which to perform these two important data preprocessing steps for telecom churn prediction.
Acknowledgements
This work was supported in part by the National Science Council of Taiwan under Grant No. NSC 99-2410-H-008-034-.
Citation
Lin, W.-C., Tsai, C.-F. and Ke, S.-W. (2014), "Dimensionality and data reduction in telecom churn prediction", Kybernetes, Vol. 43 No. 5, pp. 737-749. https://doi.org/10.1108/K-03-2013-0045
Publisher
Emerald Group Publishing Limited
Copyright © 2014, Emerald Group Publishing Limited