Comparative Performance Analysis of Boosting Ensemble Learning Models for Optimizing Marketing Promotion Strategy Classification
This study evaluates the performance of four boosting algorithms in ensemble learning, namely AdaBoost, Gradient Boosting, XGBoost, and CatBoost, for optimizing the classification of marketing promotion strategies. The rise of digitalization has driven the use of machine learning to better understand consumer behavior and enhance the effectiveness of promotional campaigns. Using the Marketing Promotion Campaign Uplift Modeling dataset from Kaggle, this study examines the capability of each algorithm to handle complex and imbalanced customer data. The evaluation metrics include accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). Results indicate that XGBoost excels in precision, while Gradient Boosting achieves the highest AUC, demonstrating a superior ability to distinguish between positive and negative classes. CatBoost provides stable performance on categorical data, whereas AdaBoost shows strength in recall but is prone to false-positive predictions. Although all four algorithms perform well, the main challenge lies in addressing class imbalance. This study offers marketing practitioners guidance in selecting the most suitable algorithm and highlights the importance of data-balancing strategies for improving predictive accuracy in data-driven marketing.
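As an illustration of the comparative evaluation described above, the sketch below trains the four boosting classifiers and reports the same metrics (accuracy, precision, recall, F1-score, and AUC) on a held-out test split. This is a minimal sketch rather than the authors' exact pipeline: the CSV file name, the "conversion" target column, and the one-hot encoding of categorical features are assumptions about the Kaggle dataset; in practice CatBoost can instead be passed the categorical columns directly via its cat_features parameter.

# Illustrative sketch only; dataset file name and target column are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

df = pd.read_csv("marketing_campaign.csv")            # assumed file name
X = pd.get_dummies(df.drop(columns=["conversion"]))   # assumed target column
y = df["conversion"]

# Stratified split to preserve the imbalanced class ratio in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "CatBoost": CatBoostClassifier(verbose=0, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: "
          f"acc={accuracy_score(y_test, pred):.3f} "
          f"prec={precision_score(y_test, pred):.3f} "
          f"rec={recall_score(y_test, pred):.3f} "
          f"f1={f1_score(y_test, pred):.3f} "
          f"auc={roc_auc_score(y_test, proba):.3f}")

Because the dataset is imbalanced, accuracy alone can be misleading; the class-sensitive metrics above, together with resampling or class-weighting during training, give a fairer basis for comparing the four models.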