Fixed Voters Clustering to Determine the Level of Beginner Voters using Data Mining Techniques
A data mining clustering technique, the K-Means method, is used to classify the level of beginner voters. The resulting fixed-voter clusters support stakeholder decision making by providing information on beginner voters in each district and sub-district. Three distance measures are compared, namely Euclidean, Manhattan, and Minkowski Distance, and the Mean Square Error (MSE) approach is used to measure the error of each distance calculation. The results show that the lowest error occurs in the 3-cluster Minkowski Distance model, with an error rate of 11%, while the highest error occurs in the 5-cluster Manhattan Distance model, at 38%.
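The sketch below is not the authors' implementation; it is a minimal illustration of how K-Means clustering under the three distance measures named above (Euclidean, Manhattan, and Minkowski) could be run and scored with a Mean Square Error figure per model. The data array, the cluster counts, and the Minkowski order p = 3 are hypothetical placeholders for the per-district beginner-voter records, and the percentage error rates reported in the abstract are not reproduced here.

```python
# Minimal sketch (assumed, not the authors' code): K-Means with a configurable
# distance measure and a raw MSE score per (metric, k) model.
import numpy as np

def distance(point, centroids, metric="euclidean", p=3):
    """Distance from one point to every centroid under the chosen metric."""
    if metric == "euclidean":
        return np.sqrt(((point - centroids) ** 2).sum(axis=1))
    if metric == "manhattan":
        return np.abs(point - centroids).sum(axis=1)
    if metric == "minkowski":
        return (np.abs(point - centroids) ** p).sum(axis=1) ** (1.0 / p)
    raise ValueError(f"unknown metric: {metric}")

def kmeans(X, k, metric="euclidean", iters=100, seed=0):
    """Plain Lloyd-style K-Means; only the assignment step uses `metric`."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid under the chosen metric.
        labels = np.array([np.argmin(distance(x, centroids, metric)) for x in X])
        # Recompute centroids; keep the old one if a cluster emptied out.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

def mse(X, labels, centroids):
    """Mean squared deviation of each point from its assigned centroid."""
    return float(np.mean(((X - centroids[labels]) ** 2).sum(axis=1)))

if __name__ == "__main__":
    # Hypothetical two-feature records standing in for district/sub-district data.
    X = np.random.default_rng(1).uniform(0.0, 1.0, size=(60, 2))
    for metric in ("euclidean", "manhattan", "minkowski"):
        for k in (3, 5):
            labels, centroids = kmeans(X, k, metric)
            print(f"{metric:>10}  k={k}  MSE={mse(X, labels, centroids):.4f}")
```

Under this setup, the model with the smallest MSE would be the preferred distance/cluster-count combination, which mirrors the comparison of 3-cluster and 5-cluster models described in the abstract.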