Delphi Methods and Ensemble Classifiers

Ensemble classifiers resemble the Delphi method in that they combine multiple models (or experts) to achieve better predictive performance than any single model would offer (Dalkey & Helmer, 1963; Acharya, 2019). When the base classifiers are independent or parallel, the ensemble typically implements a majority vote among them, much as Delphi seeks consensus among experts. A variety of individual classifiers can serve as base learners, including logistic regression, nearest-neighbor methods, decision trees, Bayesian analysis, and discriminant analysis. According to Dietterich (2002), ensemble classification addresses three major problems: statistical, computational, and representational. The statistical problem arises when the hypothesis space is too large for the available data, so that multiple hypotheses fit the data equally well yet only one can be chosen. The computational problem is the learning algorithm's inability to guarantee finding the best hypothesis. The representational problem arises when the hypothesis space contains no good approximation of the target function.
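The majority-vote idea can be sketched in a few lines of Python. The three votes below are hypothetical predictions standing in for trained base classifiers such as logistic regression, k-nearest neighbors, and a decision tree:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label that the most base classifiers agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical predictions from three independent base classifiers
# on a single example; the ensemble sides with the majority.
votes = ["spam", "spam", "ham"]
print(majority_vote(votes))  # -> spam
```

Each base classifier gets one vote, so no single model's error dominates the ensemble's decision, mirroring how Delphi weighs a panel rather than one expert.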

Ensemble methods include bagging, boosting, and stacking. Bagging is a parallel or independent method; boosting and stacking are sequential or dependent methods. Parallel methods are used when independence between the base classifiers is advantageous, for example to reduce error by averaging; sequential methods are used when dependence between the classifiers is advantageous, such as correcting mislabeled examples or combining weak learners into a strong one (Smolyakov, 2017).
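A minimal bagging sketch, assuming a one-dimensional toy dataset and a decision stump as the base learner: each stump trains on a bootstrap resample of the data, and the ensemble predicts by majority vote.

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a 1-D decision stump: pick the threshold with the fewest errors."""
    best = None
    for t in xs:
        for lo, hi in ((0, 1), (1, 0)):
            preds = [hi if x >= t else lo for x in xs]
            err = sum(p != y for p, y in zip(preds, ys))
            if best is None or err < best[0]:
                best = (err, t, lo, hi)
    _, t, lo, hi = best
    return lambda x: hi if x >= t else lo

def bagging(xs, ys, n_models=25, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in xs]  # bootstrap sample
        models.append(train_stump([xs[i] for i in idx],
                                  [ys[i] for i in idx]))
    return lambda x: Counter(m(x) for m in models).most_common(1)[0][0]

# Hypothetical separable data: class 0 below ~0.55, class 1 above.
xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
ys = [0, 0, 0, 1, 1, 1]
predict = bagging(xs, ys)
print([predict(x) for x in (0.2, 0.85)])
```

Because each stump sees a different resample, their individual mistakes are weakly correlated, and the vote smooths them out; boosting would instead reweight the examples each stump got wrong.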

Random forests are not heterogeneous ensembles of different classifier types, but they do produce results from multiple decision trees and aggregate those results, much like bagging (Liberman, 2017). Each tree trains on a randomly selected subset of the data and a randomly selected subset of the features. Low correlation between the trees mitigates both bias and variance errors. As with ensemble classifiers generally, and even Delphi-style decision-making, learners operating as a committee should outperform any of the individual learners.
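The double randomization that keeps the trees decorrelated can be sketched as follows; the feature names and sizes are hypothetical placeholders, and a real forest would fit a decision tree to each view:

```python
import random

def draw_training_view(n_rows, feature_names, n_features, rng):
    """One tree's view of the data: bootstrap row indices plus a
    random subset of the features."""
    rows = [rng.randrange(n_rows) for _ in range(n_rows)]  # sample rows with replacement
    feats = rng.sample(feature_names, n_features)          # sample features without replacement
    return rows, feats

rng = random.Random(42)
features = ["age", "income", "tenure", "region", "usage"]
for tree in range(3):
    rows, feats = draw_training_view(8, features, n_features=2, rng=rng)
    print(f"tree {tree}: rows={rows} features={feats}")
```

Since no two trees are likely to see the same rows and the same features, their errors stay weakly correlated, which is what makes the aggregate vote stronger than any single tree.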

References

Connolly, T., & Begg, C. (2015). Database systems: A practical approach to design, implementation, and management (6th ed.). London, UK: Pearson.

Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458-467.

Dietterich, T. G. (2000). Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems (pp. 1-15). Springer Berlin Heidelberg.

Dietterich, T. G. (2002). Ensemble Learning. In The Handbook of Brain Theory and Neural Networks, Second Edition, (M.A. Arbib, Ed.), (pp. 405-408). Cambridge, MA: The MIT Press.

Smolyakov, V. (2017). Ensemble learning to improve machine learning results. Retrieved from https://blog.statsbot.co/ensemble-learning-d1dcd548e936

Tembhurkar, M. P., Tugnayat, R. M., & Nagdive, A. S. (2014). Overview on data mining schemes to design business intelligence framework for mobile technology. International Journal of Advanced Research in Computer Science, 5(8).

Decision Making with Delphi

The Delphi method brings subject matter experts with a range of experiences together in multiple rounds of questioning to arrive at the strongest consensus possible on a topic or series of topics (Okoli & Pawlowski, 2004; Pulat, 2014). The first round, conducted by questionnaire, is typically used to generate the ideas that subsequent rounds weight and prioritize. This first round is the most qualitative of the steps; subsequent rounds are more quantitative. According to Pulat (2014), ideas are listed and prioritized by a weighted point system with no direct communication between the subject matter experts, which is meant to avoid confrontation (Dalkey & Helmer, 1963). Results, data requested by one or more experts, and any new information an expert considers potentially relevant can be shared with the whole panel (Dalkey & Helmer, 1963; Pulat, 2014).
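The weighted point system described above can be sketched as a simple aggregation. The experts, ideas, and point allocations below are hypothetical; the mechanics are just summing each idea's points across the panel and ranking:

```python
def prioritize(scores):
    """Aggregate each expert's point allocations into a ranked idea list."""
    totals = {}
    for expert_scores in scores.values():
        for idea, pts in expert_scores.items():
            totals[idea] = totals.get(idea, 0) + pts
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical round-two allocations: each expert distributes 10 points
# across the ideas generated in round one, without seeing the others' votes.
round_two = {
    "expert_a": {"automate QA": 6, "new market": 3, "retrain staff": 1},
    "expert_b": {"automate QA": 4, "new market": 5, "retrain staff": 1},
    "expert_c": {"automate QA": 5, "new market": 2, "retrain staff": 3},
}
print(prioritize(round_two))  # highest-scoring idea first
```

Because each expert allocates points independently, the ranking reflects the panel's aggregate judgment without any face-to-face confrontation.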

While Delphi begins with and retains a qualitative character, traditional forecasting relies mostly on quantitative methods, with mathematical formulations and extrapolations as its mechanical basis (Wade, 2012). Using past behavior as a predictor of future positioning, a most likely scenario is extrapolated (Wade, 2012; Wade, 2014). This confines planning to a formulaic process much like regression modeling. Both Delphi and traditional forecasting use quantitative methods; the difference is one of degree. A key question in deciding which method to use is what personalities are involved: Delphi gives the most consideration to strong personalities and potentially fragile egos by avoiding any direct confrontation or disagreement.
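The regression-style extrapolation that traditional forecasting leans on can be illustrated with an ordinary least-squares trend line; the quarterly sales figures below are hypothetical:

```python
def fit_trend(ys):
    """Ordinary least-squares line through (0, y0), (1, y1), ..."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

# Hypothetical quarterly sales history; extrapolate one period ahead.
history = [100, 110, 120, 130]
forecast = fit_trend(history)
print(forecast(4))  # -> 140.0
```

The forecast is entirely determined by past behavior, which is exactly the formulaic quality that distinguishes this approach from Delphi's expert judgment.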

References

Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458-467.

Okoli, C., & Pawlowski, S. D. (2004). The Delphi method as a research tool: An example, design considerations and applications. Information & Management, 42(1), 15-29.

Pulat, B. (2014). Lean/six sigma black belt certification workshop: Body of knowledge. Creative Insights, LLC.

Wade, W. (2012). Scenario planning: A field guide to the future. John Wiley & Sons P&T. VitalSource Bookshelf Online.