Demystifying Black Box Models with Neural Networks for Accuracy and Interpretability of Supervised Learning
Available Online February 2018.
- https://doi.org/10.2991/pecteam-18.2018.22
- Neural Networks, Machine Learning, Black box Models, Supervised Learning
- Intensive data modelling on large datasets, once limited to supercomputers and workstations, can now be performed on desktop computers with scripting languages such as R and Python. Analytics, a field whose popularity owes much to this access to high computational capability, enables practitioners to try out different mathematical algorithms and obtain highly precise results simply by calling pre-written libraries. In the case of black box models such as Neural Networks and Support Vector Machines, however, this precision comes at the cost of interpretability. The need for interpretability is felt most acutely when building classification models, where understanding how a Neural Network arrives at its solution is as important as the precision of its predictions. The Path Break Down Approach proposed in this paper helps demystify how a Neural Network model solves a classification and prediction problem, demonstrated on the San Francisco crime dataset.
- Open Access
- This is an open access article distributed under the CC BY-NC license.
Cite this article
TY  - CONF
AU  - Jagadeesh Prabhakaran
PY  - 2018/02
DA  - 2018/02
TI  - Demystifying Black Box Models with Neural Networks for Accuracy and Interpretability of Supervised Learning
BT  - International Conference for Phoenixes on Emerging Current Trends in Engineering and Management (PECTEAM 2018)
PB  - Atlantis Press
SP  - 126
EP  - 130
SN  - 2352-5401
UR  - https://doi.org/10.2991/pecteam-18.2018.22
DO  - https://doi.org/10.2991/pecteam-18.2018.22
ID  - Prabhakaran2018/02
ER  -
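The abstract does not spell out how the Path Break Down Approach works. As a rough illustration of the general idea of attributing a network's output to individual input-to-hidden-to-output paths, the following is a minimal sketch; the tiny network, its weights, and the linear-activation assumption (which makes the decomposition exact) are hypothetical and not taken from the paper:

```python
# Illustrative sketch only: a 2-2-1 feed-forward network with linear
# hidden units. With linear activations, the output decomposes exactly
# into one contribution per input->hidden->output path, making the
# "black box" arithmetic fully traceable. All weights are made up.

def forward(x, w1, w2):
    """Direct forward pass: hidden = x . w1, output = hidden . w2."""
    hidden = [sum(x[i] * w1[i][j] for i in range(len(x)))
              for j in range(len(w2))]
    return sum(hidden[j] * w2[j] for j in range(len(w2)))

def path_contributions(x, w1, w2):
    """Contribution of each (input i -> hidden j -> output) path."""
    return {(i, j): x[i] * w1[i][j] * w2[j]
            for i in range(len(x))
            for j in range(len(w2))}

x = [1.0, 2.0]                    # hypothetical input features
w1 = [[0.5, -0.3], [0.2, 0.4]]    # input-to-hidden weights
w2 = [0.7, -0.1]                  # hidden-to-output weights

direct = forward(x, w1, w2)
paths = path_contributions(x, w1, w2)
# The path contributions sum exactly to the network's output (0.58),
# so each path's share of the prediction can be inspected directly.
assert abs(direct - sum(paths.values())) < 1e-9
```

With nonlinear activations the decomposition is no longer exact, which is precisely why interpretability techniques for real Neural Networks require more care than this linear toy case.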