Analysing Forensic Speaker Verification by Utilizing Artificial Neural Network
- DOI
- 10.2991/assehr.k.211226.026
- Keywords
- Acoustic Features; Artificial Neural Networks; Forensic Linguistics; Formant Frequency
- Abstract
In this paper, we describe the use of an Artificial Neural Network (ANN) to compute acoustic features for forensic speaker verification. The computation uses two datasets derived from speech recordings of a simulated human-trafficking crime: Forensic Evidence Data (FED) and Comparative Evidence Data (CED). In both datasets, the sound is segmented and the acoustic features (formant frequencies F1, F2, F3, and F4) are extracted. The feature values are fed to the ANN to classify the target vowel sounds /a/, /i/, and /u/. The results are interpreted as forensic evidence bearing on the sound data in the questioned recording. With a classification rate of more than 80%, this method merits deeper study so that it can be developed and applied to the evaluation of recorded sound evidence in legal proceedings.
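The abstract does not specify the authors' toolchain, so the following is only a minimal sketch of the described pipeline under stated assumptions: formant values F1–F4 are assumed to be already extracted per vowel segment (e.g., with Praat), the formant values shown are illustrative placeholders rather than the paper's data, and scikit-learn's MLPClassifier stands in for the authors' ANN.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Assumptions (not specified in the paper): formants F1-F4 have already
# been extracted per vowel segment, and a small feedforward network
# stands in for the authors' ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Comparative Evidence Data (CED): one row per vowel segment,
# columns are formant frequencies F1-F4 in Hz (illustrative values only).
ced_features = np.array([
    [730, 1090, 2440, 3400],   # /a/
    [270, 2290, 3010, 3600],   # /i/
    [300,  870, 2240, 3300],   # /u/
    # ... further labelled segments from the comparative recording
])
ced_labels = np.array(["a", "i", "u"])

# Train the ANN to map formant vectors to the target vowels /a/, /i/, /u/.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(ced_features, ced_labels)

# Forensic Evidence Data (FED): formant vectors from the questioned
# recording, classified with the trained network.
fed_features = np.array([
    [710, 1100, 2450, 3380],
    [290, 2250, 2980, 3590],
])
print(model.predict(fed_features))              # predicted vowel per segment
print(model.predict_proba(fed_features).round(2))  # per-class probabilities
```

In such a setup, the agreement rate between the predicted vowels and the expected classifications is the kind of figure the abstract summarises as a rate above 80%.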
- Copyright
- © 2021 The Authors. Published by Atlantis Press SARL.
- Open Access
- This is an open access article under the CC BY-NC license.
- Cite this article
TY  - CONF
AU  - Susanto Susanto
AU  - Deri Sis Nanda
PY  - 2021
DA  - 2021/12/27
TI  - Analysing Forensic Speaker Verification by Utilizing Artificial Neural Network
BT  - Proceedings of the International Congress of Indonesian Linguistics Society (KIMLI 2021)
PB  - Atlantis Press
SP  - 128
EP  - 132
SN  - 2352-5398
UR  - https://doi.org/10.2991/assehr.k.211226.026
DO  - 10.2991/assehr.k.211226.026
ID  - Susanto2021
ER  -