Cost-sensitive classification with deep neural networks
Date
2020
Publisher
Tartu Ülikool
Abstract
Traditional classification focuses on maximizing the accuracy of predictions. This
approach works well if all types of errors have the same cost. Unfortunately, in many
real-world applications the misclassification costs differ, and some errors may be much
worse than others. In such cases, it is useful to take the costs into account and build a
classifier that minimizes the total cost of all predictions.
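As a minimal illustration (not taken from the thesis), assume a cost matrix C where C[i, j] is the cost of predicting class j when the true class is i; the total cost of a set of predictions is then simply the sum of the corresponding matrix entries:

```python
import numpy as np

# Hypothetical 2-class cost matrix: rows are the true class, columns the predicted class.
# Here a false negative (true class 1 predicted as 0) costs ten times more than a false positive.
C = np.array([[0.0, 1.0],
              [10.0, 0.0]])

def total_cost(y_true, y_pred, cost_matrix):
    """Sum the misclassification cost of every prediction."""
    return cost_matrix[y_true, y_pred].sum()

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 0, 1, 1, 1])
print(total_cost(y_true, y_pred, C))  # 11.0: one false negative (10.0) plus one false positive (1.0)
```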
Until now, cost-sensitive learning has received very little research attention in the
context of balanced datasets. Mostly, it has been treated as one of the measures for
solving the class imbalance problem. Since the class imbalance problem is closely related
to cost-sensitive learning, we can largely rely on the research done on class imbalance.
The purpose of this thesis is to experiment with how successful different cost-sensitive
techniques are at minimizing the total cost compared to an ordinary neural network.
The techniques include making the neural network cost-sensitive based on its output
probabilities. Additionally, oversampling, undersampling, and loss functions that take
class weights into account are used. The experiments are performed on three datasets of
varying difficulty and involve both binary and multiclass classification tasks. Three
different cost matrix types are also considered. The results show that all the techniques
reduce the total prediction cost compared to an ordinary neural network. The best results
were achieved using oversampling and cost-sensitive output modifications, in both the
binary and the multiclass case.
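The output-probability-based technique mentioned above corresponds to the standard expected-cost decision rule; the sketch below is an illustrative NumPy implementation under that assumption, not necessarily the exact formulation used in the thesis. Instead of picking the most probable class, the class with the lowest expected misclassification cost is chosen:

```python
import numpy as np

def cost_sensitive_predict(probs, cost_matrix):
    """Pick, for every sample, the class with the lowest expected misclassification cost.

    probs:       (n_samples, n_classes) class probabilities, e.g. a network's softmax output.
    cost_matrix: (n_classes, n_classes) where cost_matrix[i, j] is the cost of
                 predicting class j when the true class is i.
    """
    expected_costs = probs @ cost_matrix  # expected cost of each candidate prediction
    return expected_costs.argmin(axis=1)  # cost-sensitive decision instead of argmax(probs)

# The model is 70% sure of class 0, but missing class 1 is ten times more expensive.
probs = np.array([[0.7, 0.3]])
C = np.array([[0.0, 1.0],
              [10.0, 0.0]])
print(cost_sensitive_predict(probs, C))  # [1] -- expected cost 0.7 for class 1 vs 3.0 for class 0
```

With a uniform 0/1 cost matrix this rule reduces to ordinary argmax prediction, so it only changes decisions when some errors are more expensive than others.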
Keywords
neural networks, cost-sensitive learning, binary classification, multiclass classification