November, 2020
Accuracy Estimation for an Incrementally Learning Cooperative Inventory Assistant Robot
Christian Limberg, Heiko Wersing, Helge Ritter
International Conference on Neural Information Processing (ICONIP)
Abstract: Interactive teaching by a human can be applied to extend the knowledge of a service robot according to novel task demands. This is particularly attractive when it is inefficient or infeasible to pre-train all relevant object knowledge beforehand. As in a typical teacher-student situation, it is then vital to estimate the robot's learning progress in order to judge its competence in carrying out the desired task. While observing robot task success and failure is a straightforward option, there are more efficient alternatives. In this contribution we investigate the application of a recent semi-supervised, confidence-based approach to accuracy estimation to incremental object learning for an inventory assistant robot. We evaluate the approach and demonstrate its applicability in a slightly simplified but realistic setting. We show that the Configram Estimation Model (CGEM) outperforms standard approaches to accuracy estimation, such as cross-validation and interleaved test/train error, in active learning scenarios, thus minimizing human training effort.
October, 2020
Prototype-Based Online Learning on Homogeneously Labeled Streaming Data
Christian Limberg, Jan Philip Göpfert, Heiko Wersing, Helge Ritter
International Conference on Artificial Neural Networks (ICANN)
Abstract: Algorithms in machine learning commonly require training data to be independent and identically distributed. This assumption is not always valid, e.g., in online learning, when data becomes available in homogeneously labeled blocks, which can severely impede instance-based learning algorithms in particular. In this work, we analyze and visualize this issue, and we propose and evaluate strategies for Learning Vector Quantization to compensate for homogeneously labeled blocks. We achieve considerably improved results in this difficult setting.
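To illustrate the instance-based learner at the core of this work: a basic LVQ1 step attracts the nearest prototype when its label matches the incoming sample and repels it otherwise. This is a minimal sketch of the standard LVQ1 rule only, not of the paper's compensation strategies; all names are illustrative.

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, x, y, lr=0.1):
    """Basic LVQ1 step: move the winning prototype toward the sample if the
    labels match, away from it otherwise. Modifies prototypes in place."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    sign = 1.0 if proto_labels[winner] == y else -1.0
    prototypes[winner] += sign * lr * (x - prototypes[winner])
    return prototypes

# On a homogeneously labeled block, the same-label prototypes receive many
# consecutive attracting updates while the others are only ever repelled.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
lvq1_update(protos, labels, np.array([0.2, 0.0]), 0)  # attracts prototype 0
```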
August, 2020
Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models
Christian Limberg, Heiko Wersing, Helge Ritter
MDPI Machine Learning and Knowledge Extraction (MAKE)
Abstract: For incremental machine-learning applications it is often important to robustly estimate the system accuracy during training, especially if humans perform the supervised teaching. Cross-validation and interleaved test/train error are the standard supervised approaches here. We propose a novel semi-supervised accuracy estimation approach that clearly outperforms both. We introduce the Configram Estimation (CGEM) approach to predict the accuracy of any classifier that delivers confidences. By calculating classification confidences for unseen samples, it is possible to train an offline regression model capable of predicting the classifier's accuracy on novel data in a semi-supervised fashion. We evaluate our method with several diverse classifiers on analytical and real-world benchmark data sets for both incremental and active learning. The results show that our novel method improves accuracy estimation over standard methods and requires less supervised training data after deployment of the model. We demonstrate the application of our approach to a challenging robot object recognition task, where the human teacher can use our method to judge sufficient training.
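The core idea can be sketched roughly as follows (a minimal illustration, not the authors' implementation; all names and the simulated data are assumptions): histogram the classifier's confidences on a batch of samples into a "configram" feature vector, then train an offline regressor mapping configrams to measured accuracies. At deployment time, accuracy is estimated from confidences alone, without labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def configram(confidences, n_bins=10):
    """Normalized histogram of classifier confidences over [0, 1]."""
    hist, _ = np.histogram(confidences, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(confidences), 1)

# Offline phase: pairs of (confidences on a batch, measured accuracy of that batch).
# Here both are simulated; in practice they come from held-out labeled evaluations.
rng = np.random.default_rng(0)
X_train, y_train = [], []
for _ in range(200):
    acc = rng.uniform(0.5, 1.0)                        # simulated true accuracy
    conf = np.clip(rng.normal(acc, 0.1, 50), 0.0, 1.0)  # confidences track accuracy
    X_train.append(configram(conf))
    y_train.append(acc)
reg = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Online phase: estimate accuracy from unlabeled confidences (semi-supervised).
new_conf = np.clip(rng.normal(0.8, 0.1, 50), 0.0, 1.0)
est = reg.predict([configram(new_conf)])[0]
```

The regressor choice is arbitrary here; any model that maps histogram features to a scalar would serve.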
September, 2019
Active Learning for Image Recognition using a Visualization-Based User Interface
Christian Limberg, Kathrin Krieger, Heiko Wersing, Helge Ritter
International Conference on Artificial Neural Networks (ICANN)
Abstract: This paper introduces a novel approach for querying samples to be labeled in active learning for image recognition. The user is able to efficiently label images with a visualization for training a classifier. This visualization is achieved by using dimension reduction techniques to create a 2D feature embedding from high-dimensional features. This is made possible by a querying strategy specifically designed for the visualization, seeking optimized bounding-box views for subsequent labeling. The approach is implemented in a web-based prototype. It is compared in-depth to other active learning querying strategies within a user study we conducted with 31 participants on a challenging data set. While using our approach, the participants could train a more accurate classifier than with the other approaches. Additionally, we demonstrate that due to the visualization, the number of labeled samples increases and also the label quality improves.
October, 2018
Improving Active Learning by Avoiding Ambiguous Samples
Christian Limberg, Heiko Wersing, Helge Ritter
International Conference on Artificial Neural Networks (ICANN)
Abstract: If label information in a classification task is expensive, it can be beneficial to use active learning to have a human label the most informative samples. However, there can be samples that are meaningless to the human or were recorded incorrectly. If these samples lie near the classifier's decision boundary, they are queried repeatedly for labeling. This is inefficient for training because the human cannot label these samples correctly, and it may lower human acceptance. We introduce an approach that compensates for ambiguous samples by excluding clustered samples from labeling. We compare this approach to other state-of-the-art methods and further show that it improves accuracy in active learning while reducing the number of ambiguous samples queried during training.
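One simple way to realize such an exclusion, sketched here under our own assumptions rather than as the paper's exact method: keep the samples a human previously rejected as unlabelable, and skip query candidates that fall within a radius of any of them, so the learner stops re-querying the same ambiguous cluster.

```python
import numpy as np

def query_next(pool, uncertainties, rejected, radius=0.5):
    """Return the index of the most uncertain pool sample that is not near a
    previously rejected (ambiguous) sample. `rejected` holds feature vectors
    the human could not label."""
    order = np.argsort(uncertainties)[::-1]  # most uncertain first
    for i in order:
        if len(rejected) == 0:
            return i
        if np.linalg.norm(rejected - pool[i], axis=1).min() > radius:
            return i                         # outside every ambiguous cluster
    return order[0]                          # fall back if everything is excluded

pool = np.array([[0.0, 0.0], [0.1, 0.1], [3.0, 3.0]])
unc = np.array([0.9, 0.8, 0.5])
rejected = np.array([[0.05, 0.05]])          # a sample the human rejected earlier
idx = query_next(pool, unc, rejected)        # skips the two nearby uncertain points
```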
April, 2018
Efficient Accuracy Estimation for Instance-Based Incremental Active Learning
Christian Limberg, Heiko Wersing, Helge Ritter
European Symposium on Artificial Neural Networks (ESANN)
Abstract: Estimating a system's accuracy is crucial for applications of incremental learning. In this paper, we introduce the Distogram Estimation (DGE) approach to estimate the accuracy of instance-based classifiers. By calculating relative distances to samples it is possible to train an offline regression model capable of predicting the classifier's accuracy on unseen data. Our approach requires only a few supervised samples for training and can afterwards be applied instantaneously to unseen data. We evaluate our method on five benchmark data sets and on a robot object recognition task. Our algorithm clearly outperforms two baseline methods for both random and active selection of incremental training examples.
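The "distogram" feature can be sketched as follows (an illustrative reading of the abstract, not the authors' code): histogram a query sample's relative distances to the classifier's stored instances; such vectors then serve as inputs to an offline accuracy regressor, analogous to the later configram approach but built on distances instead of confidences.

```python
import numpy as np

def distogram(x, train_X, n_bins=8):
    """Normalized histogram of a query sample's relative distances to the
    stored training instances; used as a feature for an accuracy regressor."""
    d = np.linalg.norm(train_X - x, axis=1)
    d = d / (d.max() + 1e-12)                # relative distances in [0, 1]
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / len(d)

rng = np.random.default_rng(1)
train_X = rng.normal(size=(100, 5))          # stored instances of the classifier
f = distogram(rng.normal(size=5), train_X)   # 8-dim feature vector for the regressor
```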