Reliably classifying huge amounts of textual data is a primary objective of many machine learning applications. However, state-of-the-art text classifiers require extensive computational resources, which limits their applicability in real-world scenarios. To make lightweight classifiers more viable on edge devices, e.g. personal workstations, we adapt the Human-in-the-Loop paradigm to improve classification accuracy without re-training by manually validating and correcting parts of the classification outcome. This paper presents a series of experiments to empirically assess the performance of uncertainty-based Human-in-the-Loop classification for nine lightweight machine learning classifiers on four real-world classification tasks, using pre-trained SBERT encodings as text features. Since time efficiency is crucial for interactive machine learning pipelines, we further compare training and inference times. Our results indicate that lightweight classifiers with a human in the loop can reach strong accuracies, e.g. improving a classifier's F1-score from 90.19% to 97% when 22.62% of a dataset is classified manually. In addition, we show that SBERT-based classifiers are time-efficient and can be re-trained in less than 4 seconds using a Logistic Regression model.
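To make the described setup concrete, the following is a minimal sketch (not the authors' code) of an uncertainty-based Human-in-the-Loop pipeline of the kind outlined above, assuming the sentence-transformers and scikit-learn libraries; the SBERT checkpoint, confidence threshold, and toy data are illustrative assumptions.

```python
# Sketch: SBERT features + Logistic Regression with an uncertainty threshold
# that routes low-confidence predictions to a human for validation.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data; any labelled text corpus would be used in practice.
train_texts, train_labels = ["good product", "terrible service"], [1, 0]
test_texts = ["okay I guess", "absolutely great"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint
X_train = encoder.encode(train_texts)
X_test = encoder.encode(test_texts)

# Lightweight classifier on top of the pre-trained sentence embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

proba = clf.predict_proba(X_test)
confidence = proba.max(axis=1)   # model certainty per example
threshold = 0.8                  # assumed cut-off, tuned per task in practice

for text, conf, pred in zip(test_texts, confidence, proba.argmax(axis=1)):
    if conf < threshold:
        # Low-confidence prediction: defer to the human in the loop.
        print(f"REVIEW ({conf:.2f}): {text!r}")
    else:
        print(f"AUTO   ({conf:.2f}): {text!r} -> class {pred}")
```

Because the encoder stays fixed and only the lightweight classifier is re-fit, re-training in such a pipeline amounts to a single call to `fit` on pre-computed embeddings, which is what keeps the interaction loop fast.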