Certification and Trustworthy AI
Authors: Dijana Oreski (University of Zagreb, Faculty of Organization and Informatics), Luka Katava (University of Zagreb, Faculty of Organization and Informatics), Alen Kisic (VERN University)
The increasing availability of machine learning algorithms poses the challenge of selecting an appropriate algorithm for a specific data analysis task. In domains such as education and business, where many practitioners are not specialists in artificial intelligence, algorithm selection is often performed through trial-and-error experimentation or guided by limited methodological knowledge. Meta-learning has emerged as a promising approach to this challenge: it recommends algorithms based on the characteristics of previously analysed datasets.
However, many meta-learning approaches rely on complex models whose decision processes remain difficult to interpret, limiting their suitability in contexts where transparency and accountability are required. This paper investigates the use of explainable meta-learning models for machine learning algorithm selection in social science domains. Using datasets originating from education and business contexts, we construct a meta-dataset in which each dataset is described by a set of characteristics represented as meta-features. These meta-features serve as inputs to interpretable meta-models designed to recommend suitable algorithms for new datasets. We analyse the contribution of individual meta-features to the meta-model decisions, thereby
identifying dataset characteristics that drive algorithm recommendations. The results demonstrate that a subset of meta-features plays a key role in determining the
predictive power of the meta-model and forms the basis for explainable algorithm selection. By making these relationships explicit, the proposed approach enables transparent and interpretable recommendations that can support non-expert users in selecting appropriate analytical methods. The study contributes to discussions on trustworthy and responsible AI, particularly relevant in the context of emerging AI governance frameworks and certification initiatives that emphasise explainability, accountability, and user trust in AI systems.
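To make the described pipeline concrete, the sketch below illustrates the general idea in miniature: extract meta-features from datasets, pair them with the algorithm that performed best, and fit a transparent meta-model that can justify its recommendation. All names (the meta-features `n_instances`, `n_features`, `class_entropy`, the candidate algorithms, and the toy meta-dataset values) are illustrative assumptions, not the paper's actual features, data, or results; the meta-model here is a hand-rolled one-rule decision stump chosen purely for interpretability.

```python
import math

def meta_features(X, y):
    """Compute three illustrative meta-features for a labelled dataset.
    These are common examples of 'simple' meta-features, not the paper's set."""
    n, d = len(X), len(X[0])
    counts = {}
    for label in y:
        counts[label] = counts.get(label, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"n_instances": n, "n_features": d, "class_entropy": entropy}

# Hypothetical meta-dataset: (meta-features of a past dataset, algorithm that
# performed best on it). Values are invented for illustration.
meta_dataset = [
    ({"n_instances": 100,  "n_features": 5,  "class_entropy": 0.4}, "decision_tree"),
    ({"n_instances": 120,  "n_features": 6,  "class_entropy": 0.5}, "decision_tree"),
    ({"n_instances": 5000, "n_features": 40, "class_entropy": 1.9}, "random_forest"),
    ({"n_instances": 8000, "n_features": 55, "class_entropy": 2.1}, "random_forest"),
]

def fit_stump(meta_dataset):
    """Fit a depth-1 decision stump over the meta-features: pick the single
    feature/threshold split that best separates the recommended algorithms.
    The learned rule is directly readable, i.e. the recommendation is explainable."""
    def errors(side):
        # misclassifications against the majority algorithm on one side
        majority = max(set(side), key=side.count)
        return sum(1 for a in side if a != majority)

    best = None
    for feat in meta_dataset[0][0]:
        values = sorted({mf[feat] for mf, _ in meta_dataset})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2
            left = [alg for mf, alg in meta_dataset if mf[feat] <= thr]
            right = [alg for mf, alg in meta_dataset if mf[feat] > thr]
            err = errors(left) + errors(right)
            if best is None or err < best[0]:
                best = (err, feat, thr,
                        max(set(left), key=left.count),
                        max(set(right), key=right.count))
    return best[1:]  # (feature, threshold, recommendation_below, recommendation_above)

def recommend(stump, mf):
    feat, thr, below, above = stump
    return below if mf[feat] <= thr else above

stump = fit_stump(meta_dataset)
feat, thr, below, above = stump
print(f"Rule: if {feat} <= {thr} recommend {below}, else {above}")
```

On this toy meta-dataset the stump learns a single human-readable rule (a split on one meta-feature), so a user can see exactly which dataset characteristic drove the recommendation; the interpretable meta-models in the paper serve the same purpose at full scale.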
Keywords:
How to Cite: Oreski, D., Katava, L. & Kisic, A. (2026) “Explainable Selection of Machine Learning Algorithms in Social Sciences”, Proceedings of the Austrian Symposium on AI, Robotics, and Vision. 3(1).