Publications
Vacek on artificial achievements

Vacek, D. (forthcoming). Against artificial achievements. Analysis.
Abstract: In a recent paper published in Analysis, "Artificial achievements", Kieval argues that AlphaGo deserves credit for the achievement of defeating Lee Sedol in a Go competition. The present paper provides an argument against this proposal. The argument relies on the connection between the notions of achievement and praiseworthiness, as well as on the broader debate on responsibility in AI ethics.
Vacek on AI achievement gaps

Vacek, D. (2025). Meeting the AI achievement challenge: collective and vicarious achievements. Ethics and Information Technology 27, 25. https://doi.org/10.1007/s10676-025-09836-3
Abstract: The present paper tackles what we might call the AI achievement challenge, which has been the subject of recent debate in AI ethics. The challenge concerns the question of whether there are any achievement gaps due to artificial intelligence and what we should do if there are: how to fill them, or what policies are needed to decrease their impact on us if they cannot be filled. This paper argues that none of the proposed views is entirely satisfactory, even though they all have certain merits. The paper will provide two mutually compatible answers to the AI achievement challenge: one in terms of collective achievement and the other in terms of vicarious achievement.
Kosterec on moral agents in a vat

Kosterec, M. (2025). Moral Responsibility in a Vat. Acta Analytica 40, 251–258. Available at: https://link.springer.com/article/10.1007/s12136-024-00602-6
Abstract: This paper investigates an ingenious argument by Andrew Khoury which, if valid, could shed new light on some of the most relevant discussions within the field of moral philosophy. The argument is based on the idea that if we deny the phenomenon of resultant moral luck, then the proper objects of moral responsibility must be internal willings. I analyse the argument and find it unsound. The argument does not adequately account for the positions of all relevant moral actors when it comes to the moral evaluation of agents and their actions.
Vacek on AI ethics

Vacek and several other AI experts from various disciplines discussed artificial intelligence with Slovak journalists from Živé - Aktuality. Vacek focused on the problems and challenges of AI ethics.
https://obchod.aktuality.sk/umela-inteligencia-pripravte-sa-na-buducnost
Gladiš, Mesarčík, and Slosiarová on the ethical and fundamental rights risks of using wearable sleep monitoring devices

Gladiš, M., Mesarčík, M., & Slosiarová, N. (2024). Advising AI assistant: ethical risks of Oura smart ring. AI and Ethics, 1-13. Available at: https://link.springer.com/article/10.1007/s43681-024-00544-0
Abstract: Wearable devices with monitoring and recommendation functions are designed to provide personalised feedback and support to help individuals manage their health and well-being. One of the most widespread uses of these wearable devices is sleep monitoring. For users, this means they can make more informed decisions, and the insights from the device allow them to better influence the quality of their sleep. However, with the use of these devices, certain values such as privacy and autonomy may be at stake. This is particularly true for new artificial intelligence methods that can provide an unprecedented level of detail about their users. According to the European regulation on artificial intelligence, these wearable assistants will be classified as high-risk and will thus have to undergo a demanding conformity assessment. That is why we decided to choose one of the most popular wearables that can provide recommendations for its users, the Oura Smart Ring, and conduct a Human Rights, Ethical and Social Impact Assessment of it. This choice was made in part due to the wealth of publicly available information about this device. We have found that it can pose a high risk to the user from several ethical and legal perspectives that can easily be overlooked in the design and use of these technologies. We have also proposed countermeasures that could, in theory, reduce their potential harmful effects if implemented. This article contributes to a better understanding of the ethical and fundamental rights risks of using wearable sleep monitoring devices and thus helps to improve the safety of their use.
Kosterec on vicarious moral responsibility

Kosterec, M. (2024). Mind the Vicarious Responsibility. Grazer Philosophische Studien, 101(2), 101-117. https://doi.org/10.1163/18756735-00000218
Abstract: Vicarious responsibility exists. It is (also) a species of moral responsibility. Nevertheless, it is seldom discussed or considered as such in the main debates within moral philosophy. This article presents a case for the relevance of vicarious moral responsibility to several of these discussions. It seeks to provide new insights for the debate between historicism and structuralism and presents a further case for distinguishing between various facets of moral responsibility. Furthermore, the article demonstrates that an agent need not enter into the relation of vicarious responsibility freely in order to be responsible. This idea then allows us to present new cases for compatibilism, demonstrating the consistency of moral responsibility with determinism.
Gavorník, Podroužek, Oreško, Slosiarová, and Grmanová on ethical issues of smart metering and non-intrusive load monitoring

Gavorník, A., Podroužek, J., Oreško, Š., Slosiarová, N., & Grmanová, G. (2024). Beyond privacy and security: Exploring ethical issues of smart metering and non-intrusive load monitoring. Telematics and Informatics, 90, 102132. Available at: https://www.sciencedirect.com/science/article/pii/S0736585324000364
Abstract: Artificial intelligence is believed to facilitate cost-effective and clean energy by optimizing consumption, reducing emissions, and enhancing grid reliability. Approaches such as non-intrusive load monitoring (NILM) offer energy efficiency insights but raise ethical concerns. In this paper, we identify the most prominent ethical and societal issues by surveying relevant literature on smart metering and NILM. We combine these findings with empirical insights gained from qualitative workshops conducted with an electricity supplier piloting the use of AI for power load disaggregation. Utilizing the requirements for trustworthy AI, we show that while issues related to privacy and security are the most widely discussed, there are many other equally important ethical and societal issues that need to be addressed, such as algorithmic bias, uneven access to infrastructure, or loss of human control and autonomy. In total, we identify 19 such overarching themes and explore how they align with practitioners' perspectives and how they embody the seven core requirements for trustworthy AI systems defined by the Ethics Guidelines for Trustworthy AI.
Vacek on our research in AI ethics
Kosterec on Transparent Intensional Logic

Kosterec, M. (2024). Transparent Logics: Small Differences with Huge Consequences. Leiden/Boston: Brill. ISBN 978-90-04-70333-9. Available at: https://brill.com/display/title/70237
The book presents Transparent Intensional Logic in several of its latest realisations, making a case for the system and demonstrating how the theory can be applied to a wide range of cases. The work strikes a good balance between the philosophical-conceptual and the logical-formal. Transparent Logics prioritises depth over breadth, focusing on advanced formal semantics and philosophical logic; rather than offering a mere introduction to the subject, it delves into the details.
Vacek on future AI (media, 18 July 2024, in Slovak)
Sambrotta on whether LLMs can be responsible for language production

Sambrotta, M. (2023). If God Looked Into AIs, Would He Be Able To See There Whom They Are Speaking Of? Philosophica Critica 9, 42-54. Available at: https://philosophicacritica.ukf.sk/uploads/1/3/9/8/13980582/philosophica_critica_2_2023_final.pdf#page=42
Abstract: Can Large Language Models (LLMs), such as ChatGPT, be considered genuine language users without being held responsible for their language production? Affirmative answers hinge on recognizing them as capable of mastering the use of words and sentences through adherence to inferential rules. However, the ability to follow such rules can only be acquired through training that transcends mere formalism. Yet, LLMs can be trained in this way only to the extent that they are held accountable for their outputs and results, that is, for their language production.
Vacek on AI control
