Events

2024

The ninth seminar: a lecture on the Foundations of AI by Patrik Goldschmidt (21 March 2024, at KInIT)


The ninth seminar of the project took place on 21 March 2024 at the Kempelen Institute of Intelligent Technologies (KInIT). Patrik Goldschmidt presented an extensive survey of the basics of AI from a technological point of view, along with the past, present, and future of the field.

The eighth seminar: Köhler, Roughley and Sauer on technologically blurred accountability and AI responsibility gaps (29 February 2024, online)

The eighth seminar of the project took place on 29 February 2024. We discussed the paper Köhler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability?: Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In Moral agency and the politics of responsibility (pp. 51-68). Routledge. The authors argue that there are no AI accountability gaps. The paper by Tigard that we discussed previously built on and expanded the ideas of this paper.

The seventh seminar: Tigard on responsibility gaps (18 January 2024, hybrid)

The seventh seminar of the project took place on 18 January 2024. We discussed the paper Tigard, D. W. (2021). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589-607. The paper is a contribution to the debate on AI responsibility gaps. Contrary to both techno-optimists (who maintain that there is a technology-related gap in responsibility that can be bridged) and techno-pessimists (who maintain that there is a technology-related gap in responsibility that cannot be bridged), Tigard argues that there is no such gap at all.

A work in progress discussion: Kosterec on moral responsibility in a vat (11 January 2024, hybrid)

On 11 January 2024 we discussed Miloš Kosterec's work-in-progress project paper Moral responsibility in a vat. The paper responds to Khoury, A. C. (2018). The objects of moral responsibility. Philosophical Studies, 175, 1357-1381. Both papers are concerned with a thought experiment about moral agents in a vat, which provides an interesting future research direction within the project due to the similarities between potential moral agents in a vat and intelligent technologies as potential moral agents.

2023

The sixth seminar: Pascucci and Vacek on responsible robot control (14 December 2023, in person)

The sixth seminar of the project took place on 14 December 2023. We discussed a submitted project paper by two members of the research team, M. Pascucci and D. Vacek, entitled "Responsible robot control". The paper builds on an earlier project paper: Vacek, D. (2023). Two remarks on the new AI control problem. AI and Ethics, 1-6. Available at: https://link.springer.com/article/10.1007/s43681-023-00339-9. The discussed paper tackles the issue of appropriate and inappropriate control over robots and artificial intelligence.

The fifth seminar: Sparrow on the ethics of sex robots (20 November 2023, online)

The fifth seminar of the project took place on 20 November 2023. We discussed the paper Sparrow, R. (2017). Robots, rape, and representation. International Journal of Social Robotics, 9(4), 465-477. The author focuses on the ethics of the "rape" of robots. Sparrow considers both sex robots that can explicitly refuse consent and sex robots that lack the capacity to refuse consent as morally problematic.

The fourth seminar: Green and Michel on machines and speech acts (30 October 2023, in person)

The fourth seminar of the project took place on 30 October 2023. We discussed the paper Green, M., & Michel, J. G. (2022). What might machines mean? Minds and Machines, 32(2), 323-338. The question of whether machines can perform speech acts is relevant to "the possibility of weaving them into our moral fabric". To answer this difficult question, the authors propose to construe artificial speakers as proxies in the performance of speech acts.

A talk at the Philosophy as Crossing Borders conference: Oreško on the anthropomorphization of artificial intelligence (Košice, Slovakia)

Štefan Oreško participated in the international scientific conference "Philosophy as Crossing Borders" in Košice. He gave a talk on the anthropomorphization of artificial intelligence and its implications for moral responsibility. The talk aimed to explore the relationship between anthropomorphization of AI systems, the attribution of moral agency to them, and the question of their moral responsibility. See: https://www.sfz.sk/node/61 and https://www.linkedin.com/posts/kempelen-institute-of-intelligent-technologies_our-researcher-stefan-oresko-participated-activity-7128283097346756608-CiFA?

An interview with Mesarčík about AI regulation (31 October 2023, media appearance, in Slovak)

Podroužek about the risks associated with artificial intelligence: AI is a good servant but a bad master (30 October 2023, media appearance, in Slovak)

Podroužek on AI ethics in a radio show (18 October 2023, in Slovak)

See: https://devin.rtvs.sk/clanky/temy/341244/symposion-etika-umelej-inteligencie

Summer challenge KInIT - AI regulatory frameworks (12 October 2023)

The workshop on AI ethics and regulatory frameworks took place on 12 October 2023. We presented five frameworks (Ethics by Design, Z-Inspection, AI Verify, Human rights impact assessment guidance and toolbox, and practical fundamental rights impact assessments), focusing on the identification of accountability roles and on methodological details concerning the construction of such frameworks.

The third seminar: Nyholm on the new AI control problem (28 September 2023, online)

The third seminar took place on 28 September 2023. We discussed the paper Nyholm, S. (2023). Artificial Intelligence, Humanoid Robots, and Old and New Control Problems. In Social Robots in Social Institutions (pp. 3-12). IOS Press. In this paper, Nyholm focuses on the old and new AI control problems (the new AI control problem was introduced in Nyholm, S. (2022). A new control problem? Humanoid robots, artificial intelligence, and the value of control. AI and Ethics, 1-11). This topic relates to the results of the project published in Vacek, D. (2023). Two remarks on the new AI control problem. AI and Ethics, 1-6.


A talk at the Formalising Responsibility conference: Vacek on AI control and responsibility (Manchester, United Kingdom)


The three-day conference Formalising Responsibility took place on 20-22 September 2023 in Manchester. It was organised by the UKRI-funded project 'The Computational Agent Responsibility', a collaboration between the University of Leeds and the University of Manchester focusing on interdisciplinary work in philosophy and computer science on the responsibility of autonomous systems. The principal investigator of our project, Daniela Vacek, presented there on the new AI control problem and vicarious responsibility. For more information, see https://joecol.github.io/FormRes/


A talk at the XXVII. Czecho-Slovak Symposium on Analytic Philosophy (Bratislava, Slovakia)


The XXVII. Czecho-Slovak Symposium on Analytic Philosophy, organised by the Department of Logic and the Methodology of Sciences at the Faculty of Arts, Comenius University in Bratislava, took place on 11-13 September 2023. The principal investigator of the project, Daniela Vacek, presented there on the new AI control problem.


The second seminar: Hakli and Mäkelä on moral responsibility of robots (17 August 2023, in person)

The second seminar of the project took place on 17 August 2023. We discussed the paper Hakli, R., & Mäkelä, P. (2019). Moral responsibility of robots and hybrid agents. The Monist, 102(2), 259-275. This paper formulates a conceptual argument against the possibility of moral responsibility of robots.


The kick-off meeting: Sullins on moral responsibility of robots (20 July 2023, online)

The kick-off seminar of the project took place on 20 July 2023. We discussed the paper Sullins, J. P. (2011). When is a robot a moral agent? Machine Ethics, 6(2001), 151-161. Sullins presents an optimistic picture which would, in principle, allow for the moral responsibility of robots.