Publications
Permanent URI for this collection: https://repositorio.grial.eu/handle/123456789/34
Search Results
2 results
Item: Safe AI in Education Manifesto. Version 0.4.0 (2024-10-08)
Alier-Forment, Marc; García-Peñalvo, Francisco José; Casañ, María José; Pereira, Juanan; Llorens-Largo, Faraón
The Safe AI in Education Manifesto outlines ethical principles for integrating AI into educational environments. It emphasizes the need for human oversight, ensuring AI complements rather than replaces educators. Decision-making must remain transparent and appealable, protecting the integrity of the educational process. Confidentiality is paramount: institutions must safeguard student data and ensure AI systems comply with stringent privacy standards. AI tools should align with educational strategies, supporting learning objectives without enabling unethical practices or adding complexity. The manifesto calls for AI systems to respect didactic practices, adapting seamlessly to instructional designs without burdening educators or students. It stresses accuracy and explainability, requiring AI outputs to be reliable, transparent, and verifiable. Interfaces must be intuitive and must communicate their limitations to foster trust and critical engagement. Ethical training and transparency in AI model development are essential, including minimizing biases and disclosing data sources. The manifesto commits to advancing AI's potential in education while prioritizing privacy, fairness, and educational integrity, providing a living framework adaptable to technological evolution. It can be signed at: https://manifesto.safeaieducation.org/

Item: Pensamiento Computacional entre Filosofía y STEM. Programación de Toma de Decisiones aplicada al Comportamiento de "Máquinas Morales" en Clase de Valores Éticos (Sociedad de Educación del IEEE (Capítulo Español), 2018-03-21)
Seoane-Pardo, A. M.
This article describes a learning activity on computational thinking in an Ethics classroom with compulsory secondary school students (14-16 years old).
It is based on the assumption that computational thinking (or, better, "logical thinking") is applicable not only to STEM subjects but to any other field in education, and that it is particularly suited to decision making in moral dilemmas. This is carried out through the study of so-called "moral machines", using a game-based learning approach on self-driving vehicles and the need to program such cars to perform certain behaviours in extreme situations. Students are asked to ground their reasoning in different ethical approaches and to develop a decision-making schema that could serve to program a machine to respond to those situations. Students also have to deal with the uncertainty of reaching solutions that are debatable and not universally accepted. This reflects the difficulty, more ethical than technical, of providing machines with the ability to make decisions where there is no such thing as a "right" versus "wrong" answer, and where potentially both (or all) of the possible actions bring unwanted consequences.
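The decision-making schema the students are asked to develop can be illustrated with a small sketch. The example below is not from the article: the `Outcome` structure, the rule names, and the dilemma values are hypothetical, chosen only to show how two classic ethical approaches (a utilitarian rule and a deontological rule) can be formalized and can disagree on the same moral-machine scenario.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action for the vehicle and its projected consequences."""
    action: str
    harmed: int          # number of people harmed if this action is taken
    breaks_rule: bool    # whether the action violates a fixed moral rule,
                         # e.g. "never actively swerve into a bystander"

def utilitarian(outcomes):
    """Pick the action that minimizes total harm, regardless of rules."""
    return min(outcomes, key=lambda o: o.harmed).action

def deontological(outcomes):
    """Prefer actions that break no rule; only then minimize harm."""
    permitted = [o for o in outcomes if not o.breaks_rule] or outcomes
    return min(permitted, key=lambda o: o.harmed).action

# A hypothetical dilemma: staying in lane harms three pedestrians,
# swerving harms one bystander but violates the fixed rule above.
dilemma = [
    Outcome("stay in lane", harmed=3, breaks_rule=False),
    Outcome("swerve", harmed=1, breaks_rule=True),
]

print(utilitarian(dilemma))    # → swerve
print(deontological(dilemma))  # → stay in lane
```

The two rules return different actions for the same inputs, which is exactly the kind of debatable, non-universal result the activity asks students to confront.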