S04-02

GenAI and creative-cognitive depletion: an ethical issue. Use and abuse of generative AI in the field of culture and education.


Authors

Emanuele Fulvio Perri, University of Pisa

Focus

Introduction

It is almost physiological today, glimpsing the posthuman knocking at the doors of transhumanism, to wonder whether the new artificial intelligences are progressively gaining the ability to supplant humans in creative production and in performing tasks and duties related to education. Generative AI systems intended for creative writing are proliferating, and it is practically obvious that such tools will progressively compete with the very users who choose to adopt them and will condition their environment and their future; with a little foresight, it is not difficult to predict that entire professional categories will be profoundly revolutionized by their use.

The issue of AI-related creative writing is not only altering the paradigm of content generation, but also that of content consumption, especially in relation to the world of education and the ways in which GenAI (i.e., systems based on GPT-like models) positions itself with respect to learning. It is easy to see the great potential of AI when it is interpreted as a study-support tool, especially for assignments, schoolwork, research, and so on. It is equally easy, however, to recognize in it a problem-solving approach rather than a guiding one, which warns against improper, excessive uses that substitute for personal effort.

Generative AI thus shifts «from a source of knowledge to a learning partner», in the full sense of partner, one that includes the element of trust and reliance. But to entrust education to AI is to take the subject away from education itself, to cease to be accountable for one's own education. Federico Cabitza speaks of this in terms of «epistemic sclerosis»: a process of progressive deresponsibilization with respect to one's tasks (and the quality of their performance), due to a blind reliance on the machine (and, by extension, on software) as a precise and instantaneous performer.

Objectives

With this work, conducted on the ethical-philosophical level but without ever losing sight (even technically) of GenAI's concrete manifestations and potential repercussions in the world of education, we intend to address the issues proposed so far and to identify certain phenomena that require attention: a process of deresponsibilization that leads to deskilling and severely compromises the formation and circulation of knowledge within human cultural networks; the centralization of knowledge and culture, relegated to the technomedial infosphere; and so on.

Methods

From a methodological standpoint, the study is comparative: it offers a brief analysis of contexts and uses, and then draws predictions and identifies possible ethical resolutions.

Conclusions

Within this potentially dystopian framework of cognitive disempowerment, knowledge depletion, and withdrawal from education, there are positive elements in the adoption of AI-based tools in personal learning and in the educational environment, but they must be carefully evaluated from an ethical perspective, and they need to be sufficiently regulated so that their use is not indiscriminate.

Questions and comments for the author(s)

There are 8 comments on this presentation

    • David Zatz Correia

      Commented on 29/11/2023 at 23:48:12

      Congratulations on your presentation.
      You talked about plagiarism and cheating, and about the change in the ethical notion of trust. I would like to know what you think would be a possible way to rebuild trust between teachers and students in this scenario. Thank you in advance for your reply.

      • Emanuele Fulvio Perri

        Commented on 30/11/2023 at 12:39:49

        Thanks for your compliment, Dr. Correia. Your question is on point, considering the implications of AI for the future of education (at school, at university, etc.). When it comes to trust, the trustor must accept a certain degree of risk: this is integral to trust and cannot be avoided, otherwise we would be talking about blind faith. That degree of risk is extremely high right now if we investigate the trustworthiness of GAI-based systems for education: without regulation (hopefully we will soon see the AI Act applied in many EU countries) and without the active supervision of an expert, students (and users in general) may take advantage of the great potential of widely available generative tools to accomplish tasks that should never be offloaded to someone, or something, else (read: a model, a piece of software, etc.). Right now I can think of four possible ways of «rebuilding trust between teachers and students in this scenario»:
        a) strongly limit the accessibility of LLM-based chatbots, like ChatGPT, by "censoring" them;
        b) rely extensively on AI-detecting anti-plagiarism software, in order to check every single piece of content students provide;
        c) ban electronic devices at school (as many districts in the US have been doing for the last thirty years);
        d) build heterogeneous, interdisciplinary working groups of ethicists and AI experts, in order to design "trustworthy-by-design" AI systems that require a human to be "in the loop" in order to work.
        Obviously, the first solution (a) is viable but incompatible with European democratic principles; the second (b) is difficult to carry out in practice, since it would require constant checking and therefore constant distrust; the third (c) is a workaround but not an ultimate solution, since banning phones or jamming frequencies so that the internet connection drops will not solve the philosophical (and eminently practical) problem of trust in the AI era, but will only raise new problems (on-call availability, emergency calls, etc.); the last solution (d) is the one we should all encourage the most, since it draws on the expertise of competent figures to create a future of coexistence with AI under the sign of trustworthiness, but it will take time: that is why universities and observatories are working at full steam on AI ethics research.
        This answer reflects the points I am basing my final article upon. I hope you find it useful! Thanks again for reading/watching and commenting!

    • Cristina Maria Golhiardi Malachias

      Commented on 29/11/2023 at 15:20:33

      Congratulations on the interesting presentation! Based on your studies, I would like to ask how you envision education in a scenario where human beings may no longer be the experts.

      • Emanuele Fulvio Perri

        Commented on 29/11/2023 at 18:15:24

        Thank you, Dr. Malachias, for your compliments and your interesting question. I work on the ethics of AI (of generative artificial intelligence in particular), and there is a very important concept in this field: "human-in-the-loop" (HITL). When an AI-based system or tool is developed or deployed, it is crucial that humans maintain a decisive position, so that the AI never operates "by itself"; that is, a human remains indispensable for ensuring that the autonomous loop of the AI system follows the most ethical and correct processes. It goes without saying that, by guaranteeing compliance with the ethical principle of HITL, human expertise is preserved and more valuable than ever; this applies to education, but also to every other field in which AI (and AGI, in the future) is employed.
        I hope my answer is enough to explain my position on the matter, which I also advocate in the final text of this future publication; feel very welcome to keep asking. I encourage you to check my contact details on my CICID profile if you'd like to stay in touch. Thanks again for your time!
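        To make the HITL principle a little more concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (the Decision class, the ai_propose, human_review, and run_with_hitl functions, the confidence threshold) is hypothetical and does not refer to any existing system; it only shows the general pattern of an AI that proposes while a human signs off.

            # Minimal human-in-the-loop (HITL) sketch; all names and thresholds are illustrative.
            from dataclasses import dataclass
            from typing import Optional

            @dataclass
            class Decision:
                proposal: str      # what the AI suggests (e.g. feedback on a student's essay)
                confidence: float  # the model's self-reported confidence, in [0, 1]

            def ai_propose(task: str) -> Decision:
                # Placeholder for a call to some generative model.
                return Decision(proposal=f"Draft answer for: {task}", confidence=0.72)

            def human_review(decision: Decision) -> bool:
                # The human (teacher, examiner, editor...) remains the indispensable gate.
                answer = input(f"Approve this output? {decision.proposal!r} [y/n] ")
                return answer.strip().lower() == "y"

            def run_with_hitl(task: str, review_threshold: float = 0.95) -> Optional[str]:
                decision = ai_propose(task)
                # In a strict setting every output is reviewed; here, at least every
                # low-confidence one. The loop never closes without a human decision.
                if decision.confidence < review_threshold and not human_review(decision):
                    return None
                return decision.proposal

            if __name__ == "__main__":
                print(run_with_hitl("Summarize chapter 3 for a first-year class"))

        The point of the sketch is simply that the system cannot return an output on its own: below the review threshold (or always, in a stricter configuration) a human approval is required for the loop to close.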

    • Javier de la Hoz Ruiz

      Commented on 29/11/2023 at 08:56:13

      Congratulations on the great presentation, very productive and insightful. Could you provide me with additional access to some related resources in this area?

      • Emanuele Fulvio Perri

        Commented on 29/11/2023 at 12:00:22

        Thank you, Dr. de la Hoz Ruiz. I can provide you with articles and other resources to the extent that I am able to access them. If you'd like to send me an e-mail, you can find all my contact details on my CICID profile. Thanks for taking the time to read and comment!

    • Jonattan Rodríguez Hernández

      Commented on 28/11/2023 at 23:12:30

      Congratulations on the presentation; it is really interesting. What would be your recommendations for establishing clear and effective ethical limits on the use of AI in educational and cultural settings, so as to avoid deresponsibilization and cognitive decline?

      Thank you very much for your time.

      • Emanuele Fulvio Perri

        Commented on 29/11/2023 at 12:24:41

        Thank you for reading/watching my presentation, Dr. Rodríguez Hernández. It is hard to find a universally acceptable answer to your question, which is the actual focal point of the ethics of AI, especially as GAI-based tools are often deployed without any moral, or at least rational, consideration. Regulations (such as the AI Act or any other politically grounded "restraining order") are mandatory, and it seems we are finally close to the end of the wait for these regulations to arrive in all EU countries. Still, rules without instructions do not work, and applying random instructions without providing a strong education is almost useless. We can still hope for a correct, ethical use of AI (generative AI in particular: LLMs, diffusion models, foundation models in general) by educating for authenticity and by persuading students through education; this also means educating them to wait, since AI tools are tempting precisely because they cut down the time required for all kinds of activities. So, in short, regulation alone is futile: education is the key, and it must be provided by introducing ethics from the very first years of schooling. Right now it is hard to think of anything more ethical, sufficiently democratic, and less censorial than this; and this is exactly my point in the final article. I hope you are satisfied with my answer. Feel free to contact me, if you like. Thanks again for your time and your great question.


