Academic Integrity

In the age of generative artificial intelligence, it is all the more important to uphold academic integrity at all times by ensuring that lecturers, students and staff adhere to the principles of responsibility, transparency and fairness.

Integrity in attitude

An attitude of scientific integrity applies not only to research but should also be actively practised in teaching. Generative AI blurs previously clear boundaries, which is why a 'human in the loop' is always needed to validate and scrutinise results and to check for biases. This responsible and transparent approach is a cornerstone of academic integrity.

The creator of content is always responsible for its correctness and quality. This principle continues to apply, and it covers not only course materials and documents but also, for example, learning records and academic papers. With generative AI, it is all the more important to fulfil this responsibility at all times and to set an example for others.

  • Texts and ideas generated by AI are based on probabilities, with no inherent link to reality. Establishing this connection is the user's task.
  • Generative AI can make mistakes, draw false conclusions and cite sources incorrectly. It is up to the user to check the output carefully for correctness.
  • The output of generative AI depends on the training data and the underlying algorithms. Biases can therefore occur at any time and must be carefully checked for by the user.

Further important aspects of a responsible attitude can be found under good scientific practice and at the ETH AI Ethics Policy network, which participates in the global dialogue on the responsible use of AI.

The use of generative AI should be made transparent at all times. This applies not only to academic work, but also, for example, to course materials, presentations or the use of generative AI for assessment or feedback.

  • Be explicit about which content or parts of materials and work have been created with the help of generative AI.
  • Create transparency regarding the use of AI-based tools in lessons.
  • Discuss the use of GenAI for feedback and assessment openly.

It is important to respect the privacy and copyright of the content being worked with. AI tools require large amounts of data to train the underlying models. User input is often reused for training, so it is crucial to know the applicable data protection regulations and to handle protected materials accordingly.

  • Before data is passed on to AI-based tools or released to them, it must be verified that no rights are being violated.
  • For private or non-publicly available data, only tools that guarantee compliance with data protection regulations should be used (see also tools & licences).

Scientific work

Lecturers are encouraged to establish rules and guidelines regarding the use of generative artificial intelligence for assignments, projects and assessments in their courses. There are no universally applicable solutions, so clear communication between lecturers and students is all the more important. The definition of these rules can also be part of the discussion within the course, through which pragmatic and fair solutions can be jointly developed and subsequently documented.

In course materials, care should generally be taken to correctly cite the sources of content, and of image material in particular. When images and texts are produced by generative AI, this must likewise always be correctly indicated.

More specific citation styles have been developed for referencing AI, for example in APA, MLA and Chicago style. A good overview of when and how to cite correctly can be found in the guidelines 'Citing AI Tools' (Universität Basel, Switzerland) or in 'Citation and Writing Guidance for Generative AI Use in Academic Work' (Dudley Knox Library, Monterey, USA).

From a legal perspective, nothing has changed with regard to scientific integrity and plagiarism. The ETH Library explains how to deal with the new circumstances under Plagiarism and Artificial Intelligence (AI). Three important key statements are:

  • AI-based tools are not considered authors or co-authors and may not be listed as such in publications.
  • The tools use existing texts and generate new outputs based on your prompts. In doing so, they do not create a new work in the sense of copyright law, which is why they are not considered authors.
  • The use of an AI-based tool does not constitute plagiarism per se. However, bear in mind that the tools do not (reliably) mark text taken over verbatim or in substance as citations, and the output can be very close to an original. This can lead to a match in a plagiarism check.

Tool-based, automatic detection of generative AI output is currently extremely difficult and will probably remain so. Such detection methods should therefore not be relied upon.

ETH Zurich's declarations of originality have been adapted to include a passage on the use of artificial intelligence. They now contain three options, which can be described as follows from the perspective of generative AI:

  • Generative AI technologies were not used.
  • Generative AI was used, labelled and correctly referenced.
  • Generative AI was used but not labelled, in consultation with the person in charge.

The chosen option should always be clearly declared and agreed between lecturers and students. Option 3 is recommended only in special cases where generative AI is an integral part of the work and therefore does not need to be labelled directly.

The development of generative AI calls for critical reflection on the role of writing in education and science. Responsible action by researchers, teachers and students is necessary to exploit the potential of the technology while minimising its risks. The discussion paper 'Wissenschaftliches Schreiben im Zeitalter von KI gemeinsam verantworten' (Hochschulforum Digitalisierung) examines these aspects from different angles.

If AI tools are used contrary to the lecturer's instructions, the existing processes continue to apply. Violations such as the use of unauthorised tools or failure to disclose their use are subject to disciplinary action (see also the Disziplinarverordnung of ETH Zurich).