Research and Transfer
General information
Target group
Students, doctoral candidates, postdocs, academic staff, professors
Personal responsibility and legal framework conditions for AI use
- Responsibility for the use of AI systems generally lies with the respective users: they are responsible not only for the data they enter and the content generated when using the AI, but also for its storage, reproduction and distribution.
- On the one hand, this includes the data entered in the prompt or generated by the AI: such data is stored not only locally but also on the AI operator's servers, where it may be used for training purposes and thereby distributed and reproduced.
- On the other hand, it also includes the data or content that users create through their interaction with the AI and deliberately disseminate, e.g. the text of an email or information published on a website.
- In order to fulfill this responsibility, users are obliged to inform themselves about applicable regulations (e.g. data protection, copyright, AI regulation), to critically question the results and to use AI in a context-appropriate and ethically reflective manner.
- In the case of scientific publications, the regulations of the conference organizers, editors, or publishers must also be considered.
- Further information can be found in the sections on data protection compliance and copyright compliance. They help to clarify possible uncertainties at an early stage.
Recommended AI applications with contractual data security
- When freely available AI systems such as ChatGPT are used, the entered data and the generated content are transferred to cloud storage; users therefore cannot control how this content is used by third parties, in particular whether it is used to train the AI.
- The AI systems available at the University of Regensburg (Microsoft Copilot, DeepL in the web editor) are therefore recommended, as contractual agreements with the manufacturers restrict distribution and duplication.
- This does not affect compliance with data protection, copyright and personality rights, nor the responsibility of the users themselves.
Transparency and labelling obligation for AI use
- The use of AI systems must always be transparent.
- This means that the use of AI in the creation of content - regardless of whether it is text, images, audio, video or other formats - must be disclosed and AI-generated content must be clearly labelled as such.
- This includes content created entirely by AI as well as content significantly influenced or supplemented by AI. The labelling makes clear that the content is not purely human-generated or independently created, and that such results can be incorrect, incomplete or inadequate for the context.
- The specific form of the transparency obligation - for example, whether only the use of AI must be indicated or whether a complete log of the prompts used must be submitted - can vary depending on the specifications of the respective responsible party (e.g. publisher, conference organizers, lecturers) and must be taken into account accordingly.
- In order to fulfill this transparency obligation, AI-generated content must be critically reviewed, correctly classified and comprehensibly documented in the respective context before it is passed on or published.
Possible applications of generative AI in research and transfer
Literature research & source management
- The AI can be used to generate relevant search terms, topic ideas or literature suggestions.
- Example 1: A generative AI is asked to suggest a list of possible search terms for "sustainable supply chains"; the suggestions are then checked in specialised databases.
- Example 2: A generative AI is asked for definitions of "sustainable supply chains" together with their sources. Each definition and source must be checked individually to ensure that the sources have not been invented ("hallucinated") by the AI and that they are cited correctly.
Data analyses
- AI may be used for data analysis (e.g., text mining), provided that the results are subjected to careful scientific scrutiny and the use of AI is documented transparently.
- Example 1: AI is used to cluster qualitatively collected interview data.
- Example 2: Generative AI is used to simplify an analysis script and check for errors.
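The clustering in Example 1 can be sketched with a minimal, standard-library-only approach. The bag-of-words representation, the similarity threshold and the sample answers are illustrative assumptions; real projects would typically use established QDA or NLP tooling, and the grouping would still need to be reviewed by the researcher.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector for a short answer (lower-cased tokens)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(answers, threshold=0.3):
    """Greedy clustering: each answer joins the first cluster whose
    representative is similar enough, otherwise it starts a new cluster."""
    clusters = []  # list of (representative_vector, member_answers)
    for ans in answers:
        vec = bow(ans)
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(ans)
                break
        else:
            clusters.append((vec, [ans]))
    return [members for _, members in clusters]

# Hypothetical interview answers for illustration
answers = [
    "delivery times are too long",
    "long delivery times are a problem",
    "packaging produces too much waste",
]
groups = cluster(answers)  # two thematic groups
```

The threshold of 0.3 is a starting point, not a validated choice; as with any AI-assisted analysis, the resulting clusters must be checked against the source material.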
Research design & choice of methods
- AI can contribute to reflection on suitable methodological approaches or study designs.
- Example 1: AI proposals for mixed methods designs are discussed in the research team.
- Example 2: The AI is used to generate a study flow chart from a study protocol written as continuous prose.
Writing and improving scientific texts
- AI can be used for structuring, linguistic improvements and to support writing processes.
- Example 1: AI-generated wording suggestions for the introduction of a research proposal are revised independently. The AI can also be used to improve the style of texts for scientific communication or presentations.
- Example 2: For a given topic and based on one's own publications, the AI suggests several titles for a keynote speech adapted to the target audience, e.g. industry, public administration or a specialist audience.
- Example 3: For an accepted research paper, a LinkedIn post is created that summarises the most important content and adapts the language for social media.
- Example 4: AI is used to shorten the abstract of a manuscript to the specified number of words before submission to a new journal.
Transcription
- AI can be used for the automatic transcription of interview or observation data.
- Example: Whisper is used to efficiently transcribe qualitative interviews.
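A transcription workflow with Whisper might look like the following sketch. The audio file name is hypothetical, and the third-party `openai-whisper` package (plus ffmpeg) is assumed to be installed; the helper that formats the segments is plain Python and independent of the model.

```python
def to_transcript(segments):
    """Format whisper-style segments as '[start-end] text' lines.
    Each segment is a dict with 'start', 'end' and 'text' keys."""
    return "\n".join(
        f"[{s['start']:6.1f}-{s['end']:6.1f}] {s['text'].strip()}"
        for s in segments
    )

def transcribe_interview(path, model_name="base"):
    """Transcribe an audio file with openai-whisper (third-party package;
    requires ffmpeg). Not called here, as it downloads a model."""
    import whisper  # pip install openai-whisper
    model = whisper.load_model(model_name)
    result = model.transcribe(path)
    return to_transcript(result["segments"])
```

Calling `transcribe_interview("interview_01.wav")` (a hypothetical file) would return a time-stamped transcript; as with all automatic transcription, the output must be proofread against the recording, and data protection rules apply to the audio data itself.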
Categorisation / coding
- The AI can suggest initial category systems, which are then adapted manually.
- Example: The AI is used to obtain suggestions for coding open answers.
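An AI-suggested category system can be applied mechanically before manual revision, for instance with simple keyword rules. The codebook, keywords and sample answer below are illustrative assumptions, not a recommended coding scheme.

```python
# Hypothetical AI-suggested codebook: category -> trigger keywords.
CODEBOOK = {
    "delivery": {"delivery", "shipping", "late"},
    "price": {"expensive", "price", "cost"},
}

def code_answer(answer):
    """Return every category whose keywords appear in the answer."""
    tokens = set(answer.lower().split())
    return sorted(cat for cat, keywords in CODEBOOK.items() if tokens & keywords)

codes = code_answer("Shipping was late and too expensive")
```

Such rule-based pre-coding only produces candidate codes; the category system and every assignment are then adapted and validated manually, as the bullet above requires.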
Analysis and modelling of software projects
- AI can be used to analyse, design and model software projects.
- Example: An existing BPMN model or UML class diagram is extended with the help of AI to include additional concepts from the application domain that may have been missed during requirements elicitation. The results must be checked for content accuracy, relevance and correct syntax.
Programming
- AI can be used as a support tool in the development of research software or in transfer projects, e.g. for code generation, code improvement, error detection, documentation or integration with other applications.
- Example: A database interface needs to be created for an application. With the help of AI, code is proposed that establishes the connection to the database, sends SQL queries and stores the results internally. The AI-generated results must be checked for content accuracy and tested for possible security gaps and vulnerabilities.
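The database-interface example can be sketched with the standard-library `sqlite3` module. The table, column names and sample data are illustrative assumptions; the point of the sketch is the security check the bullet demands, here addressed by parameterised queries instead of string concatenation.

```python
import sqlite3

def fetch_measurements(conn, min_value):
    """Query rows at or above a threshold. The '?' placeholder keeps the
    query parameterised, which avoids SQL injection."""
    cur = conn.execute(
        "SELECT id, value FROM measurements WHERE value >= ?",
        (min_value,),
    )
    return cur.fetchall()

# In-memory demo database; schema and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO measurements (value) VALUES (?)",
    [(1.5,), (3.2,), (0.7,)],
)
rows = fetch_measurements(conn, 1.0)  # the two rows with value >= 1.0
conn.close()
```

When reviewing AI-proposed database code, checking for string-built SQL is exactly the kind of vulnerability test the guideline calls for.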
Testing of research software
- AI can be used to test research software.
- Example: For an existing test suite, AI generates additional tests with different boundary values and test data for the more complex functions. The generated results must be checked for content accuracy, relevance and weaknesses.
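Boundary-value tests of the kind an AI assistant might propose can be sketched as follows. The function under test and its limits are illustrative assumptions standing in for a real research-software function.

```python
def grade(score):
    """Map a score from 0-100 to pass/fail; rejects out-of-range input.
    (Hypothetical function under test.)"""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 50 else "fail"

def test_grade_boundaries():
    """Tests at and around each limit, as an AI assistant might suggest."""
    assert grade(0) == "fail"      # lower domain boundary
    assert grade(49) == "fail"     # just below the pass threshold
    assert grade(50) == "pass"     # exactly at the threshold
    assert grade(100) == "pass"    # upper domain boundary
    for bad in (-1, 101):          # just outside the valid range
        try:
            grade(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{bad} should be rejected")

test_grade_boundaries()
```

Generated tests like these still have to be reviewed: an AI can suggest plausible boundaries while missing the ones that matter for the actual specification.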