Guidelines for AI use in EU project deliverables (based on EC recommendations)

June 2, 2025

In April 2025, the European Commission released the second version of guidelines on the responsible use of generative AI in research [1], emphasising the importance of addressing the ethical and practical considerations that arise with the increasing integration of AI tools in the research process. These guidelines, developed by the European Research Area Forum, offer a framework for researchers, research organisations, and research funding bodies to navigate the complexities of using generative AI in a responsible and ethical manner. 

This article provides researchers with easy-to-understand recommendations for using AI when writing project deliverables and papers. These recommendations draw on the Future Needs team’s experience of following the European Commission guidelines when drafting research deliverables, scientific papers, training material and other scientific outputs.

Practically speaking, here are the DOs and DON’Ts our team sticks to when using AI for producing EU project deliverables and scientific outputs:

 

✅ DO verify the existence and accuracy of sources provided by AI tools. AI can sometimes generate plausible-sounding but fake citations.

✅ DO revise and readdress the prompts you use as your research evolves. AI tools lack the ability to reflect and provide insights over time. They will never wake up and call you saying, “you know what, I slept on it and I think we need to change this and that”, like a fellow colleague would. So, this responsibility lies with you.

Why? Because you need to remain ultimately responsible for scientific output. 

Your role as a researcher remains paramount. You are accountable for the integrity of all content, even when generated with the assistance of AI. Maintain a critical perspective on AI outputs, recognising their inherent limitations, including the potential for bias, inaccuracies, and “hallucinations”. Remember, AI systems are tools, not authors. Authorship signifies agency and responsibility, which rest solely with human researchers.

✅ DO provide a detailed explanation of AI use in the methodology section of your work. Include specifics on how and for what AI was used, the prompts employed, any data provided to the AI, the AI tool’s name, and how you validated the results. Clearly state which sections of your work contain AI-generated text.

✅ DO cite the work of others that may be included in the answers generated by AI.

✅ DO define your role in authorship precisely. Refer to resources like the Elsevier CRediT author statement to accurately represent your contributions, using roles from conceptualization to visualization.

✅ DO check if visuals produced with AI are unbiased in terms of gender and intersectionality.

Why? Because in ethical and excellent research, generative AI should be used transparently.

Openness is key. Clearly detail which generative AI tools have significantly contributed to your research process. When AI meaningfully shapes your results, explicitly mention its use in your methodology section (or equivalent), providing a responsible evaluation of its contribution. This documentation should include the tool’s name, version, date of use, and a description of how it was employed and influenced the research. Where relevant and in line with open science principles, consider making your input prompts and the AI-generated output available.
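As a minimal illustration of this documentation practice, the elements listed above could be kept as a structured record alongside the deliverable. This is only a sketch, not an official European Commission template; the tool name, field names, and values are all hypothetical.

```python
# Hypothetical sketch: a structured record of generative AI use in a
# deliverable. Field names and values are illustrative only, not an
# official European Commission template.
ai_use_record = {
    "tool": "ExampleLLM",         # name of the generative AI tool (hypothetical)
    "version": "2025-03",         # tool or model version
    "date_of_use": "2025-05-12",  # when the tool was used
    "purpose": "Summarising background literature on topic X",
    "prompts": ["Summarise the key findings of the attached abstracts."],
    "data_provided": "Public abstracts only; no personal or confidential data",
    "validation": "All AI summaries checked against the original papers",
    "ai_generated_sections": ["Section 3.2"],
}

# Assemble a one-line disclosure for the methodology section.
disclosure = (
    f"Generative AI tool {ai_use_record['tool']} "
    f"(version {ai_use_record['version']}, used on {ai_use_record['date_of_use']}) "
    f"assisted with: {ai_use_record['purpose']}."
)
print(disclosure)
```

Keeping such a record as the work progresses makes it straightforward to produce the transparent methodology statement the guidelines ask for, and to share prompts and outputs where open science principles apply.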

Furthermore, acknowledge the stochastic nature of these tools – their tendency to produce varied outputs from the same input. Strive for reproducibility and robustness in your findings, and openly discuss any limitations arising from the AI tools used, including potential biases and mitigation strategies.

At the same time, be mindful of the legal landscape. The output generated by AI can be particularly sensitive concerning the protection of intellectual property rights and personal data. Pay close attention to the potential for plagiarism in all forms (text, code, images, etc.) when utilising AI-generated content. Always respect the authorship of others and provide proper citations where necessary. Remember that AI-generated output, even from large language models, may be based on existing work that requires appropriate recognition. Finally, be vigilant for the presence of personal data within AI-generated outputs. If such data emerges, you are responsible for reporting it appropriately, adhering strictly to EU data protection rules.

✅ DO continuously learn how to use generative AI tools properly. Commit to ongoing learning to maximise the benefits of these tools, while being cautious of their negative side effects.

Why? Because the field of generative AI is rapidly evolving, with new applications and best practices emerging constantly.

Actively seek out training opportunities and stay informed about the latest advancements. Share your knowledge and best practices with colleagues and other stakeholders to foster a culture of responsible AI usage within the research community. Additionally, strive to minimise the environmental impact of your AI usage by thoughtfully evaluating which tool is most suitable for a given task and by employing the most efficient prompting techniques.

❌ DO NOT use generative AI to create content in areas where you lack expertise. This ensures you can critically evaluate and verify the AI’s output.

❌ DO NOT forget to cite AI-generated text when it is directly used. Include the date, the AI tool used, and the specific prompt.

❌ DO NOT use generative AI tools for processing sensitive, private or confidential information or in evaluation activities, such as peer review and the evaluation of research proposals. 

❌ DO NOT upload unpublished or sensitive work, including your own and that of others, to external AI systems, unless you have explicit assurances that the data will not be reused for model training or other untraceable purposes.

Why? Because this precaution mitigates the risks of unfair treatment and of disrespecting the work and intellectual property rights of others.

Avoid significant reliance on generative AI tools in activities that could impact other researchers or organisations. A wrong assessment may arise from the inherent limitations of these tools, including hallucinations and biases. Moreover, IP protection safeguards the original, unpublished work of fellow researchers from potential unintended exposure or inclusion in AI model training. Any input you provide (text, data, prompts, images, etc.) could potentially be used for other purposes, such as training AI models. Similarly, refrain from providing third parties’ personal data to external generative AI systems without obtaining explicit consent from the individuals and ensuring a clear, lawful purpose for the data’s use, in full compliance with EU data protection regulations.

Understand the technical, ethical, and security implications related to privacy, confidentiality, and intellectual property. Investigate institutional guidelines, the privacy options offered by the tools, the identity of the managing entity (public or private institutions, companies), the tool’s operational environment, and the implications for any uploaded information. This assessment should consider the spectrum from closed environments to open internet-accessible platforms with varying privacy guarantees.

By embracing these recommendations, researchers can navigate the exciting potential of generative AI while upholding the highest standards of scientific integrity, ethical conduct, and respect for the work and rights of others. This applies not only when producing scientific outputs but also when drafting proposals for additional research funding. In such cases, research teams can benefit from partnering with organisations like Future Needs, which offer proposal writing services and strategic guidance on using AI tools during the grant application process to maximise success.

So, is this an article about how not to use AI in science?

No. The guideline here is not to avoid using AI. Used correctly, AI acts as a catalyst for scientific breakthroughs and a key instrument in the scientific process, pushing scientific frontiers and producing outcomes beyond the reach of conventional tools. This article is about being aware of the limitations of AI, as with every tool, because those limitations can significantly affect the reliability and interpretation of research outputs. For example:

  • Training data bias might manifest as a language model consistently using gendered pronouns when describing professions traditionally associated with one sex, simply because its training data disproportionately featured such associations.
  • Prompt bias, on the other hand, could lead a model to generate overwhelmingly positive reviews for an innovation if the prompt subtly implies the user’s satisfaction.
  • Furthermore, the issue of invented citations poses a serious challenge to academic integrity, where a model might fabricate seemingly legitimate sources to support its claims, potentially leading researchers down false paths.
  • Finally, the inherent lack of interpretability in these “black box” systems means that even when a model produces a seemingly correct answer, the reasoning behind it remains opaque, highlighting the necessity of rigorous cross-validation, particularly when AI is employed for automated data analysis where its outputs can directly shape research conclusions.

Researchers must remain vigilant and critically evaluate AI-generated content in light of these potential pitfalls.

Note: In drafting this article, we used a large language model (Gemini) to assist with summarising information and producing APA-style references. The authors retained full responsibility for the content, interpretation, and accuracy of the final work.

Want to stay ahead on how AI is shaping EU-funded research?

Follow us on LinkedIn to stay informed about the latest Horizon Europe and Erasmus+ developments and subscribe to our newsletter for expert highlights from the EU research and innovation landscape, delivered straight to your inbox. 

References:

  • European Commission. (2025). Living guidelines on the responsible use of generative AI in research (2nd version). European Commission, Directorate-General for Research and Innovation.
  • Elsevier. (n.d.). CRediT author statement.
  • Chiang, T. (2023). ChatGPT is a blurry JPEG of the web. The New Yorker.
  • Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a New Academic Reality: Artificial Intelligence-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581.

About the authors

Anna Palaiologk

Anna Palaiologk, the founder of Future Needs, is a Research & Innovation Consultant with 18 years of experience in proposal writing and project management. She has worked as a project Coordinator and Work Package leader in 30+ EU projects and has authored 50+ successful proposals. Her research background is in economics, business development and policy-making. Email Anna at anna@futureneeds.eu.

Chariton Palaiologk

Chariton Palaiologk, the Head of the EU Project Management Team, is currently leading the project management of 10+ EU-funded projects. He has a background in data analysis and resource optimisation, having worked at the Greek Foundation for Research and Technology. Email Chariton at chariton@futureneeds.eu.

Thanos Arvanitidis

Thanos Arvanitidis is a Researcher & Innovation Project Manager, with a background in physics and biomedical engineering. He manages EU-funded research projects from initial conception through to implementation, working across key Horizon Europe clusters, including Cluster 1: Health; Cluster 4: Digital, Industry & Space. His expertise spans AI, healthcare, cybersecurity, and digital education. Email Thanos at thanos@futureneeds.eu.

 

©2020 FutureNeeds. All rights reserved | Terms of Service | Privacy Policy