AI Policies in Grants, Research, and Publishing
Properly Use Generative AI in Research and Reporting
Generative AI policies
Generative artificial intelligence can be a useful tool for communicating research, inquiry, and other academic work. It can increase efficiency in research, enhance productivity, and help with idea generation. Increasingly, though, federal agencies, academic journals, and other institutions are creating policies to govern and guide the use of generative tools. Below is a brief list of those policies.
ACS Publications AI Best Practices and Policies
Elsevier AI Policies for journals
Emerald Publishing’s Stance on AI Tools and Authorship
Evaluating AI Guidelines in Leading Family Medicine Journals: A Cross-Sectional Study
Frontiers - Artificial intelligence: fair use and disclosure policy
Montana State University's Student Code of Conduct
Nature Statement on Authorship and AI
NIH guidance on Supporting Fairness and Originality in NIH Research Applications
NSF - Artificial Intelligence Policy and Strategy
Oxford Academic - Author Use of Artificial Intelligence
PLOS - Ethical Publishing Practice - AI Tools and Technologies
Sage - Using AI in Peer Review and Publishing
Wiley - Using AI tools in your research
Policies for use of AI tools with resources
Some libraries, institutions, and resource providers also publish AI policies governing how their materials may be used:
“Statement on commercial generative AI.” The National Library of the Netherlands, 2026. https://www.kb.nl/en/ai-statement.
“Research Libraries Guiding Principles for Artificial Intelligence.” Association of Research Libraries, April 2024. https://doi.org/10.29242/principles.ai2024.
“AI in Society.” Center for Humane Technology, January 2026. https://www.humanetech.com/ai-society.
If you would like to add to or amend any of these resources, please contact leila.sterman@montana.edu.
