AI Guidelines for Authors and Reviewers

Migration Politics recognizes the transformative potential of AI-powered writing assistants and tools. We believe that AI-assisted writing will become ever more common, and AI is also increasingly used in software for analyzing research data. While these tools can offer enhanced efficiency, it is important to understand their limitations and to use them in ways that adhere to the principles of academic and scientific integrity. As a journal, Migration Politics supports and believes in the value of human creativity and human authorship. Large Language Models (LLMs) cannot be listed as authors of a work, nor can they take responsibility for the text they generate. As such, human oversight, intervention, and accountability are essential to ensure the accuracy and integrity of the content we publish.

We acknowledge that many academics and scholars are already using assistive and generative tools to enhance their productivity and assist in their academic writing. We have developed these guidelines to support authors submitting articles for Migration Politics.

The distinction between Assistive AI tools and Generative AI tools 

For the purposes of these guidelines, we distinguish between Assistive AI tools and Generative AI tools as follows: 

Assistive AI tools

Assistive AI tools make suggestions, corrections, and improvements to content you have authored yourself. Content that you have crafted on your own but refined or improved with the help of an Assistive AI tool is considered “AI-assisted”.

Generative AI tools

This term refers to the use of AI tools to produce content, whether in the form of text or images. If an AI tool was the primary creator of the content, the content is considered “AI-generated”, even if you have made significant changes to it afterwards.

Disclosure 

You are not required to disclose the use of assistive AI tools in your submission. All content, including AI-assisted content, must undergo rigorous human review prior to submission to ensure that it aligns with our standards for quality and authenticity. Authors hold final responsibility for the content.

We permit the use of generative AI in the following exceptional cases: creating code, transcriptions, and translations. In these cases, generative AI may be used but its use must be disclosed. Authors must retain full control, understanding, and responsibility for all code, transcriptions, and translations; unsupervised AI-generated code is not permitted. You must clearly identify such AI-generated content within your submission: detail where and how the AI-generated content was used and provide this disclosure along with your submission. Submissions that rely on AI-generated code in a manner that replaces substantive human contribution or judgment may be deemed non-compliant with the journal’s standards. Where we identify published articles or content with undisclosed use of generative AI tools for content generation, we will take appropriate corrective action.

Prohibited use

  • We do not accept the use of generative AI to create manuscript text, images, or interpretations of results. We consider such practices contrary to our principle of slow science.
  • Never use generative AI to artificially create or modify core research data.
  • Never share any sensitive personal or proprietary information on an AI platform, as this may expose sensitive information or intellectual property to others. Information that you share with AI tools such as ChatGPT may be collected and used by the tool’s provider for business purposes.
  • Editors and Reviewers must uphold the confidentiality of the peer review process. Editors must not share information about submitted manuscripts or peer review reports in generative AI tools. Reviewers must not use AI tools to generate review reports.