Lived Places Publishing AI Policy


Introduction

Generative artificial intelligence (AI) tools, such as large language models (LLMs) or multimodal models, continue to develop and evolve in their applications for businesses and consumers alike.

Lived Places Publishing welcomes the new opportunities offered by generative AI tools, particularly in enhancing idea generation and exploration, supporting authors writing in a non-native language, and improving the readability of our course reading materials.

Lived Places Publishing offers the following guidance to our authors and collection editors on the use of such tools; this guidance may evolve given the swift development of the AI field.

Generative AI tools can produce diverse forms of content, spanning text, images, audio, and synthetic data. Examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway.

While generative AI can greatly enhance authors' creativity, the current generation of generative AI tools carries risks:

1. Inaccuracy and bias: Generative AI tools are statistical rather than factual in nature and, as such, can introduce inaccuracies, falsities (so-called “hallucinations”), or bias, which can be difficult to detect, verify, and correct.

2. Lack of attribution: Generative AI output often does not follow the global scholarly community’s standard practice of correctly and precisely attributing ideas, quotes, or citations.

3. Confidentiality and intellectual property (IP) risks: At present, generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.

4. Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g., for AI training). This practice could infringe on the rights of authors and publishers, among others.

Authors

Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use generative AI tools, authors are expected to do so responsibly and in accordance with our author agreement’s guidance on original content authorship; this includes reviewing the outputs of any generative AI tools and confirming the accuracy of the content. Anyone may request a copy of our author agreement for review.

Lived Places Publishing supports the responsible use of generative AI tools that respect high standards of data security, confidentiality, and copyright protection in cases such as:

  • Idea generation and exploration
  • Language and readability improvement
  • Assessment of readability
  • Enhancement of pedagogical book elements, such as learning objectives and recommended assignments

Authors are responsible for ensuring that the content of their submissions meets the required standards of academic research and writing and is their own original work.

Authors must clearly acknowledge within the book any use of generative AI tools in a statement that includes the full name of the tool used (with version number), how it was used, and the reason for use. Book authors must disclose their intent to employ generative AI tools to the publisher and the collection editor for approval at the earliest possible stage: at the proposal phase if known, or, if necessary, during the manuscript writing phase. If approved, the book author must then include the statement in the preface or introduction of the book. Lived Places Publishing retains discretion over publication of the work to ensure that its integrity and these guidelines have been upheld.

If an author intends to use an AI tool, they should ensure that the tool is appropriate for their proposed use and that its terms of use provide sufficient safeguards and protections (e.g., around intellectual property rights, confidentiality, and security).

Authors should not submit manuscripts in which generative AI tools have been used in ways that replace original content creation by the human mind or that manipulate the description of lived experience that is the essence of a Lived Places Publishing book.

Editors and Peer Reviewers

Lived Places Publishing strives for the highest standards of editorial integrity and transparency. Collection editors’ and peer reviewers’ use of manuscripts in generative AI systems may pose a risk to confidentiality, proprietary rights, and data, including personally identifiable information. Therefore, collection editors and peer reviewers must not upload files, images, or information from unpublished manuscripts into generative AI tools.

Collection Editors

Collection editors are the shepherds of quality and responsible content and must keep submission and peer review details confidential. Use of manuscripts in generative AI systems may give rise to risks around confidentiality and infringement of proprietary rights and data, among others. Therefore, collection editors must not upload unpublished manuscripts, including any associated files, images, or information, into generative AI tools. Collection editors may, however, suggest the use of generative AI tools to authors.

Peer Reviewers

Peer reviewers are chosen as experts in their fields and must not use generative AI to analyze or summarize submitted manuscripts, or portions thereof, when preparing their reviews. Accordingly, peer reviewers must not upload unpublished manuscripts or project proposals into generative AI tools.