Policy on the use of AI
As Artificial Intelligence (AI) technologies become an integral part of modern research, editorial processes, and communication, the editorial board supports the responsible, ethical, and transparent use of AI that enhances the quality of scholarly work without replacing human intelligence or academic responsibility.
Areas of possible application of AI:
- linguistic improvement of the text and elimination of technical errors;
- preparation of short literature reviews;
- identifying structural flaws in the manuscript;
- automated translation that preserves the scientific content.
However, AI cannot create scientific content, formulate hypotheses, generalize results, or perform analytical interpretation of data.
Authors should not submit manuscripts in which generative AI technologies are used in a way that replaces the primary responsibilities of the researcher and author. Such cases may be subject to editorial investigation.
Authors are responsible for the originality, validity, and integrity of the content of submitted materials. When choosing to use AI technologies, journal authors must do so in accordance with our editorial policies on authorship and principles of publishing ethics. During the initial plagiarism check, the editorial staff uses specialized automated tools to screen the text of the article for the use of generative AI technologies.
Authors must clearly disclose any use of generative AI technologies in the article, including the full name of the tool used (with version number), how it was used, and for what purpose. Conventional tools such as grammar, spelling, or reference checkers do not require disclosure.
Requirements for authors regarding the use of AI technologies
Transparency: Authors are required to indicate if any AI system was used in the preparation of the article (e.g. ChatGPT, Copilot, DeepL Write, Grammarly, etc.).
Human authorship: AI cannot be a co-author or listed as an author.
Responsibility: Authors are fully responsible for the accuracy, originality, and ethics of submitted materials, regardless of the use of AI.
Restrictions: It is prohibited to use AI to fabricate data, falsify results, or generate references that do not exist.
Using AI technologies in editorial work:
- preliminary screening of texts for plagiarism, automated distortions, or dishonest manipulation;
- language editing;
- assistance in preparing metadata and abstracts.
The editorial team adheres to the following ethical principles:
- Transparency: any use of AI must be declared;
- Accountability: only people are responsible for the content;
- Integrity: results must reflect genuine research activity and comply with international standards: COPE (https://publicationethics.org/cope-focus/cope-focus-artificial-intelligence), WAME (https://wame.org/news-details.php?nid=40), and the Policy on the use of artificial intelligence in the educational process and scientific activities of the West Ukrainian National University (https://www.wunu.edu.ua/past-and-present/academic-integrity/16284-akademchna-dobrochesnst.html).
AI is not used to make decisions about whether to accept or reject manuscripts. All editorial decisions are made by humans.
Use of AI technologies by reviewers
Generative AI or AI-powered technologies must not be used by reviewers to assist in the evaluation of, or decision-making about, a manuscript. Reviewers may use AI technologies only for language editing of their own texts. To guarantee the confidentiality of the review, transferring the content of manuscripts to external AI systems is strictly forbidden.