Generative AI Policy

1. General Provisions

The Editorial Board reaffirms its commitment to the principles of academic integrity, publication ethics, and transparency.
For the purposes of this policy, artificial intelligence (AI) refers to digital systems capable of generating text, images, code or other scholarly content, performing analytical data processing, automatically reviewing manuscripts, or producing conclusions that would normally require human participation.


2. Policy on the Use of AI
2.1. Authors

Authors may use generative AI tools (e.g., ChatGPT, Copilot, Claude, Gemini) only as auxiliary instruments for language editing, stylistic refinement, or preparation of technical descriptions.

The use of AI to generate scientific statements, results, hypotheses, or citations without author verification and critical oversight is prohibited.

The fact that AI has been used must be clearly disclosed in the manuscript – for example, in the Acknowledgements or Methods section.
Example statement:

The authors used the ChatGPT tool (version GPT-5, OpenAI) for language editing. All outputs were reviewed and manually revised by the authors.

Authors bear full responsibility for the content, accuracy of data, and compliance with ethical standards, regardless of the use of AI.

AI systems must not be listed as co-authors of any publication.

2.2. Reviewers

The use of AI in the preparation of peer reviews is permitted only with prior approval from the Editorial Board.
If AI tools are used, the reviewer must explicitly state this in their comments or accompanying note.

To preserve the confidentiality of author materials, uploading manuscript texts into AI systems without the consent of the Editorial Board is strictly prohibited.

2.3. Editorial Board

The Editorial Board may use AI tools solely for auxiliary purposes, such as:

  • detection of plagiarism or textual overlap;
  • linguistic analysis or language verification;
  • bibliographic or citation data analysis.

Automated systems do not make decisions on the acceptance or rejection of manuscripts – this remains the exclusive responsibility of editors and reviewers.


3. Ethical Principles

The use of AI must not violate academic integrity, copyright, confidentiality, or principles of impartiality.
AI tools must not be used for:

  • generating fabricated data, references, or quotations;
  • manipulating peer-review results or authorship;
  • making automatic editorial decisions.

Any instance of concealed or improper use of AI may be treated as a violation of publication ethics.


4. Policy Updates

This policy shall be periodically reviewed in accordance with international standards (COPE, Elsevier, Springer Nature) and the current regulations of the Ministry of Education and Science of Ukraine.