Thursday, May 7, 2026

Peer review in the age of artificial intelligence


Used thoughtfully and transparently, generative AI can assist, but should not replace, human judgement, expertise, and critical thinking in peer review.

With this editorial, we want to draw the attention of peer reviewers to the responsible use of generative artificial intelligence (AI) tools when reviewing a manuscript. The general guidelines, shared across the Nature Portfolio journals, are described on our website at https://www.nature.com/nnano/editorial-policies/ai.



Credit: Panther Media International / Alamy Stock Photo

When using AI tools to produce any kind of content, the most important thing to keep in mind is that the user is always accountable. This guiding principle has several consequences1,2.

The first consequence is that every output generated by an AI tool requires human validation. That is, it should not be assumed that the output of an AI tool is factually accurate (even if the prompt contains the instruction 'do not hallucinate'). The output and related sources must always be double-checked by an accountable person. In the case of peer review, we ask reviewers "to declare the use of such tools transparently in the peer review report". As users become more familiar with AI tools, they will exercise varying degrees of scepticism when reading a tool's outputs and improve accuracy by using more precise prompts. For example, once validation is taken into account, reviewers may find that using an AI tool is more time-consuming than not using it, or that it is useful only for certain aspects of the manuscript review.

The second consequence concerns the legal implications of uploading a manuscript to an AI tool. Authors trust us to share manuscripts with reviewers in strict confidence. Uploading a manuscript into an AI tool could breach this confidentiality. Certain AI tools are closed, meaning that they do not share uploaded content with the World Wide Web or use it for training. However, depending on the settings or end-user agreements of the specific AI tool, uploaded content can still be discoverable by other users within the closed environment (for example, colleagues within an institution). To avoid any legal consequences, we ask reviewers to "not upload manuscripts into generative AI tools".

Using AI tools to improve the grammar or readability of human-generated text does not need to be declared (though it still requires human validation)3. The main risk we see at this point in using AI for peer-reviewing a manuscript is over-reliance on a tool that is still largely seen as a black box and can produce inaccurate results.

Like everyone else, we editors are still learning how best to use AI tools, for example, to summarize the main points of a manuscript, extract the key performance metrics, or identify suitable reviewers. As with any new technology, it is important to become educated about it. We invest in training and awareness to ensure the ethical use of the tools that the publisher provides us with. What are the advantages? What are the legal implications of misuse? What is the best way to extract the desired result? What are the limitations of the tools? What dataset was used to train them? With which specific tasks can they help us be more productive (or faster)? When is using AI tools a waste of time? How much energy or CO2 equivalent does a prompt consume?

As reviewers likewise learn to craft effective prompts, validate results, and preserve confidentiality, we believe that AI tools will eventually support the peer review process. Reviewers and editors will be able to exercise their judgement in light of the vast amount of information in the literature that AI tools can retrieve effectively. If used inattentively, though, we risk delegating critical thinking to an algorithm, giving us a false sense of achievement. This undermines the role of our academic training, critical thinking, and expertise, as well as the institution of peer review itself4.

Looking ahead, as generative AI becomes more capable and more deeply embedded in scholarly workflows, our shared priority must be to ensure that efficiency gains never come at the expense of rigour, confidentiality, or accountability5. As the sensibility of scientific communities around AI evolves, Nature Portfolio's guidelines are bound to adapt accordingly. We will continue to refine our guidance in line with experience, emerging standards, and community expectations by providing clearer guidelines for peer reviewers. AI tools that are demonstrably secure and fit for purpose, when used transparently and critically, can help reviewers navigate an ever-expanding literature and focus their expertise where it matters most; used uncritically, they risk eroding the very judgement peer review exists to apply. Our aim, therefore, is not to accelerate peer review by outsourcing thought, but to strengthen it by enabling informed human decisions, grounded in evidence, integrity, and trust.

