It can be very hard to tell whether something has been generated. However, some studies (Májovský et al. 2023) suggest that subject matter experts may be better able to distinguish human-written text from generated text within their area of expertise.
Other studies, however (Elali and Rashid 2023), question this claim. Frameworks such as UnMasked or SIFT can help you decide whether to reuse information.
Generative AI detection software, such as ZeroGPT, can estimate the likelihood that text was generated, though results carry a margin of error. It is also possible to run reverse image searches using tools like Google Lens or TinEye. Debate is ongoing about potential bias in these detection tools. Inputting copyrighted material into them is not recommended.
References:
Májovský, Martin et al. (2023) 'Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened', Journal of Medical Internet Research, 25, e46924. Available at: https://www.jmir.org/2023/1/e46924/ (Accessed: 18 August 2025)
Elali, Faisal R. and Rashid, Leena N. (2023) 'AI-generated research paper fabrication and plagiarism in the scientific community', Patterns, 4(3). Available at: https://www.sciencedirect.com/science/article/pii/S2666389923000430 (Accessed: 18 August 2025)
See also:
Reflective writing and AI detection, Hannah Wood, NHS England (Log in with your OpenAthens account)