Efficient yet accurate life sciences writing: can we AIm for the right balance?


With advancements in artificial intelligence (AI) streamlining everything from dentistry to traffic management, people in all professions are feeling both excited and uneasy about this increasingly prevalent technology.

Becoming intelligent about artificial intelligence

To evaluate AI and the opportunities (or challenges) it can provide, we first need to understand the concept behind it. We asked Tom Lenaerts, professor and co-head of the Machine Learning Group at the Université Libre de Bruxelles, to explain AI and tools like ChatGPT:

“For me, AI is a field of research within computer science to study and build systems that solve theoretical and practical problems for which normally human intelligence is required.

Tools like ChatGPT are systems that are composed of two parts: the large language model (LLM) and the chatbot. The first part is a system that uses a type of neural network called transformers and machine learning to extract associations between words and their contexts so that it can predict the next word/token in a sequence. LLMs have many transformer layers and are trained on so much information that they statistically encode parts of text, as well as the relationships between those parts, which makes them so powerful.”

“The second part, i.e., the chatbot, is a system that is trained by reinforcement learning (RL) with human feedback, whose goal is to provide answers in a convincing and acceptable format. The LLM produces a series of alternative responses, and then a human decides which answer is the most acceptable. By feeding this information back to the RL model, it learns how to structure the answers to be the most appreciated.”

This visual explainer from the Guardian sums it up quite well.
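To make the “predict the next word/token” part of that explanation concrete, here is a minimal sketch. It uses the open-source Hugging Face transformers library and the small, publicly available GPT-2 model purely for illustration (it is not the model behind ChatGPT), and simply asks the model which tokens it considers most likely to come next:

```python
# A minimal sketch of next-token prediction with an open-source model (GPT-2).
# Illustration only; not the system behind ChatGPT or Gemini.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The mitochondria is the powerhouse of the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Convert the scores at the last position into probabilities for the next token
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {p.item():.2%}")
```

Running it prints the model’s top candidate words with their probabilities; a chatbot built on an LLM is essentially repeating this step over and over, one token at a time.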

AI as a democratizer

While researching this article, one thing became apparent in online conversations: AI is increasingly being seen as a “democratizer” for science writing.

While science is performed worldwide, much of the attention is directed toward English-language publications. This puts brilliant scientists who are not anglophones at a disadvantage. AI could help refine their writing by removing the quirks that may arise when writing in a second or third language.

AI is also “democratizing” access to high-level expertise, as Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, explained in an article. But is this really the best source of information?

Can AI solve it all?

So, do we still need real people to create our scientific articles, press releases, or social media posts?

Yes.

To see how we reached this conclusion, let’s take a look at some examples where humans will likely remain crucial to the science writing process (vested interests as a science communicator aside!).

1. Writing a research paper

Writing research papers can be the most burdensome and time-consuming part of a researcher’s career. The pressure to publish and intense competition make this a difficult area to navigate for many scientists. AI can dramatically shorten the journey from idea to manuscript, with LLMs able to draft and condense complex research topics into publishable papers in a fraction of the time it would take a person.

However, when these tools are involved in the actual research, the authors are responsible for verifying that the AI correctly interpreted their prompts and data. Also, some AI tools specifically created for scientific texts have not yet been successfully launched, and when dealing with proprietary AI tools, such as those from OpenAI, the training datasets used are often not accessible to the public.

“It is problematic that some of the big organizations are trying to block open access to keep control and dictate regulations. What remains important is that these systems should be open access so that, as long as we don’t fully understand them, we can study and analyze them. We should not fall into the same trap as with the recommendation systems social media uses,” warns Lenaerts.

Another major problem is so-called ‘hallucinations,’ where LLMs create false or misleading information. Inaccurate AI-generated output in scientific papers can lead to false conclusions that hinder future research, wasting precious funding and time. As free AI tools don’t include accurate referencing as standard, they should be used with caution, and there is no substitute for human verification of AI-generated texts. Lenaerts explains: “As the training goal of the RL system is to appear reasonable and convincing, it ended up being an incredible bluffing engine, producing errors and hallucinations with strong conviction.”

Disclaimers at the bottom of Gemini (Google) and ChatGPT (OpenAI)

Universities are now including AI sections in their writing guidelines. These guidelines clarify that an essay written entirely by AI is plagiarism. Relying solely on these tools could also potentially negate the critical thinking and investigation skills crucial to the scientific method. While AI chatbots are useful brainstorming companions, they should be used with care when writing scientific research papers.

2. Creating a scientific diagram

AI hype also surrounds image generation. While AI excels at creating beautiful stock images or surreal artwork, it is also increasingly used for scientific images.

AI can generate attractive, easy-to-interpret diagrams and infographics that illustrate complex biological processes. However, these diagrams need to be heavily scrutinized by a human to ensure their accuracy, in a process similar to the validation of an AI-generated scientific manuscript.

While researchers might be tempted to save time and completely trust the AI image, some recent peer-reviewed (and somehow accepted) manuscripts feature anatomically dubious diagrams with disproportionate organs, littered with labels like ‘Testtomcels’ (content warning for the images!).

A section of an AI-generated image published in the journal Frontiers. Note the helpful labels such as “Rat”, “testtomcels,” and “stem ells”. Frontiers in Cell and Developmental Biology.

Lenaerts reminds us that these systems are not as ‘smart’ as we think they are: “It is problematic that many people are assigning properties like general intelligence and reasoning to these systems… properties they don’t have. They may encode a lot of knowledge, but this is based on syntax and semantics and not on understanding and meaning.

“One needs to remember that this is a very complex probabilistic model to produce text, not wisdom.”

3. Writing a white paper

While AI holds immense potential in various domains, including science communication, relying solely on it to write entire whitepapers to demonstrate thought leadership may not be the wisest approach. Whitepapers are meant to reflect the unique insights, expertise, and vision of a company or individual within their industry. While AI can assist in supplementing material and streamlining processes, it lacks the nuanced understanding and creativity that human writers bring to the table. Whitepapers are not just about conveying information but also about establishing credibility and trust with the audience. By solely depending on AI-generated content, there’s a risk of diluting the authenticity and integrity of the brand’s message. Thought leadership is about presenting original perspectives and innovative ideas, which require human intellect and experience to articulate effectively. While AI can aid in the writing process, it should be used as a tool to support human expertise rather than replace it entirely.

Make sure your AI-generated whitepaper reflects reality rather than fantasy! Images from Stuart Sinclair showing an immersive experience that demonstrated the perils of overpromising with AI (poster on the left, real experience on the right).

4. Creating social media content

Here is where LLM-driven AI tools can really save you time. Repeatedly writing similar social media content can feel like a chore: say your company attends six events in three months, which means many of the posts you create will follow a similar structure. AI can help with that. When combined with a good communication strategy, AI can alleviate the pressure of producing social media content!

If you are not careful, the content produced with AI tools can be tonally distant from what you usually write and can reuse the same phrases over and over again. Without proper prompting, the models tend to swing between extremes, from very formal to inappropriately informal. Lenaerts agrees: “Based on my own experience, you can often see when a text has been spiced up with such systems when one is more closely related to the author. The word style changes. But as far as I know, there are no automatic systems with good enough performance to separate an AI-generated text from a human text. This is why people have argued for watermarking.”

The results from the same prompt plugged into ChatGPT and Gemini.

I asked two LLMs to provide me with a social media post based on the article text, which I pasted in. Both were provided with two previous posts from BioVox (written by our authors) and asked to copy their tone of voice and format. Both have their faults and merits: ChatGPT was much closer to BioVox in terms of length and tone, while Gemini made a longer, more commercial post. ChatGPT also took a more uniformly positive tone, while Gemini included more of the article’s nuances.
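For this comparison I simply pasted everything into the two chat interfaces, but the same few-shot idea can be scripted. Below is a rough sketch using the OpenAI Python client; the model name and file names are assumptions for illustration only, and Gemini would need Google’s own API:

```python
# A rough sketch of the "copy our tone of voice" experiment done via the
# OpenAI Python client instead of the chat window (pip install openai).
# The model name and file names are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_text = open("article.txt").read()        # the article to promote
example_posts = [open("post_1.txt").read(),      # two previous BioVox posts,
                 open("post_2.txt").read()]      # used as tone-of-voice examples

prompt = (
    "Write a social media post announcing the article below. "
    "Copy the tone of voice and format of the two example posts.\n\n"
    f"Example post 1:\n{example_posts[0]}\n\n"
    f"Example post 2:\n{example_posts[1]}\n\n"
    f"Article:\n{article_text}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # always review before posting!
```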

A balancing act

In summary, AI can be incredibly useful and can save a lot of thinking time. However, a good science communicator can do the same with lower chances of nonsense images, tonally inappropriate text, and questionable source material. “These systems have been given to the public way too early, with all their problems and without reliability and security guarantees. The same is happening with all types of generative AI, not just LLMs, and this is very problematic. The issue is that we have little understanding of how these things really work. Much more effort should be put into research and less into firing up the hype,” says Lenaerts. Since we still do not understand how these tools work, humans need to supervise whatever they create. No matter the potential of AI, we still can’t avoid the project management triad!
The project management trilemma (image: Wikimedia Commons).


Overall, the common thread in all of these cases, whether AI is used to supplement, save time, refine, or help conceptualize, is that collaboration is key. Maybe in the future, AI will replace us all, but for now, it still needs a human copilot.

 

P.S. A section of this article was written by AI… can you guess which one?


Further reading:

Nature’s series of articles on AI in science: https://www.nature.com/immersive/d41586-023-03017-2/index.html

Schäfer, M. S. (2023). The Notorious GPT: Science communication in the age of artificial intelligence. JCOM, 22(02), Y02. https://doi.org/10.22323/2.22020402

Ouyang, L., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744. https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf