Using Artificial Intelligence to delve into plant cell secrets
Scientists from the Hu lab and Microsoft Corp. have developed a deep learning framework to analyze microscope images of organelles in plant cells. The system, called DeepLearnMOR (Deep Learning of the Morphology of Organelles), can identify and classify organelles – the ‘organs’ of plant cells – with over 97% accuracy. The study is published in the journal Plant Physiology.
The intersection of plant biology with Artificial Intelligence (AI) is a burgeoning field that has the potential to significantly increase the scope, speed, and accuracy of screening tools. Deep learning, specifically, is a type of AI that powers innovations in many industries and advances in scientific research.
The new DeepLearnMOR framework can identify organelles and classify hundreds of microscopy images in seconds. In contrast, one scientist would need over an hour to manually review the same number of images.
“The deep learning algorithm employed by DeepLearnMOR has superior pattern recognition capabilities and requires limited human input,” says Dr. Jiying Li, a former Genetics and Genome Sciences Program PhD student in the lab of Dr. Jianping Hu at the MSU-DOE Plant Research Laboratory, who is currently a Software Engineer at Microsoft Corp.
“First, we need a dataset of properly labeled images from cell biologists in order to teach DeepLearnMOR to classify these images. Then, we train it, and the system’s ability to recognize abnormal cell parts increases with each training round until its performance is comparable to that of experts.”
Jiying drew inspiration for this project during his time with the Hu lab.
“I used to screen a lot of mutant plants,” says Jiying. “That work involved hours of preparing slides and identifying abnormal organelles through a microscope. After I joined Microsoft, I read a publication on deep learning for classifying images of eye diseases. I wondered if we could use a similar approach to study plant cell images.”
Constructing the deep learning system
Dr. Jianping Hu, Jiying’s former mentor, jumped on his suggestion to try deep learning to classify microscopy images.
“At first, my lab did not have enough images ready for use,” Jianping says. “So, two members of my lab spent months taking a total of over 1800 microscopy images of the organelles we targeted for deep learning analysis.”
Because no existing deep learning framework was designed to analyze plant cell organelles, Jiying tried two approaches to create DeepLearnMOR.
The first, called transfer learning, is useful when there isn’t enough data to train a deep learning system from scratch.
“We can use a model that has been pre-trained on another big data set. In our case, we took advantage of a system already trained on the ImageNet dataset, which is a database of millions of real-world images, such as animals, plants, and vehicles,” says Jiying. “Then, we repurposed this model to recognize plant cell images. It’s like taking a child with a large vocabulary and teaching them a set of new words. With time, the kid will understand the new words.”
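The idea of reusing a pre-trained model and training only a new classifier on top of it can be sketched in a few lines. The snippet below is a minimal numpy illustration, not the team's actual code: the frozen random projection stands in for layers pre-trained on ImageNet, and only the new "head" is trained on the (synthetic) plant-cell task.

```python
import numpy as np

# Toy transfer-learning sketch: the "pretrained" feature extractor is
# frozen, and only a new classification head is trained on the new task.
rng = np.random.default_rng(0)

# Frozen feature extractor (stands in for layers pretrained on ImageNet).
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    """Map raw 64-pixel 'images' to 16 reusable features (frozen weights)."""
    return np.tanh(images @ W_frozen)

# Tiny synthetic dataset of 64-pixel "images".
n = 200
images = rng.normal(size=(n, 64))
feats = extract_features(images)
# Labels from a hidden rule that the frozen features happen to capture --
# the working premise of transfer learning.
labels = (feats @ rng.normal(size=16) > 0).astype(float)

# New head, trained from scratch on the new task.
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(300):
    probs = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = probs - labels              # gradient of cross-entropy loss
    w -= lr * feats.T @ grad / n       # update only the head's weights
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(float)
accuracy = (preds == labels).mean()
print(f"training accuracy with frozen features: {accuracy:.2f}")
```

Because the pre-trained layers stay frozen, only a handful of parameters are learned, which is why this approach works with far less data than training from scratch.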
The second approach was to train a new framework from scratch.
“This approach is like teaching a language to a baby who can’t speak it. Having this blank slate gives us more control over the framework since we designed the model ourselves,” Jiying adds.
How the AI is trained
In order to train the model, members of the Hu lab supplied it with images of chloroplasts, mitochondria, and peroxisomes, three cellular ‘factories’ that produce, consume, and manage a cell’s energy supply. The set of images for each factory included normal- and abnormal-looking specimens.
The model was then trained in iterations. During each round, the model analyzed a limited number of images. At the end of that round, it tried to classify images it had not seen before – it had to identify the right organelle and determine whether it looked normal or abnormal – and its classification accuracy was recorded.
The model then automatically backtracked through its analysis to find where it had made mistakes. After pinpointing those spots, it adjusted itself to correct the errors before starting a new training round.
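This train-evaluate-correct cycle is, under the hood, backpropagation and gradient descent. The sketch below is a made-up miniature version of it: a tiny two-layer network trained in rounds, scored on examples it has not seen, with its errors propagated backward to adjust the weights. None of the sizes or data reflect the actual study.

```python
import numpy as np

# Minimal sketch of the training cycle: each round, the model trains on a
# batch of labeled examples, then is scored on unseen examples.
rng = np.random.default_rng(1)

# Synthetic 32-pixel "images" with labels from a hidden rule.
X = rng.normal(size=(300, 32))
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(float)
X_train, y_train = X[:200], y[:200]
X_test, y_test = X[200:], y[200:]     # images the model has not seen

# Tiny two-layer network.
W1 = rng.normal(scale=0.5, size=(32, 8))
w2 = rng.normal(scale=0.5, size=8)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1)                    # hidden activations
    p = 1 / (1 + np.exp(-(h @ w2)))        # predicted probability
    return h, p

for round_ in range(1, 6):
    for _ in range(100):                   # training steps this round
        h, p = forward(X_train)
        err = p - y_train                  # where it made mistakes
        # Backtrack: propagate the error to both layers' weights.
        w2 -= lr * h.T @ err / len(err)
        dh = np.outer(err, w2) * (1 - h**2)
        W1 -= lr * X_train.T @ dh / len(err)
    _, p_test = forward(X_test)
    acc = ((p_test > 0.5) == y_test).mean()
    print(f"round {round_}: accuracy on unseen images = {acc:.2f}")
```

The accuracy on unseen images typically climbs round over round, which mirrors how DeepLearnMOR's performance improved with each training iteration until it rivaled human experts.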
How do you know what the AI is thinking?
“We refined the model until it was over 97% accurate with its predictions on our data set. Next, we wanted to understand what criteria our AI uses to identify an organelle,” Jiying says. “We needed this information to convince biologists that our AI is using sound judgment when analyzing the data.”
The most successful validation approach the team used is called feature visualization. This technique creates heat maps that highlight what parts of an image the AI focuses on during its analysis.
“If the ‘hot’ areas of that heat map overlap with what you expect a human observer would focus on, you know that the AI is ‘thinking’ along similar lines as a scientist. Our heat maps were very accurate,” Jiying says.
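One simple way to build such a heat map, shown below purely as an illustration (the study's exact visualization technique is not reproduced here), is occlusion: cover one patch of the image at a time and record how much the classifier's score drops. Patches whose occlusion hurts the score most are the regions the model relies on. The `score` function is a toy stand-in, not a real classifier.

```python
import numpy as np

# Occlusion-based heat map: mask each patch and measure the score drop.
rng = np.random.default_rng(2)

def score(image):
    """Toy classifier score: responds to brightness in the image center."""
    return image[3:5, 3:5].sum()

image = rng.random((8, 8))
image[3:5, 3:5] += 2.0           # a bright 'organelle' in the center

base = score(image)
heat = np.zeros((8, 8))
patch = 2                        # occlusion patch size
for i in range(0, 8 - patch + 1):
    for j in range(0, 8 - patch + 1):
        occluded = image.copy()
        occluded[i:i+patch, j:j+patch] = 0.0   # mask out this patch
        drop = base - score(occluded)          # score drop = importance
        heat[i:i+patch, j:j+patch] = np.maximum(
            heat[i:i+patch, j:j+patch], drop)

# The hottest cells should sit on the bright center region.
hot = np.unravel_index(heat.argmax(), heat.shape)
print("hottest heat-map cell:", hot)
```

Here the heat map correctly lights up over the bright center, the same sanity check the researchers applied: the AI's ‘hot’ regions should match what a trained biologist would look at.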
DeepLearnMOR can be expanded to identify other plant organelles, provided there are images available for further training.
“We could teach it to recognize plant material from different parts of the plant, or different cell types,” Jiying says. “We could even train it to analyze short videos, since organelles are constantly moving and changing shape.”
Jiying thinks the future of deep learning in the plant sciences is very bright. One of the current drawbacks seems to be a lack of substantial funding to create large image data sets for plants (in contrast, efforts in the human sciences are much better funded).
“Ideally, the whole plant community would collectively build a database of plant images,” Jiying muses. “I hope our work highlights the potential of big data and deep learning approaches. These techniques could reduce the time spent on repetitive research activities, and, more importantly, empower scientists to address long-standing and complex biological questions. That’s where AI holds great promise.”
Acknowledgments: The banner image is by Mike MacKenzie, image via www.vpnsrus.com, CC BY 2.0. Dr. Anne Rea and Xiaotong Jiang, members of the Jianping Hu lab, took the microscopy images that were used to train DeepLearnMOR. Dr. Jiajie Peng and Jinghao Peng from the Northwestern Polytechnical University, Xi’an, China also contributed to the study. The work was funded by the National Science Foundation, the U.S. Department of Energy’s Office of Basic Energy Sciences, and the National Natural Science Foundation of China.
By Igor Houwat, Jiying Li