Press release on Extracting quantitative biological information from bright-field cell images using deep learning

Virtually stained image generated for lipid droplets.

The article Extracting quantitative biological information from bright-field cell images using deep learning has been featured in a press release by the University of Gothenburg.

The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to obtain faster, cheaper, and more reliable information about cells, while also eliminating the drawbacks of using chemical stains in the process.

Here are the links to the press releases on Cision:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method

Here are the links to the press releases in the News section of the University of Gothenburg:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method

Extracting quantitative biological information from bright-field cell images using deep learning featured in AIP SciLight

The article Extracting quantitative biological information from bright-field cell images using deep learning has been featured in “Staining Cells Virtually Offers Alternative Approach to Chemical Dyes”, AIP SciLight (July 23, 2021).

Scilight showcases the most interesting research across the physical sciences published in AIP Publishing journals. It is published weekly (52 issues per year) by AIP Publishing.

Extracting quantitative biological information from bright-field cell images using deep learning published in Biophysics Reviews

Virtually stained image generated for lipid droplets.
Extracting quantitative biological information from bright-field cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophysics Rev. 2, 031401 (2021)
arXiv: 2012.12986
doi: 10.1063/5.0044782

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the bright-field images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
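To make the adversarial setup concrete, here is a minimal, hypothetical sketch of the kind of objective a pix2pix-style cGAN optimizes for virtual staining: a discriminator is trained to tell real (bright-field, fluorescence) pairs from (bright-field, generated) pairs, while the generator is trained to fool it, usually with an added L1 term pulling the virtual stain toward the chemical-stain target. All arrays, scores, and the λ weight below are toy stand-ins for illustration, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": bright-field input, chemically stained target, and a
# generator output (random arrays standing in for real data).
target_stain = rng.random((64, 64))
generated_stain = rng.random((64, 64))

def bce(pred, label):
    """Binary cross-entropy for discriminator scores in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

# Stand-in patch-discriminator scores (a real model would compute these
# from the (bright-field, stain) image pair).
d_real = np.full((8, 8), 0.9)   # scores on (input, real stain) pairs
d_fake = np.full((8, 8), 0.2)   # scores on (input, generated stain) pairs

# Discriminator loss: push real pairs toward 1, generated pairs toward 0.
loss_D = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Generator loss: fool the discriminator, plus an L1 term keeping the
# virtual stain close to the chemically stained target.
lam = 100.0  # L1 weight, a common pix2pix-style choice (assumption)
loss_G = bce(d_fake, 1.0) + lam * np.mean(np.abs(generated_stain - target_stain))

print(f"loss_D = {loss_D:.3f}, loss_G = {loss_G:.3f}")
```

In training, these two losses would be minimized alternately with respect to the discriminator and generator parameters; the sketch only shows how the objective combines the adversarial and reconstruction terms.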