Extracting quantitative biological information from brightfield cell images using deep learning featured in AIP Scilight

The article Extracting quantitative biological information from brightfield cell images using deep learning
has been featured in “Staining Cells Virtually Offers Alternative Approach to Chemical Dyes”, AIP Scilight (July 23, 2021).

Scilight showcases the most interesting research across the physical sciences published in AIP Publishing Journals.

Scilight is published weekly (52 issues per year) by AIP Publishing.

Microscopic Metavehicles Powered and Steered by Embedded Optical Metasurfaces published in Nature Nanotechnology

Metavehicles.
Microscopic Metavehicles Powered and Steered by Embedded Optical Metasurfaces
Daniel Andrén, Denis G. Baranov, Steven Jones, Giovanni Volpe, Ruggero Verre, Mikael Käll
Nat. Nanotechnol. (2021)
doi: 10.1038/s41565-021-00941-0
arXiv: 2012.10205

Nanostructured dielectric metasurfaces offer unprecedented opportunities to manipulate light by imprinting an arbitrary phase gradient on an impinging wavefront. This has resulted in the realization of a range of flat analogues to classical optical components, such as lenses, waveplates and axicons. However, the change in linear and angular optical momentum associated with phase manipulation also results in previously unexploited forces and torques that act on the metasurface itself. Here we show that these optomechanical effects can be utilized to construct optical metavehicles – microscopic particles that can travel long distances under low-intensity plane-wave illumination while being steered by the polarization of the incident light. We demonstrate movement in complex patterns, self-correcting motion and an application as transport vehicles for microscopic cargoes, which include unicellular organisms. The abundance of possible optical metasurfaces attests to the prospect of developing a wide variety of metavehicles with specialized functional behaviours.

Extracting quantitative biological information from brightfield cell images using deep learning published in Biophysics Reviews

Virtually-stained image generated for lipid droplets.
Extracting quantitative biological information from brightfield cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophys. Rev. 2, 031401 (2021)
arXiv: 2012.12986
doi: 10.1063/5.0044782

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
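At the heart of such a pipeline is a pix2pix-style generator objective: the network is rewarded both for fooling the discriminator and for matching the chemically stained target pixel by pixel. Below is a minimal, illustrative NumPy sketch of this combined loss; the paper's actual architecture, loss terms, and the `l1_weight` value are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cgan_generator_loss(d_fake, fake_stain, real_stain, l1_weight=100.0):
    """Generator objective of a pix2pix-style cGAN: an adversarial term
    (push the discriminator to score the virtual stain as real) plus an
    L1 term (match the chemically stained target pixel by pixel)."""
    eps = 1e-12  # numerical guard against log(0)
    adversarial = -np.mean(np.log(d_fake + eps))
    reconstruction = np.mean(np.abs(fake_stain - real_stain))
    return adversarial + l1_weight * reconstruction

# A perfect generator (discriminator fooled, stains identical) scores ~0
perfect = cgan_generator_loss(np.ones(4), np.zeros((2, 2)), np.zeros((2, 2)))
```

The large weight on the L1 term is the standard pix2pix choice: the adversarial term alone encourages realistic-looking stains, while the L1 term anchors them to the ground-truth fluorescence image.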

Presentation by L. Pérez at ELS 2021, 13 July 2021

Laura Pérez presented the work “FORMA and BEFORE: expanding applications of optical tweezers” at the ELS 2021 conference (online) on 13 July 2021.

The main objective of the Electromagnetic and Light Scattering Conference (ELS) is to bring together scientists and engineers studying various aspects of light scattering and to provide a relaxed academic atmosphere for in-depth discussions of theoretical advances, measurements, and applications.

FORMA makes it possible to identify and characterize all the equilibrium points of a force field generated by a speckle pattern.
FORMA and BEFORE: Expanding Applications of Optical Tweezers
Laura Pérez García, Martin Selin, Alejandro V. Arzola, Giovanni Volpe, Alessandro Magazzù, Isaac Pérez Castillo
ELS 2021
Date: 13 July 2021
Time: 15:45 (CEST)

Abstract: 
FORMA (force reconstruction via maximum-likelihood-estimator analysis) addresses the need to measure the force fields acting on microscopic particles. Compared to alternative established methods, FORMA is faster, simpler, more accurate, and more precise. Furthermore, FORMA can also measure non-conservative and out-of-equilibrium force fields. Here, after a brief introduction to FORMA, I will present its use, advantages, and limitations. I will conclude with the most recent work where we exploit Bayesian inference to expand FORMA’s scope of application.
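In its simplest setting, the maximum-likelihood estimator behind FORMA reduces to a linear regression of trajectory increments on positions: for overdamped motion in a harmonic trap, dx ≈ -(k/γ) x dt + √(2 D dt) w, so k/γ is the negated regression slope. The following is a minimal 1D sketch with illustrative parameter values (the full method handles general, possibly non-conservative 2D force fields):

```python
import numpy as np

def forma_stiffness(x, dt, gamma=1.0):
    """MLE of the trap stiffness k for overdamped dynamics
    dx = -(k/gamma) x dt + sqrt(2 D dt) w: the estimator reduces to a
    linear regression of the increments dx on the positions x."""
    dx = np.diff(x)
    xs = x[:-1]
    slope = np.sum(xs * dx) / (dt * np.sum(xs ** 2))  # estimates -k/gamma
    return -gamma * slope

# Simulate an overdamped Brownian particle in a harmonic trap (Euler scheme)
rng = np.random.default_rng(0)
k_true, gamma, D, dt, n = 2.0, 1.0, 0.5, 1e-3, 200_000
noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - (k_true / gamma) * x[i] * dt + noise[i]
k_hat = forma_stiffness(x, dt, gamma)
```

Because the estimator uses every increment of the trajectory rather than, say, the variance of the position histogram, it converges quickly and requires no binning choices, which is the source of FORMA's speed and precision.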

Neural Network Training with Highly Incomplete Datasets on ArXiv

Working principles for training neural networks with highly incomplete datasets: vanilla training (upper panel) vs. GapNet (lower panel). (Image by Yu-Wei Chang.)
Neural Network Training with Highly Incomplete Datasets
Yu-Wei Chang, Laura Natali, Oveis Jamialahmadi, Stefano Romeo, Joana B. Pereira, Giovanni Volpe
arXiv: 2107.00429

Neural network training and validation rely on the availability of large high-quality datasets. However, in many cases only incomplete datasets are available, particularly in health care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Since the data to train the neural networks need to be complete, most studies discard the incomplete datapoints, which reduces the size of the training data, or impute the missing features, which can lead to artefacts. Alas, both approaches are inadequate when a large portion of the data is missing. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets. First, the dataset is split into subsets of samples containing all values for a certain cluster of features. Then, these subsets are used to train individual neural networks. Finally, this ensemble of neural networks is combined into a single neural network whose training is fine-tuned using all complete datapoints. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer’s disease pathology and of patients at risk of hospitalization due to Covid-19. By distilling the information available in incomplete datasets without having to reduce their size or to impute missing values, GapNet will make it possible to extract valuable information from a wide range of datasets, benefiting diverse fields from medicine to engineering.
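The first stage of the procedure described above (carving an incomplete data matrix into per-cluster complete subsets, plus collecting the fully complete rows used in the final fine-tuning stage) can be sketched as follows. This is illustrative NumPy code, and the cluster names "clinical" and "imaging" are hypothetical, not the authors' implementation.

```python
import numpy as np

def gapnet_subsets(X, feature_clusters):
    """For each named cluster of feature columns, select the samples that
    have no missing values (NaNs) in those columns; in the GapNet scheme,
    each such subset would then train its own sub-network."""
    subsets = {}
    for name, cols in feature_clusters.items():
        rows = np.where(~np.isnan(X[:, cols]).any(axis=1))[0]
        subsets[name] = (rows, X[np.ix_(rows, cols)])
    return subsets

# Toy data matrix: 3 samples x 3 features, NaN marks a missing value
X = np.array([[1.0, 2.0, np.nan],
              [4.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])
subsets = gapnet_subsets(X, {"clinical": [0, 1], "imaging": [2]})
complete_rows = np.where(~np.isnan(X).any(axis=1))[0]  # for fine-tuning
```

Note that every sample contributes to at least one sub-network even though only one row here is fully complete, which is exactly how the approach avoids discarding incomplete datapoints or imputing their missing features.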