Phase-contrast image before virtual staining. (Image by the Authors.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels
bioRxiv: 10.1101/2022.07.18.500422
Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells with fluorescence microscopy and analysing the resulting images to retrieve the proportion of living cells in the sample. However, this technique is not only time-consuming but also invasive. Because of the toxicity of the chemical dyes, a sample is discarded once it has been stained, which makes longitudinal studies impossible with this approach. Moreover, for dynamic studies, information about when cells initiate programmed cell death (apoptosis) is more relevant. Here, we present an alternative method in which cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique LodeSTAR. Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Furthermore, it can be used to extract data from dense cell samples, where manual counting is unfeasible.
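For readers who want a feel for the workflow, below is a minimal, hypothetical sketch of the idea in PyTorch: a small network maps each phase-contrast frame to two virtual-stain channels (live and apoptotic), and cells are then counted per channel. The `TinyStainer` model, the threshold value, and the connected-component counting are illustrative placeholders only; the study itself detects and counts cells with LodeSTAR, not with the naive labelling shown here.

```python
# Hypothetical sketch, not the authors' pipeline: virtually stain a
# phase-contrast frame, then count bright blobs per stain channel.
import numpy as np
import torch
import torch.nn as nn
from scipy import ndimage

class TinyStainer(nn.Module):
    """Map a 1-channel phase-contrast frame to 2 virtual-stain channels
    (live, apoptotic). Architecture is a placeholder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),
        )

    def forward(self, x):
        return self.net(x)

def count_blobs(stain_channel, threshold=0.5):
    """Count connected bright regions in one virtual-stain channel."""
    mask = stain_channel > threshold
    _, n_blobs = ndimage.label(mask)
    return n_blobs

model = TinyStainer().eval()            # untrained here; illustrative only
frame = torch.rand(1, 1, 256, 256)      # placeholder phase-contrast frame
with torch.no_grad():
    stains = torch.sigmoid(model(frame))[0].numpy()

live, apoptotic = count_blobs(stains[0]), count_blobs(stains[1])
print(f"live/apoptotic ratio ~ {live / max(apoptotic, 1):.2f}")
```

Applying the same counting to every frame of a time-lapse sequence yields the dynamic live/apoptotic ratio described in the abstract.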
At the event, held on Tuesday, 15 March 2022, 16:00-19:00, the ten teams that had gone through the training at the Startup Camp and developed their company ideas pitched their companies on stage to a panel of entrepreneurship experts, the other nine teams, and all business coaches at Chalmers Ventures. DeepTrack took first place among the ten participating teams. Congrats!
Here are a few pictures from the final pitching event of the Startup Camp.
Henrik. (Picture by Jonas Sandwall, Chalmers Ventures.)
DeepTrack team members (left to right) Henrik, Giovanni and Jesus. (Picture by Jonas Sandwall, Chalmers Ventures.)
Panelists. (Picture by Jonas Sandwall, Chalmers Ventures.)
The article Active Droploids has been featured in a press release of the University of Gothenburg.
The study, published in Nature Communications, examines a special system of colloidal particles and demonstrates a new kind of active matter that interacts with and modifies its environment. In the long run, the results of the study could be used for drug delivery inside the human body or for sensing environmental pollutants and cleaning them up.
Active droploids. (Image taken from the article.)
Active droploids
Jens Grauer, Falko Schmidt, Jesús Pineda, Benjamin Midtvedt, Hartmut Löwen, Giovanni Volpe & Benno Liebchen
Nat. Commun. 12, 6005 (2021)
doi: 10.1038/s41467-021-26319-3
arXiv: 2109.10677
Active matter comprises self-driven units, such as bacteria and synthetic microswimmers, that can spontaneously form complex patterns and assemble into functional microdevices. These processes are possible thanks to the out-of-equilibrium nature of active-matter systems, fueled by a one-way free-energy flow from the environment into the system. Here, we take the next step in the evolution of active matter by realizing a two-way coupling between active particles and their environment, where active particles act back on the environment, giving rise to the formation of superstructures. In experiments and simulations we observe that, under light illumination, colloidal particles and their near-critical environment create mutually coupled, co-evolving structures. These structures unify in the form of active superstructures featuring a droplet shape and a colloidal engine inducing self-propulsion. We call them active droploids—a portmanteau of droplet and colloids. Our results provide a pathway to create active superstructures through environmental feedback.
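As a purely conceptual illustration of such a two-way coupling (and not the model used in the paper), the toy numpy sketch below lets self-propelled particles deposit into a diffusing scalar field while the field's gradient in turn drives their motion; all parameter values and dynamics are invented for illustration.

```python
# Toy sketch of particle-environment feedback: active particles source a
# scalar field, and the field gradient feeds back on the particles.
# Everything here is illustrative, not the model of the paper.
import numpy as np

rng = np.random.default_rng(0)
L, N, steps = 64, 50, 500                 # grid size, particle number, time steps
dt, v0, D_rot, mobility = 0.1, 1.0, 0.2, 2.0

pos = rng.uniform(0, L, size=(N, 2))      # particle positions
theta = rng.uniform(0, 2 * np.pi, size=N) # self-propulsion directions
field = np.zeros((L, L))                  # environment (e.g. local composition)

for _ in range(steps):
    # particles act on the environment: local deposition, diffusion, slow decay
    ix, iy = pos.astype(int).T % L
    np.add.at(field, (ix, iy), 0.5 * dt)
    field += dt * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                   + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    field *= 1 - 0.05 * dt

    # environment acts back on the particles: drift along the field gradient
    gx, gy = np.gradient(field)
    drift = mobility * np.c_[gx[ix, iy], gy[ix, iy]]
    pos += dt * (v0 * np.c_[np.cos(theta), np.sin(theta)] + drift)
    pos %= L
    theta += np.sqrt(2 * D_rot * dt) * rng.standard_normal(N)

print("field maximum after run:", field.max())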
The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to obtain faster, cheaper and more reliable information about cells, while also eliminating the disadvantages of using chemicals in the process.
Virtually-stained image generated for lipid droplets.
Extracting quantitative biological information from bright-field cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophysics Rev. 2, 031401 (2021)
doi: 10.1063/5.0044782
arXiv: 2012.12986
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the bright-field images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
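To make the cGAN idea concrete, here is a deliberately tiny pix2pix-style sketch in PyTorch: a generator maps a bright-field image to a virtual stain, a discriminator judges (bright-field, stain) pairs, and the generator is trained with an adversarial term plus an L1 term against the chemically stained target. The architectures, loss weight, and tensors are placeholders, not the network used in the paper.

```python
# Minimal conditional-GAN (pix2pix-style) sketch for virtual staining.
# Illustrative only; the paper's generator/discriminator are far larger.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, bf):
        return self.net(bf)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # input: bright-field and stain stacked along the channel axis
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, bf, stain):
        return self.net(torch.cat([bf, stain], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

bf = torch.rand(4, 1, 64, 64)      # placeholder bright-field batch
target = torch.rand(4, 1, 64, 64)  # placeholder chemically stained targets

# one discriminator step: real (bf, target) pairs vs. generated (bf, fake) pairs
fake = G(bf).detach()
pred_real, pred_fake = D(bf, target), D(bf, fake)
d_loss = adv(pred_real, torch.ones_like(pred_real)) + \
         adv(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# one generator step: fool the discriminator and stay close to the target stain
fake = G(bf)
pred = D(bf, fake)
g_loss = adv(pred, torch.ones_like(pred)) + 100 * l1(fake, target)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Once trained on paired bright-field and fluorescence images, only the generator is kept and applied to new bright-field frames, and the quantitative measurements are extracted from its virtually stained output.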
Particle tracking and characterization in terms of radius and refractive index.
Quantitative Digital Microscopy with Deep Learning
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe
Applied Physics Reviews 8, 011310 (2021)
doi: 10.1063/5.0034891
arXiv: 2010.08260
Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software, DeepTrack 2.0, to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
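The kind of workflow DeepTrack 2.0 is built for looks roughly like the sketch below: a synthetic scatterer is composed with a simulated optical system to generate annotated images on the fly, which can then serve as training data for a localization network. The class names and arguments are recalled from the DeepTrack 2.0 documentation and should be treated as assumptions to verify against the actual package.

```python
# Rough sketch of DeepTrack 2.0's simulate-then-train workflow.
# API details (class names, arguments, update/resolve semantics) are
# assumptions to be checked against the package documentation.
import numpy as np
import deeptrack as dt

IMAGE_SIZE = 64

# a point scatterer placed at a new random position every time the pipeline updates
particle = dt.PointParticle(
    position=lambda: np.random.uniform(8, IMAGE_SIZE - 8, size=2),
    intensity=100,
    position_unit="pixel",
)

# a simulated fluorescence microscope imaging the scatterer
optics = dt.Fluorescence(
    NA=0.7,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, IMAGE_SIZE, IMAGE_SIZE),
)
pipeline = optics(particle)

# each update()/resolve() cycle samples new properties and renders a fresh image;
# the ground-truth particle position travels with the image as a property, so
# (image, position) pairs can be generated on the fly for training
image = pipeline.update().resolve()
print(np.asarray(image).shape)   # e.g. (64, 64, 1)
```

The same compositional approach extends to the other applications listed in the abstract, such as cell counting and classification, by swapping in different scatterers, optics, and downstream models.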