Tutorial on the Growth and Development of Myxococcus xanthus as a Model System at the Intersection of Biology and Physics
Jesus Manuel Antúnez Domínguez, Laura Pérez García, Natsuko Rivera-Yoshida, Jasmin Di Franco, David Steiner, Alejandro V. Arzola, Mariana Benítez, Charlotte Hamngren Blomqvist, Roberto Cerbino, Caroline Beck Adiels, Giovanni Volpe
arXiv: 2407.18714
Myxococcus xanthus is a unicellular organism whose cells possess the ability to move and communicate, leading to the emergence of complex collective properties and behaviours. This has made it an ideal model system to study the emergence of collective behaviours in interdisciplinary research efforts lying at the intersection of biology and physics, especially in the growing field of active matter research. Often, challenges arise when setting up reliable and reproducible culturing protocols. This tutorial provides a clear and comprehensive guide on the culture, growth, development, and experimental sample preparation of M. xanthus. Additionally, it includes some representative examples of experiments that can be conducted using these samples, namely motility assays, fruiting body formation, predation, and elasticotaxis.
Our recent work on “coffee rings” was presented at the Gothenburg Science Festival, which, with about 100 000 visitors each year, is one of the largest popular science events in Europe.
On Wednesday 19th April 2023, Marcel Rey, Laura Natali, Daniela Pérez Guerrero and Caroline Adiels set up a stand in Nordstan.
In this guided exhibition, visitors were able to observe the flow inside a drying droplet using optical microscopes. They learned how the suspended solid coffee particles flow from the inside towards the edge of the coffee droplet, where they accumulate and cause the characteristic coffee ring pattern after drying.
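The outward transport described above can be illustrated with a toy sketch (illustrative numbers, not a physical model): particles in a drying droplet are advected radially outward and pile up once they reach the pinned contact line.

```python
# Toy sketch of the coffee-ring mechanism: particles drift radially
# outward and accumulate at the pinned droplet edge. The drift rate
# and step count are illustrative assumptions, not measured values.

def final_positions(radii, edge=1.0, drift=0.05, steps=40):
    """Advect each particle outward by a fixed drift per step,
    stopping at the pinned contact line (the droplet edge)."""
    positions = list(radii)
    for _ in range(steps):
        positions = [min(r + drift, edge) for r in positions]
    return positions

particles = [0.1, 0.3, 0.5, 0.9]  # initial radial positions
print(final_positions(particles))  # → [1.0, 1.0, 1.0, 1.0]
```

After enough evaporation time, every particle ends up at the edge, which is why the dried deposit concentrates into a ring.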
Nowadays, the coffee ring effect still presents a major challenge in ink-jet printing and coating technologies, where uniform drying is required. We thus shared our recently developed strategies to overcome the coffee ring effect and obtain a uniform deposit from drying droplets.
Finally, visitors were offered a freshly brewed espresso, not only to drink but also to experience the “coffee ring effect” hands-on.
Roadmap for optical tweezers
Giovanni Volpe, Onofrio M Maragò, Halina Rubinsztein-Dunlop, Giuseppe Pesce, Alexander B Stilgoe, Giorgio Volpe, Georgiy Tkachenko, Viet Giang Truong, Síle Nic Chormaic, Fatemeh Kalantarifard, Parviz Elahi, Mikael Käll, Agnese Callegari, Manuel I Marqués, Antonio A R Neves, Wendel L Moreira, Adriana Fontes, Carlos L Cesar, Rosalba Saija, Abir Saidi, Paul Beck, Jörg S Eismann, Peter Banzer, Thales F D Fernandes, Francesco Pedaci, Warwick P Bowen, Rahul Vaippully, Muruga Lokesh, Basudev Roy, Gregor Thalhammer-Thurner, Monika Ritsch-Marte, Laura Pérez García, Alejandro V Arzola, Isaac Pérez Castillo, Aykut Argun, Till M Muenker, Bart E Vos, Timo Betz, Ilaria Cristiani, Paolo Minzioni, Peter J Reece, Fan Wang, David McGloin, Justus C Ndukaife, Romain Quidant, Reece P Roberts, Cyril Laplane, Thomas Volz, Reuven Gordon, Dag Hanstorp, Javier Tello Marmolejo, Graham D Bruce, Kishan Dholakia, Tongcang Li, Oto Brzobohatý, Stephen H Simpson, Pavel Zemánek, Felix Ritort, Yael Roichman, Valeriia Bobkova, Raphael Wittkowski, Cornelia Denz, G V Pavan Kumar, Antonino Foti, Maria Grazia Donato, Pietro G Gucciardi, Lucia Gardini, Giulio Bianchi, Anatolii V Kashchuk, Marco Capitanio, Lynn Paterson, Philip H Jones, Kirstine Berg-Sørensen, Younes F Barooji, Lene B Oddershede, Pegah Pouladian, Daryl Preece, Caroline Beck Adiels, Anna Chiara De Luca, Alessandro Magazzù, David Bronte Ciriza, Maria Antonia Iatì, Grover A Swartzlander Jr
Journal of Physics: Photonics 5(2), 022501 (2023)
arXiv: 2206.13789
doi: 10.1088/2515-7647/acb57b
Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793
Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels
bioRxiv: 10.1101/2022.07.18.500422
Chemical live/dead assay has a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically-stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, information about when cells start programmed cell death (apoptosis) is more relevant for dynamic studies. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually-stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique (LodeSTAR). Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.
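The dynamic read-out of such an assay is a live/apoptotic ratio over time. A minimal sketch of that downstream step, assuming per-frame detection counts from a classifier (the numbers below are illustrative, not data from the paper):

```python
# Hypothetical sketch: turning per-frame counts of cells classified
# as live or apoptotic into a dynamic apoptotic-fraction curve.
# The count sequences are invented for illustration.

def apoptotic_fraction(live_counts, apoptotic_counts):
    """Return the apoptotic fraction at each time point."""
    fractions = []
    for live, apo in zip(live_counts, apoptotic_counts):
        total = live + apo
        fractions.append(apo / total if total else 0.0)
    return fractions

live = [98, 95, 90, 80]        # detections classified as live
apoptotic = [2, 5, 10, 20]     # detections classified as apoptotic
print(apoptotic_fraction(live, apoptotic))  # → [0.02, 0.05, 0.1, 0.2]
```

Because the sample is never chemically stained, the same population can be followed frame after frame, which is what makes this curve possible in the first place.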
The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to obtain faster, cheaper, and more reliable information about cells, while also eliminating the disadvantages of using chemicals in the process.
Extracting quantitative biological information from bright-field cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
Biophysics Rev. 2, 031401 (2021)
arXiv: 2012.12986
doi: 10.1063/5.0044782
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the bright-field images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
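The quantitative step that follows virtual staining can be sketched very simply: once the network outputs a stained image, structures such as lipid droplets can be counted as connected bright regions. The tiny binary "image" below is illustrative; a real pipeline would threshold the cGAN output instead.

```python
# Minimal sketch of downstream quantification on a virtually-stained
# image: count 4-connected bright regions in a binarized image.
# The toy mask below stands in for a thresholded network output.

def count_regions(mask):
    """Count 4-connected regions of 1s in a binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in mask]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:  # flood fill one region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

image = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_regions(image))  # → 3
```

In practice one would use a library routine (e.g. connected-component labeling) and also measure region areas, but the principle is the same: the virtual stain turns an unlabeled bright-field image into something that standard segmentation tools can quantify.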
Intercellular communication induces glycolytic synchronization waves between individually oscillating cells
Martin Mojica-Benavides, David D. van Niekerk, Mite Mijalkov, Jacky L. Snoep, Bernhard Mehlig, Giovanni Volpe, Caroline B. Adiels & Mattias Goksör
PNAS 118(6), e2010075118 (2021)
doi: 10.1073/pnas.2010075118
arXiv: 1909.05187
Metabolic oscillations in single cells underlie the mechanisms behind cell synchronization and cell-cell communication. For example, glycolytic oscillations mediated by biochemical communication between cells may synchronize the pulsatile insulin secretion by pancreatic tissue, and a link between glycolytic synchronization anomalies and type-2 diabetes has been hypothesized. Cultures of yeast cells have provided an ideal model system to study synchronization and propagation waves of glycolytic oscillations in large populations. However, the mechanism by which synchronization occurs at the individual cell-cell level, overcoming local chemical concentrations and heterogeneous biological clocks, is still an open question because of experimental limitations in the sensitive and specific handling of single cells. Here, we show how the coupling of intercellular diffusion with the phase regulation of individual oscillating cells induces glycolytic synchronization waves. We directly measure the single-cell metabolic responses from yeast cells in a microfluidic environment and characterize a discretized cell-cell communication using graph theory. We corroborate our findings with simulations based on a detailed kinetic model for individual yeast cells. These findings can provide insight into the roles cellular synchronization plays in biomedical applications, such as insulin secretion regulation at the cellular level.
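One standard way to quantify how synchronized a population of oscillating cells is (a hedged illustration, not the paper's detailed kinetic model) is the Kuramoto order parameter R = |mean(exp(i·φ))|, which equals 1 for perfectly phase-locked cells and approaches 0 for incoherent ones.

```python
# Illustrative sketch: the Kuramoto order parameter summarizes the
# degree of phase synchronization among N oscillating cells.
# The phase values below are invented for illustration.

import cmath

def order_parameter(phases):
    """Kuramoto order parameter of a list of phases (radians)."""
    n = len(phases)
    return abs(sum(cmath.exp(1j * p) for p in phases) / n)

synchronized = [0.10, 0.12, 0.09, 0.11]                       # nearly phase-locked
incoherent = [0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]  # evenly spread phases

print(round(order_parameter(synchronized), 2))  # → 1.0
print(round(order_parameter(incoherent), 2))    # → 0.0
```

Tracking such a measure over time (or over subpopulations linked in a communication graph) reveals when and where synchronization waves emerge.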