Invited Talk by G. Volpe at BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences, Castelldefels, 10 March 2023

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
AI and deep learning for microscopy
Giovanni Volpe
BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences
Castelldefels, 10 March 2023

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
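As a rough illustration of the kind of workflow DeepTrack 2.1 supports, the sketch below generates synthetic images of a single particle and trains a small convolutional network to regress its position. It deliberately uses plain NumPy and TensorFlow rather than the DeepTrack 2.1 API; the image size, noise level, and network architecture are illustrative assumptions, not the software's defaults.

```python
# Minimal sketch (not the DeepTrack 2.1 API): train a small CNN to localize a
# single particle in synthetic microscopy-like images. Image size, noise level,
# and network architecture are illustrative assumptions.
import numpy as np
import tensorflow as tf

IMG = 32  # image side length in pixels

def make_sample(rng):
    """Render one noisy image containing a single Gaussian 'particle'."""
    x, y = rng.uniform(4, IMG - 4, size=2)          # ground-truth position
    xx, yy = np.meshgrid(np.arange(IMG), np.arange(IMG), indexing="ij")
    image = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * 2.0 ** 2))
    image += rng.normal(0, 0.05, image.shape)        # additive noise
    label = np.array([x, y], dtype="float32") / IMG  # normalized position
    return image[..., None].astype("float32"), label

def make_dataset(n, seed=0):
    rng = np.random.default_rng(seed)
    images, labels = zip(*(make_sample(rng) for _ in range(n)))
    return np.stack(images), np.stack(labels)

# Small convolutional regressor: image -> normalized (x, y) position.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(IMG, IMG, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mae")

x_train, y_train = make_dataset(2000)
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

x_test, y_test = make_dataset(100, seed=1)
pred = model.predict(x_test, verbose=0)
print("mean localization error (pixels):", IMG * np.abs(pred - y_test).mean())
```

In DeepTrack 2.1 itself, the corresponding simulation, training, and validation steps are provided by the package's own features and graphical interface.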

Roadmap on Deep Learning for Microscopy on arXiv

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Presentation by Sreekanth K Manikandan, 10 February 2023

Inferring entropy production in microscopic systems
Sreekanth K. Manikandan
Stanford University
10 February 2023, 15:00, Raven and Fox

An inherent feature of small systems in contact with thermal reservoirs, be it a pollen grain in water or an active microbial flagellum, is fluctuations. Even with advanced microscopic techniques, distinguishing active, non-equilibrium processes defined by a constant dissipation of energy (entropy production) to the environment from passive, equilibrium processes is a very challenging task and a rapidly developing field of research. In this talk, I will present a simple and effective way to infer entropy production in microscopic non-equilibrium systems from short empirical trajectories [1]. I will also demonstrate how this scheme can be used to spatiotemporally resolve the active nature of cell flickering [2]. Our result is built upon the Thermodynamic Uncertainty Relation (TUR), which relates current fluctuations in non-equilibrium states to the entropy production rate.
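As a minimal sketch of the idea behind TUR-based inference (not the specific scheme of Ref. [1]), the snippet below simulates short trajectories of an overdamped particle with constant drift, uses the net displacement as the current J, and evaluates the TUR lower bound sigma >= 2<J>^2 / (tau Var J) in units of k_B. The drift-diffusion model and all parameter values are assumptions, chosen so that the exact rate v^2/D is known.

```python
# Minimal sketch of a TUR-based lower bound on entropy production (k_B = 1).
# This illustrates the idea, not the inference scheme of Ref. [1]; the
# drift-diffusion model and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
v, D, dt, n_steps, n_traj = 1.0, 0.5, 1e-3, 200, 5000
tau = n_steps * dt

# Simulate short trajectories of an overdamped particle with constant drift.
increments = v * dt + np.sqrt(2 * D * dt) * rng.normal(size=(n_traj, n_steps))
J = increments.sum(axis=1)        # time-integrated current: net displacement

# Thermodynamic uncertainty relation:  sigma >= 2 <J>^2 / (tau * Var(J))
sigma_lower_bound = 2 * J.mean() ** 2 / (tau * J.var())
sigma_exact = v ** 2 / D          # exact rate for this linear model

print(f"TUR lower bound: {sigma_lower_bound:.3f}, exact: {sigma_exact:.3f}")
```

For this linear Gaussian example the bound is tight; in general the TUR yields a lower bound, and the quality of the estimate depends on the choice of current.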

References

[1] Inferring entropy production from short experiments [Phys. Rev. Lett. 124, 120603 (2020)]

[2] Estimate of entropy generation rate can spatiotemporally resolve the active nature of cell flickering [arXiv:2205.12849]

Bio: Sreekanth completed his PhD at the Department of Physics, Stockholm University, in June 2020. His PhD supervisor was Supriya Krishnamurthy. From August 2020 to October 2022, Sreekanth was a Nordita Fellow postdoc in the soft condensed matter group at Nordita. Currently, he is a postdoctoral scholar at the Department of Chemistry at Stanford University, funded by the Wallenberg Foundation.

Sergi Masò Orriols joins the Soft Matter Lab

(Photo by G. Pesce.)
Sergi Masò Orriols joined the Soft Matter Lab on 23 January 2023.

Sergi is a PhD student at the Biomedicine Department of the University of Vic, Catalonia, Spain.

During his time at the Soft Matter Lab, he will be working on a biophysical application of deep learning.

He will stay in our lab until 2 June 2023.

Christian Rutgersson joins the Soft Matter Lab

(Photo by A. Argun.)
Christian Rutgersson joined the Soft Matter Lab on 17 January 2023.

Christian is a master's student in Complex Adaptive Systems at Chalmers University of Technology.

During his time at the Soft Matter Lab, he will study the characterization of active matter particle systems with graph neural networks.

Alfred Bergsten joins the Soft Matter Lab

(Photo by A. Argun.)
Alfred Bergsten joined the Soft Matter Lab on 17 January 2023.

Alfred is a master's student in Complex Adaptive Systems at Chalmers University of Technology.

During his time at the Soft Matter Lab, he will study the self-assembly of colloids in the presence of travelling waves.

Presentation by Natsuko Rivera-Yoshida, 19 January 2023

M. xanthus cell-cell and cell-particle local interactions during cellular aggregation.
Transitions to multicellularity: the physical environment at the microscale
Natsuko Rivera-Yoshida
19 January 2023
16:30, Nexus

The physical environment contributes to both the robustness and the variation of developmental trajectories and, eventually, to evolutionary transitions. But how? Myxococcus xanthus is a soil bacterium widely used as a biological model. Under starvation conditions, cells move individually over the substrate into growing groups of cells which, eventually, organize into three-dimensional structures called fruiting bodies. Commonly, this developmental process is studied using standard experimental protocols that employ homogeneous, flat agar substrates, without considering ecologically relevant variables. However, M. xanthus has been shown to drastically alter its development when variables such as the substrate topography or stiffness are modified. These modifications occur with trait and scale specificity, at the level of individual cells, large groups of cells, fruiting bodies, and also at the population scale. We use experimental and analytical tools to study how multicellular organization is altered at different spatial scales and developmental moments.

Presentation by Andreas Menzel, 19 January 2023

Individual and collective motion of nematic, polar, and chiral actively driven objects
Andreas Menzel
19 January 2023
15:30, Nexus

Abstract:
Actively driven objects comprise a manifold of different possible realizations: from self-propelling bacteria and artificial phoretically driven colloidal particles via vibrated hoppers to walking pedestrians. We analyze basic theoretical models to identify generic features of subclasses of such agents. Within this framework, we first address nematic objects [1]. They predominantly propel along one specific axis of their body, but do not feature an explicit head or tail. That is, they can move either way by spontaneous symmetry breaking. This leads to characteristic kinks along their trajectories. Second, we study chiral objects that show persistent bending of their trajectories and migrate in discrete steps [2]. When, additionally, they tend to migrate towards a fixed remote target, rich nonlinear dynamics emerges. It comprises period doubling and chaotic behavior as a function of the tendency of alignment, which is reflected by the trajectories. Third, we consider the collective motion of continuously moving chiral objects in crystal-like arrangements [3]. Here we identify a localization transition with increasing chirality, as well as self-shearing phenomena within the crystal-like structures. Overall, we hope that our work stimulates the experimental realization and observation of the various investigated systems and phenomena.
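As a side illustration of the "persistent bending" mentioned above (and not one of the models of Refs. [1-3]), the sketch below integrates a simple chiral active Brownian particle: a constant self-propulsion speed combined with an angular drive produces circling trajectories whose radius is set by v/omega. All parameter values are illustrative assumptions.

```python
# Minimal sketch of a chiral active Brownian particle (not the models of
# Refs. [1-3]): constant self-propulsion speed v, angular drive omega, and
# rotational noise D_r produce persistently bending trajectories.
import numpy as np

rng = np.random.default_rng(1)
v, omega, D_r, dt, n_steps = 1.0, 2.0, 0.05, 1e-2, 5000

theta = np.zeros(n_steps)
x = np.zeros(n_steps)
y = np.zeros(n_steps)
for i in range(1, n_steps):
    theta[i] = theta[i - 1] + omega * dt + np.sqrt(2 * D_r * dt) * rng.normal()
    x[i] = x[i - 1] + v * np.cos(theta[i - 1]) * dt
    y[i] = y[i - 1] + v * np.sin(theta[i - 1]) * dt

# Without noise the trajectory would be a circle of radius v/omega;
# rotational diffusion makes the circling center drift slowly.
print("expected circling radius v/omega =", v / omega)
```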

References
[1] A. M. Menzel, J. Chem. Phys. 157, 011102 (2022).
[2] A. M. Menzel, resubmitted.
[3] Z.-F. Huang, A. M. Menzel, H. Löwen, Phys. Rev. Lett. 125, 218002 (2020).

Short Bio:
Andreas Menzel studied physics at the University of Bayreuth (Germany), where he also completed his PhD on the continuum theory of soft elastic liquid-crystalline composite materials. After postdoctoral stays at the University of Illinois at Urbana-Champaign with Prof. Nigel Goldenfeld and at the Max Planck Institute for Polymer Research in Mainz in the department headed by Prof. Kurt Kremer, as well as research stays at Kyoto University with Prof. Takao Ohta, he completed his Habilitation at Heinrich Heine University Düsseldorf at the Theory Institute for Soft Matter headed by Prof. Hartmut Löwen. Among other topics, Andreas is interested in developing and applying explicit Green's function methods, statistical descriptions, and continuum theories of soft matter, addressing, for example, functionalized elastic composite materials and active matter. In 2020, he moved as a Heisenberg Fellow of the German Research Foundation to Otto von Guericke University Magdeburg (Germany), where he now heads the department of Theory of Soft Matter / Biophysics.

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion published in Nature Machine Intelligence

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
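As a hedged sketch of what the input to such a network looks like (not the authors' implementation), the snippet below turns a set of detections into a graph: each localization becomes a node, and candidate edges, deliberately redundant as in the figure above, connect detections that are close in space and time. The frame and distance cutoffs, and the use of raw positions as node features, are illustrative assumptions; in the paper the graph is then processed by an attention-enhanced graph neural network.

```python
# Minimal sketch (not the authors' implementation): build an input graph for
# trajectory linking, with detections as nodes and redundant candidate edges
# between detections in nearby frames. Cutoffs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Fake detections: (frame, x, y) for a handful of diffusing particles.
n_frames, n_particles = 10, 5
positions = rng.uniform(0, 100, size=(n_particles, 2))
detections = []
for t in range(n_frames):
    positions += rng.normal(0, 1.0, size=positions.shape)  # Brownian-like step
    for p in positions:
        detections.append((t, p[0], p[1]))
detections = np.array(detections)        # shape (n_frames * n_particles, 3)

# Candidate (redundant) edges: connect detections at most max_dt frames and
# max_dist pixels apart, pointing forward in time.
max_dt, max_dist = 2, 10.0
edges = []
for i, (ti, xi, yi) in enumerate(detections):
    for j, (tj, xj, yj) in enumerate(detections):
        if 0 < tj - ti <= max_dt and np.hypot(xj - xi, yj - yi) < max_dist:
            edges.append((i, j))

edge_index = np.array(edges).T           # (2, n_edges) layout common in GNN libraries
node_features = detections[:, 1:]        # e.g., positions as node features
print("nodes:", len(detections), "candidate edges:", edge_index.shape[1])
```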

Faster and more accurate geometrical-optics optical force calculation using neural networks published in ACS Photonics

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Faster and more accurate geometrical-optics optical force calculation using neural networks
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò
ACS Photonics 10, 234–241 (2023)
doi: 10.1021/acsphotonics.2c01565
arXiv: 2209.04032

Optical forces are often calculated by discretizing the trapping light beam into a set of rays and using geometrical optics to compute the exchange of momentum. However, the number of rays sets a trade-off between calculation speed and accuracy. Here, we show that using neural networks permits one to overcome this limitation, obtaining not only faster but also more accurate simulations. We demonstrate this using an optically trapped spherical particle for which we obtain an analytical solution to use as ground truth. Then, we take advantage of the acceleration provided by neural networks to study the dynamics of an ellipsoidal particle in a double trap, which would be computationally impossible otherwise.
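The sketch below illustrates the general surrogate-model idea (it is not the authors' network or training data): sample particle positions, compute the corresponding forces with the expensive model, and train a small fully connected regressor that can then be evaluated in batch for fast dynamics simulations. Here the "expensive" model is replaced by a toy harmonic trap, and the network size and training settings are assumptions.

```python
# Minimal sketch of a neural-network surrogate for force calculations (not the
# authors' network): the toy harmonic force below stands in for the expensive
# ray-optics computation. Architecture and parameters are assumptions.
import numpy as np
import tensorflow as tf

def expensive_force(positions, k=1.0):
    """Stand-in for the costly force calculation (toy harmonic trap)."""
    return -k * positions

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(5000, 3)).astype("float32")  # (x, y, z) positions
f_train = expensive_force(x_train).astype("float32")

# Small fully connected regressor: position -> force components (fx, fy, fz).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, f_train, epochs=10, batch_size=64, verbose=0)

# Once trained, the network evaluates forces for many positions in one call,
# which is what makes fast dynamics simulations feasible.
x_test = rng.uniform(-1, 1, size=(5, 3)).astype("float32")
print(model.predict(x_test, verbose=0))
```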