Light, Matter, Action: Shining light on active matter published in ACS Photonics

Actuation of active matter by different properties of light. (Image by M. Rey.)
Light, Matter, Action: Shining light on active matter
Marcel Rey, Giovanni Volpe, Giorgio Volpe
ACS Photonics, 10, 1188–1201 (2023)
arXiv: 2301.13034
doi: 10.1021/acsphotonics.3c00140

Light carries energy and momentum. It can therefore alter the motion of objects from atomic to astronomical scales. Being widely available, readily controllable and broadly biocompatible, light is also an ideal tool to propel microscopic particles, drive them out of thermodynamic equilibrium and make them active. Thus, light-driven particles have become a recent focus of research in the field of soft active matter. In this perspective, we discuss recent advances in the control of soft active matter with light, which has mainly been achieved using light intensity. We also highlight some of the first attempts to utilize light’s additional degrees of freedom, such as its wavelength, polarization, and momentum. We then argue that fully exploiting light with all of its properties will play a critical role in increasing the level of control over both the actuation of active matter and the flow of light itself through it. This enabling step will advance the design of soft active matter systems, their functionalities, and their transfer towards technological applications.

Roadmap for Optical Tweezers published in Journal of Physics: Photonics

Illustration of an optical tweezers holding a particle. (Image by A. Magazzù.)
Roadmap for optical tweezers
Giovanni Volpe, Onofrio M Maragò, Halina Rubinsztein-Dunlop, Giuseppe Pesce, Alexander B Stilgoe, Giorgio Volpe, Georgiy Tkachenko, Viet Giang Truong, Síle Nic Chormaic, Fatemeh Kalantarifard, Parviz Elahi, Mikael Käll, Agnese Callegari, Manuel I Marqués, Antonio A R Neves, Wendel L Moreira, Adriana Fontes, Carlos L Cesar, Rosalba Saija, Abir Saidi, Paul Beck, Jörg S Eismann, Peter Banzer, Thales F D Fernandes, Francesco Pedaci, Warwick P Bowen, Rahul Vaippully, Muruga Lokesh, Basudev Roy, Gregor Thalhammer-Thurner, Monika Ritsch-Marte, Laura Pérez García, Alejandro V Arzola, Isaac Pérez Castillo, Aykut Argun, Till M Muenker, Bart E Vos, Timo Betz, Ilaria Cristiani, Paolo Minzioni, Peter J Reece, Fan Wang, David McGloin, Justus C Ndukaife, Romain Quidant, Reece P Roberts, Cyril Laplane, Thomas Volz, Reuven Gordon, Dag Hanstorp, Javier Tello Marmolejo, Graham D Bruce, Kishan Dholakia, Tongcang Li, Oto Brzobohatý, Stephen H Simpson, Pavel Zemánek, Felix Ritort, Yael Roichman, Valeriia Bobkova, Raphael Wittkowski, Cornelia Denz, G V Pavan Kumar, Antonino Foti, Maria Grazia Donato, Pietro G Gucciardi, Lucia Gardini, Giulio Bianchi, Anatolii V Kashchuk, Marco Capitanio, Lynn Paterson, Philip H Jones, Kirstine Berg-Sørensen, Younes F Barooji, Lene B Oddershede, Pegah Pouladian, Daryl Preece, Caroline Beck Adiels, Anna Chiara De Luca, Alessandro Magazzù, David Bronte Ciriza, Maria Antonia Iatì, Grover A Swartzlander Jr
Journal of Physics: Photonics 5(2), 022501 (2023)
arXiv: 2206.13789
doi: 10.1088/2515-7647/acb57b

Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.

Invited Talk by G. Volpe at 12th Nordic Workshop on Statistical Physics, Nordita, Stockholm, 15 March 2023

Logo of the AnDi challenge.
An Anomalous Competition: Assessment of methods for anomalous diffusion through a community effort
Giovanni Volpe
Nordita, Stockholm, 15 March 2023, 14:00

Deviations from the law of Brownian motion, typically referred to as anomalous diffusion, are ubiquitous in science and associated with non-equilibrium phenomena, flows of energy and information, and transport in living systems. In recent years, the boom in machine learning has spurred the development of new methods to detect and characterize anomalous diffusion from individual trajectories, going beyond classical calculations based on the mean squared displacement. We thus designed the AnDi challenge, an open community effort to objectively assess the performance of conventional and novel methods. We developed a Python library for generating simulated datasets according to the most popular theoretical models of diffusion. We evaluated 16 methods over 3 different tasks and 3 different dimensions, involving anomalous exponent inference, model classification, and trajectory segmentation. Our analysis provides the first assessment of methods for anomalous diffusion in a variety of realistic conditions of trajectory length and noise. Furthermore, we compared the predictions provided by these methods for several experimental datasets. The results of this study further highlight the role that anomalous diffusion plays in defining biological function, while revealing insights into the current state of the field and providing a benchmark for future developers.
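
As a flavor of the kind of task involved, the sketch below (plain NumPy, not the challenge's own library) simulates scaled Brownian motion, one of the diffusion models featured in the challenge, and recovers its anomalous exponent from the ensemble mean squared displacement.

```python
import numpy as np

rng = np.random.default_rng(1)

def scaled_brownian_motion(T, alpha, D0=1.0):
    """1D scaled Brownian motion: diffusivity D(t) ~ t**(alpha - 1),
    so the ensemble MSD grows as t**alpha."""
    t = np.arange(1, T + 1)
    steps = np.sqrt(2 * D0 * alpha * t ** (alpha - 1)) * rng.standard_normal(T)
    return np.cumsum(steps)

def estimate_alpha(trajectories, max_lag=100):
    """Anomalous exponent from a log-log fit of the ensemble MSD."""
    X = np.asarray(trajectories)                 # shape (N, T), x(t = 1..T)
    t = np.arange(1, max_lag + 1)
    msd = np.mean(X[:, :max_lag] ** 2, axis=0)   # ensemble average over trajectories
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

trajectories = [scaled_brownian_motion(T=1000, alpha=0.7) for _ in range(500)]
print(f"Estimated anomalous exponent: {estimate_alpha(trajectories):.2f}")  # ~0.7
```

Note that the fit uses the ensemble-averaged MSD rather than time averages along single trajectories: scaled Brownian motion is weakly non-ergodic, so the two do not coincide, which is precisely the kind of subtlety that makes single-trajectory characterization hard.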

Invited Talk by G. Volpe at BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences, Castelldefels, 10 March 2023

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
AI and deep learning for microscopy
Giovanni Volpe
BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences
Castelldefels, 10 March 2023

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced DeepTrack, a software package currently at version 2.1, to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source, object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
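
As an illustration, a minimal synthetic-data pipeline in the spirit of the DeepTrack 2.1 examples might look as follows; the feature names and parameters shown here are assumptions from the package's published tutorials and may differ between versions.

```python
import numpy as np
import deeptrack as dt

# A fluorescent point particle placed at a random position in the field of view.
particle = dt.PointParticle(
    position=lambda: np.random.uniform(10, 54, size=2),
    intensity=100,
)

# A simulated fluorescence microscope that images the particle.
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

# Each update().resolve() call yields a fresh synthetic training image,
# which can be paired with the known particle position as a label.
image = optics(particle).update().resolve()
```

Generating labeled images on the fly like this is what lets the package train localization and characterization networks without manually annotated experimental data.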

Roadmap on Deep Learning for Microscopy on ArXiv

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion published in Nature Machine Intelligence

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
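
The construction of the input graph (pictured above) can be sketched in a few lines: detections become nodes, and candidate edges connect detections that are close in space and time. This toy version, with hypothetical `max_frame_gap` and `radius` thresholds, only illustrates the idea and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy detections: (frame, x, y) triplets, as produced by any localization method.
detections = np.array(
    [[f, *rng.uniform(0, 1, size=2)] for f in range(5) for _ in range(10)]
)

def build_graph(detections, max_frame_gap=2, radius=0.2):
    """Nodes are detections; candidate edges connect detections that are close
    in space and time. The edge set is deliberately redundant: downstream,
    the network scores each edge to keep true links and prune false ones."""
    edges = []
    for i, (f1, x1, y1) in enumerate(detections):
        for j, (f2, x2, y2) in enumerate(detections):
            if 0 < f2 - f1 <= max_frame_gap and np.hypot(x2 - x1, y2 - y1) < radius:
                edges.append((i, j))
    return np.array(edges)

edges = build_graph(detections)
print(f"{len(detections)} nodes, {len(edges)} candidate edges")
```

Allowing edges across more than one frame is what makes the linking robust to missed detections, at the cost of the redundant edges the network must learn to reject.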

Faster and more accurate geometrical-optics optical force calculation using neural networks published in ACS Photonics

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Faster and more accurate geometrical-optics optical force calculation using neural networks
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò
ACS Photonics 10, 234–241 (2023)
arXiv: 2209.04032
doi: 10.1021/acsphotonics.2c01565

Optical forces are often calculated by discretizing the trapping light beam into a set of rays and using geometrical optics to compute the exchange of momentum. However, the number of rays sets a trade-off between calculation speed and accuracy. Here, we show that using neural networks permits one to overcome this limitation, obtaining not only faster but also more accurate simulations. We demonstrate this using an optically trapped spherical particle for which we obtain an analytical solution to use as ground truth. Then, we take advantage of the acceleration provided by neural networks to study the dynamics of an ellipsoidal particle in a double trap, which would be computationally impossible otherwise.
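
To make the idea concrete, here is a hedged PyTorch sketch of the regression setup (not the authors' code): a small fully connected network learns the mapping from particle position to optical force. For self-containedness, the training targets come from a toy harmonic-trap model, F = -k * position, standing in for the geometrical-optics ray-tracing data used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in training data: near its equilibrium point an optical trap is
# approximately harmonic. In the paper, the training forces come from
# geometrical-optics ray tracing instead.
k = 1.0
positions = torch.rand(10000, 3) * 2 - 1
forces = -k * positions + 0.01 * torch.randn(10000, 3)

model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),                 # predicted (Fx, Fy, Fz)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    optimizer.zero_grad()
    loss = ((model(positions) - forces) ** 2).mean()
    loss.backward()
    optimizer.step()

# The trained network now evaluates forces without re-tracing rays,
# fast enough to drive, e.g., a Brownian-dynamics simulation loop.
print(model(torch.tensor([[0.1, 0.0, 0.0]])))
```

The speed-up comes from replacing thousands of ray-scattering evaluations per time step with a single forward pass, and the network additionally smooths out the discretization noise of a finite ray grid.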

Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps published in Biomedical Optics Express

Example of final segmentation with the UNet-dm of the specular microscopy image of a severe case of cornea guttata. (Image by the Authors of the manuscript.)
Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps
Juan S. Sierra, Jesus Pineda, Daniela Rueda, Alejandro Tello, Angelica M. Prada, Virgilio Galvis, Giovanni Volpe, Maria S. Millan, Lenny A. Romero, Andres G. Marrugo
Biomedical Optics Express 14, 335-351 (2023)
arXiv: 2210.07102
doi: 10.1364/BOE.477495

Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs’ dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs’ dystrophy. We cast the segmentation problem as a regression task of the cell and gutta signed distance maps instead of a pixel-level classification task, as typically done with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster in clinically relevant parameters. It also produces morphometric parameters that agree with the manually-segmented ground-truth data, namely an average cell density difference of -41.9 cells/mm² (95% confidence interval (CI) [-306.2, 222.5]) and an average difference in mean cell area of 14.8 µm² (95% CI [-41.9, 71.5]). These results suggest a promising alternative for CE assessment.
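
The regression target itself is straightforward to construct: given a binary mask, the signed distance map assigns each pixel its Euclidean distance to the nearest boundary, with opposite signs inside and outside. A minimal sketch with SciPy's distance transform (the sign convention here is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Distance to the nearest boundary: positive inside, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

# Toy example: a disk-shaped "cell" on a 64x64 grid.
yy, xx = np.mgrid[:64, :64]
cell_mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
sdm = signed_distance_map(cell_mask)

# A UNet with a linear output layer and a mean-squared-error loss can regress
# `sdm` directly; segmentations are recovered by thresholding the map at zero.
```

Because the map varies smoothly across cell boundaries, small prediction errors displace the boundary slightly instead of producing the spurious holes and fragments typical of per-pixel classification.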

Single-shot self-supervised object detection in microscopy published in Nature Communications

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised object detection in microscopy
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Nature Communications 13, 7492 (2022)
arXiv: 2202.13546
doi: 10.1038/s41467-022-35004-y

Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, even when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, we show that, by exploiting additional symmetries, LodeSTAR can also measure other properties, such as vertical position and polarizability in holographic microscopy.
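
The core idea, prediction equivariance under known transformations, can be caricatured in a few lines of PyTorch: rolling the input by a known shift must shift the predicted position by the same amount, which yields a training signal with no labels at all. The `TinyLocalizer` model and `equivariance_loss` below are illustrative stand-ins, not the actual LodeSTAR architecture or loss.

```python
import torch
import torch.nn as nn

class TinyLocalizer(nn.Module):
    """Toy CNN mapping an image to a (y, x) position via a soft argmax."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):                        # x: (batch, 1, H, W)
        h, w = x.shape[-2:]
        weights = self.conv(x).flatten(2).softmax(-1).view(-1, 1, h, w)
        ys = torch.arange(h, dtype=torch.float32).view(1, 1, h, 1)
        xs = torch.arange(w, dtype=torch.float32).view(1, 1, 1, w)
        return torch.stack([(weights * ys).sum(dim=(1, 2, 3)),
                            (weights * xs).sum(dim=(1, 2, 3))], dim=-1)

def equivariance_loss(model, image, max_shift=8):
    """Self-supervision: rolling the image by (dy, dx) must shift the
    predicted position by the same amount (wrap-around is ignored here)."""
    dy, dx = [int(s) for s in torch.randint(-max_shift, max_shift + 1, (2,))]
    shifted = torch.roll(image, shifts=(dy, dx), dims=(-2, -1))
    target = model(image) + torch.tensor([dy, dx], dtype=torch.float32)
    return ((model(shifted) - target) ** 2).mean()

model = TinyLocalizer()
frame = torch.zeros(1, 1, 64, 64)
frame[0, 0, 30:34, 40:44] = 1.0      # a single bright object
loss = equivariance_loss(model, frame)
loss.backward()                       # trains from one unlabeled image
```

Because the constraint holds for any content of the image, a single experimental frame already provides many training pairs, one per random transformation, which is what enables single-shot training.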

Active matter in space published in npj Microgravity

Effect of gravity on matter: Sedimentation and creaming. Fv and Fg represent the viscous force and gravitational force, respectively. (Image by Authors.)
Active matter in space
Giorgio Volpe, Clemens Bechinger, Frank Cichos, Ramin Golestanian, Hartmut Löwen, Matthias Sperl and Giovanni Volpe
npj Microgravity, 8, 54 (2022)
doi: 10.1038/s41526-022-00230-7

In the last 20 years, active matter has been a highly dynamic field of research, bridging fundamental aspects of non-equilibrium thermodynamics with applications to biology, robotics, and nano-medicine. Active matter systems are composed of units that can harvest and harness energy and information from their environment to generate complex collective behaviours and forms of self-organisation. On Earth, gravity-driven phenomena (such as sedimentation and convection) often dominate or conceal the emergence of these dynamics, especially for soft active matter systems where typical interactions are of the order of the thermal energy. In this review, we explore the ongoing and future efforts to study active matter in space, where low-gravity and microgravity conditions can lift some of these limitations. We envision that these studies will help unify our understanding of active matter systems and, more generally, of far-from-equilibrium physics both on Earth and in space. Furthermore, they will also provide guidance on how to use, process and manufacture active materials for space exploration and colonisation.