The book Deep Learning Crash Course, authored by Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo, was published online by No Starch Press in July 2024.
Citation
Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo. Deep Learning Crash Course. No Starch Press, 2024.
ISBN-13: 9781718503922
The editors of Nature Communications have included our work in their Editors’ Highlights webpage, which showcases the 50 best papers recently published in this area. You can view the feature on the Editors’ Highlights page (https://www.nature.com/ncomms/editorshighlights) as well as on the journal homepage (https://www.nature.com/ncomms/).
Nanoalignment by Critical Casimir Torques
Gan Wang, Piotr Nowakowski, Nima Farahmand Bafi, Benjamin Midtvedt, Falko Schmidt, Agnese Callegari, Ruggero Verre, Mikael Käll, S. Dietrich, Svyatoslav Kondrat, Giovanni Volpe
Nature Communications, 15, 5086 (2024)
DOI: 10.1038/s41467-024-49220-1
arXiv: 2401.06260
The manipulation of microscopic objects requires precise and controllable forces and torques. Recent advances have led to the use of critical Casimir forces as a powerful tool, which can be finely tuned through the temperature of the environment and the chemical properties of the involved objects. For example, these forces have been used to self-organize ensembles of particles and to counteract stiction caused by Casimir-Lifshitz forces. However, until now, the potential of critical Casimir torques has been largely unexplored. Here, we demonstrate that critical Casimir torques can efficiently control the alignment of microscopic objects on nanopatterned substrates. We show experimentally and corroborate with theoretical calculations and Monte Carlo simulations that circular patterns on a substrate can stabilize the position and orientation of microscopic disks. By making the patterns elliptical, such microdisks can be subject to a torque which flips them upright while simultaneously allowing for more accurate control of the microdisk position. More complex patterns can selectively trap 2D-chiral particles and generate particle motion similar to non-equilibrium Brownian ratchets. These findings provide new opportunities for nanotechnological applications requiring precise positioning and orientation of microscopic objects.
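As a purely illustrative aside, the type of Monte Carlo simulation mentioned in the abstract can be sketched with a standard Metropolis scheme that samples a particle's position and orientation in an external potential. The potential, step sizes, and parameter values below are made-up stand-ins, not the critical Casimir potentials computed in the paper.

```python
# Generic Metropolis Monte Carlo sketch for a particle with position and
# orientation in an external potential (illustration only; the potential below
# is a hypothetical stand-in, not a critical Casimir potential).
import numpy as np

kT = 1.0  # thermal energy (arbitrary units)

def potential(x, y, theta):
    """Hypothetical trap: harmonic in position, sinusoidal in orientation."""
    return 5.0 * (x**2 + y**2) + 2.0 * (1 - np.cos(2 * theta))

def metropolis(n_steps=100_000, step_xy=0.05, step_theta=0.1,
               rng=np.random.default_rng(0)):
    state = np.array([1.0, 1.0, 0.5])  # x, y, theta
    samples = np.empty((n_steps, 3))
    for i in range(n_steps):
        # Propose a small random move in position and orientation.
        trial = state + rng.normal(0, [step_xy, step_xy, step_theta])
        dU = potential(*trial) - potential(*state)
        # Metropolis acceptance rule.
        if dU <= 0 or rng.random() < np.exp(-dU / kT):
            state = trial
        samples[i] = state
    return samples

samples = metropolis()
print(samples.mean(axis=0))  # average position and orientation in the trap
```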
Single-shot self-supervised object detection
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Date: 23 August 2023
Time: 10:30 AM (PDT)
Object detection is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to detect small, spatially confined, and largely homogeneous objects that have sufficient contrast to the background with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position by propagating the signal in Fourier space and its polarizability by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
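As an illustration of the training principle described in the abstract (and not the authors' implementation), the translation-consistency idea can be sketched in PyTorch: a small network predicts an object position from an image crop, and the loss requires the prediction to shift by exactly the amount the input was shifted; rotations can be handled analogously. The model, loss, and hyperparameters below are assumptions made for the sketch.

```python
# Minimal PyTorch sketch of the symmetry-consistency idea behind single-shot
# self-supervised detection (illustrative only; not the LodeSTAR code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDetector(nn.Module):
    """Maps an image crop to a single (x, y) position (hypothetical model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def consistency_loss(model, crop, max_shift=4):
    """Translation-equivariance loss: shifting the input by (dx, dy) pixels
    should shift the predicted position by the same amount."""
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,))
    # Circular shift as a cheap stand-in for a translation of the crop.
    shifted = torch.roll(crop, shifts=(int(dy), int(dx)), dims=(-2, -1))
    pred = model(crop)
    pred_shifted = model(shifted)
    target = pred.detach() + torch.stack([dx, dy]).to(pred.dtype)
    return F.mse_loss(pred_shifted, target)

# Toy training loop on a single unlabeled crop.
crop = torch.rand(1, 1, 32, 32)  # stand-in for one experimental image crop
model = TinyDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = consistency_loss(model, crop)
    loss.backward()
    opt.step()
```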
The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 20-24 August 2023, with the presentations listed below.
Agnese Callegari: Playing with active matter
21 August 2023 • 4:05 PM – 4:20 PM PDT | Conv. Ctr. Room 6D
Giovanni Volpe is also co-author of the presentations:
Jiawei Sun (KI): (Poster) Assessment of nonlinear changes in functional brain connectivity during aging using deep learning
21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
Blanca Zufiria Gerbolés (KI): (Poster) Exploring age-related changes in anatomical brain connectivity using deep learning analysis in cognitively healthy individuals
21 August 2023 • 5:30 PM – 7:00 PM PDT | Conv. Ctr. Exhibit Hall A
Mite Mijalkov (KI): Uncovering vulnerable connections in the aging brain using reservoir computing
22 August 2023 • 9:15 AM – 9:30 AM PDT | Conv. Ctr. Room 6C
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
arXiv: 2303.03793
Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
Nature Machine Intelligence 5, 71–82 (2023)
arXiv: 2202.06355
doi: 10.1038/s42256-022-00595-0
The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
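To make the linking step mentioned in the abstract concrete, here is a rough sketch (in plain Python/NumPy, with arbitrary thresholds and features that are our assumptions, not the paper's) of how detections from a video can be assembled into a graph: nodes are localizations, and candidate edges connect detections that are close in both space and time. A graph neural network would then score each candidate edge as a true or spurious link.

```python
# Illustrative graph construction from detections (not the authors' code).
import numpy as np

def build_candidate_graph(detections, max_dt=2, max_dist=10.0):
    """detections: array of rows (frame, x, y).
    Returns node features, candidate edges (i, j) with i earlier than j,
    and simple edge features a GNN could score."""
    detections = np.asarray(detections, dtype=float)
    edges, edge_features = [], []
    for i, (t1, x1, y1) in enumerate(detections):
        for j, (t2, x2, y2) in enumerate(detections):
            dt = t2 - t1
            dist = np.hypot(x2 - x1, y2 - y1)
            if 0 < dt <= max_dt and dist <= max_dist:
                edges.append((i, j))
                edge_features.append([dt, dist])
    return detections, np.array(edges), np.array(edge_features)

# Example: three detections forming one short track plus a distant outlier.
dets = [(0, 5.0, 5.0), (1, 6.0, 5.5), (2, 7.2, 6.1), (2, 40.0, 40.0)]
nodes, edges, feats = build_candidate_graph(dets)
print(edges)  # candidate links for a graph neural network to classify
```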
Single-shot self-supervised particle tracking
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
Nature Communications 13, 7492 (2022)
arXiv: 2202.13546
doi: 10.1038/s41467-022-35004-y
Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
The study, now published in eLife and co-authored by researchers at the Soft Matter Lab of the Department of Physics at the University of Gothenburg, demonstrates how the combination of holographic microscopy and deep learning provides a strong complementary tool in marine microbial ecology. The research allows quantitative assessment of microplankton feeding behaviours and of biomass increase throughout the cell cycle, from generation to generation.