News

Cross-modality transformations in biological microscopy enabled by deep learning published in Advanced Photonics

Cross-modality transformation and segmentation. (Image by the Authors of the manuscript.)
Cross-modality transformations in biological microscopy enabled by deep learning
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels
Advanced Photonics 6, 064001 (2024)
doi: 10.1117/1.AP.6.6.064001

Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analysis that was previously impossible. Despite these strides, the integration of such algorithms into scientists’ daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images and resolution enhancement. We recognize cross-modality transformations as a valuable resource for biologists seeking a deeper understanding of the field, as well as for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical-staining procedures. In addition, they facilitate the realization of imaging with properties that would typically require costly or complex physical modifications, such as achieving superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.

Invited Seminar by G. Volpe at FEMTO-ST, 26 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
FEMTO-ST’s Internal Seminar 2024
Date: 26 November 2024
Time: 15:00
Place: Besançon, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, DeepTrack (currently at version 2.1), to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
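To give a flavor of the particle-localization task mentioned above, the sketch below is a toy illustration in plain NumPy (not DeepTrack's API): it renders a noisy Gaussian spot as a stand-in for a simulated particle image and localizes it with a classical intensity-centroid baseline, the kind of estimator a trained network would replace with a learned, more robust one. The spot model, parameter values, and function names are illustrative assumptions.

```python
import numpy as np

def simulate_spot(cx, cy, size=32, sigma=2.0, noise=0.05, seed=0):
    """Render a noisy Gaussian spot at (cx, cy): a toy stand-in for the
    physics-based synthetic particle images a training pipeline generates."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return img + noise * rng.standard_normal((size, size))

def localize_centroid(img, threshold=0.2):
    """Classical baseline: intensity-weighted centroid after background
    subtraction and thresholding."""
    img = img - np.median(img)          # remove background level
    img[img < threshold * img.max()] = 0.0  # suppress residual noise pixels
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (x * img).sum() / img.sum(), (y * img).sum() / img.sum()

true_xy = (13.4, 19.7)
est_xy = localize_centroid(simulate_spot(*true_xy))
```

Because the ground-truth position is known for every synthetic image, estimators like this (or a neural network) can be benchmarked on unlimited labeled data without manual annotation, which is the key advantage of simulation-based training.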

Invited Talk by G. Volpe at SPAOM, Toledo, Spain, 22 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
SPAOM 2024
Date: 22 November 2024
Time: 10:15-10:45
Place: Toledo, Spain

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack (currently at version 2.1), to design, train and validate deep-learning solutions for digital microscopy.

Playing with Active Matter featured in Scilight

The article Playing with active matter, published in the American Journal of Physics, has been featured in a Scilight news story titled “Using Hexbugs to model active matter”.

The news story highlights that the approach used in the featured paper will make it possible for primary- and secondary-school students to demonstrate complex active-motion principles in the classroom on an affordable budget.
In fact, experiments at the microscale often require very expensive equipment. The commercially available toys called Hexbugs used in the publication provide a macroscopic analogue of microscale active matter and have the advantage of being affordable for classroom experimentation.

About Scilight:
Scilight showcases the most interesting research across the physical sciences published in AIP Publishing journals.

Reference:
Hannah Daniel, Using Hexbugs to model active matter, Scilight 2024, 431101 (2024)
doi: 10.1063/10.0032401

Playing with Active Matter published in American Journal of Physics

One exemplar of the HEXBUGS used in the experiment. (Image by the Authors of the manuscript.)
Playing with Active Matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
American Journal of Physics 92, 847–858 (2024)
doi: 10.1119/5.0125111
arXiv: 2209.04168

In the past 20 years, active matter has been a very successful research field, bridging the fundamental physics of nonequilibrium thermodynamics with applications in robotics, biology, and medicine. Active particles, contrary to Brownian particles, can harness energy to generate complex motions and emerging behaviors. Most active-matter experiments are performed with microscopic particles and require advanced microfabrication and microscopy techniques. Here, we propose some macroscopic experiments with active matter employing commercially available toy robots (the Hexbugs). We show how they can be easily modified to perform regular and chiral active Brownian motion and demonstrate through experiments fundamental signatures of active systems such as how energy and momentum are harvested from an active bath, how obstacles can sort active particles by chirality, and how active fluctuations induce attraction between planar objects (a Casimir-like effect). These demonstrations enable hands-on experimentation with active matter and showcase widely used analysis methods.
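For readers who want a numerical counterpart to these demonstrations, the standard active-Brownian-particle model can be integrated in a few lines. The sketch below (plain NumPy, with illustrative parameter values that are not those of the Hexbug experiments) produces regular active Brownian motion for omega = 0, chiral motion for omega != 0, and recovers a passive Brownian particle for v = 0.

```python
import numpy as np

def active_brownian(v=1.0, omega=0.0, D_T=0.01, D_R=0.1,
                    dt=0.01, steps=5000, seed=1):
    """Euler-Maruyama integration of the 2D active-Brownian-particle model:
        dx/dt   = v cos(phi) + sqrt(2 D_T) xi_x
        dy/dt   = v sin(phi) + sqrt(2 D_T) xi_y
        dphi/dt = omega + sqrt(2 D_R) xi_phi
    Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((steps + 1, 2))
    phi = 0.0
    for i in range(steps):
        # rotational diffusion (plus chiral drift omega) reorients the particle
        phi += omega * dt + np.sqrt(2 * D_R * dt) * rng.standard_normal()
        # self-propulsion along the current orientation, plus translational noise
        drift = v * dt * np.array([np.cos(phi), np.sin(phi)])
        noise = np.sqrt(2 * D_T * dt) * rng.standard_normal(2)
        pos[i + 1] = pos[i] + drift + noise
    return pos

trajectory = active_brownian()
```

On times longer than the persistence time 1/D_R, such a trajectory looks diffusive but with a strongly enhanced effective diffusion coefficient, D_T + v^2/(2 D_R), which is the signature that distinguishes active from passive Brownian motion.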

Harshith Bachimanchi is shortlisted as one of the RMS Early Career Award speakers at the RMS Annual General Meeting 2024, London, UK, on 2 October 2024

The three RMS Early Career Award speakers (l to r) Harshith Bachimanchi, Akaash Kumar and Liam Rooney. (Image by RMS.)
Harshith Bachimanchi is shortlisted as one of the RMS (Royal Microscopical Society) Early Career Award speakers at the RMS AGM 2024 (RMS Annual General Meeting 2024) held in London, UK, on 2 October 2024.

In this meeting, Harshith presented his work on leveraging deep learning as a powerful tool to enhance microscopy data-analysis pipelines and study microorganisms in unprecedented detail. Taking holographic microscopy as an example, he demonstrated that combining holography with deep learning makes it possible to follow marine microorganisms throughout their lifespan, continuously measuring their three-dimensional positions and dry mass. He also presented recent results on using deep learning to transform microscopy images from one modality to another (e.g., from holography to bright-field and vice versa).

The articles related to his presentation can be found at the following links:
1. Microplankton life histories revealed by holographic microscopy and deep learning.
2. Deep-learning-powered data analysis in plankton ecology

The annual Early Career Award—for which Harshith is shortlisted as one of the potential candidates—recognises the achievements of an outstanding early-career imaging scientist for their contribution to microscopy, image analysis, or cytometry.


Invited Talk by A. Ciarlo at Italy-Sweden bilateral workshop on smart sensor technologies and applications, 1 October 2024

Representation of DNA stretching experiment with the miniTweezer. (Image by A. Ciarlo)
miniTweezers2.0: smart optical tweezers for health and life sciences
Antonio Ciarlo
Italy-Sweden bilateral workshop on smart sensor technologies and applications
Date: 1 October 2024
Time: 14:40-15:05
Place: Meeting Room Kronan, Studenthuset, Linköping University, Campus Valla

Optical tweezers have become indispensable tools in various scientific fields such as biology, physics, chemistry, and materials science. Their wide range of applications has attracted the interest of scientists with limited expertise in optics and physics. Therefore, it is crucial to have a system that is accessible to non-experts. In this study, we present miniTweezers2.0, a highly versatile and user-friendly instrument enhanced by artificial intelligence. We demonstrate the capabilities of the system through three autonomous case study experiments. The first is DNA stretching, a fundamental experiment in single-molecule force spectroscopy. The second experiment focuses on stretching red blood cells, providing insight into their membrane stiffness. The final experiment examines the electrostatic interactions between microparticles in different environments. Our results highlight the potential of automated, versatile optical tweezers to advance our understanding of nanoscale and microscale systems by enabling high-throughput, unbiased measurements. The miniTweezers2.0 system successfully demonstrates the integration of artificial intelligence and automation to make optical tweezers more accessible and versatile, especially for health and life sciences. The adaptability of miniTweezers2.0 underscores its potential as a powerful tool for future scientific exploration across multiple disciplines.
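As context for the DNA-stretching case study, single-molecule force-extension data are commonly fitted with the Marko–Siggia worm-like-chain interpolation formula. The sketch below implements it with typical textbook parameters (persistence length ≈ 50 nm, kBT ≈ 4.114 pN·nm at room temperature); it is a generic illustration, not the miniTweezers2.0 analysis pipeline.

```python
import numpy as np

def wlc_force(x, Lc, Lp=50.0, kT=4.114):
    """Marko-Siggia worm-like-chain interpolation formula:
        F(x) = (kT/Lp) * [ 1/(4 (1 - x/Lc)^2) - 1/4 + x/Lc ]
    x and the contour length Lc in nm, persistence length Lp in nm,
    kT in pN*nm; returns the stretching force in pN.
    Lp = 50 nm and kT = 4.114 pN*nm are typical textbook values,
    not fitted experimental parameters."""
    r = np.asarray(x) / Lc  # relative extension, must satisfy 0 <= r < 1
    return (kT / Lp) * (1.0 / (4.0 * (1.0 - r) ** 2) - 0.25 + r)
```

The force vanishes at zero extension and diverges as the extension approaches the contour length, which is why measured stretching curves steepen sharply near full extension.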

Invited Talk by G. Volpe at Gothenburg Lise Meitner Award 2024 Symposium, 27 September 2024

(Image created by G. Volpe with the assistance of DALL·E 2)
What is a physicist to do in the age of AI?
Giovanni Volpe
Gothenburg Lise Meitner Award 2024 Symposium
Date: 27 September 2024
Time: 15:00-15:30
Place: PJ Salen

In recent years, the rapid growth of artificial intelligence, particularly deep learning, has transformed fields from natural sciences to technology. While deep learning is often viewed as a glorified form of curve fitting, its advancement to multi-layered, deep neural networks has resulted in unprecedented performance improvements, often surprising experts. As AI models grow larger and more complex, many wonder whether AI will eventually take over the world and what role remains for physicists and, more broadly, humans.

A critical, yet underappreciated fact is that these AI systems rely heavily on vast amounts of training data, most of which are generated and annotated by humans. This dependency raises an intriguing issue: what happens when human-generated data is no longer available, or when AI begins to train on AI-generated data? The phenomenon of AI poisoning, where the quality of AI outputs declines due to self-referencing, demonstrates the limitations of current AI models. For example, in image recognition tasks, such as those involving the MNIST dataset, AI tends to gravitate towards ‘safe’ or average outputs, diminishing originality and accuracy.

In this context, the unique role of humans becomes clear. Physicists, with their capacity for originality, deep understanding of physical phenomena, and the ability to exploit fundamental symmetries in nature, bring invaluable perspectives to the development of AI. By incorporating physics-informed training architectures and embracing the human drive for meaning and discovery, we can guide the future of AI in truly innovative directions. The message is clear: physicists must remain original, pursue their passions, and continue searching for the hidden laws that govern the world and society.

Seminar by G. Volpe at ESPCI/Sorbonne, Paris, 26 September 2024

(Image by A. Argun)
Deep Learning for Microscopy
Giovanni Volpe
Date: 26 September 2024
Place: ESPCI/Sorbonne, Paris, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train and validate deep-learning solutions for digital microscopy.

Critical Casimir levitation of colloids above a bull’s-eye pattern on ArXiv

Sketch of a colloid above a substrate with a bull’s-eye pattern. (Image by the Authors.)
Critical Casimir levitation of colloids above a bull’s-eye pattern
Piotr Nowakowski, Nima Farahmand Bafi, Giovanni Volpe, Svyatoslav Kondrat, S. Dietrich
arXiv: 2409.08366

Critical Casimir forces emerge among particles or surfaces immersed in a near-critical fluid, with the sign of the force determined by surface properties and with its strength tunable by minute temperature changes. Here, we show how such forces can be used to trap a colloidal particle and levitate it above a substrate with a bull’s-eye pattern consisting of a ring with surface properties opposite to the rest of the substrate. Using the Derjaguin approximation and mean-field calculations, we find a rich behavior of spherical colloids at such a patterned surface, including sedimentation towards the ring and levitation above the ring (ring levitation) or above the bull’s-eye’s center (point levitation). Within the Derjaguin approximation, we calculate a levitation diagram for point levitation showing the depth of the trapping potential and the height at which the colloid levitates, both depending on the pattern properties, the colloid size, and the solution temperature. Our calculations reveal that the parameter space associated with point levitation shrinks if the system is driven away from a critical point, while, surprisingly, the trapping force becomes stronger. We discuss the application of critical Casimir levitation for sorting colloids by size and for determining the thermodynamic distance to criticality. Our results show that critical Casimir forces provide rich opportunities for controlling the behavior of colloidal particles at patterned surfaces.
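The competition that produces levitation can be illustrated with a schematic one-dimensional toy model (not the paper's Derjaguin or mean-field calculation): assume a repulsive sphere–substrate critical Casimir potential that decays exponentially with height on the scale of the correlation length, add the particle's weight, and locate the minimum of the total potential. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def total_potential(h, A=10.0, xi=1.0, weight=1.0):
    """Toy levitation potential: a repulsive critical Casimir contribution
    A * exp(-h/xi) (amplitude A and correlation length xi are made-up
    numbers) plus the gravitational term weight * h."""
    return A * np.exp(-h / xi) + weight * h

# Locate the potential minimum numerically on a fine grid of heights.
h = np.linspace(0.01, 10.0, 100001)
h_star = h[np.argmin(total_potential(h))]

# Analytic check for this toy model: dU/dh = 0 gives
# h* = xi * ln(A / (xi * weight)).
h_exact = 1.0 * np.log(10.0 / (1.0 * 1.0))
```

In this toy picture the trap height grows logarithmically with the Casimir amplitude and linearly with the correlation length, echoing the temperature tunability of the levitation height discussed above.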