Invited Seminar by G. Volpe at FEMTO-ST, 26 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
FEMTO-ST’s Internal Seminar 2024
Date: 26 November 2024
Time: 15:00
Place: Besançon, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, DeepTrack (currently at version 2.1), to design, train and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source, object-oriented design, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
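
For readers unfamiliar with this workflow, the sketch below illustrates the simulate-train-validate loop that DeepTrack 2.1 automates: synthetic images of a single particle are generated with known positions, a small convolutional network is trained to regress those positions, and the model is validated on freshly simulated data. The image model, network architecture, and all parameter values are illustrative assumptions; the snippet does not use the actual DeepTrack 2.1 API.

```python
# Illustrative simulate-train-validate sketch (NOT the DeepTrack 2.1 API).
import numpy as np
import tensorflow as tf

IMG = 32  # image side length in pixels (arbitrary choice for this sketch)

def simulate(n):
    """Generate n noisy images, each containing one Gaussian-blob 'particle'."""
    xy = np.random.uniform(4, IMG - 4, size=(n, 2))        # ground-truth positions
    yy, xx = np.mgrid[0:IMG, 0:IMG].astype("float32")
    imgs = np.exp(-((xx[None] - xy[:, 0, None, None]) ** 2 +
                    (yy[None] - xy[:, 1, None, None]) ** 2) / (2 * 2.0 ** 2))
    imgs += np.random.normal(0, 0.1, imgs.shape)            # background noise
    return imgs[..., None].astype("float32"), (xy / IMG).astype("float32")

x_train, y_train = simulate(2000)

# Small CNN that regresses the particle position from the image.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(IMG, IMG, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),   # predicted (x, y), normalized to [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Validate on freshly simulated (never-seen) images.
x_val, y_val = simulate(200)
print("validation MSE:", model.evaluate(x_val, y_val, verbose=0))
```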

Invited Talk by G. Volpe at SPAOM, Toledo, Spain, 22 November 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
SPAOM 2024
Date: 22 November 2024
Time: 10:15-10:45
Place: Toledo, Spain

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack (currently at version 2.1), to design, train and validate deep-learning solutions for digital microscopy.

Playing with Active Matter featured in Scilight

The article Playing with active matter, published in the American Journal of Physics, has been featured in Scilight with a news story titled “Using Hexbugs to model active matter”.

The news highlights that the approach used in the featured paper will make it possible for students in primary and secondary schools to demonstrate complex active-motion principles in the classroom on an affordable budget.
Experiments at the microscale often require very expensive equipment. The commercially available Hexbug toys used in the publication provide a macroscopic analogue of microscale active matter and have the advantage of being affordable enough for classroom experimentation.

About Scilight:
Scilight showcases the most interesting research across the physical sciences published in AIP Publishing journals.

Reference:
Hannah Daniel, Using Hexbugs to model active matter, Scilight 2024, 431101 (2024)
doi: 10.1063/10.0032401

Playing with Active Matter published in American Journal of Physics

One exemplar of the HEXBUGS used in the experiment. (Image by the Authors of the manuscript.)
Playing with Active Matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
American Journal of Physics 92, 847–858 (2024)
doi: 10.1119/5.0125111
arXiv: 2209.04168

In the past 20 years, active matter has been a very successful research field, bridging the fundamental physics of nonequilibrium thermodynamics with applications in robotics, biology, and medicine. Active particles, contrary to Brownian particles, can harness energy to generate complex motions and emerging behaviors. Most active-matter experiments are performed with microscopic particles and require advanced microfabrication and microscopy techniques. Here, we propose some macroscopic experiments with active matter employing commercially available toy robots (the Hexbugs). We show how they can be easily modified to perform regular and chiral active Brownian motion and demonstrate through experiments fundamental signatures of active systems such as how energy and momentum are harvested from an active bath, how obstacles can sort active particles by chirality, and how active fluctuations induce attraction between planar objects (a Casimir-like effect). These demonstrations enable hands-on experimentation with active matter and showcase widely used analysis methods.
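
For readers who want to connect the Hexbug experiments to the underlying model, below is a minimal numerical sketch of the (chiral) active Brownian particle equations that such macroscopic experiments emulate: a constant self-propulsion speed v along a heading θ that diffuses rotationally and, for chiral particles, drifts at an angular velocity ω. All parameter values are illustrative assumptions, not the ones used in the paper.

```python
# Minimal (chiral) active Brownian particle simulation; parameters are illustrative.
import numpy as np

def chiral_abp(n_steps=10_000, dt=0.01, v=1.0, omega=0.5, D_t=0.01, D_r=0.1, seed=0):
    """Euler-Maruyama integration of
       dx = v cos(theta) dt + sqrt(2 D_t) dW,  dy = v sin(theta) dt + sqrt(2 D_t) dW,
       dtheta = omega dt + sqrt(2 D_r) dW   (omega = 0 gives a non-chiral ABP)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        heading = np.array([np.cos(theta), np.sin(theta)])
        pos[i] = (pos[i - 1] + v * heading * dt
                  + np.sqrt(2 * D_t * dt) * rng.normal(size=2))
        theta += omega * dt + np.sqrt(2 * D_r * dt) * rng.normal()
    return pos

trajectory = chiral_abp()   # omega != 0 produces looping, chiral trajectories
print(trajectory[-1])       # final position after n_steps integration steps
```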

Invited Talk by G. Volpe at Gothenburg Lise Meitner Award 2024 Symposium, 27 September 2024

(Image created by G. Volpe with the assistance of DALL·E 2)
What is a physicist to do in the age of AI?
Giovanni Volpe
Gothenburg Lise Meitner Award 2024 Symposium
Date: 27 September 2024
Time: 15:00-15:30
Place: PJ Salen

In recent years, the rapid growth of artificial intelligence, particularly deep learning, has transformed fields from natural sciences to technology. While deep learning is often viewed as a glorified form of curve fitting, its advancement to multi-layered, deep neural networks has resulted in unprecedented performance improvements, often surprising experts. As AI models grow larger and more complex, many wonder whether AI will eventually take over the world and what role remains for physicists and, more broadly, humans.

A critical yet underappreciated fact is that these AI systems rely heavily on vast amounts of training data, most of which are generated and annotated by humans. This dependency raises an intriguing issue: what happens when human-generated data is no longer available, or when AI begins to train on AI-generated data? The phenomenon of AI poisoning, where the quality of AI outputs declines due to self-referencing, demonstrates the limitations of current AI models. For example, in image recognition tasks, such as those involving the MNIST dataset, AI tends to gravitate towards ‘safe’ or average outputs, diminishing originality and accuracy.
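
This self-referential degradation can be illustrated with a deliberately minimal toy model, sketched below: a "model" that is just a fitted 1D Gaussian is repeatedly retrained on samples drawn from its own previous fit. The numbers are illustrative assumptions; the point is only that, over many generations, the fitted spread typically drifts downward, so the outputs gravitate toward a narrow average, in the spirit of the MNIST example above.

```python
# Toy illustration of degradation from training on model-generated data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=20)     # generation 0: "human" data

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()            # "train" on the current data
    data = rng.normal(mu, sigma, size=20)          # the next generation sees only
                                                   # model-generated samples
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```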

In this context, the unique role of humans becomes clear. Physicists, with their capacity for originality, deep understanding of physical phenomena, and the ability to exploit fundamental symmetries in nature, bring invaluable perspectives to the development of AI. By incorporating physics-informed training architectures and embracing the human drive for meaning and discovery, we can guide the future of AI in truly innovative directions. The message is clear: physicists must remain original, pursue their passions, and continue searching for the hidden laws that govern the world and society.

Seminar by G. Volpe at ESPCI/Sorbonne, Paris, 26 September 2024

(Image by A. Argun)
Deep Learning for Microscopy
Giovanni Volpe
Date: 26 September 2024
Place: ESPCI/Sorbonne, Paris, France

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train and validate deep-learning solutions for digital microscopy.

Keynote Presentation by G. Volpe at SPIE-MNM, San Diego, 18 August 2024

(Image by A. Argun)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
SPIE-MNM, San Diego, CA, USA, 18 – 22 August 2024
Date: 18 August 2024
Time: 10:25 AM – 11:00 AM
Place: Conv. Ctr. Room 6F

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automatized, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train and validate deep-learning solutions for digital microscopy.

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 18-22 August 2024

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 18-22 August 2024, with the presentations listed below.

Giovanni Volpe is also a panelist in the panel discussion:

  • Towards the Utilization of AI
    21 August 2024 • 3:45 PM – 4:45 PM PDT | Conv. Ctr. Room 2

Crystallization and topology-induced dynamical heterogeneities in soft granular clusters published in Physical Review Research

Scheme of the microfluidic system for the production of clusters of a soft granular medium, and snapshots of the cluster at different times, corresponding to different sections of the channel. (Image by the Authors of the manuscript.)
Crystallization and topology-induced dynamical heterogeneities in soft granular clusters
Michal Bogdan, Jesus Pineda, Mihir Durve, Leon Jurkiewicz, Sauro Succi, Giovanni Volpe, Jan Guzowski
Physical Review Research 6, L032031 (2024)
DOI: 10.1103/PhysRevResearch.6.L032031
arXiv: 2302.05363

Soft-granular media, such as dense emulsions, foams or tissues, exhibit either fluid- or solidlike properties depending on the applied external stresses. Whereas the bulk rheology of such materials has been thoroughly investigated, the internal structural mechanics of finite soft-granular structures with free interfaces is still poorly understood. Here, we report the spontaneous crystallization and melting inside a model soft granular cluster (a densely packed aggregate of N ≈ 30-40 droplets engulfed by a fluid film) subject to a varying external flow. We develop machine learning tools to track the internal rearrangements in the quasi-two-dimensional cluster as it transits a sequence of constrictions. As the cluster relaxes from a state of strong mechanical deformations, we find differences between the dynamics of the grains within the interior of the cluster and those at its rim, with the latter experiencing larger deformations and less frequent rearrangements, effectively acting as an elastic membrane around a fluidlike core. We conclude that the observed structural-dynamical heterogeneity results from an interplay of the topological constraints, due to the presence of a closed interface, and the internal solid-fluid transitions. We discuss the universality of such behavior in various types of finite soft granular structures, including biological tissues.
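
As a generic illustration of how crystalline order in such quasi-two-dimensional droplet packings can be quantified (a standard analysis, not the authors' machine-learning pipeline), the sketch below computes the bond-orientational order parameter |ψ6| from tracked droplet centres, using a Delaunay triangulation to define neighbours. The psi6 helper and the synthetic test lattice are illustrative assumptions.

```python
# Generic |psi_6| analysis sketch for 2D droplet packings (illustrative only).
import numpy as np
from scipy.spatial import Delaunay

def psi6(points):
    """Return |psi_6| for each droplet centre; values near 1 indicate local
    hexagonal (crystalline) order, lower values indicate disorder."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    order = np.zeros(len(points))
    for i in range(len(points)):
        neigh = indices[indptr[i]:indptr[i + 1]]
        if len(neigh) == 0:
            continue
        angles = np.arctan2(points[neigh, 1] - points[i, 1],
                            points[neigh, 0] - points[i, 0])
        order[i] = np.abs(np.mean(np.exp(6j * angles)))
    return order

# Quick check on a synthetic hexagonal lattice with small positional noise:
# interior droplets give |psi_6| close to 1 (boundary droplets lower the mean).
a = np.arange(10.0)
x, y = np.meshgrid(a, a * np.sqrt(3) / 2)
x[1::2] += 0.5                                     # offset every other row
pts = np.c_[x.ravel(), y.ravel()] + np.random.normal(0, 0.03, (100, 2))
print("mean |psi_6| =", psi6(pts).mean())
```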

Book “Deep Learning Crash Course” published by No Starch Press

The book Deep Learning Crash Course, authored by Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo, has been published online by No Starch Press in July 2024.

Preorder Discount
A preorder discount is available: preorders with coupon code PREORDER will receive 25% off. Link: Preorder @ No Starch Press | Deep Learning Crash Course

Links
@ No Starch Press

Citation 
Giovanni Volpe, Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, and Carlo Manzo. Deep Learning Crash Course. No Starch Press.
ISBN-13: 9781718503922