Poster by A. Callegari at SPIE-OTOM, San Diego, 19 August 2024

Simplified sketch of the neural network used for the simulations of intracavity optical trapping. (Image by A. Callegari.)
Neural networks for intracavity optical trapping
Agnese Callegari, Mathias Samuelsson, Antonio Ciarlo, Giuseppe Pesce, David Bronte Ciriza, Alessandro Magazzù, Onofrio M. Maragò, Antonio Sasso, and Giovanni Volpe
SPIE-OTOM, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 5:30 PM – 7:00 PM
Place: Conv. Ctr. Exhibit Hall A

Intracavity optical tweezers have proven successful for trapping microscopic particles at very low average power – much lower than that of standard optical tweezers. This feature makes them particularly promising for the study of biological samples. The modelling of such systems, though, requires time-consuming numerical simulations that limit their usability and predictive power. With the help of machine learning, we can overcome the numerical bottleneck – the calculation of optical forces, torques, and losses – reproduce, in simulation, the results in the literature, and generalize to the case of counterpropagating-beam intracavity optical trapping.
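As an illustration of how such a speed-up works, here is a minimal sketch of an overdamped Brownian dynamics loop in which a neural network stands in for the expensive optical-force calculation. The network architecture, its (omitted) training, and all numerical parameters are placeholder assumptions, not the simulation code behind this work:

```python
import numpy as np
import torch

# Placeholder network mapping particle position (x, y, z) to optical
# force (Fx, Fy, Fz); in practice it would be trained on precomputed
# intracavity force data before being used here.
force_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3),
)

kT = 4.11e-21     # thermal energy at room temperature (J)
gamma = 1.9e-8    # Stokes drag of a 1 um radius sphere in water (kg/s)
dt = 1e-4         # integration time step (s)
steps = 10_000

pos = np.zeros(3)             # particle starts at the trap center
traj = np.empty((steps, 3))   # recorded trajectory

for i in range(steps):
    with torch.no_grad():
        F = force_net(torch.tensor(pos, dtype=torch.float32)).numpy()
    # Overdamped Langevin step: deterministic drift plus thermal noise.
    pos = pos + F * dt / gamma + np.sqrt(2 * kT * dt / gamma) * np.random.randn(3)
    traj[i] = pos
```

Because a forward pass through a small network costs microseconds, replacing the force evaluation inside this loop is what removes the numerical bottleneck.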

Poster by A. Callegari at SPIE-OTOM, San Diego, 19 August 2024

Schematic of the scattering of a light ray on a Janus particle. (Image by A. Callegari.)
Janus particles in a travelling optical landscape
Agnese Callegari, Giovanni Volpe
SPIE-OTOM, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 5:30 PM – 7:00 PM
Place: Conv. Ctr. Exhibit Hall A

Janus particles possess dual properties that make them very versatile for soft and active matter applications. Modeling their interaction with light, including optical forces and torques, presents challenges. We present here a model of spherical, metal-coated Janus particles in the geometrical optics approximation. Via an extension of the Optical Tweezers in Geometrical Optics (OTGO) toolbox, we calculate optical forces, torques, and absorption. Through numerical simulations, we demonstrate control over the Janus particle dynamics in traveling-wave optical landscapes by adjusting their speed and periodicity.
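The elementary building block of such a ray-optics model is the momentum balance of a single ray at the particle surface. The sketch below computes the force from one ray specularly reflected at the metal-coated cap, neglecting refraction and absorption; it is a simplified illustration, not the OTGO extension itself:

```python
import numpy as np

def reflected_ray_force(power, d_in, normal, n_medium=1.33, R=1.0):
    """Force from a single ray of given power (W) reflected at a surface
    patch with unit normal, for reflectance R, in a medium of index
    n_medium. The ray carries momentum flux n_medium * power / c along
    its direction; the force is the momentum removed from the light."""
    c = 299_792_458.0
    d_in = d_in / np.linalg.norm(d_in)
    normal = normal / np.linalg.norm(normal)
    d_out = d_in - 2 * np.dot(d_in, normal) * normal  # specular reflection
    return n_medium * power / c * (d_in - R * d_out)  # force on particle (N)

# Example: a 1 mW ray hitting the metal cap at 45 degrees incidence.
F = reflected_ray_force(
    1e-3,
    d_in=np.array([0.0, 0.0, -1.0]),
    normal=np.array([0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]),
)
print(F)  # components on the pN scale
```

Summing this balance over a bundle of rays, with the appropriate Fresnel coefficients on the coated and uncoated hemispheres, yields the total optical force and torque on the Janus particle.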

Poster by M. Granfors at SPIE-ETAI, San Diego, 19 August 2024

GAUDI’s latent space representation of Watts–Strogatz Small-World Graphs. (Image by M. Granfors.)
Global graph features unveiled by unsupervised geometric deep learning
Mirja Granfors, Jesús Pineda, Blanca Zufiria Gerbolés, Jiawei Sun, Joana B. Pereira, Carlo Manzo, and Giovanni Volpe
Date: 19 August 2024
Time: 17:30-19:00 (PDT)

Graphs are used to model complex relationships in various domains, such as interacting particles or neural connections within a brain. Efficient analysis and classification of graphs pose significant challenges due to their inherent structural complexity and variability. Here, an approach is presented to address these challenges through the development of the graph autoencoder GAUDI. GAUDI effectively summarizes graph structures while preserving important topological details through multiple hierarchical pooling steps. This enables the extraction of physical parameters describing the graphs. We demonstrate the performance of GAUDI across diverse graph data originating from complex systems, including the classification of protein assembly structures from single-molecule localization microscopy data, the analysis of collective behavior, and correlations between brain connectivity and age. This approach holds great promise for examining diverse systems, enhancing our comprehension of various forms of graph data.
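For readers unfamiliar with hierarchical pooling, the toy encoder below shows the general pattern in PyTorch Geometric: graph convolutions interleaved with pooling steps that successively coarsen the graph into a fixed-length latent vector. This is a generic illustration under assumed layer choices, not GAUDI's actual architecture:

```python
import torch
from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool

class ToyGraphEncoder(torch.nn.Module):
    """Encoder half of a graph autoencoder: each pooling step halves the
    number of nodes, so graphs of arbitrary size are compressed into one
    latent vector per graph."""

    def __init__(self, in_dim, hidden=64, latent=8):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.pool1 = TopKPooling(hidden, ratio=0.5)
        self.conv2 = GCNConv(hidden, hidden)
        self.pool2 = TopKPooling(hidden, ratio=0.5)
        self.lin = torch.nn.Linear(hidden, latent)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = self.conv2(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        return self.lin(global_mean_pool(x, batch))  # one vector per graph
```

A decoder trained to reconstruct the graph from this latent vector closes the autoencoder, and the latent space can then be mined for physical parameters, such as the rewiring probability of the Watts–Strogatz graphs shown in the figure.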

Presentation by G. Wang at SPIE-MNM, San Diego, 19 August 2024

Schematic and brightfield image (inset) of the movement of a 16 μm diameter micromotor under illumination by a linearly polarized 1064 nm laser. (Image by G. Wang.)
Light-driven metamachines
Gan Wang, Marcel Rey, Antonio Ciarlo, Mohammad Mahdi Shanei, Kunli Xiong, Giuseppe Pesce, Mikael Käll, and Giovanni Volpe
Date: 19 August 2024
Time: 16:25-16:40 (PDT)

The progress of integrated circuits following Moore's law has spurred opportunities for downsizing traditional mechanical components. Despite advancements in single on-chip motors below ~100 μm using electrical, optical, and magnetic drives, creating complex machines with multiple units remains challenging. Here, we developed a ~10 μm on-chip micromotor using a method compatible with the complementary metal-oxide-semiconductor (CMOS) process. A metasurface embedded in the motor controls the direction of the incident laser beam, enabling momentum exchange with light for movement. The rotation direction and speed are adjustable through the metasurface design as well as the intensity and polarization of the applied light. By combining these motors and controlling their configuration, we create complex machines below 50 μm in size that reproduce the motion modes of traditional machinery, such as rotary motion in gear trains of multiple coupled gears and linear motion in rack-and-pinion assemblies. Furthermore, by introducing a metallic micromirror into the machine, we realize self-controlled scanning of the laser over a fixed area.
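For a sense of scale of the light-driven actuation, the force available from redirecting a laser beam is bounded by the beam's momentum flux; a back-of-the-envelope estimate (the 1 mW power is illustrative, not a figure from the abstract):

```latex
% Upper bound: perfect retroreflection of the incident beam.
F_{\max} = \frac{2P}{c}
         \approx \frac{2 \times 10^{-3}\,\mathrm{W}}{3 \times 10^{8}\,\mathrm{m\,s^{-1}}}
         \approx 7\,\mathrm{pN}
\qquad \text{for } P = 1\,\mathrm{mW}.
```

Piconewton-scale forces are ample to drive micrometer-scale rotors against viscous drag in water.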

Presentation by M. Selin at SPIE-ETAI, San Diego, 19 August 2024

3D visualization of the full Minitweezers 2.0 system. (Illustration by M. Selin.)
Integrating real-time deep learning for automation of optical tweezers experiments
Martin Selin
SPIE-ETAI, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 4:10 PM – 4:25 PM
Place: Conv. Ctr. Room 6D

Optical tweezers are perhaps the most widely used tool for measuring forces and manipulating particles at the micro- and nanoscale, which has earned them widespread adoption in physics, chemistry, and biology. Despite advancements in computer interaction driven by large-scale generative AI models, experimental sciences – and optical tweezers in particular – remain predominantly manual and knowledge-intensive, owing to the specificity of methods and instruments. Here, we demonstrate how integrating the components of optical tweezers – laser, motor, microfluidics, and camera – into a single software simplifies otherwise challenging experiments by enabling automation through the integration of real-time analysis with deep learning. We highlight this through a DNA pulling experiment, showcasing automated single-molecule force spectroscopy and intelligent bond detection, and an investigation into core-shell particle behavior under varying pH and salinity, where deep learning compensates for experimental drift. We conclude that automating experimental procedures increases reliability and throughput, while also opening up the possibility for new types of experiments.
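At its core, such automation is a closed loop: grab a frame, run a real-time network, and act on the result. The sketch below illustrates a drift-compensation loop of this kind; all device objects and the localization model are hypothetical placeholders, not the actual miniTweezers software interface:

```python
import numpy as np

def drift_compensation_loop(camera, stage, model,
                            target=np.array([0.0, 0.0]), gain=0.1, tol=0.01):
    """Keep the tracked particles centered on `target` (image units) by
    feeding back neural-network localizations to the stage. `camera`,
    `stage`, and `model` are hypothetical stand-ins for the instrument's
    camera, motorized stage, and real-time localization network."""
    while True:
        frame = camera.grab()                      # latest camera image
        positions = model.predict(frame[None])[0]  # NN particle positions
        error = target - positions.mean(axis=0)    # measured drift
        stage.move_relative(gain * error[0], gain * error[1])
        if np.linalg.norm(error) < tol:            # drift within tolerance
            break
```

The same loop structure, with a different model and actuation rule, can in principle also cover automated bond detection in the DNA pulling experiment.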

Presentation by A. Callegari at SPIE-ETAI, San Diego, 19 August 2024

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Optical forces and torques in the geometrical optics approximation calculated with neural networks
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria Antonia Iatì, Giovanni Volpe, and Onofrio M. Maragò
SPIE-ETAI, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 1:55 PM – 2:10 PM
Place: Conv. Ctr. Room 6D

Optical tweezers manipulate microscopic objects with light by exchanging momentum and angular momentum between light and particle, generating optical forces and torques. Understanding and predicting them is essential for designing and interpreting experiments. Here, we focus on optical forces and torques in the geometrical optics regime, and we employ neural networks to calculate them. Using an optically trapped spherical particle as a benchmark, we show that neural networks are faster and more accurate than the direct geometrical-optics calculation. We demonstrate the effectiveness of our approach by studying the dynamics of systems that are computationally “hard” for traditional methods.
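The underlying task is plain supervised regression: a feed-forward network learns the map from particle position (and, for non-spherical particles, orientation) to optical force. A minimal training sketch with placeholder data and assumed hyperparameters (the reference below describes the actual approach):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: in practice, X would be sampled particle
# positions and Y the corresponding forces computed by ray optics
# with a fine grid of rays.
X = torch.randn(100_000, 3)   # particle positions (placeholder)
Y = torch.randn(100_000, 3)   # ray-optics forces (placeholder)

net = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(X, Y), batch_size=1024, shuffle=True)

for epoch in range(10):
    for xb, yb in loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(xb), yb)
        loss.backward()
        opt.step()
```

Once trained, a single network evaluation replaces a full ray-scattering computation, which is what makes previously “hard” dynamics simulations tractable.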

Reference
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò, Faster and more accurate geometrical-optics optical force calculation using neural networks, ACS Photonics 10, 234–241 (2023)

Invited Presentation by M. Selin at SPIE-OTOM, San Diego, 18 August 2024

3D visualization of the full Minitweezers 2.0 system. (Illustration by M. Selin.)
From stretching DNA to probing polymer stiffness: expanding experimental reach with automated optical tweezers
Martin Selin
SPIE-OTOM, San Diego, CA, USA, 18 – 22 August 2024
Date: 18 August 2024
Time: 12:15 PM – 12:45 PM
Place: Conv. Ctr. Room 6D

Optical tweezers have become ubiquitous tools in science, with uses in disciplines ranging from biology to physics, chemistry, and materials science, thousands of users around the world, and a continuously growing number of applications. Here we show how a specially designed instrument, called miniTweezers2.0, can be made both highly versatile and user friendly. We demonstrate the system on three different experiments, which, thanks to the close integration of the various parts of the tweezers into a single software, are performed fully autonomously. The first experiment is DNA stretching, a fundamental single-molecule force spectroscopy experiment. The second is the stretching of red blood cells, which can be used to gauge the membrane stiffness of the cells. Lastly, we investigate the interaction between core-shell particles in various environments, showing how the soft polymer layer extends or contracts depending on pH and salinity. Our work shows the potential of automated and versatile optical tweezers systems in advancing our understanding of nano- and micro-scale systems.
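For context on the DNA experiment: stretching curves of double-stranded DNA are conventionally fitted with the worm-like-chain model, for which the Marko–Siggia interpolation gives the force F at extension x (this is the standard model in single-molecule force spectroscopy, stated here for reference rather than as the specific analysis used in the talk):

```latex
F(x) = \frac{k_B T}{L_p}
       \left[ \frac{1}{4\,(1 - x/L_0)^2} - \frac{1}{4} + \frac{x}{L_0} \right],
```

with persistence length $L_p \approx 50\,\mathrm{nm}$ and contour length $L_0$ for double-stranded DNA.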

Keynote Presentation by G. Volpe at SPIE-MNM, San Diego, 18 August 2024

(Image by A. Argun.)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
SPIE-MNM, San Diego, CA, USA, 18 – 22 August 2024
Date: 18 August 2024
Time: 10:25 AM – 11:00 AM
Place: Conv. Ctr. Room 6F

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy.
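DeepTrack's central idea is to train networks on simulated microscopy images paired with exact ground truth. The plain-PyTorch sketch below mimics that simulate-and-train workflow for particle localization; it deliberately avoids DeepTrack's own API, so all names and parameters here are illustrative assumptions:

```python
import numpy as np
import torch

def simulate_particle_image(size=32):
    """Toy microscopy image: one Gaussian blob at a random position
    plus noise; returns the image and the ground-truth position."""
    pos = np.random.uniform(8, size - 8, 2)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.exp(-((xx - pos[0])**2 + (yy - pos[1])**2) / (2 * 2.0**2))
    img += 0.1 * np.random.randn(size, size)
    return img.astype(np.float32), pos.astype(np.float32)

# Small CNN regressing the particle position from the image.
net = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
    torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 8 * 8, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    batch = [simulate_particle_image() for _ in range(32)]
    imgs = torch.tensor(np.stack([b[0] for b in batch]))[:, None]
    labels = torch.tensor(np.stack([b[1] for b in batch]))
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(imgs), labels)
    loss.backward()
    opt.step()
```

Because the simulator provides unlimited labeled data, no manual annotation is needed – the same reason simulation-based training works well for quantitative microscopy in general.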

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 18-22 August 2024

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 18-22 August 2024, with the presentations listed below.

Giovanni Volpe is also panelist in the panel discussion:

  • Towards the Utilization of AI
    21 August 2024 • 3:45 PM – 4:45 PM PDT | Conv. Ctr. Room 2

Plenary Talk by G. Volpe at ENO-CANCOA, Cartagena, Colombia, 13 June 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
Deep learning for microscopy
Giovanni Volpe
Encuentro Nacional de Óptica y la Conferencia Andina y del Caribe en Óptica y sus Aplicaciones (ENO-CANCOA)
Cartagena, Colombia, 13 June 2024

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.