Poster by M. Granfors at SPIE-ETAI, San Diego, 19 August 2024

GAUDI’s latent space representation of Watts–Strogatz Small-World Graphs. (Image by M. Granfors.)
Global graph features unveiled by unsupervised geometric deep learning
Mirja Granfors, Jesús Pineda, Blanca Zufiria Gerbolés, Jiawei Sun, Joana B. Pereira, Carlo Manzo, and Giovanni Volpe
Date: 19 August 2024
Time: 17:30-19:00 (PDT)

Graphs are used to model complex relationships in various domains, such as interacting particles or neural connections within a brain. Efficient analysis and classification of graphs pose significant challenges due to their inherent structural complexity and variability. Here, we present an approach to address these challenges through the development of the graph autoencoder GAUDI. GAUDI effectively summarizes graph structures while preserving important topological details through multiple hierarchical pooling steps. This enables the extraction of physical parameters describing the graphs. We demonstrate the performance of GAUDI across diverse graph data originating from complex systems, including the classification of protein assembly structures from single-molecule localization microscopy data, as well as the analysis of collective behavior and of correlations between brain connections and age. This approach holds great promise for examining diverse systems and enhancing our comprehension of various forms of graph data.
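As a purely illustrative sketch of the kind of architecture described above, a graph encoder with multiple hierarchical pooling steps can be written with PyTorch Geometric as follows. This is not GAUDI's published implementation: the layer widths, pooling ratios, and library choice are assumptions made for the example.

```python
# Illustrative graph encoder with hierarchical pooling, in the spirit of the abstract.
# Not GAUDI's actual architecture: widths, pooling ratios, and the use of
# PyTorch Geometric are assumptions for the sake of the example.
import torch
from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool


class HierarchicalGraphEncoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels=64, latent_dim=8):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.pool1 = TopKPooling(hidden_channels, ratio=0.5)  # first coarsening step
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.pool2 = TopKPooling(hidden_channels, ratio=0.5)  # second coarsening step
        self.lin = torch.nn.Linear(hidden_channels, latent_dim)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = self.conv2(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        # Aggregate the coarsened node features into one latent vector per graph.
        return self.lin(global_mean_pool(x, batch))
```

In a full autoencoder, a decoder would reconstruct the graph from this latent vector, and the latent coordinates could then be related to physical parameters of the generating model, such as the rewiring probability of Watts–Strogatz graphs.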

Presentation by G. Wang at SPIE-MNM, San Diego, 19 August 2024

Schematic and brightfield image (inset) of the movement of a 16 μm diameter micromotor under illumination by a linearly polarized 1064 nm laser. (Image by G. Wang.)
Light-driven metamachines
Gan Wang, Marcel Rey, Antonio Ciarlo, Mohammad Mahdi Shanei, Kunli Xiong, Giuseppe Pesce, Mikael Käll, and Giovanni Volpe
Date: 19 August 2024
Time: 16:25-16:40 (PDT)

The progress of integrated circuits following Moore’s law has spurred opportunities for downsizing traditional mechanical components. Despite advancements in single on-chip motors below ~100 μm driven electrically, optically, and magnetically, creating complex machines with multiple units remains challenging. Here, we developed a ~10 μm on-chip micromotor using a method compatible with the complementary metal-oxide-semiconductor (CMOS) process. A metasurface embedded in the motor controls the direction of the incident laser beam, enabling momentum exchange with light for movement. The rotation direction and speed are adjustable through the metasurface design, as well as through the intensity and polarization of the applied light. By combining these motors and controlling their configuration, we create complex machines below 50 μm in size analogous to traditional machines, such as gear trains of multiple coupled gears operating in a rotary mode and rack-and-pinion assemblies operating in a linear mode. Furthermore, by introducing a metallic micromirror into the machine, we realize self-controlled scanning of the laser over a fixed area.

Presentation by M. Selin at SPIE-ETAI, San Diego, 19 August 2024

3D visualization of the full miniTweezers 2.0 system. (Illustration by M. Selin.)
Integrating real-time deep learning for automation of optical tweezers experiments
Martin Selin
SPIE-ETAI, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 4:10 PM – 4:25 PM
Place: Conv. Ctr. Room 6D

Optical tweezers are perhaps the most widely used tool for measuring forces and manipulating particles at the micro- and nanoscale, which has given them widespread adoption in physics, chemistry, and biology. Despite advancements in computer interaction driven by large-scale generative AI models, experimental sciences—and optical tweezers in particular—remain predominantly manual and knowledge-intensive, owing to the specificity of methods and instruments. Here, we demonstrate how integrating the components of optical tweezers—laser, motor, microfluidics, and camera—into a single software simplifies otherwise challenging experiments by enabling automation through real-time analysis with deep learning. We highlight this through a DNA pulling experiment, showcasing automated single-molecule force spectroscopy and intelligent bond detection, and an investigation into core-shell particle behavior under varying pH and salinity, where deep learning compensates for experimental drift. We conclude that automating experimental procedures increases reliability and throughput, while also opening up the possibility for new types of experiments.
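As a purely illustrative sketch of the kind of closed loop this enables (not the actual miniTweezers 2.0 software or its interfaces), the following Python example couples a camera, a position estimator standing in for the deep-learning model, and a motorized stage in a drift-compensation loop; all class and function names are hypothetical placeholders.

```python
# Illustrative real-time drift-compensation loop, not the actual miniTweezers 2.0
# software. FakeCamera and FakeStage are hypothetical stand-ins for the instrument
# hardware, and the position estimate stands in for a deep-learning localization model.
import numpy as np

class FakeStage:
    """Stand-in for the motorized stage: accumulates relative moves."""
    def __init__(self):
        self.offset = np.zeros(2)
    def move_relative(self, step):
        self.offset += step

class FakeCamera:
    """Stand-in for the camera: the apparent particle position is the drift
    plus the current stage correction plus measurement noise."""
    def __init__(self, stage, drift=(5.0, -3.0)):
        self.stage = stage
        self.drift = np.array(drift, dtype=float)
    def measure_position(self):
        return self.drift + self.stage.offset + np.random.normal(scale=0.2, size=2)

def drift_compensation_loop(camera, stage, target=(0.0, 0.0), gain=0.2, n_steps=500):
    """Estimate the particle position from each frame (in practice a deep-learning
    model would localize the particle in the image) and apply a proportional
    correction toward the target position."""
    target = np.asarray(target)
    for _ in range(n_steps):
        apparent = camera.measure_position()
        stage.move_relative(gain * (target - apparent))
    return stage.offset

if __name__ == "__main__":
    stage = FakeStage()
    camera = FakeCamera(stage)
    correction = drift_compensation_loop(camera, stage)
    print("Stage correction compensating the drift:", correction)  # ≈ (-5.0, 3.0)
```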

Presentation by A. Callegari at SPIE-ETAI, San Diego, 19 August 2024

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Optical forces and torques in the geometrical optics approximation calculated with neural networks
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria Antonia Iatì, Giovanni Volpe, and Onofrio M. Maragò
SPIE-ETAI, San Diego, CA, USA, 18 – 22 August 2024
Date: 19 August 2024
Time: 1:55 PM – 2:10 PM
Place: Conv. Ctr. Room 6D

Optical tweezers manipulate microscopic objects with light by exchanging momentum and angular momentum between particle and light, generating optical forces and torques. Understanding and predicting these forces and torques is essential for designing and interpreting experiments. Here, we focus on the geometrical optics regime and employ neural networks to calculate the optical forces and torques. Using an optically trapped spherical particle as a benchmark, we show that neural networks are faster and more accurate than the direct geometrical-optics calculation. We demonstrate the effectiveness of our approach in studying the dynamics of systems that are computationally “hard” for traditional computation.
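The reference below describes the method in detail; as a hedged sketch of the general idea (the network size, input parametrization, and training data here are assumptions, not the published architecture), a small fully connected network can be trained to regress forces and torques from the particle coordinates:

```python
# Sketch of regressing optical forces and torques from particle coordinates with
# a small fully connected network. Architecture, inputs, and the placeholder
# training data are illustrative assumptions, not the published model.
import torch
from torch import nn

# Inputs: particle position (x, y, z) relative to the beam focus; orientation
# angles could be added for non-spherical particles.
# Outputs: force components (Fx, Fy, Fz) and torque components (Tx, Ty, Tz).
model = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 6),
)

# Placeholder dataset: in practice, the targets would be computed with the
# geometrical-optics (ray-scattering) method over a grid of particle positions.
positions = torch.rand(10_000, 3) * 2 - 1     # positions in arbitrary units
forces_torques = torch.randn(10_000, 6)       # stand-in for ray-optics results

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(positions), forces_torques)
    loss.backward()
    optimizer.step()

# Once trained, evaluating the network at a new position is much faster than
# re-running the ray-scattering calculation at every step of a dynamics simulation.
```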

Reference
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò, Faster and more accurate geometrical-optics optical force calculation using neural networks, ACS Photonics 10, 234–241 (2023)

Invited Presentation by M. Selin at SPIE-OTOM, San Diego, 18 August 2024

3D visualization of the full miniTweezers 2.0 system. (Illustration by M. Selin.)
From stretching DNA to probing polymer stiffness: expanding experimental reach with automated optical tweezers
Martin Selin
SPIE-OTOM, San Diego, CA, USA, 18 – 22 August 2024
Date: 18 August 2024
Time: 12:15 PM – 12:45 PM
Place: Conv. Ctr. Room 6D

Optical tweezers have become ubiquitous tools in science, with use in disciplines ranging from biology to physics, chemistry, and materials science, thousands of users around the world, and a continuously growing number of applications. Here we show how a specially designed instrument, called miniTweezers2.0, can be made both highly versatile and user friendly. We demonstrate the system on three different experiments, which, thanks to the close integration of the various parts of the tweezers into a single software, are performed fully autonomously. The first experiment involves DNA stretching, a fundamental single-molecule force spectroscopy experiment. The second experiment involves the stretching of red blood cells, which can be used to gauge the membrane stiffness of the cells. Lastly, we investigate the interaction between core-shell particles in various environments, showing how the soft polymer layer extends or contracts depending on pH and salinity. Our work shows the potential of automated and versatile optical tweezers systems in advancing our understanding of nano- and micro-scale systems.

Keynote Presentation by G. Volpe at SPIE-MNM, San Diego, 18 August 2024

(Image by A. Argun)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
SPIE-MNM, San Diego, CA, USA, 18 – 22 August 2024
Date: 18 August 2024
Time: 10:25 AM – 11:00 AM
Place: Conv. Ctr. Room 6F

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software package, DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy.

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 18-22 August 2024

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 18-22 August 2024, with the presentations listed below.

Giovanni Volpe is also a panelist in the panel discussion:

  • Towards the Utilization of AI
    21 August 2024 • 3:45 PM – 4:45 PM PDT | Conv. Ctr. Room 2

Plenary Talk by G. Volpe at ENO-CANCOA, Cartagena, Colombia, 13 June 2024

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
Deep learning for microscopy
Giovanni Volpe
Encuentro Nacional de Óptica y la Conferencia Andina y del Caribe en Óptica y sus Aplicaciones (ENO-CANCOA)
Cartagena, Colombia, 13 June 2024

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software package, currently at version 2.1 (DeepTrack 2.1), to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
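As an indicative example of the DeepTrack 2 workflow (adapted from the kind of examples given in the DeepTrack 2.0/2.1 publication; exact argument names and module paths may differ between versions and should be checked against the documentation), a synthetic fluorescence image of a point particle can be generated and used as training data for a localization network:

```python
# Indicative DeepTrack 2 simulation snippet. Exact argument names and module
# paths may differ between DeepTrack versions; check the DeepTrack 2.1
# documentation before use.
import numpy as np
import deeptrack as dt

# A point scatterer placed at a random position within a 64x64 image.
particle = dt.scatterers.PointParticle(
    position=lambda: np.random.uniform(10, 54, size=2),
    intensity=100,
)

# A fluorescence microscope that images the scatterer.
optics = dt.optics.Fluorescence(
    NA=0.7,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

# Composing optics and sample defines a pipeline that generates synthetic
# training images on demand; each update() resamples the random parameters,
# providing an endless stream of labeled images for training a localization network.
imaged_particle = optics(particle)
image = imaged_particle.update().resolve()
```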

Seminar by A. Rohrbach on 15 May 2024

Correlated photons in superresolution imaging and correlated motions in biophysical interaction
Alexander Rohrbach
15 May 2024
12:30
Nexus

Abstract
Our research concentrates on light scattering at small biological structures, enabling image formation and particle tracking in biophysics.
Coherent light, i.e., correlated photons, enables higher scattering cross-sections than, for instance, incoherent fluorescence light. Laser light thereby makes it possible to acquire images with millisecond integration times and little motion blur of dynamic particles, such as viruses in the cell periphery. The inherent speckle formation in coherent imaging is avoided by a novel technique called Rotating Coherent Scattering (ROCS) microscopy, which is the only technique that can image diffusing viruses and thereby allows us to investigate their binding behavior to the cell periphery.
In the second part of my talk, I discuss correlated particle motions, i.e., timescale-dependent memory effects in viscoelastic media such as the cell periphery. Using a frequency decomposition of the tracked particle motions, binding of particles to the cell that would otherwise remain invisible can be made visible.
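As a generic illustration of such a frequency decomposition (not the speaker's actual analysis pipeline), the power spectral density of a tracked particle trajectory can be computed with Welch's method and compared across frequency bands; the sampling rate and the synthetic trajectory below are assumptions:

```python
# Generic frequency decomposition of a tracked particle trajectory with Welch's
# method. The sampling rate and the synthetic trajectory are assumptions; this is
# not the analysis pipeline presented in the seminar.
import numpy as np
from scipy.signal import welch

fs = 1000.0                      # tracking rate in Hz (assumption)
t = np.arange(0, 10, 1 / fs)     # 10 s of trajectory

# Synthetic trajectory: a slow oscillatory drift plus fast diffusive fluctuations.
rng = np.random.default_rng(0)
x = 0.05 * np.sin(2 * np.pi * 0.5 * t)
x += 0.01 * np.cumsum(rng.standard_normal(t.size)) / np.sqrt(fs)

# Power spectral density: binding or viscoelastic memory effects would show up
# as changes of the spectrum in specific frequency bands (i.e., at specific timescales).
freqs, psd = welch(x, fs=fs, nperseg=4096)
low_band = psd[(freqs > 0.1) & (freqs < 1.0)].mean()
high_band = psd[(freqs > 10.0) & (freqs < 100.0)].mean()
print(f"low-frequency power: {low_band:.3e}, high-frequency power: {high_band:.3e}")
```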

Short CV
I studied physics at the University of Erlangen-Nürnberg (Germany), where I did my diploma in 1994 at the Institute of Optics. During my PhD in physics in Heidelberg, I investigated different kinds of light scattering at the university, as well as evanescent-wave microscopy at the Max Planck Institute for Medical Research. In both cases I worked on applications in cell biology. After my PhD in 1998, I continued my research as a postdoc at the European Molecular Biology Laboratory (EMBL) in Heidelberg, where I intensified my studies on microscopy, light scattering, and optical forces. In 2001 I became project leader of the photonic force microscopy group at EMBL, where I concentrated on the further technical development of this scanning probe microscopy and on applications in biophysics and soft matter physics. In 2005 I was awarded the habilitation in physics at the University of Heidelberg. Since January 2006 I have been a full professor of Bio- and Nano-Photonics at IMTEK, Faculty of Engineering, and since 2007 also a member of the Faculty of Physics at the University of Freiburg.
I love mathematical models and I hate when the performance of scientists is squeezed into metric numbers.

Space Slam Event – Visit by Marcus Wandt, Sweden’s third astronaut and Chalmers alumnus

Group picture of the participants in the Space Slam event. (Image provided by R. Cumming)
On Tuesday, 9 April 2024, the “Space Slam” event took place at Chalmers University.

At this event, young researchers present exciting space-related work they have done or are doing at Chalmers / Gothenburg University in one or two minutes, with the help of a picture and/or a prop. Marcus Wandt, Sweden’s third astronaut, participated in the event.

At this event, Hari presented his project, titled “Annelid-inspired soft robot for planetary exploration”, which is a collaboration with the European Space Agency (ESTEC-ESA) and Gothenburg University.