Presentation by A. Callegari at AI for Scientific Data Analysis, Gothenburg, 31 May 2023

Focused rays scattered by an ellipsoidal particle. (Image reproduced from: 10.1021/acsphotonics.2c01565.)
Faster and more accurate geometrical-optics optical force calculation using neural networks
Agnese Callegari

Optical forces are often calculated by discretizing the trapping light beam into a set of rays and using geometrical optics to compute the exchange of momentum. However, the number of rays sets a trade-off between calculation speed and accuracy. Here, we show that using neural networks permits one to overcome this limitation, obtaining not only faster but also more accurate simulations. We demonstrate this using an optically trapped spherical particle for which we obtain an analytical solution to use as ground truth. Then, we take advantage of the acceleration provided by neural networks to study the dynamics of an ellipsoidal particle in a double trap, which would be computationally impossible otherwise.
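
A minimal sketch of the idea (not the authors' implementation): a small fully connected network is trained to regress the optical force from the particle position, so that the expensive ray-by-ray geometrical-optics calculation is replaced by a fast learned approximation inside the dynamics simulation. The function go_force below is a placeholder harmonic force standing in for the actual geometrical-optics computation that would generate the training data.

```python
# Minimal sketch (not the authors' implementation): replace the ray-by-ray
# geometrical-optics force evaluation with a small neural-network regressor.
import numpy as np
import tensorflow as tf

def go_force(positions):
    """Placeholder for the expensive geometrical-optics force calculation."""
    k = 1.0  # trap stiffness (arbitrary units)
    return -k * positions

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(20_000, 3))  # particle positions (arbitrary units)
y_train = go_force(x_train)                         # corresponding forces

# Small fully connected regressor: position in, force out.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=5, batch_size=256, verbose=0)

# Once trained, the network is evaluated inside the Brownian-dynamics loop
# instead of re-tracing thousands of rays at every time step.
forces = model.predict(rng.uniform(-1.0, 1.0, size=(10, 3)), verbose=0)
```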

Date: 31 May 2023
Time: 10:45
Place: MC2 Kollektorn
Event: AI for Scientific Data Analysis: Miniconference

Presentation by C. B. Adiels at AI for Scientific Data Analysis, Gothenburg, 31 May 2023

Phase-contrast image before virtual staining. (Image reproduced from https://doi.org/10.1101/2022.07.18.500422.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Caroline B. Adiels

Chemical live/dead assays have a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, information about when cells start programmed cell death (apoptosis) is more relevant for dynamic studies. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually stained using deep learning. In this study, human endothelial cells are virtually stained as live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique LodeSTAR. Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.
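
As a purely illustrative sketch (not the authors' pipeline, which uses LodeSTAR for the counting step), virtual staining can be cast as image-to-image regression: a small encoder-decoder maps phase-contrast frames to fluorescence-like live/apoptotic channels. The arrays phase_stack and fluo_stack are hypothetical paired recordings.

```python
# Purely illustrative sketch (not the authors' pipeline): virtual staining cast as
# image-to-image regression with a small encoder-decoder in Keras.
import tensorflow as tf

def make_virtual_stainer(size=256):
    inp = tf.keras.Input(shape=(size, size, 1))                                # phase-contrast frame
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = tf.keras.layers.MaxPool2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.UpSampling2D()(x)
    out = tf.keras.layers.Conv2D(2, 3, activation="relu", padding="same")(x)   # live / apoptotic channels
    return tf.keras.Model(inp, out)

model = make_virtual_stainer()
model.compile(optimizer="adam", loss="mae")
# model.fit(phase_stack, fluo_stack, epochs=10, batch_size=8)  # hypothetical training pairs
```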

Date: 31 May 2023
Time: 10:30
Place: MC2 Kollektorn
Event: AI for Scientific Data Analysis: Miniconference

Presentation by Y.-W. Chang at AI for Scientific Data Analysis, Gothenburg, 31 May 2023

Working principles for training neural networks with highly incomplete datasets: vanilla (upper panel) vs. GapNet (lower panel). (Image by Y.-W. Chang.)

Training of neural network with incomplete medical datasets
Yu-Wei Chang

Neural network training and validation rely on the availability of large, high-quality datasets. However, in many cases, only incomplete datasets are available, particularly in health care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets without overfitting or introducing artefacts. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer’s disease pathology and of patients at risk of hospitalization due to Covid-19. This improvement over commonly used imputation methods suggests that GapNet can become a general tool to handle incomplete medical datasets.
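
One possible reading of the two-stage scheme described above, as a hedged sketch rather than the published GapNet implementation: a sub-network is trained for each cluster of features using only the samples where that cluster is complete, and a second-stage network then fuses the sub-network outputs. The feature clusters and the toy data below are illustrative.

```python
# Hedged sketch of a GapNet-like two-stage training scheme (illustrative reading
# of the abstract, not the published code): one sub-network per feature cluster,
# each trained on the samples where that cluster is complete, followed by a
# fusion classifier over the sub-network outputs.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, clusters = 1000, [(0, 5), (5, 12), (12, 20)]       # illustrative feature index ranges
X = rng.normal(size=(n, 20))
X[rng.random((n, 20)) < 0.4] = np.nan                 # simulate missing entries
y = rng.integers(0, 2, size=n)

def make_net(n_inputs):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(n_inputs,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Stage 1: train each branch on the rows where its feature cluster is complete.
outputs = np.zeros((n, len(clusters)))                # missing branches stay at 0 (crude choice)
for i, (a, b) in enumerate(clusters):
    rows = ~np.isnan(X[:, a:b]).any(axis=1)
    branch = make_net(b - a)
    branch.compile(optimizer="adam", loss="binary_crossentropy")
    branch.fit(X[rows, a:b], y[rows], epochs=5, verbose=0)
    outputs[rows, i] = branch.predict(X[rows, a:b], verbose=0).ravel()

# Stage 2: fuse the branch predictions into a single classifier.
fusion = make_net(len(clusters))
fusion.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
fusion.fit(outputs, y, epochs=5, verbose=0)
```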

Date: 31 May 2023
Time: 10:15
Place: MC2 Kollektorn
Event: AI for Scientific Data Analysis: Miniconference

Seminar by B. Roy, 24 May 2023

Basudev Roy.
Study of out-of-plane rotations in optical tweezers and applications in soft matter and biological systems
Basudev Roy
Indian Institute of Technology Madras, India
Date: 24 May 2023
Time: 12:30
Place: Nexus

Abstract:
A rigid body has 6 degrees of freedom, namely three translational and three rotational. Of these, the translational degrees of freedom have been well explored in the optical tweezers community, whereas among the rotational ones only the in-plane degree of freedom has been studied. Following aviation nomenclature, we call this in-plane rotational degree of freedom the yaw motion. The pitch and roll degrees of freedom have only recently begun to be explored.

In this talk, I will show four ways of generating pitch rotation using optical tweezers. I will also show one way to detect pitch rotation at high resolution using birefringent particles. Further, I will discuss some applications of this pitch rotation in soft matter systems and biology, and I will present a few other projects that we are working on in the lab.

Bio:
Basudev Roy received his MSc from the Indian Institute of Technology Kharagpur and his MS from the University of Maryland, College Park. He obtained his PhD from the Indian Institute of Science Education and Research, Kolkata, in 2015. He was an Alexander von Humboldt fellow at the University of Tübingen, Germany, for his postdoctoral research from 2015 to 2017. He joined the Indian Institute of Technology Madras, India, in 2017, where he is now an Associate Professor.

Invited Seminar by G. Volpe at LOMA, Bordeaux, 2 May 2023

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
Deep Learning for Imaging and Microscopy
Giovanni Volpe
Seminar at LOMA, Bordeaux
2 May 2023, 14:00

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we have introduced a software platform, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy.

Plenary Lecture by G. Volpe at SPIE Optics + Optoelectronics, Prague, 25 April 2023

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
AI and deep learning for microscopy
Giovanni Volpe
SPIE Optics + Optoelectronics, Prague, 25 April 2023
Time: 09:45

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software platform, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
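
A minimal sketch of the kind of pipeline DeepTrack 2.1 enables, loosely following its published tutorials (feature and parameter names may differ between versions): a fluorescent sphere is defined, imaged through simulated optics, corrupted with noise, and resolved into an image/label pair that can be used for supervised training of a localization network.

```python
# Sketch of a DeepTrack 2.1-style synthetic-data pipeline (based on the published
# tutorials; exact parameter names may need adjustment for other versions).
import numpy as np
import deeptrack as dt

particle = dt.Sphere(
    position=lambda: np.random.uniform(16, 48, 2),   # random position in the image
    radius=500e-9,
    intensity=1,
)
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)
pipeline = optics(particle) >> dt.Gaussian(mu=0, sigma=0.05)   # image the particle, add noise

image = pipeline.update()()                  # resolve one synthetic image
position = image.get_property("position")    # ground-truth label for supervised training
# Such image/label pairs can then be fed to a CNN (e.g., dt.models.Convolutional)
# for particle localization.
```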

Invited Talk by G. Volpe at 12th Nordic Workshop on Statistical Physics, Nordita, Stockholm, 15 March 2023

Logo of the AnDi challenge.
An Anomalous Competition: Assessment of methods for anomalous diffusion through a community effort
Giovanni Volpe
Nordita, Stockholm, 15 March 2023, 14:00

Deviations from the law of Brownian motion, typically referred to as anomalous diffusion, are ubiquitous in science and associated with non-equilibrium phenomena, flows of energy and information, and transport in living systems. In recent years, the boom in machine learning has boosted the development of new methods to detect and characterize anomalous diffusion from individual trajectories, going beyond classical calculations based on the mean squared displacement. We thus designed the AnDi challenge, an open community effort to objectively assess the performance of conventional and novel methods. We developed a Python library for generating simulated datasets according to the most popular theoretical models of diffusion. We evaluated 16 methods over 3 different tasks and 3 different dimensions, involving anomalous-exponent inference, model classification, and trajectory segmentation. Our analysis provides the first assessment of methods for anomalous diffusion in a variety of realistic conditions of trajectory length and noise. Furthermore, we compared the predictions provided by these methods for several experimental datasets. The results of this study highlight the role that anomalous diffusion plays in defining biological function, while revealing insights into the current state of the field and providing a benchmark for future developers.
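
For context, the classical baseline that the challenge methods aim to improve upon can be written in a few lines (a generic illustration, not the AnDi-challenge library): the anomalous exponent is estimated as the log-log slope of the time-averaged mean squared displacement, MSD(t) ~ t^alpha.

```python
# Generic illustration: estimate the anomalous exponent alpha from the
# time-averaged mean squared displacement of a single trajectory.
import numpy as np

def tamsd(traj, max_lag):
    """Time-averaged MSD of a 1D trajectory for lags 1..max_lag."""
    return np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=1000))   # toy trajectory: ordinary Brownian motion (alpha = 1)

lags = np.arange(1, 21)
msd = tamsd(traj, 20)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # slope in log-log space
print(f"estimated anomalous exponent: {alpha:.2f}")
```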

Presentation by Lucas Le Nagard, 15 March 2023

Propulsion of a giant unilamellar vesicle containing E. coli cells. (From: doi:10.1073/pnas.2206096119)
Giant lipid vesicles propelled by encapsulated bacteria
Lucas Le Nagard
15 March 2023
11:00, PJ

I will present the results of a recent study of motile Escherichia coli bacteria encapsulated in lipid vesicles. For slightly deflated vesicles, swimming bacteria deform the vesicles and extrude membrane tubes reminiscent of those seen in eukaryotic cells infected by Listeria monocytogenes. These membrane tubes couple with the flagella of the enclosed bacteria to generate a propulsive force, turning the initially passive vesicles into swimmers. A simple theoretical model used to estimate the magnitude of the propulsive force demonstrates the efficiency of this physical coupling. Interestingly, such vesicle propulsion was not seen in recent studies of swimmers encapsulated in vesicles. While pointing to new design principles for conferring motility to artificial cells, our results illustrate how small differences often matter in active matter physics.

Invited Talk by G. Volpe at BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences, Castelldefels, 10 March 2023

DeepTrack 2.1 Logo. (Image from DeepTrack 2.1 Project)
AI and deep learning for microscopy
Giovanni Volpe
BIST Symposium on Microscopy, Nanoscopy and Imaging Sciences
Castelldefels, 10 March 2023

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions.

To overcome this issue, we have introduced a software platform, currently at version DeepTrack 2.1, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.1 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.

Presentation by Sreekanth K. Manikandan, 10 February 2023

Inferring entropy production in microscopic systems
Sreekanth K. Manikandan
Stanford University
10 February 2023, 15:00, Raven and Fox

An inherent feature of small systems in contact with thermal reservoirs, be it a pollen grain in water or an active microbe flagellum, is fluctuations. Even with advanced microscopic techniques, distinguishing active, non-equilibrium processes, defined by a constant dissipation of energy (entropy production) to the environment, from passive, equilibrium processes is a very challenging task and a rapidly developing field of research. In this talk, I will present a simple and effective way to infer entropy production in microscopic non-equilibrium systems from short empirical trajectories [1]. I will also demonstrate how this scheme can be used to spatiotemporally resolve the active nature of cell flickering [2]. Our result is built upon the Thermodynamic Uncertainty Relation (TUR), which relates current fluctuations in non-equilibrium states to the entropy production rate.
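
A minimal numerical sketch of the TUR bound underlying this scheme (illustrative only; the full method of Ref. [1] optimizes the bound over a family of currents): for a current J accumulated over a time tau in a steady state, the entropy production rate sigma satisfies sigma >= 2<J>^2 / (tau Var J) in units of k_B, which can be evaluated directly from short trajectories. The drift-diffusion toy process below is an assumption for illustration; for it the bound coincides with the exact rate v^2/D.

```python
# Sketch of the TUR-based lower bound on the entropy production rate,
# evaluated from short trajectories of a driven overdamped particle.
import numpy as np

rng = np.random.default_rng(0)
v, D, tau, dt = 2.0, 1.0, 1.0, 1e-3        # drift, diffusion, trajectory length, time step
n_traj, n_steps = 2000, int(tau / dt)

# Short empirical trajectories of a drift-diffusion process.
steps = v * dt + np.sqrt(2 * D * dt) * rng.normal(size=(n_traj, n_steps))
J = steps.sum(axis=1)                      # current: total displacement over tau

sigma_est = 2 * J.mean() ** 2 / (tau * J.var())
print(f"TUR estimate: {sigma_est:.2f} k_B/s   (exact rate v^2/D = {v**2 / D:.2f} k_B/s)")
```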

References

[1] Inferring entropy production from short experiments, Phys. Rev. Lett. 124, 120603 (2020).

[2] Estimate of entropy generation rate can spatiotemporally resolve the active nature of cell flickering, arXiv:2205.12849.

Bio: Sreekanth completed his PhD at the Department of Physics, Stockholm University, in June 2020, under the supervision of Supriya Krishnamurthy. From August 2020 to October 2022, he was a Nordita Fellow postdoc in the soft condensed matter group at Nordita. Currently, he is a postdoctoral scholar at the Department of Chemistry at Stanford University, funded by the Wallenberg Foundation.