Gustaf Sjösten joins the Soft Matter Lab

Gustaf Sjösten joined the Soft Matter Lab on 1 September 2020.

Gustaf Sjösten is a Master's student in the Complex Adaptive Systems programme at Chalmers University of Technology.

He will work on his Master's thesis on the characterization of nanoparticles in nanochannels using machine-learning methods.

Agaton Fransson joins the Soft Matter Lab

Agaton Fransson joined the Soft Matter Lab on 1 September 2020.

Agaton Fransson is a Master's student in the Complex Adaptive Systems programme at Chalmers University of Technology.

He will work on his Master's thesis on the development of a neural network to track and identify different types of plankton.

Soft Matter Lab presentations at the SPIE Optics+Photonics Digital Forum

Seven members of the Soft Matter Lab (Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Laura Pérez-García, Daniel Midtvedt, Harshith Bachimanchi, Emiliano Gómez) were selected for oral and poster presentations at the SPIE Optics+Photonics Digital Forum, August 24-28, 2020.

The SPIE Digital Forum is a free, online-only event.
Registration for the Digital Forum includes access to all presentations and proceedings.

The Soft Matter Lab contributions are part of the SPIE Nanoscience + Engineering conferences, namely the conference on Emerging Topics in Artificial Intelligence 2020 and the conference on Optical Trapping and Optical Micromanipulation XVII.

The contributions are listed below, including the presentations co-authored by Giovanni Volpe.

Note: presentation times are given in PDT (Pacific Daylight Time, GMT-7).

Emerging Topics in Artificial Intelligence 2020

Saga Helgadottir
Digital video microscopy with deep learning (Invited Paper)
26 August 2020, 10:30 AM
SPIE Link: here.

Aykut Argun
Calibration of force fields using recurrent neural networks
26 August 2020, 8:30 AM
SPIE Link: here.

Laura Pérez-García
Deep-learning enhanced light-sheet microscopy
25 August 2020, 9:10 AM
SPIE Link: here.

Daniel Midtvedt
Holographic characterization of subwavelength particles enhanced by deep learning
24 August 2020, 2:40 PM
SPIE Link: here.

Benjamin Midtvedt
DeepTrack: A comprehensive deep learning framework for digital microscopy
26 August 2020, 11:40 AM
SPIE Link: here.

Gorka Muñoz-Gil
The anomalous diffusion challenge: Single trajectory characterisation as a competition
26 August 2020, 12:00 PM
SPIE Link: here.

Meera Srikrishna
Brain tissue segmentation using U-Nets in cranial CT scans
25 August 2020, 2:00 PM
SPIE Link: here.

Juan S. Sierra
Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks
26 August 2020, 11:50 AM
SPIE Link: here.

Harshith Bachimanchi
Digital holographic microscopy driven by deep learning: A study on marine planktons (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Emiliano Gómez
BRAPH 2.0: Software for the analysis of brain connectivity with graph theory (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Optical Trapping and Optical Micromanipulation XVII

Laura Pérez-García
Reconstructing complex force fields with optical tweezers
24 August 2020, 5:00 PM
SPIE Link: here.

Alejandro V. Arzola
Direct visualization of the spin-orbit angular momentum conversion in optical trapping
25 August 2020, 10:40 AM
SPIE Link: here.

Isaac Lenton
Illuminating the complex behaviour of particles in optical traps with machine learning
26 August 2020, 9:10 AM
SPIE Link: here.

Fatemeh Kalantarifard
Optical trapping of microparticles and yeast cells at ultra-low intensity by intracavity nonlinear feedback forces
24 August 2020, 11:10 AM
SPIE Link: here.

Digital video microscopy with deep learning

Digital video microscopy with deep learning
Saga Helgadottir
(Invited paper)

Microscopic particle tracking has had a long history of providing insight and breakthroughs within the physical and biological sciences, starting with Jean Perrin, who proved the existence of atoms in 1910 by projecting images of microscopic colloidal particles onto a sheet of paper and manually tracking their displacements. Since the start of digital video microscopy over 20 years ago, automated single-particle tracking algorithms have followed a similar pattern: pre-processing of the image to reduce noise, segmentation of the image to identify the features of interest, refinement of these feature coordinates to sub-pixel accuracy, and linking of the feature coordinates over several images to construct particle trajectories. By fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under good imaging conditions. However, their performance degrades severely under unsteady imaging conditions.
To overcome the limitations of traditional algorithmic approaches, data-driven methods using deep learning have been introduced. Deep-learning algorithms based on convolutional neural networks have been shown to accurately localize holographic colloidal particles and fluorescent biological objects. We have recently developed DeepTrack, a software package based on a convolutional neural network that outperforms algorithmic approaches in tracking colloidal particles as well as non-spherical biological objects, especially in the presence of noise and under poor illumination conditions.
In this talk, I will give an overview of the history of particle tracking, explain the details of our solution, DeepTrack, and give an outlook on the field of deep learning in microscopy.
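For readers unfamiliar with the algorithmic pipeline outlined above, a minimal sketch of the pre-processing, segmentation, and sub-pixel refinement steps might look as follows. This is generic SciPy code, not the DeepTrack API; the function name and parameter values are purely illustrative.

```python
# Minimal sketch of the classical tracking pipeline: denoise, segment,
# then refine each detected feature to sub-pixel accuracy.
import numpy as np
from scipy import ndimage


def locate_particles(frame, sigma=2.0, threshold=0.5):
    """Return sub-pixel (row, col) centroids of bright features in a 2D image."""
    smoothed = ndimage.gaussian_filter(frame.astype(float), sigma)       # pre-processing
    rescaled = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-12)  # normalize to [0, 1]
    labels, n_features = ndimage.label(rescaled > threshold)             # segmentation
    # intensity-weighted centroids give sub-pixel refinement of each feature
    return ndimage.center_of_mass(rescaled, labels, range(1, n_features + 1))
```

The user-defined parameters (here `sigma` and `threshold`) and the subsequent frame-to-frame linking step are exactly the parts that a trained network can replace when imaging conditions are poor.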

Time and place: Presentation published online on 24 August 2020
SPIE Link: here.

BRAPH 2.0: Upgrade to a graph theory software for the analysis of brain connectivity

BRAPH 2.0: Upgrade to a graph theory software for the analysis of brain connectivity
Emiliano Gomez Ruiz, Anna Canal Garcia, Mite Mijalkov, Joana B. Pereira, Giovanni Volpe

There is increasing evidence showing that graph theory is a promising tool to study the human brain connectome. By representing brain regions and their connections as nodes and edges, it allows assessing properties that reflect how well brain networks are organized and how they become disrupted in neurological diseases such as Alzheimer’s disease, Parkinson’s disease, epilepsy, schizophrenia, multiple sclerosis and autism. Here, we present BRAPH 2.0 (BRain Analysis using graPH theory version 2.0), which is a major update of the first object-oriented open-source software written in Matlab for graph-theoretical analysis that also implements a graphical user interface (GUI). BRAPH 2.0 utilizes the capabilities of the object-oriented programming paradigm to provide clear, robust, clean, modular, maintainable, and testable code.
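As a toy illustration of the kind of graph measures such an analysis computes, here is a minimal sketch using NetworkX on a random placeholder connectivity matrix. BRAPH 2.0 itself is an object-oriented Matlab toolbox; none of the code below is its API.

```python
# Toy example: compute two common graph measures from a brain-connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
connectivity = rng.random((10, 10))                 # placeholder correlation matrix
connectivity = (connectivity + connectivity.T) / 2  # symmetrize
np.fill_diagonal(connectivity, 0)

# threshold and binarize: keep only the strongest connections as edges
adjacency = (connectivity > 0.6).astype(int)
graph = nx.from_numpy_array(adjacency)

print("global efficiency:", nx.global_efficiency(graph))
print("average clustering:", nx.average_clustering(graph))
```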

Time: 24 August 2020
Place: Online
SPIE Link: here.

 

Machine learning reveals complex behaviours in optically trapped particles published in Machine Learning: Science and Technology

Illustration of a fully connected neural network with three inputs, three outputs, and three hidden layers.

Machine learning reveals complex behaviours in optically trapped particles
Isaac C. D. Lenton, Giovanni Volpe, Alexander B. Stilgoe, Timo A. Nieminen & Halina Rubinsztein-Dunlop
Machine Learning: Science and Technology, 1 045009 (2020)
doi: 10.1088/2632-2153/abae76
arXiv: 2004.08264

Since their invention in the 1980s, optical tweezers have found a wide range of applications, from biophotonics and mechanobiology to microscopy and optomechanics. Simulations of the motion of microscopic particles held by optical tweezers are often required to explore complex phenomena and to interpret experimental data. For the sake of computational efficiency, these simulations usually model the optical tweezers as a harmonic potential. However, more physically-accurate optical-scattering models are required to accurately model more onerous systems; this is especially true for optical traps generated with complex fields. Although accurate, these models tend to be prohibitively slow for problems with more than one or two degrees of freedom (DoF), which has limited their broad adoption. Here, we demonstrate that machine learning permits one to combine the speed of the harmonic model with the accuracy of optical-scattering models. Specifically, we show that a neural network can be trained to rapidly and accurately predict the optical forces acting on a microscopic particle. We demonstrate the utility of this approach on two phenomena that are prohibitively slow to accurately simulate otherwise: the escape dynamics of swelling microparticles in an optical trap, and the rotation rates of particles in a superposition of beams with opposite orbital angular momenta. Thanks to its high speed and accuracy, this method can greatly enhance the range of phenomena that can be efficiently simulated and studied.
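The core idea can be sketched in a few lines: train a small network as a fast surrogate that maps particle position to optical force. The snippet below is a hypothetical Keras sketch in which a simple placeholder function stands in for the slow but accurate optical-scattering calculation used in the paper.

```python
# Sketch: learn a fast surrogate force field F(x, y, z) from a slow scattering model.
import numpy as np
import tensorflow as tf


def slow_scattering_force(positions):
    """Placeholder for an accurate but slow optical-scattering force calculation."""
    return -0.5 * positions  # stand-in: a simple linear restoring force


positions = np.random.uniform(-1.0, 1.0, size=(10000, 3)).astype("float32")
forces = slow_scattering_force(positions)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),  # predicted (Fx, Fy, Fz)
])
model.compile(optimizer="adam", loss="mse")
model.fit(positions, forces, epochs=5, batch_size=256, verbose=0)

# The trained network can now replace the scattering code inside a
# Brownian-dynamics loop, evaluating forces far faster than the full model.
fast_forces = model.predict(positions[:100], verbose=0)
```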

Benjamin Midtvedt joins the Soft Matter Lab

Benjamin Midtvedt starts his PhD at the Physics Department of the University of Gothenburg on 1 July 2020.

Benjamin has a Master's degree in Engineering Mathematics and Computer Science from Chalmers University of Technology.

In his PhD, he will focus on using deep learning to design the behaviour of particles interacting with light.

DeepTrack: A comprehensive deep learning framework for digital microscopy

DeepTrack: A comprehensive deep learning framework for digital microscopy
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Daniel Midtvedt, Giovanni Volpe
Click here to see the slides.

Despite the rapid advancement of deep learning methods for image analysis, they remain under-utilized for the analysis of digital microscopy images. State-of-the-art methods require expertise in deep learning to implement, disconnecting the development of new methods from end-users. The packages that are available are typically highly specialized, difficult to reappropriate and almost impossible to interface with other methods. Finally, it is prohibitively difficult to procure representative datasets with corresponding labels. DeepTrack is a deep learning framework targeting optical microscopy, designed to account for each of these issues. Firstly, it is packaged with an easy-to-use graphical user interface, solving standard microscopy problems with no required programming experience. Secondly, it provides a comprehensive programming API for creating representative synthetic data, designed to exactly suit the problem. DeepTrack images samples of refractive index or fluorophore distributions using physical simulations of customizable optical systems. To accurately represent the data to be analyzed, DeepTrack supports arbitrary optical aberrations and experimental noise. Thirdly, many standard deep learning methods are packaged with DeepTrack, including architectures such as U-Net and regularization techniques such as augmentations. Finally, the framework is fully modular and easily extendable to implement new methods, providing both longevity and a centralized foundation to deploy new deep learning solutions. To demonstrate the versatility of the framework, we show a few typical use cases, including cell counting in dense biological samples, extracting 3-dimensional tracks from 2-dimensional videos, and distinguishing and tracking microorganisms in bright-field videos.
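As a greatly simplified sketch of the synthetic-data idea (plain NumPy/SciPy, not DeepTrack's actual API), one can render point emitters through a Gaussian approximation of the point-spread function and add shot noise, producing image-label pairs for training. All names and parameter values below are illustrative.

```python
# Simplified synthetic-data generator: point emitters blurred by a Gaussian PSF,
# plus Poisson shot noise; returns an image together with its ground-truth labels.
import numpy as np
from scipy import ndimage


def synthetic_image(size=64, n_particles=3, psf_sigma=2.0, photons=500.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    positions = rng.uniform(5, size - 5, size=(n_particles, 2))
    image = np.zeros((size, size))
    for row, col in positions:
        image[int(round(row)), int(round(col))] = photons  # point emitter (pixel-rounded)
    image = ndimage.gaussian_filter(image, psf_sigma)      # optics: Gaussian PSF approximation
    image = rng.poisson(image + 10.0).astype(float)        # shot noise on signal + background
    return image, positions                                # training pair: image and labels


image, labels = synthetic_image()
```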

Poster Session
Time: June 22nd 2020
Place: Twitter and virtual reality

POM Conference
Link: POM
Time: June 25th 2020
Place: Online

Poster Slides

Saga Helgadottir – POM Poster – Page 1
Saga Helgadottir – POM Poster – Page 2
Saga Helgadottir – POM Poster – Page 3
Saga Helgadottir – POM Poster – Page 4

Enhanced force-field calibration via machine learning

Enhanced force-field calibration via machine learning
Aykut Argun, Tobias Thalheim, Stefano Bo, Frank Cichos, Giovanni Volpe

Click here to see the slides.
Twitter Link: here.

The influence of microscopic force fields on the motion of Brownian particles plays a fundamental role in a broad range of fields, including soft matter, biophysics, and active matter. Often, the experimental calibration of these force fields relies on the analysis of the trajectories of these Brownian particles. However, such an analysis is not always straightforward, especially if the underlying force fields are non-conservative or time-varying, driving the system out of thermodynamic equilibrium. Here, we introduce a toolbox to calibrate microscopic force fields by analyzing the trajectories of a Brownian particle using machine learning, namely recurrent neural networks. We demonstrate that this machine-learning approach outperforms standard methods when characterizing the force fields generated by harmonic potentials if the available data are limited. More importantly, it provides a tool to calibrate force fields in situations for which there are no standard methods, such as non-conservative and time-varying force fields. In order to make this method readily available for other users, we provide a Python software package named DeepCalib, which can be easily personalized and optimized for specific applications.
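A hedged sketch of the recurrent-network idea (not the actual DeepCalib code): simulate trajectories of an overdamped Brownian particle in harmonic traps of known stiffness and train an LSTM to regress the stiffness from the trajectory increments. All parameter values below are illustrative and in arbitrary units.

```python
# Sketch: train an LSTM to infer trap stiffness from simulated Brownian trajectories.
import numpy as np
import tensorflow as tf


def simulate_trajectory(k, n_steps=200, dt=0.01, noise=0.1, rng=None):
    """Overdamped Brownian particle in a harmonic trap of stiffness k."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - k * x[i - 1] * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x


rng = np.random.default_rng(1)
stiffness = rng.uniform(0.5, 5.0, size=2000)
trajectories = np.stack([simulate_trajectory(k, rng=rng) for k in stiffness])
increments = np.diff(trajectories, axis=1)[..., np.newaxis]  # shape: (samples, time, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(increments.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted stiffness
])
model.compile(optimizer="adam", loss="mse")
model.fit(increments, stiffness, epochs=5, batch_size=64, verbose=0)
```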

Poster Session
Time: June 22nd 2020
Place: Twitter

POM Conference
Link: POM
Time: June 25th 2020
Place: Online

Poster Slides

Aykut Argun – POM Poster – Page 1
Aykut Argun – POM Poster – Page 2
Aykut Argun – POM Poster – Page 3
Aykut Argun – POM Poster – Page 4

Holographic characterisation of subwavelength particles enhanced by deep learning

Holographic characterisation of subwavelength particles enhanced by deep learning
Benjamin Midtvedt, Erik Olsen, Fredrick Eklund, Jan Swenson, Fredrik Höök, Caroline Beck Adiels, Giovanni Volpe and Daniel Midtvedt

Click here to see the slides.
Twitter Link: here.

The characterisation of the physical properties of nanoparticles in their native environment plays a central role in a wide range of fields, from nanoparticle-enhanced drug delivery to environmental nanopollution assessment. Standard optical approaches require long trajectories of nanoparticles dispersed in a medium with known viscosity to characterise their diffusion constant and, thus, their size. However, often only short trajectories are available, while the medium viscosity is unknown, e.g., in most biomedical applications.
In this work, we demonstrate a label-free method to quantify size and refractive index of individual subwavelength particles using two orders of magnitude shorter trajectories than required by standard methods, and without assumptions about the physicochemical properties of the medium. We achieve this by developing a weighted average convolutional neural network to analyse the holographic images of the particles. As a proof of principle, we distinguish and quantify size and refractive index of silica and polystyrene particles without prior knowledge of solute viscosity or refractive index. As an example of an application beyond the state of the art, we demonstrate how this technique can monitor the aggregation of polystyrene nanoparticles, revealing the time-resolved dynamics of the monomer number and fractal dimension of individual subwavelength aggregates.
This technique opens new possibilities for nanoparticle characterisation with a broad range of applications from biomedicine to environmental monitoring.
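As a minimal, hypothetical illustration of the regression task (a plain CNN stand-in, not the weighted-average architecture described above), one could map cropped holograms to the particle radius and refractive index. The random arrays below are placeholders for real holograms and labels.

```python
# Sketch: a small CNN that regresses particle radius and refractive index
# from hologram crops; random placeholder data stands in for real holograms.
import numpy as np
import tensorflow as tf

images = np.random.rand(256, 64, 64, 1).astype("float32")  # placeholder hologram crops
targets = np.random.rand(256, 2).astype("float32")         # placeholder (radius, n)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),  # predicted (radius, refractive index)
])
model.compile(optimizer="adam", loss="mse")
model.fit(images, targets, epochs=2, verbose=0)
```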

Poster Session
Time: June 22nd 2020
Place: Twitter

POM Conference
Link: POM
Time: June 25th 2020
Place: Online

Poster Slides

Daniel Midtvedt – POM Poster – Page 1
Daniel Midtvedt – POM Poster – Page 2
Daniel Midtvedt – POM Poster – Page 3
Daniel Midtvedt – POM Poster – Page 4